By Shigehiro Oishi
University of Virginia

This module asks two questions: "Is happiness good?" and "Is happier better?" (i.e., is there any benefit to being happier, even if one is already moderately happy?) The answer to the first question is by and large "yes." The answer to the second question is "it depends." That is, the optimal level of happiness differs depending on the specific life domain. In terms of romantic relationships and volunteer activities, happier is indeed better. In contrast, in terms of income, education, and political participation, a moderate level of happiness is best; beyond a moderate level, happier is not better.

learning objectives

• Learn about the research on the relationship between happiness and important life outcomes.
• Learn about the levels of happiness that are associated with the highest levels of outcomes.

Introduction

Are you happy? If someone asked you this question, how would you answer it? Some of you would surely say yes immediately, while others would hesitate. As scientific research on happiness over the past 30 years has repeatedly shown, there are huge individual variations in levels of happiness (Diener, Suh, Lucas, & Smith, 1999). Not surprisingly, some people are very happy, while others are not so happy, and still others are very unhappy.

What about the next question: Do you want to be happy? If someone asked this question, I would bet the vast majority of college students (in particular, Americans) would immediately say yes. Although there are large individual differences in actual levels of happiness, nearly everyone wants to be happy, and most of us want to be happier, even if we are already fairly happy (Oishi, Diener, & Lucas, 2007). The next important questions, then, are "Is happiness good?" and "Is happier better?" (i.e., is there any benefit to being happier, even if one is already moderately happy?) This module will tackle these two questions.

Is happiness good? The ancient philosopher Aristotle thought so. He argued that happiness is the ultimate goal of human beings because it is the only goal pursued as an end in itself. In contrast, all other aspirations (e.g., money, health, reputation, friendship), ranging from being respected by others, to being with a wonderful partner, to living in a fabulous house, are instrumental goals, pursued in order to achieve other, higher goals, including happiness (Thomson, 1953). Thus, according to Aristotle, it is only rational that happiness is the ultimate objective in life.

There are of course plenty of thinkers who disagree with Aristotle and see happiness as a frivolous pursuit. For instance, the famous French novelist Gustave Flaubert is believed to have said: "To be stupid, selfish, and have good health are three requirements for happiness, though if stupidity is lacking, all is lost" (Diener & Biswas-Diener, 2008, p. 19). Flaubert clearly associated happiness with selfishness and thoughtlessness.

So what does the science of happiness tell us about the utility of happiness? There are two major reviews on this topic so far (Lyubomirsky, King, & Diener, 2005; Veenhoven, 1989).
Both reviews found that happiness is good: happy people tend to be more likely to be successful at work (Cropanzano & Wright, 1999; Roberts, Caspi, & Moffitt, 2003), more likely to find romantic partners (Lucas, Clark, Georgellis, & Diener, 2003), tend to be better citizens who engage in more prosocial behaviors (Carlson, Charlin, & Miller, 1988), and tend to be healthier and live longer than unhappy people (Pressman & Cohen, 2012). The correlation between happiness and various life outcomes is almost never negative; that is, harmful effects of happiness are rare. However, the effect size is modest at best (r = .20–.30), with a great deal of heterogeneity, suggesting that important moderators (particularly individual differences) remain to be discovered. Thus, although happiness is generally associated with positive life outcomes, the next important question is whether it is wise to seek greater happiness when one is already reasonably happy.

A major review of the question "Is happier better?" revealed that the answer depends on life domains (Oishi, Diener, & Lucas, 2007). In achievement-related domains such as income and education, once one is moderately happy, greater levels of happiness were not associated with better outcomes. In contrast, in relationship-related life domains, even if one is already moderately happy, greater levels of happiness were indeed positively associated with better outcomes. I will describe specific findings below.

In one study, researchers (Diener, Nickerson, Lucas, & Sandvik, 2002) followed college students from their freshman year until middle adulthood (when they were in their late 30s). When participants were incoming college freshmen, they reported their cheerfulness. Nineteen years later, at about age 37, the same participants reported their annual income. There was a positive association between cheerfulness in the freshman year and income 19 years later; specifically, the participants who were in the highest 10% of cheerfulness in 1976 earned an average of $62,681 in 1995, whereas the participants in the lowest 10% of cheerfulness in 1976 earned an average of $54,318. So, in general, cheerful college students later made more money than their less cheerful peers. Interestingly, however, this association was not linear. Those who were moderately cheerful ("above average" on cheerfulness) in college earned the most, $66,144. That is, the moderately cheerful college students were later making nearly $3,500 more than the most cheerful college students. Thus, if we use income as a criterion, the optimal level of "cheerfulness" was not the highest level, but a more moderate level.

Diener et al.'s (2002) finding has been replicated in other large longitudinal studies. For instance, Oishi et al. (2007) analyzed the Australian Youth in Transition study, a longitudinal study of nationally representative cohorts of young people in Australia, and found the same non-linear effect of happiness on later income. Participants in the Australian study indicated their life satisfaction ("satisfaction with life as a whole") when they were 18 years old. They also reported their gross income when they were 33 years old. Like the American data, the Australian data showed that teenagers satisfied with their lives were later earning more money than those unsatisfied. However, it was the Australians who were moderately satisfied at age 18, rather than those who were very satisfied with their lives, who were making the most money in their 30s.
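To make the arithmetic behind Diener et al.'s (2002) non-linear pattern concrete, here is a minimal Python sketch that uses only the three group means reported above; the group labels are simplified from the original study's cheerfulness groupings and are included purely for illustration.

```python
# Mean 1995 incomes by 1976 cheerfulness group, as reported in
# Diener, Nickerson, Lucas, & Sandvik (2002).
incomes = {
    "lowest 10% cheerfulness": 54_318,
    "highest 10% cheerfulness": 62_681,
    "moderately cheerful (above average)": 66_144,
}

for group, income in incomes.items():
    print(f"{group}: ${income:,}")

# The non-linearity: the moderately cheerful out-earned the most cheerful.
gap = incomes["moderately cheerful (above average)"] - incomes["highest 10% cheerfulness"]
print(f"moderate minus most cheerful: ${gap:,}")  # $3,463, i.e., "nearly $3,500"
```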
Respondents from the Australian Youth in Transition Study also reported the number of years of schooling they completed beyond primary education when they were 26 years old. Similar to the income findings, the highest levels of education were reported by those individuals who had moderate levels of satisfaction when they were 18 years old. The "very satisfied" teenagers did not pursue as much education later as teenagers who were moderately satisfied. Indeed, one reason why moderately satisfied individuals later made the most money may be the years of education they pursued: very satisfied teenagers did not pursue as much education, which in turn somewhat limited their earnings in their 30s.

Oishi et al. (2007) also analyzed two other longitudinal data sets: the German Socio-Economic Panel Study (GSOEP) and the British Household Panel Survey (BHPS). Both studies used nationally representative samples whose participants were followed longitudinally. These two data sets again showed that people who were satisfied with their lives early on were making more money years later than those who were not satisfied with their lives. However, again, the relationship between earlier life satisfaction and later income was not linear. That is, as in the Australian data depicted in Figure 10.3.1, those who were most satisfied early on were not making as much money as those who were moderately satisfied. In short, four large, longitudinal studies conducted in the United States, United Kingdom, Australia, and Germany all converged to indicate that happiness is good up to a certain point; higher levels of happiness beyond a moderate level are not associated with higher incomes or more education.

What about other life domains? Is a moderate level of happiness also associated with the best outcome in terms of, say, romantic relationships? The respondents in the Australian Youth in Transition Study also later reported the length of their current intimate relationship. In contrast to the income and education findings, individuals who had been "very satisfied" teenagers were later involved in longer intimate relationships than were those from the second and third most-satisfied groups.

There are now many other data sets available to test questions about the optimal level of happiness. For instance, the World Values Survey, which was administered in 1981, 1990, 1995, and 2000, includes 118,519 respondents from 96 countries and regions around the world (see www.worldvaluessurvey.org for more information about the survey questions and samples). Respondents rated their overall life satisfaction on a 10-point scale ("All things considered, how satisfied are you with your life as a whole these days?"). They also indicated their income (in deciles, from the lowest 10% in the nation to the highest 10%), highest education completed, relationship status (i.e., whether they were currently in a stable long-term relationship), volunteer work (respondents indicated which, if any, of 15 types of volunteer work they were involved in), and political actions they had taken (e.g., signing a petition, joining in boycotts). These questions were embedded in more than 200 questions about values and beliefs. Here, we consider income, highest education completed, relationship status, volunteer work, and political participation.
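For readers curious how such domain-by-domain comparisons can be tabulated, the sketch below groups respondents by their life-satisfaction rating and computes mean outcomes per level. This is a minimal sketch of the general approach only, not the authors' actual analysis code; the file name and column names (life_satisfaction, income_decile, education_level, volunteer_count) are hypothetical.

```python
import pandas as pd

# Hypothetical extract of a World-Values-Survey-style data set,
# one row per respondent; all names here are illustrative assumptions.
df = pd.read_csv("wvs_subset.csv")

# Mean outcome at each life-satisfaction level (1-10 scale).
outcomes = ["income_decile", "education_level", "volunteer_count"]
by_satisfaction = df.groupby("life_satisfaction")[outcomes].mean()
print(by_satisfaction)

# Which satisfaction level shows the highest mean for each outcome?
# In the analyses described below, income and education peak around
# 8-9 ("moderately satisfied"), not at 10.
print(by_satisfaction.idxmax())
```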
The World Values Survey data also showed that the highest levels of income and education were reported by moderately satisfied individuals (8 or 9 on the 10-point scale) rather than by very satisfied individuals (10 on the 10-point scale). Similarly, the greatest level of political participation was reported by moderately satisfied individuals rather than by the most satisfied individuals. In contrast, the highest proportion of respondents in a stable intimate relationship was observed among the "very satisfied" rather than among moderately satisfied individuals. Similarly, the highest level of volunteer activity was observed among the "very satisfied" individuals.

So far, the research shows that the optimal level of happiness differs depending on the specific life domain. In terms of romantic relationships and volunteer activities, happier is indeed better. In contrast, in terms of income, education, and political participation, a moderate level of happiness is best; beyond that level, happier is not better. Why is it best to be as happy as possible in terms of romantic relationships and volunteer activities, but best to be only moderately happy in terms of achievement? At this point, the mechanism has not been empirically demonstrated. Thus, the following is only speculation, an educated guess.

First, with regard to the non-linear association between happiness and achievement, the main reason why a very high level of happiness might not be associated with the highest level of achievement is that complete satisfaction with current conditions might prevent individuals from energetically pursuing challenges in achievement domains such as education and income. After all, the defining characteristics of the need for achievement are high standards of excellence and constant striving for perfection (McClelland, 1961). Similarly, if individuals are completely satisfied with the current political situation, they might be less likely to actively participate in the political process.

Achievement domains also have very clear objective criteria, in the form of monetary value, degrees, or skill levels. Improvement motivation (e.g., self-criticism, self-improvement) serves well in achievement domains because this mindset makes clear what needs to be done to improve one's skills and performance. In contrast, self-complacency and positive illusions prevent one from clearly seeing one's weaknesses and working on them. The diametric opposite of self-complacency, Tiger Woods spent long hours practicing to improve his already-amazing shot after winning his first Masters. Similarly, Kobe Bryant was known to show up at practice three hours early so that he could improve some aspect of his game, even though he was already one of the best players in the NBA. This type of self-improvement motivation is often rewarded handsomely in terms of performance, income, status, and fame.

The same type of motivation applied to an intimate relationship, however, does not work as well. It might lead to the realization that the current partner is less than ideal and that a better partner is somewhere out there. Indeed, in romantic relationships, idealization of the partner is known to be associated with higher relationship satisfaction and more stable relationships (e.g., Murray, Holmes, & Griffin, 2003). In other words, positive illusions serve well in romantic relationships, in which one might not want to pay too much attention to a partner's weaknesses.
In the 1959 film Some Like It Hot, the millionaire Osgood Fielding III (played by Joe E. Brown) fell in love with Daphne (played by Jack Lemmon). In the memorable ending, Daphne confessed that she was actually a man, and Osgood famously responded: "Well, nobody's perfect!" In short, we argue that the highest possible level of happiness is associated with idealization of the partner and positive illusions about the relationship itself, which, in turn, result in relationship stability. In an area in which nobody can be perfect, improvement motives can be a poison.

What about the optimal level of happiness for volunteer work? Why is the highest level of happiness better here than moderate happiness? Many of us idealize volunteer work the way we idealize relationships: we volunteer, with an idealistic view of the world, to contribute to humanity. However, like a romantic partner, no volunteer organization or volunteer position is perfect. Volunteering is also time-consuming and requires serious commitment. Just as in romantic relationships, then, it might be best to maintain a positive illusion that one's efforts are making a difference in the world. It may be that moderately happy people, or at least some of them, are more likely to become disillusioned with volunteer work than very happy people are, because moderately happy people are more realistic than very happy people. In short, volunteer work might be more similar to close relationships than to achievement domains in terms of its motivational mechanism.

Conclusion

In summary, the optimal level of happiness differs depending on the life domain. In terms of income and education, the optimal level of happiness seems to be a moderate level: individuals who are moderately happy are likely to attain higher levels of education and to earn the most in the future. In contrast, in terms of romantic relationships and volunteer activities, the optimal level of happiness seems to be the highest level: individuals who are very happy are likely to stay in a good romantic relationship and to keep volunteering. The divergent optimal levels of happiness for relationship and achievement domains suggest that it is generally difficult to have an extremely high level of overall happiness, good romantic relationships, and high achievement all at once. To this end, it is not surprising that icons of motivation—Tom Cruise, Vladimir Putin, Martha Stewart, and hockey player Petr Nedved—all had marital problems while achieving unprecedented success in their respective fields. Other examples, such as Bill and Melinda Gates and Barack and Michelle Obama, give us some hope that it is possible to "have it all": if you have talent in your chosen field, are passionate about it, and can switch your motivational strategies between work and love, then optimal happiness is possible.

Discussion Questions

1. Why do you think that the optimal level of happiness for future income and educational attainment is a moderate level? Can you think of reasons other than the ones described in this module why this might be the case?
2. Do you think that the optimal level of happiness differs not only across life domains but also across cultures? If so, how? In which cultures might the optimal level of happiness be lower? Higher? Why?
3. What might be the optimal level of happiness for health and longevity: the highest possible level of happiness or a moderate level? Why do you think so?
Vocabulary

Happiness
A state of well-being characterized by relative permanence, by dominantly agreeable emotion ranging in value from mere contentment to deep and intense joy in living, and by a natural desire for its continuation.

Life domains
Various domains of life, such as finances and job.

Life satisfaction
The degree to which one is satisfied with one's life overall.

Optimal level
The level that is the most favorable for an outcome.
By Emily Hooker and Sarah Pressman
University of California, Irvine

Our emotions, thoughts, and behaviors play an important role in our health. Not only do they influence our day-to-day health practices, but they can also influence how our body functions. This module provides an overview of health psychology, a field devoted to understanding the connections between psychology and health. Discussed here are examples of topics a health psychologist might study, including stress, psychosocial factors related to health and disease, how to use psychology to improve health, and the role of psychology in medicine.

learning objectives

• Describe basic terminology used in the field of health psychology.
• Explain theoretical models of health, as well as the role of psychological stress in the development of disease.
• Describe psychological factors that contribute to resilience and improved health.
• Defend the relevance and importance of psychology to the field of medicine.

What Is Health Psychology?

Today, we face more chronic disease than ever before because we are living longer lives while also frequently behaving in unhealthy ways. One example of a chronic disease is coronary heart disease (CHD): it is the number one cause of death worldwide (World Health Organization, 2013). CHD develops slowly over time and typically appears in midlife, but related heart problems can persist for years after the original diagnosis or cardiovascular event. In managing illnesses that persist over time (other examples might include cancer, diabetes, and long-term disability), many psychological factors will determine the progression of the ailment. For example, do patients seek help when appropriate? Do they follow doctor recommendations? Do they develop negative psychological symptoms due to lasting illness (e.g., depression)? Also important is that psychological factors can play a significant role in who develops these diseases, the prognosis, and the nature of the symptoms related to the illness.

Health psychology is a relatively new, interdisciplinary field of study that focuses on these very issues, or more specifically, the role of psychology in maintaining health, as well as in preventing and treating illness. Consideration of how psychological and social factors influence health is especially important today because many of the leading causes of illness in developed countries are often attributed to psychological and behavioral factors. In the case of CHD, discussed above, psychosocial factors such as excessive stress, smoking, unhealthy eating habits, and some personality traits can lead to increased risk of disease and worse health outcomes. That being said, many of these factors can be adjusted using psychological techniques. For example, clinical health psychologists can improve health practices like poor dietary choices and smoking, they can teach important stress-reduction techniques, and they can help treat psychological disorders tied to poor health. Health psychology considers how the choices we make, the behaviors we engage in, and even the emotions we feel can play an important role in our overall health (Cohen & Herbert, 1996; Taylor, 2012).

Health psychology relies on the Biopsychosocial Model of Health. This model posits that psychological and social factors are just as important in the development of disease as biological causes (e.g., germs, viruses), which is consistent with the World Health Organization (1946) definition of health.
This model replaces the older Biomedical Model of Health, which primarily considers the physical, or pathogenic, factors contributing to illness. Thanks to advances in medical technology, there is a growing understanding of the physiology underlying the mind–body connection, and in particular, the role that different feelings can have on our body's function. Health psychology researchers working in the fields of psychosomatic medicine and psychoneuroimmunology, for example, are interested in understanding how psychological factors can "get under the skin" and influence our physiology, in order to better understand how factors like stress can make us sick.

Stress And Health

You probably know exactly what it's like to feel stress, but what you may not know is that it can objectively influence your health. Answers to questions like "How stressed do you feel?" or "How overwhelmed do you feel?" can predict your likelihood of developing both minor illnesses and serious problems like a future heart attack (Cohen, Janicki-Deverts, & Miller, 2007). (Want to measure your own stress level? Check out the links at the end of the module.)

To understand how health psychologists study these types of associations, we will describe one famous example of a stress and health study. Imagine that you are a research subject for a moment. After you check into a hotel room as part of the study, the researchers ask you to report your general levels of stress. Not too surprising so far; what happens next, however, is that you receive droplets of cold virus into your nose! The researchers intentionally try to make you sick by exposing you to an infectious illness. After they expose you to the virus, the researchers evaluate you for several days by asking you questions about your symptoms, monitoring how much mucus you are producing by weighing your used tissues, and taking body fluid samples—all to see if you are objectively ill with a cold. Now, the interesting thing is that not everyone who has drops of cold virus put in their nose develops the illness. Studies like this one find that people who are less stressed, and those who are more positive at the beginning of the study, are at a decreased risk of developing a cold (Cohen, Tyrrell, & Smith, 1991; Cohen, Alper, Doyle, Treanor, & Turner, 2006) (see Figure 10.4.1 for an example).

Importantly, it is not just major life stressors (e.g., a family death, a natural disaster) that increase the likelihood of getting sick. Even small daily hassles, like getting stuck in traffic or fighting with your girlfriend, can raise your blood pressure, alter your stress hormones, and even suppress your immune system function (DeLongis, Folkman, & Lazarus, 1988; Twisk, Snel, Kemper, & van Mechelen, 1999).

It is clear that stress plays a major role in our mental and physical health, but what exactly is it? The term stress was originally derived from the field of mechanics, where it is used to describe materials under pressure. The word was first used in a psychological manner by researcher Hans Selye. He was examining the effect of an ovarian hormone that he thought caused sickness in a sample of rats. Surprisingly, he noticed that almost any injected hormone produced this same sickness. He astutely realized that it was not the hormone under investigation that was causing these problems, but instead the aversive experience of being handled and injected by researchers, which led to high physiological arousal and, eventually, to health problems like ulcers.
Selye (1946) coined the term stressor to label a stimulus that had this effect on the body and developed a model of the stress response called the General Adaptation Syndrome. Since then, psychologists have studied stress in a myriad of ways: as negative events (e.g., natural disasters or major life changes like dropping out of school), as chronically difficult situations (e.g., taking care of a loved one with Alzheimer's), as short-term hassles, as a biological fight-or-flight response, and even as clinical illness like post-traumatic stress disorder (PTSD). It continues to be one of the most important and well-studied psychological correlates of illness because excessive stress causes potentially damaging wear and tear on the body and can influence almost any imaginable disease process.

Protecting Our Health

An important question that health psychologists ask is: What keeps us protected from disease and alive longer? When considering this issue of resilience (Rutter, 1985), five factors are often studied in terms of their ability to protect (or sometimes harm) health. They are:

1. Coping
2. Control and Self-Efficacy
3. Social Relationships
4. Dispositions and Emotions
5. Stress Management

Coping Strategies

How individuals cope with the stressors they face can have a significant impact on health. Coping is often classified into two categories: problem-focused coping and emotion-focused coping (Carver, Scheier, & Weintraub, 1989). Problem-focused coping means actively addressing the event that is causing stress in an effort to solve the issue at hand. For example, say you have an important exam coming up next week. A problem-focused strategy might be to spend additional time over the weekend studying to make sure you understand all of the material. Emotion-focused coping, on the other hand, regulates the emotions that come with stress. In the above examination example, this might mean watching a funny movie to take your mind off the anxiety you are feeling. In the short term, emotion-focused coping might reduce feelings of stress, but problem-focused coping seems to have the greatest impact on mental wellness (Billings & Moos, 1981; Herman-Stabl, Stemmler, & Petersen, 1995). That being said, when events are uncontrollable (e.g., the death of a loved one), emotion-focused coping directed at managing your feelings might, at first, be the better strategy. Therefore, it is always important to consider the match between the stressor and the coping strategy when evaluating the strategy's likely benefits.

Control and Self-Efficacy

Another factor tied to better health outcomes and an improved ability to cope with stress is the belief that you have control over a situation. For example, in one study in which participants were forced to listen to unpleasant (stressful) noise, those who were led to believe that they had control over the noise performed much better on proofreading tasks afterwards (Glass & Singer, 1972). In other words, even though participants did not have actual control over the noise, the control belief aided them in completing the task. In similar studies, perceived control benefited immune system functioning (Sieber et al., 1992).
Outside of the laboratory, studies have shown that older residents in assisted living facilities, which are notorious for low control, lived longer and showed better health outcomes when given control over something as simple as watering a plant or choosing when student volunteers came to visit (Rodin & Langer, 1977; Schulz & Hanusa, 1978). In addition, feeling in control of a threatening situation can actually change stress hormone levels (Dickerson & Kemeny, 2004). Believing that you have control over your own behaviors can also have a positive influence on important outcomes like smoking cessation, contraception use, and weight management (Wallston & Wallston, 1978). When individuals do not believe they have control, they do not try to change. Self-efficacy is closely related to control, in that people with high levels of this trait believe they can complete tasks and reach their goals. Just as feeling in control can reduce stress and improve health, higher self-efficacy can reduce stress and negative health behaviors, and is associated with better health (O'Leary, 1985).

Social Relationships

Research has shown that the impact of social isolation on our risk for disease and death is similar in magnitude to the risk associated with smoking regularly (Holt-Lunstad, Smith, & Layton, 2010; House, Landis, & Umberson, 1988). In fact, the importance of social relationships for our health is so significant that some scientists believe our body has developed a physiological system that encourages us to seek out our relationships, especially in times of stress (Taylor et al., 2000). Social integration is the concept used to describe the number of social roles that you have (Cohen & Wills, 1985), as well as the lack of isolation. For example, you might be a daughter, a basketball team member, a Humane Society volunteer, a coworker, and a student. Maintaining these different roles can improve your health via encouragement from those around you to maintain a healthy lifestyle. Those in your social network might also provide you with social support (e.g., when you are under stress). This support might include emotional help (e.g., a hug when you need it), tangible help (e.g., lending you money), or advice. By helping to improve health behaviors and reduce stress, social relationships can have a powerful, protective impact on health, and in some cases, might even help people with serious illnesses stay alive longer (Spiegel, Kraemer, Bloom, & Gottheil, 1989).

Dispositions and Emotions: What's Risky and What's Protective?

Negative dispositions and personality traits have been strongly tied to an array of health risks. One of the earliest negative trait-to-health connections was discovered in the 1950s by two cardiologists. They made the interesting discovery that there were common behavioral and psychological patterns among their heart patients that were not present in other patient samples. This pattern included being competitive, impatient, hostile, and time urgent. They labeled it Type A Behavior. Importantly, it was found to be associated with double the risk of heart disease as compared with Type B Behavior (Friedman & Rosenman, 1959). Since the 1950s, researchers have discovered that it is the hostility and competitiveness components of Type A that are especially harmful to heart health (Iribarren et al., 2000; Matthews, Glass, Rosenman, & Bortner, 1977; Miller, Smith, Turner, Guijarro, & Hallet, 1996).
Hostile individuals are quick to get upset, and this angry arousal can damage the arteries of the heart. In addition, given their negative personality style, hostile people often lack a health-protective supportive social network.

Positive traits and states, on the other hand, are often health protective. For example, characteristics like positive emotions (e.g., feeling happy or excited) have been tied to a wide range of benefits, such as increased longevity, a reduced likelihood of developing some illnesses, and better outcomes once you are diagnosed with certain diseases (e.g., heart disease, HIV) (Pressman & Cohen, 2005). Across the world, even in the poorest and least developed nations, positive emotions are consistently tied to better health (Pressman, Gallagher, & Lopez, 2013). Positive emotions can also serve as the "antidote" to stress, protecting us against some of its damaging effects (Fredrickson, 2001; Pressman & Cohen, 2005; see Figure 10.4.2). Similarly, looking on the bright side can also improve health: optimism has been shown to improve coping, reduce stress, and predict better disease outcomes, such as recovering from a heart attack more rapidly (Kubzansky, Sparrow, Vokonas, & Kawachi, 2001; Nes & Segerstrom, 2006; Scheier & Carver, 1985; Segerstrom, Taylor, Kemeny, & Fahey, 1998).

Stress Management

About 20 percent of Americans report experiencing extreme stress, with 18–33 year-olds reporting the highest levels (American Psychological Association, 2012). Given that the sources of our stress are often difficult to change (e.g., personal finances, current job), a number of interventions have been designed to help reduce aversive responses to stress. For example, relaxation activities and forms of meditation are techniques that allow individuals to reduce their stress via breathing exercises, muscle relaxation, and mental imagery. Physiological arousal from stress can also be reduced via biofeedback, a technique in which the individual is shown bodily information that is not normally available to them (e.g., heart rate) and then taught strategies to alter this signal. This type of intervention has even shown promise in reducing heart disease and hypertension risk, as well as other serious conditions (e.g., Moravec, 2008; Patel, Marmot, & Terry, 1981). But reducing stress does not have to be complicated! For example, exercise is a great stress-reduction activity (Salmon, 2001) that has a myriad of health benefits.

The Importance Of Good Health Practices

As a student, you probably strive to maintain good grades, to have an active social life, and to stay healthy (e.g., by getting enough sleep), but there is a popular joke about what it's like to be in college: you can only pick two of these things (see Figure 10.4.3 for an example). The busy life of a college student doesn't always allow you to maintain all three areas of your life, especially during test-taking periods. In one study, researchers found that students taking exams were more stressed and, thus, smoked more, drank more caffeine, had less physical activity, and had worse sleep habits (Oaten & Chang, 2005), all of which could have detrimental effects on their health. Positive health practices are especially important in times of stress, when your immune system is compromised due to high stress and the elevated frequency of exposure to the illnesses of your fellow students in lecture halls, cafeterias, and dorms.

Psychologists study both health behaviors and health habits. The former are behaviors that can improve or harm your health.
Some examples include regular exercise, flossing, and wearing sunscreen, versus negative behaviors like drunk driving, pulling all-nighters, or smoking. These behaviors become habits when they are firmly established and performed automatically. For example, do you have to think about putting your seatbelt on, or do you do it automatically? Habits are often developed early in life thanks to parental encouragement or the influence of our peer group. While these behaviors sound minor, studies have shown that those who engaged in more of these protective habits (e.g., getting 7–8 hours of sleep regularly, not smoking or drinking excessively, exercising) had fewer illnesses, felt better, and were less likely to die over a 9–12-year follow-up period (Belloc & Breslow, 1972; Breslow & Enstrom, 1980). For college students, health behaviors can even influence academic performance. For example, poor sleep quality and quantity are related to weaker learning capacity and academic performance (Curcio, Ferrara, & De Gennaro, 2006). Because of the effects that health behaviors can have, much effort is put forward by psychologists to understand how to change unhealthy behaviors, and to understand why individuals fail to act in healthy ways. Health promotion involves enabling individuals to improve their health by focusing on behaviors that pose a risk for future illness, as well as by spreading knowledge of existing risk factors. These might be genetic risks you are born with, or something you developed over time, like obesity, which puts you at risk for Type 2 diabetes and heart disease, among other illnesses.

Psychology And Medicine

There are many psychological factors that influence medical treatment outcomes. For example, older individuals (Meara, White, & Cutler, 2004), women (Briscoe, 1987), and those from higher socioeconomic backgrounds (Adamson, Ben-Shlomo, Chaturvedi, & Donovan, 2008) are all more likely to seek medical care. On the other hand, some individuals who need care might avoid it due to financial obstacles or preconceived notions about medical practitioners or the illness. Thanks to the growing amount of medical information online, many people now use the Internet for health information, and 38 percent report that this influences their decision to see a doctor (Fox & Jones, 2009). Unfortunately, this is not always a good thing, because individuals tend to do a poor job of assessing the credibility of health information. For example, college-student participants reading online articles about HIV and syphilis rated a physician's article and a college student's article as equally credible if the participants said they were familiar with the health topic (Eastin, 2001). Credibility of health information refers to how accurate or trustworthy the information is, and judgments of credibility can be influenced by irrelevant factors, such as the website's design, logos, or the organization's contact information (Freeman & Spyridakis, 2004). Similarly, many people post health questions on online, unmoderated forums where anyone can respond, which allows for the possibility of inaccurate information being provided for serious medical conditions by unqualified individuals.

After individuals decide to seek care, there is also variability in the information they give their medical provider. Poor communication (e.g., due to embarrassment or feeling rushed) can influence the accuracy of the diagnosis and the effectiveness of the prescribed treatment. Similarly, there is variation following a visit to the doctor.
While most individuals leave a doctor's visit with a health recommendation (e.g., buying and using a medication appropriately, losing weight, going to another expert), not everyone adheres to medical recommendations (Dunbar-Jacob & Mortimer-Stephens, 2010). For example, many individuals take medications inappropriately (e.g., stopping early, not filling prescriptions) or fail to change their behaviors (e.g., quitting smoking). Unfortunately, getting patients to follow medical orders is not as easy as one would think. For example, in one study, over one third of diabetic patients failed to get proper medical care that would prevent or slow down diabetes-related blindness (Schoenfeld, Greene, Wu, & Leske, 2001)! Fortunately, as mobile technology improves, physicians now have the ability to monitor adherence and work to improve it (e.g., with pill bottles that monitor whether they are opened at the right time). Even text messages are useful for improving treatment adherence and outcomes in depression, smoking cessation, and weight loss (Cole-Lewis & Kershaw, 2010).

Being A Health Psychologist

Training as a clinical health psychologist provides a variety of possible career options. Clinical health psychologists often work on teams of physicians, social workers, allied health professionals, and religious leaders. These teams may be formed in locations like rehabilitation centers, hospitals, primary care offices, emergency care centers, or chronic illness clinics. Work in each of these settings will pose unique challenges in patient care, but the primary responsibility will be the same: clinical health psychologists will evaluate physical, personal, and environmental factors contributing to illness and preventing improved health. In doing so, they will then help create a treatment strategy that takes into account all dimensions of a person's life and health, which maximizes its potential for success. Those who specialize in health psychology can also conduct research to discover new health predictors and risk factors, or develop interventions to prevent and treat illness. Researchers studying health psychology work in numerous locations, such as universities, public health departments, hospitals, and private organizations. In the related field of behavioral medicine, careers focus on the application of this type of research. Occupations in this area might include jobs in occupational therapy, rehabilitation, or preventative medicine. Training as a health psychologist provides a wide skill set applicable in a number of different professional settings and career paths.

The Future Of Health Psychology

Much of the past medical research literature provides an incomplete picture of human health. "Health care" is often "illness care." That is, it focuses on the management of symptoms and illnesses as they arise. As a result, in many developed countries, we are faced with several health epidemics that are difficult and costly to treat. These include obesity, diabetes, and cardiovascular disease, to name a few. The National Institutes of Health have called for researchers to use the knowledge we have about risk factors to design effective interventions to reduce the prevalence of preventable illness. Additionally, there are a growing number of individuals across developed countries with multiple chronic illnesses and/or lasting disabilities, especially with older age. Addressing their needs and maintaining their quality of life will require skilled individuals who understand how to properly treat these populations.
Health psychologists will be on the forefront of work in these areas. With this focus on prevention, it is important that health psychologists move beyond studying risk (e.g., depression, stress, hostility, low socioeconomic status) in isolation, and move toward studying factors that confer resilience and protection from disease. There is, fortunately, a growing interest in studying the positive factors that protect our health (e.g., Diener & Chan, 2011; Pressman & Cohen, 2005; Richman, Kubzansky, Maselko, Kawachi, Choo, & Bauer, 2005), with evidence strongly indicating that people with higher positivity live longer, suffer fewer illnesses, and generally feel better. Seligman (2008) has even proposed a field of "Positive Health" to specifically study those who exhibit "above average" health—something we do not think about enough. By shifting some of the research focus to identifying and understanding these health-promoting factors, we may capitalize on this information to improve public health.

Innovative interventions to improve health are already in use and continue to be studied. With recent advances in technology, we are starting to see great strides made to improve health with the aid of computational tools. For example, there are hundreds of simple applications (apps) that use email and text messages to send reminders to take medication, as well as mobile apps that allow us to monitor our exercise levels and food intake (in the growing mobile-health, or m-health, field). These m-health applications can be used to raise health awareness, support treatment and compliance, and remotely collect data on a variety of outcomes. Also exciting are devices that allow us to monitor physiology in real time; for example, to better understand the stressful situations that raise blood pressure or heart rate. With advances like these, health psychologists will be able to serve the population better, learn more about health and health behavior, and develop excellent health-improving strategies that could be specifically targeted to certain populations or individuals. These leaps in equipment development, partnered with growing health psychology knowledge and exciting advances in neuroscience and genetic research, will lead health researchers and practitioners into an exciting new time where, hopefully, we will understand more and more about how to keep people healthy.

Outside Resources

App: 30 iPhone apps to monitor your health
http://www.hongkiat.com/blog/iphone-health-app/
Quiz: Hostility
http://www.mhhe.com/socscience/hhp/f...sheet_090.html
Self-assessment: Perceived Stress Scale
www.ncsu.edu/assessment/resou...ress_scale.pdf
Self-assessment: What's your real age (based on your health practices and risk factors)?
http://www.realage.com
Video: Try out a guided meditation exercise to reduce your stress
Web: American Psychosomatic Society
http://www.psychosomatic.org/home/index.cfm
Web: APA Division 38, Health Psychology
http://www.health-psych.org
Web: Society of Behavioral Medicine
http://www.sbm.org

Discussion Questions

1. What psychological factors contribute to health?
2. Which psychosocial constructs and behaviors might help protect us from the damaging effects of stress?
3. What kinds of interventions might help to improve resilience? Who will these interventions help the most?
4. How should doctors use research in health psychology when meeting with patients?
5. Why do clinical health psychologists play a critical role in improving public health?
Vocabulary

Adherence
In health, the ability of a patient to maintain a health behavior prescribed by a physician. This might include taking medication as prescribed, exercising more, or eating less high-fat food.

Behavioral medicine
A field similar to health psychology that integrates psychological factors (e.g., emotion, behavior, cognition, and social factors) in the treatment of disease. This applied field includes clinical areas of study such as occupational therapy, hypnosis, rehabilitation medicine, and preventative medicine.

Biofeedback
The process by which physiological signals, not normally available to human perception, are transformed into easy-to-understand graphs or numbers. Individuals can then use this information to try to change bodily functioning (e.g., lower blood pressure, reduce muscle tension).

Biomedical Model of Health
A reductionist model that posits that ill health is a result of a deviation from normal function, which is explained by the presence of pathogens, injury, or genetic abnormality.

Biopsychosocial Model of Health
An approach to studying health and human function that posits the importance of biological, psychological, and social (or environmental) processes.

Chronic disease
A health condition that persists over time, typically for periods longer than three months (e.g., HIV, asthma, diabetes).

Control
Feeling like you have the power to change your environment or behavior if you need or want to.

Daily hassles
Irritations in daily life that are not necessarily traumatic but that cause difficulties and repeated stress.

Emotion-focused coping
A coping strategy aimed at reducing the negative emotions associated with a stressful event.

General Adaptation Syndrome
A three-phase model of stress, which includes a mobilization-of-physiological-resources phase, a coping phase, and an exhaustion phase (i.e., when an organism fails to cope with the stress adequately and depletes its resources).

Health
According to the World Health Organization, a complete state of physical, mental, and social well-being, and not merely the absence of disease or infirmity.

Health behavior
Any behavior that is related to health—either good or bad.

Hostility
An experience or trait with cognitive, behavioral, and emotional components. It often includes cynical thoughts, angry feelings, and aggressive behavior.

Mind–body connection
The idea that our emotions and thoughts can affect how our body functions.

Problem-focused coping
A set of coping strategies aimed at improving or changing stressful situations.

Psychoneuroimmunology
A field of study examining the relationships among psychology, brain function, and immune function.

Psychosomatic medicine
An interdisciplinary field of study that focuses on how biological, psychological, and social processes contribute to physiological changes in the body and health over time.

Resilience
The ability to "bounce back" from negative situations (e.g., illness, stress) to normal functioning, or to simply not show poor outcomes in the face of adversity. In some cases, resilience may lead to better functioning following the negative experience (e.g., post-traumatic growth).

Self-efficacy
The belief that one can perform adequately in a specific situation.

Social integration
The size of your social network, or number of social roles (e.g., son, sister, student, employee, team member).
Social support
The perception or actuality that we have a social network that can help us in times of need and provide us with a variety of useful resources (e.g., advice, love, money).

Stress
A pattern of physical and psychological responses in an organism after it perceives a threatening event that disturbs its homeostasis and taxes its abilities to cope with the event.

Stressor
An event or stimulus that induces feelings of stress.

Type A Behavior
Behavior characterized by impatience, competitiveness, neuroticism, hostility, and anger.

Type B Behavior
Behavior that reflects the absence of Type A characteristics and is represented by less competitive, aggressive, and hostile behavior patterns.
• 11.1: An Introduction to the Science of Social Psychology
The science of social psychology investigates the ways other people affect our thoughts, feelings, and behaviors. It is an exciting field of study because it is so familiar and relevant to our day-to-day lives. Social psychologists study a wide range of topics that can roughly be grouped into 5 categories: attraction, attitudes, peace & conflict, social influence, and social cognition.

• 11.2: The Psychology of Groups
Each of us is an autonomous individual seeking our own objectives, yet we are also members of groups—groups that constrain us, guide us, and sustain us. Just as each of us influences the group and the people in the group, so, too, do groups change each one of us. Joining groups satisfies our need to belong, gain information and understanding through social comparison, define our sense of self and social identity, and achieve goals that might elude us if we worked alone.

• 11.3: Culture
Although the most visible elements of culture are dress, cuisine, and architecture, culture is a highly psychological phenomenon. Culture is a pattern of meaning for understanding how the world works. This knowledge is shared among a group of people and passed from one generation to the next. This module defines culture, addresses methodological issues, and introduces the idea that culture is a process. Understanding cultural processes can help people get along better with others.

• 11.4: Research Methods in Social Psychology
To explore these concepts requires special research methods. Following a brief overview of traditional research designs, this module introduces how complex experimental designs, field experiments, naturalistic observation, experience sampling techniques, survey research, subtle and nonconscious techniques such as priming, and archival research and the use of big data may each be adapted to address social psychological questions.

• 11.5: Social Cognition and Attitudes
Social cognition is the area of social psychology that examines how people perceive and think about their social world. This module provides an overview of key topics within social cognition and attitudes, including judgmental heuristics, social prediction, affective and motivational influences on judgment, and explicit and implicit attitudes.

• 11.6: Cooperation
Humans are social animals. This means we work together in groups to achieve goals that benefit everyone. From building skyscrapers to delivering packages to remote island nations, modern life requires that people cooperate with one another. However, people are also motivated by self-interest, which often stands as an obstacle to effective cooperation. This module explores the concept of cooperation and the processes that both help and hinder it.

• 11.7: The Family
Each and every one of us has a family. However, these families exist in many variations around the world. In this module, we discuss definitions of family, family forms, the developmental trajectory of families, and commonly used theories to understand families. We also cover factors that influence families, such as culture and societal expectations, while incorporating the latest family-relevant statistics.

• 11.8: Love, Friendship, and Social Support
Friendship and love, and more broadly, the relationships that people cultivate in their lives, are some of the most valuable treasures a person can own.
This module explores ways in which we try to understand how friendships form, what attracts one person to another, and how love develops. It also explores how the Internet influences how we meet people and develop deep relationships. Finally, this module will examine social support and how this can help many through the hardest times.

• 11.9: Relationships and Well-being
The relationships we cultivate in our lives are essential to our well-being—namely, happiness and health. Why is that so? We begin to answer this question by exploring the types of relationships—family, friends, colleagues, and lovers—we have in our lives and how they are measured. We also explore the different aspects of happiness and health, and show how the quantity and quality of relationships can affect our happiness and health.

• 11.10: Positive Relationships
Most research in the realm of relationships has examined that which can go wrong in relationships (e.g., conflict, infidelity, intimate partner violence). I summarize much of what has been examined about what goes right in a relationship and call these positive relationship deposits. Some research indicates that relationships need five positive interactions for every negative interaction.

Thumbnail: The Scream by Edvard Munch.

Chapter 11: Social Part I

By Robert Biswas-Diener
Portland State University

The science of social psychology investigates the ways other people affect our thoughts, feelings, and behaviors. It is an exciting field of study because it is so familiar and relevant to our day-to-day lives. Social psychologists study a wide range of topics that can roughly be grouped into 5 categories: attraction, attitudes, peace & conflict, social influence, and social cognition.

learning objectives

• Define social psychology and understand how it is different from other areas of psychology.
• Understand "levels of analysis" and why this concept is important to science.
• List at least three major areas of study in social psychology.
• Define the "need to belong".

Introduction

We live in a world where, increasingly, people of all backgrounds have smart phones. In economically developing societies, cellular towers are often less expensive to install than traditional landlines. In many households in industrialized societies, each person has his or her own mobile phone instead of using a shared home phone. As this technology becomes increasingly common, curious researchers have wondered what effect phones might have on relationships. Do you believe that smart phones help foster closer relationships? Or do you believe that smart phones can hinder connections? In a series of studies, researchers have discovered that the mere presence of a mobile phone lying on a table can interfere with relationships. In studies of conversations between both strangers and close friends—conversations occurring in research laboratories and in coffee shops—mobile phones appeared to distract people from connecting with one another. The participants in these studies reported lower conversation quality, lower trust, and lower levels of empathy for the other person (Przybylski & Weinstein, 2013). This is not to discount the usefulness of mobile phones, of course. It is merely a reminder that they are better used in some situations than in others. It is also a real-world example of how social psychology can help produce insights about the ways we understand and interact with one another.
Social psychology is the branch of psychological science mainly concerned with understanding how the presence of others affects our thoughts, feelings, and behaviors. Just as clinical psychology focuses on mental disorders and their treatment, and developmental psychology investigates the way people change across their lifespan, social psychology has its own focus. As the name suggests, this science is all about investigating the ways groups function, the costs and benefits of social status, the influences of culture, and all the other psychological processes involving two or more people.

Social psychology is such an exciting science precisely because it tackles issues that are so familiar and so relevant to our everyday life. Humans are "social animals." Like bees and deer, we live together in groups. Unlike those animals, however, people are unique in that we care a great deal about our relationships. In fact, a classic study of life stress found that the most stressful events in a person's life—the death of a spouse, divorce, and going to jail—are so painful because they entail the loss of relationships (Holmes & Rahe, 1967). We spend a huge amount of time thinking about and interacting with other people, and researchers are interested in understanding these thoughts and actions. Giving up a seat on the bus for another person is an example of social psychology. So is disliking a person because he is wearing a shirt with the logo of a rival sports team. Flirting, conforming, arguing, trusting, competing—these are all examples of topics that interest social psychology researchers.

At times, science can seem abstract and far removed from the concerns of daily life. When neuroscientists discuss the workings of the anterior cingulate cortex, for example, it might sound important. But the specific parts of the brain and their functions do not always seem directly connected to the stuff you care about: parking tickets, holding hands, or getting a job. Social psychology feels so close to home because it often deals with universal psychological processes to which people can easily relate. For example, people have a powerful need to belong (Baumeister & Leary, 1995). It doesn't matter if a person is from Israel, Mexico, or the Philippines; we all have a strong need to make friends, start families, and spend time together. We fulfill this need by doing things such as joining teams and clubs, wearing clothing that represents "our group," and identifying ourselves based on national or religious affiliation. It feels good to belong to a group. Research supports this idea. In a study of the most and least happy people, the differentiating factor was not gender, income, or religion; it was having high-quality relationships (Diener & Seligman, 2002). Even introverts report being happier when they are in social situations (Pavot, Diener, & Fujita, 1990). Further evidence can be found by looking at the negative psychological experiences of people who do not feel they belong. People who feel lonely or isolated are more vulnerable to depression and problems with physical health (Cacioppo & Patrick, 2008).

Social Psychology is a Science

The need to belong is also a useful example of the ways the various aspects of psychology fit together. Psychology is a science that can be sub-divided into specialties such as "abnormal psychology" (the study of mental illness) or "developmental psychology" (the study of how people develop across the life span).
In daily life, however, we don’t stop and examine our thoughts or behaviors as being distinctly social versus developmental versus personality-based versus clinical. In daily life, these all blend together. For example, the need to belong is rooted in developmental psychology. Developmental psychologists have long paid attention to the importance of attaching to a caregiver, feeling safe and supported during childhood, and the tendency to conform to peer pressure during adolescence. Similarly, clinical psychologists—those who research mental disorders—have pointed to people feeling a lack of belonging to help explain loneliness, depression, and other psychological pains. In practice, psychologists separate concepts into categories such as “clinical,” “developmental,” and “social” only out of scientific necessity. It is easier to simplify thoughts, feelings, and behaviors in order to study them. Each psychological sub-discipline has its own unique approaches to research. You may have noticed that this is almost always how psychology is taught, as well. You take a course in personality, another in human sexuality, and a third in gender studies, as if these topics are unrelated. In day-to-day life, however, these distinctions do not actually exist, and there is heavy overlap between the various areas of psychology. In psychology, there are varying levels of analysis. Figure 11.1.1 summarizes the different levels at which scientists might understand a single event. Take the example of a toddler watching her mother make a phone call: the toddler is curious and is using observational learning to teach herself about this machine called a telephone. At the most specific levels of analysis, we might understand that various neurochemical processes are occurring in the toddler’s brain. We might be able to use imaging techniques to see that the cerebellum, among other parts of the brain, is activated with electrical energy. If we could “pull back” our scientific lens, we might also be able to gain insight into the toddler’s own experience of the phone call. She might be confused, interested, or jealous. Moving up to the next level of analysis, we might notice a change in the toddler’s behavior: during the call she furrows her brow, squints her eyes, and stares at her mother and the phone. She might even reach out and grab at the phone. At still another level of analysis, we could see the ways that her relationships enter into the equation. We might observe, for instance, that the toddler frowns and grabs at the phone when her mother uses it, but plays happily and ignores it when her stepbrother makes a call. All of these chemical, emotional, behavioral, and social processes occur simultaneously. None of them is the objective truth. Instead, each offers clues into better understanding what, psychologically speaking, is happening. Social psychologists attend to all levels of analysis, but—historically—this branch of psychology has emphasized the higher levels of analysis. Researchers in this field are drawn to questions related to relationships, groups, and culture. This means that they frame their research hypotheses in these terms. Imagine for a moment that you are a social researcher. In your daily life, you notice that older men, on average, seem to talk about their feelings less than do younger men. You might want to explore your hypothesis by recording natural conversations between males of different ages. This would allow you to see if there was evidence supporting your original observation.
It would also allow you to begin to sift through all the factors that might influence this phenomenon: What happens when an older man talks to a younger man? What happens when an older man talks to a stranger versus his best friend? What happens when two highly educated men interact versus two working-class men? Exploring each of these questions focuses on interactions, behavior, and culture rather than on perceptions, hormones, or DNA. In part, this focus on complex relationships and interactions is one of the things that makes research in social psychology so difficult. High-quality research often involves the ability to control the environment, as in the case of laboratory experiments. The research laboratory, however, is artificial, and what happens there may not translate to the more natural circumstances of life. This is why social psychologists have developed their own set of unique methods for studying attitudes and social behavior. For example, they use naturalistic observation to see how people behave when they don’t know they are being watched. Whereas people in the laboratory might report that they personally hold no racist views or opinions (biases most people wouldn’t readily admit to), if you were to observe how close they sit next to people of other ethnicities while riding the bus, you might discover a behavioral clue to their actual attitudes and preferences. What is Included in Social Psychology? Social psychology is the study of group processes: how we behave in groups, and how we feel and think about one another. While it is difficult to summarize the many areas of social psychology research, it can be helpful to lump them into major categories as a starting point for wrapping our minds around the field. There is, in reality, no specific number of definitive categories, but for the purpose of illustration, let’s use five. Most social psychology research topics fall into one (and sometimes more) of these areas: Attraction A large amount of research in social psychology has focused on the process of attraction. Think about a young adult going off to college for the first time. He takes an art history course and sits next to a young woman he finds attractive. This feeling raises several interesting questions: Where does the attraction come from? Is it biological or learned? Why do his standards for beauty differ somewhat from those of his best friend? The study of attraction covers a huge range of topics. It can begin with first impressions, then extend to courtship and commitment. It involves the concepts of beauty, sex, and evolution. Attraction researchers might study stalking behavior. They might research divorce or remarriage. They might study changing standards of beauty across decades. In a series of studies focusing on the topic of attraction, researchers were curious about how people judge whether the faces of their friends and of strangers are good looking (Wirtz, Biswas-Diener, Diener, & Drogos, 2011). To do this, the researchers showed a set of photographs of faces of young men and women to several assistants who were blind to the research hypothesis. Some of the people in the photos were Caucasian, some were African-American, and some were Maasai, a tribe of traditional people from Kenya. The assistants were asked to rate the various facial features in the photos, including skin smoothness, eye size, prominence of cheekbones, symmetry (how similar the left and the right halves of the face are), and other characteristics.
The photos were then shown to the research participants—of the same three ethnicities as the people in the photos—who were asked to rate the faces for overall attractiveness. Interestingly, when rating the faces of strangers, Caucasians, Maasai, and African-Americans were in general agreement about which faces were better looking. Not only that, but there was high consistency in which specific facial features were associated with being good looking. For instance, across ethnicities and cultures, everyone seemed to find smooth skin more attractive than blemished skin. Everyone also seemed to agree that larger chins made men more attractive, but not women. Then came an interesting discovery. The researchers found that Maasai tribal people agreed about the faces of strangers—but not about the faces of people they knew! Two people might look at the same photo of someone they knew; one would give it a thumbs up for attractiveness, the other, not so much. It appeared that friends were using some standard of beauty other than simply nose, eyes, skin, and other facial features. To explore this further, the researchers conducted a second study in the United States. They brought university students into their laboratory in pairs. The members of each pair were friends; some pairs were same-sex friends and some were opposite-sex friends. They had their photographs taken and were then asked to privately rate each other’s attractiveness, along with photos of other participants whom they did not know (strangers). Friends were also asked to rate each other on personality traits, including “admirable,” “generous,” “likable,” “outgoing,” “sensitive,” and “warm.” In doing this, the researchers discovered two things. First, they found the exact same pattern as in the earlier study: when the university students rated strangers, they focused on actual facial features, such as skin smoothness and large eyes, to make their judgments (whether or not they realized it). But when it came to the hotness factor of their friends, these features appeared not to be very important. Suddenly, likable personality characteristics were a better predictor of who was considered good looking. This makes sense. Attractiveness is, in part, an evolutionary and biological process. Certain features such as smooth skin are signals of health and reproductive fitness—something especially important when scoping out strangers. Once we know a person, however, it is possible to swap those biological criteria for psychological ones. People tend to be attracted not just to muscles and symmetrical faces but also to kindness and generosity. As more information about a person’s personality becomes available, it becomes the most important aspect of that person’s attractiveness. Understanding how attraction works is more than an intellectual exercise; it can also lead to better interventions. Insights from studies on attraction can find their way into public policy conversations, couples therapy, and sex education programs. Attitudes Social psychology shares with its intellectual cousins sociology and political science an interest in attitudes. Attitudes are opinions, feelings, and beliefs about a person, concept, or group. People hold attitudes about all types of things: the films they see, political issues, and what constitutes a good date. Social psychology researchers are interested in what attitudes people hold, where these attitudes come from, and how they change over time.
Researchers are especially interested in social attitudes people hold about categories of people, such as the elderly, military veterans, or people with mental disabilities. Among the most studied topics in attitude research are stereotyping and prejudice. Although people often use these words interchangeably, they are actually different concepts. Stereotyping is a way of using information shortcuts about a group to effectively navigate social situations or make decisions. For instance, you might hold a stereotype that elderly people are physically slower and frailer than twenty-year-olds. If so, you are likely to approach interactions with the elderly in a different manner than interactions with younger people. Although you might delight in jumping on your friend’s back, punching a buddy in the arm, or jumping out and scaring a friend, you probably do not engage in these behaviors with the elderly. Stereotypical information may or may not be correct. Also, stereotypical information may be positive or negative. Regardless of accuracy, all people use stereotypes, because they are efficient and inescapable ways to deal with huge amounts of social information. It is important to keep in mind, however, that stereotypes, even if they are correct in general, likely do not apply to every member of the group. As a result, it can seem unfair to judge an individual based on perceived group norms. Prejudice, on the other hand, refers to how a person feels about an individual based on his or her group membership. For example, someone with a prejudice against tattoos may feel uncomfortable sitting on the metro next to a young man with multiple, visible tattoos. In this case, the person is pre-judging the man with tattoos based on group membership (people with tattoos) rather than getting to know the man as an individual. Like stereotypes, prejudice can be positive or negative. Discrimination occurs when a person is biased against an individual simply because of the individual’s membership in a social category. For instance, if you were to learn that a person has gone to rehabilitation for alcohol treatment, it might be unfair to treat him or her as untrustworthy. You might hold a stereotype that people who have been involved with drugs are untrustworthy or that they have an arrest record. Discrimination occurs when you act on that stereotype by, for example, refusing to hire the person for a job for which he or she is otherwise qualified. Understanding the psychological mechanisms of problems like prejudice can be the first step in solving them. Social psychology focuses on basic processes, but also on applications. That is, researchers are interested in making the world a better place, so they look for ways to put their discoveries into constructive practice. This can be clearly seen in studies on attitude change. In such experiments, researchers are interested in how people can overcome negative attitudes and feel more empathy towards members of other groups. Take, for example, a study by Daniel Batson and his colleagues (1997) on attitudes about people from stigmatized groups. In particular, the researchers were curious how college students in their study felt about homeless people. They had students listen to a recording of a fictitious homeless man—Harold Mitchell—describing his life. Half of the participants were told to be objective and fair in their consideration of his story. The other half were instructed to try to see life through Harold’s eyes and imagine how he felt.
After the recording finished, the participants rated their attitudes toward homeless people in general. They rated their agreement with statements such as “Most homeless people could get a job if they wanted to,” or “Most homeless people choose to live that way.” It turns out that when people are instructed to have empathy—to try to see the world through another person’s eyes—they gain not only more empathy for that individual, but also for the group as a whole. In the Batson et al. (1997) experiment, the high-empathy participants reported a more favorable attitude toward homeless people than did the participants in the low-empathy (objective) condition. Studies like these are important because they reveal practical possibilities for creating a more positive society. In this case, the results tell us that it is possible for people to change their attitudes and look more favorably on people they might otherwise avoid or be prejudiced against. In fact, it appears that it takes relatively little—simply the effort to see another’s point of view—to nudge people toward being a bit kinder and more generous toward one another. In a world where religious and political divisions are highly publicized, this type of research might be an important step toward working together. Peace & Conflict Social psychologists are also interested in peace and conflict. They research conflicts ranging from the small—such as a spat between lovers—to the large—such as wars between nations. Researchers are interested in why people fight, how they fight, and what the possible costs and benefits of fighting are. In particular, social psychologists are interested in the mental processes associated with conflict and reconciliation. They want to understand how emotions, thoughts, and sense of identity play into conflicts, as well as into making up afterward. Take, for instance, a 1996 study by Dov Cohen and his colleagues. They were interested in people who come from a “culture of honor”—that is, a cultural background that emphasizes personal or family reputation and social status. Cohen and his colleagues realized that cultural forces influence why people take offense and how they behave when others offend them. To investigate how people from a culture of honor react to aggression, the Cohen research team invited dozens of university students into the laboratory, half of whom were from a culture of honor. In their experiment, they had a research confederate “accidentally” bump the research participant as they passed one another in the hallway, then quietly say “asshole.” They discovered that people from the Northern United States were likely to laugh off the incident with amusement (only 35% became angry), while 85% of people from the Southern United States—a culture-of-honor region—became angry. In a follow-up study, the researchers were curious whether this anger would boil over and lead people from cultures of honor to react more violently than others (Cohen, Nisbett, Bowdle, & Schwarz, 1996). In a cafeteria setting, the researchers “accidentally” knocked over the drinks of people from cultures of honor as well as the drinks of people not from honor cultures. As expected, the people from honor cultures became angrier; however, they did not act out more aggressively. Interestingly, in follow-up interviews, the people from cultures of honor said they would expect their peers—other people from their culture of honor—to act violently even though they, themselves, had not. This follow-up study provides insights into the links between emotions and social behavior.
It also sheds light on the ways that people perceive certain groups. This line of research is just a single example of how social psychologists study the forces that give rise to aggression and violence. Just as in the case of attitudes, a better understanding of these forces might help researchers, therapists, and policy makers intervene more effectively in conflicts. Social Influence Take a moment and think about television commercials. How influenced do you think you are by the ads you see? A very common perception voiced among psychology students is “Other people are influenced by ads, but not me!” To some degree, it is an unsettling thought that outside influences might sway us to spend money, make decisions, or even feel the way they want us to. Nevertheless, none of us can escape social influence. Perhaps more than any other topic, social influence is the heart and soul of social psychology. Our most famous studies deal with the ways that other people affect our behavior; they are studies on conformity—being persuaded to give up our own opinions and go along with the group—and obedience—following orders or requests from people in authority. Among the most researched topics is persuasion. Persuasion is the act of delivering a particular message so that it influences a person’s behavior in a desired way. Your friends try to persuade you to join their group for lunch. Your parents try to persuade you to go to college and to take your studies seriously. Doctors try to persuade you to eat a healthy diet or exercise more often. And, yes, advertisers try to persuade you also. They showcase their products in a way that makes them seem useful, affordable, reliable, or cool. One example of persuasion can be seen in a very common situation: tipping the serving staff at a restaurant. In some societies, especially in the United States, tipping is an important part of dining. As you probably know, servers hope to get a large tip in exchange for good service. One group of researchers was curious what servers do to coax diners into giving bigger tips. Occasionally, for instance, servers write a personal message of thanks on the bill. In a series of studies, the researchers were interested in how gift-giving would affect tipping. First, they had two male waiters in New York deliver a piece of foil-wrapped chocolate along with the bill at the end of the meal. Half of the 66 diners received the chocolate and the other half did not. When patrons were given the unexpected sweet, they tipped, on average, 2% more (Strohmetz, Rind, Fisher, & Lynn, 2002). In a follow-up study, the researchers changed the conditions. In this case, two female servers brought a small basket of assorted chocolates to the table (Strohmetz et al., 2002). In one research condition, they told diners they could pick two sweets; in a separate research condition, however, they told diners they could pick one sweet, but then—as the diners were getting ready to leave—the servers returned and offered them a second sweet. In both situations, the diners received the same number of sweets, but in the second condition the servers appeared to be more generous, as if they were making a personal decision to give an additional little gift. In both of these conditions the average tip went up, but tips increased a whopping 21% in the “very generous” condition. The researchers concluded that giving a small gift puts people in the frame of mind to give a little something back, a principle called reciprocity.
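The study’s key comparisons reduce to simple arithmetic on mean tip percentages across conditions. As a rough illustration only, here is a minimal Python sketch of how such a comparison could be computed; every (bill, tip) figure below is invented for demonstration, and only the three-condition design described above comes from the Strohmetz et al. (2002) study.

# Hypothetical illustration: comparing mean tip percentages across
# tipping-study conditions. The (bill, tip) pairs are invented data,
# not the actual results reported by Strohmetz et al. (2002).
conditions = {
    "no chocolate": [(40.00, 6.00), (55.00, 8.25), (32.00, 4.80)],
    "one chocolate": [(40.00, 6.40), (55.00, 8.80), (32.00, 5.20)],
    "second 'generous' chocolate": [(40.00, 7.60), (55.00, 10.45), (32.00, 6.20)],
}

def mean_tip_percent(checks):
    # Average tip as a percentage of the bill across (bill, tip) pairs.
    return 100 * sum(tip / bill for bill, tip in checks) / len(checks)

baseline = mean_tip_percent(conditions["no chocolate"])
for name, checks in conditions.items():
    pct = mean_tip_percent(checks)
    # Report each condition's mean tip percentage and its difference
    # from the no-gift baseline.
    print(f"{name}: mean tip {pct:.1f}% ({pct - baseline:+.1f} points vs. baseline)")

A real analysis would of course involve many more tables and statistical tests, but the core comparison the researchers drew is exactly this kind of condition-by-condition average.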
Research on persuasion is very useful. Although it is tempting to dismiss it as a mere attempt by advertisers to get you to purchase goods and services, persuasion is used for many purposes. For example, medical professionals often hope people will donate their organs after they die. Donated organs can be used to train medical students, advance scientific discovery, or save other people’s lives through transplantation. For years, doctors and researchers tried to persuade people to donate, but relatively few people did. Then, policy makers offered an organ donation option for people getting their driver’s license, and donations rose. When people received their license, they could tick a box that signed them up for the organ donation program. By coupling the decision to donate organs with a more common event—getting a license—policy makers were able to increase the number of donors. Then, they had the further idea of “nudging” people to donate—by making them “opt out” rather than “opt in.” Now, people are automatically signed up to donate organs unless they make the effort to check a box indicating they don’t want to. With organ donation as the default, more people have donated and more lives have been saved. This is a small but powerful example of how we can be persuaded to behave certain ways, often without even realizing what is influencing us. Social Cognition You, me, all of us—we spend much of our time thinking about other people. We make guesses as to their honesty, their motives, and their opinions. Social cognition is the term for the way we think about the social world and how we perceive others. In some sense, we are continually telling a story in our own minds about the people around us. We struggle to understand why a date failed to show up, whether we can trust the notes of a fellow student, or whether our friends are laughing at our jokes because we are funny or because they are just being nice. When we make educated guesses about the efforts or motives of others, this is called social attribution. We are “attributing” their behavior to a particular cause. For example, we might attribute the failure of a date to arrive on time to car trouble, forgetfulness, or the wrong-headed possibility that we are not worthy of being loved. Because the information we have regarding other people’s motives and behavior is not as complete as our insights into our own, we are likely to make unreliable judgments of them. Imagine, for example, that a person on the freeway speeds up behind you, follows dangerously close, then swerves around and passes you illegally. As the driver speeds off into the distance you might think to yourself, “What a jerk!” You are beginning to tell yourself a story about why that person behaved that way. Because you don’t have any information about his or her situation—rushing to the hospital, or escaping a bank robbery?—you default to judgments of character: clearly, that driver is impatient, aggressive, and downright rude. If you were to do the exact same thing, however—cut someone off on the freeway—you would be less likely to attribute the same behavior to poor character, and more likely to chalk it up to the situation. (Perhaps you were momentarily distracted by the radio.) This consistent tendency to attribute people’s actions to personality traits while overlooking situational influences is called the fundamental attribution error. The fundamental attribution error can also emerge in other ways, shaping how we judge groups we belong to versus opposing groups.
Imagine, for example, that you are a fan of rugby. Your favorite team is the All Blacks, from New Zealand. In one particular match, you notice how unsporting the opposing team is. They appear to pout and seem to commit an unusually high number of fouls. Their fouling behavior is clearly linked to their character; they are mean people! Yet, when a player from the All Blacks is called for a foul, you may be inclined to see it as a bad call by the referee or a product of the fact that your team is pressured by a tough schedule and a number of injuries to its star players. This mental process allows a person to maintain his or her own high self-esteem while dismissing the bad behavior of others. Conclusion People are more connected to one another today than at any time in history. For the first time, it is easy to have thousands of acquaintances on social media. It is easier than ever before to travel and meet people from different cultures. Businesses, schools, religious groups, political parties, and governments interact more than they ever have. For the first time, more people live clustered in cities than spread out across rural settings. These changes have psychological consequences. Over the last hundred years, we have seen dramatic shifts in political engagement, ethnic relations, and even the very definition of family itself. Social psychologists are scientists who are interested in understanding the ways we relate to one another, and the impact these relationships have on us, individually and collectively. Not only can social psychology research lead to a better understanding of personal relationships, but it can lead to practical solutions for many social ills. Lawmakers, teachers and parents, therapists, and policy makers can all use this science to help develop societies with less conflict and more social support. Outside Resources Web: A collection of links on the topic of peace psychology https://www.socialpsychology.org/peace.htm Web: A great resource for all things social psychology, all in one place - Social Psychology Network http://www.socialpsychology.org/ Web: A list of profiles of major historical figures in social psychology https://www.socialpsychology.org/social-figures.htm Web: A review of the history of social psychology as well as the topics of interest in the field https://en.Wikipedia.org/wiki/Social_psychology Web: A succinct review of major historical figures in social psychology http://www.simplypsychology.org/soci...sychology.html Web: An article on the definition and areas of influence of peace psychology https://en.Wikipedia.org/wiki/Peace_psychology Web: Article describing another way of conceptualizing levels of analysis in social psychology http://psych.colorado.edu/~oreilly/cecn/node11.html Web: Extended list of major historical figures in social psychology http://www.sparknotes.com/psychology...haracters.html Web: History and principles of social psychology https://opentextbc.ca/socialpsychology/chapter/defining-social-psychology-history-and-principles/ Web: Links to sources on history of social psychology as well as major historical figures https://www.socialpsychology.org/history.htm Web: The Society for the Study of Peace, Conflict and Violence http://www.peacepsych.org/ Discussion Questions 1. List the types of relationships you have. How do these people affect your behavior? Are there things you do that you might not otherwise do if it weren't for them? 2.
When you think about where each person in your psychology class sits, what influences the seat he or she chooses to use? Is it just a matter of personal preference, or are there other influences at work? 3. Do you ever try to persuade friends or family members to do something? How do you try to persuade them? How do they try to persuade you? Give specific examples. 4. If you were a social psychologist, what would you want to research? Why? How would you go about it?

Vocabulary

Attitude: A way of thinking or feeling about a target that is often reflected in a person’s behavior. Examples of attitude targets are individuals, concepts, and groups.
Attraction: The psychological process of being sexually interested in another person. This can include, for example, physical attraction, first impressions, and dating rituals.
Blind to the research hypothesis: When participants in research are not aware of what is being studied.
Conformity: Changing one’s attitude or behavior to match a perceived social norm.
Culture of honor: A culture in which personal or family reputation is especially important.
Discrimination: Behavior that advantages or disadvantages people merely based on their group membership.
Fundamental attribution error: The tendency to emphasize another person’s personality traits when describing that person’s motives and behaviors, and to overlook the influence of situational factors.
Hypothesis: A possible explanation that can be tested through research.
Levels of analysis: Complementary views for analyzing and understanding a phenomenon.
Need to belong: A strong natural impulse in humans to form social connections and to be accepted by others.
Obedience: Responding to an order or command from a person in a position of authority.
Observational learning: Learning by observing the behavior of others.
Prejudice: An evaluation or emotion toward people based merely on their group membership.
Reciprocity: The act of exchanging goods or services. By giving a person a gift, the principle of reciprocity can be used to influence others; they then feel obligated to give back.
Research confederate: A person working with a researcher, posing as a research participant or as a bystander.
Research participant: A person being studied as part of a research program.
Social attribution: The way a person explains the motives or behaviors of others.
Social cognition: The way people process and apply information about others.
Social influence: When one person causes a change in attitude or behavior in another person, whether intentionally or unintentionally.
Social psychology: The branch of psychological science that is mainly concerned with understanding how the presence of others affects our thoughts, feelings, and behaviors.
Stereotyping: A mental process of using information shortcuts about a group to effectively navigate social situations or make decisions.
Stigmatized group: A group that suffers from social disapproval based on some characteristic that sets it apart from the majority.
By Donelson R. Forsyth University of Richmond This module assumes that a thorough understanding of people requires a thorough understanding of groups. Each of us is an autonomous individual seeking our own objectives, yet we are also members of groups—groups that constrain us, guide us, and sustain us. Just as each of us influences the group and the people in the group, so, too, do groups change each one of us. Joining groups satisfies our need to belong, helps us gain information and understanding through social comparison, defines our sense of self and social identity, and allows us to achieve goals that might elude us if we worked alone. Groups are also practically significant, for much of the world’s work is done by groups rather than by individuals. Success sometimes eludes our groups, but when group members learn to work together as a cohesive team, their success becomes more certain. People also turn to groups when important decisions must be made, and this choice is justified as long as groups avoid such problems as group polarization and groupthink. learning objectives • Review the evidence that suggests humans have a fundamental need to belong to groups. • Compare the sociometer model of self-esteem to a more traditional view of self-esteem. • Use theories of social facilitation to predict when a group will perform tasks slowly or quickly (e.g., students eating a meal as a group, workers on an assembly line, or a study group). • Summarize the methods used by Latané, Williams, and Harkins to identify the relative impact of social loafing and coordination problems on group performance. • Describe how groups change over time. • Apply the theory of groupthink to a well-known decision-making group, such as the group of advisors responsible for planning the Bay of Pigs operation. • List and discuss the factors that facilitate and impede group performance and decision making. • Develop a list of recommendations that, if followed, would minimize the possibility of groupthink developing in a group. The Psychology of Groups Psychologists study groups because nearly all human activities—working, learning, worshiping, relaxing, playing, and even sleeping—occur in groups. The lone individual who is cut off from all groups is a rarity. Most of us live out our lives in groups, and these groups have a profound impact on our thoughts, feelings, and actions. Many psychologists focus their attention on single individuals, but social psychologists expand their analysis to include groups, organizations, communities, and even cultures. This module examines the psychology of groups and group membership. It begins with a basic question: What is the psychological significance of groups? People are, undeniably, more often in groups than alone. What accounts for this marked gregariousness, and what does it say about our psychological makeup? The module then reviews some of the key findings from studies of groups. Researchers have asked many questions about people and groups: Do people work as hard as they can when they are in groups? Are groups more cautious than individuals? Do groups make wiser decisions than single individuals? In many cases the answers are not what common sense and folk wisdom might suggest. The Psychological Significance of Groups Many people loudly proclaim their autonomy and independence. Like Ralph Waldo Emerson, they avow, “I must be myself. I will not hide my tastes or aversions . . . . I will seek my own” (1903/2004, p. 127).
Even though people are capable of living separate and apart from others, they join with others because groups meet their psychological and social needs. The Need to Belong Across individuals, societies, and even eras, humans consistently seek inclusion over exclusion, membership over isolation, and acceptance over rejection. As Roy Baumeister and Mark Leary conclude, humans have a need to belong: “a pervasive drive to form and maintain at least a minimum quantity of lasting, positive, and impactful interpersonal relationships” (1995, p. 497). And most of us satisfy this need by joining groups. When surveyed, 87.3% of Americans reported that they lived with other people, including family members, partners, and roommates (Davis & Smith, 2007). The majority, ranging from 50% to 80%, reported regularly doing things in groups, such as attending a sports event together, visiting one another for the evening, sharing a meal together, or going out as a group to see a movie (Putnam, 2000). People respond negatively when their need to belong is unfulfilled. For example, college students often feel homesick and lonely when they first start college, but not if they belong to a cohesive, socially satisfying group (Buote et al., 2007). People who are accepted members of a group tend to feel happier and more satisfied. But should they be rejected by a group, they feel unhappy, helpless, and depressed. Studies of ostracism—the deliberate exclusion from groups—indicate that this experience is highly stressful and can lead to depression, confused thinking, and even aggression (Williams, 2007). When researchers used a functional magnetic resonance imaging scanner to track neural responses to exclusion, they found that people who were left out of a group activity displayed heightened cortical activity in two specific areas of the brain—the dorsal anterior cingulate cortex and the anterior insula. These areas of the brain are associated with the experience of physical pain sensations (Eisenberger, Lieberman, & Williams, 2003). It hurts, quite literally, to be left out of a group. Affiliation in Groups Groups not only satisfy the need to belong, they also provide members with information, assistance, and social support. Leon Festinger’s theory of social comparison (1950, 1954) suggested that in many cases people join with others to evaluate the accuracy of their personal beliefs and attitudes. Stanley Schachter (1959) explored this process by putting individuals in ambiguous, stressful situations and asking them if they wished to wait alone or with others. He found that people affiliate in such situations—they seek the company of others. Although any kind of companionship is appreciated, we prefer those who provide us with reassurance and support as well as accurate information. In some cases, we also prefer to join with others who are even worse off than we are. Imagine, for example, how you would respond when the teacher hands back the test and yours is marked 85%. Do you want to affiliate with a friend who got a 95% or a friend who got a 78%? To maintain a sense of self-worth, people seek out and compare themselves to the less fortunate. This process is known as downward social comparison. Identity and Membership Groups are not only founts of information during times of ambiguity; they also help us answer the existentially significant question, “Who am I?” Common sense tells us that our sense of self is our private definition of who we are, a kind of archival record of our experiences, qualities, and capabilities.
Yet, the self also includes all those qualities that spring from memberships in groups. People are defined not only by their traits, preferences, interests, likes, and dislikes, but also by their friendships, social roles, family connections, and group memberships. The self is not just a “me,” but also a “we.” Even demographic qualities such as sex or age can influence us if we categorize ourselves based on these qualities. Social identity theory, for example, assumes that we don’t just classify other people into such social categories as man, woman, Anglo, elderly, or college student, but we also categorize ourselves. Moreover, if we strongly identify with these categories, then we will ascribe the characteristics of the typical member of these groups to ourselves, and so stereotype ourselves. If, for example, we believe that college students are intellectual, then we will assume we, too, are intellectual if we identify with that group (Hogg, 2001). Groups also provide a variety of means for maintaining and enhancing a sense of self-worth, as our assessment of the quality of groups we belong to influences our collective self-esteem (Crocker & Luhtanen, 1990). If our self-esteem is shaken by a personal setback, we can focus on our group’s success and prestige. In addition, by comparing our group to other groups, we frequently discover that we are members of the better group, and so can take pride in our superiority. By denigrating other groups, we elevate both our personal and our collective self-esteem (Crocker & Major, 1989). Mark Leary’s sociometer model goes so far as to suggest that “self-esteem is part of a sociometer that monitors people’s relational value in other people’s eyes” (2007, p. 328). He maintains that self-esteem is not just an index of one’s sense of personal value, but also an indicator of acceptance into groups. Like a gauge that indicates how much fuel is left in the tank, a dip in self-esteem indicates that exclusion from our group is likely. Disquieting feelings of low self-worth, then, prompt us to search for and correct characteristics and qualities that put us at risk of social exclusion. Self-esteem is not just high self-regard, but the self-approbation that we feel when included in groups (Leary & Baumeister, 2000). Evolutionary Advantages of Group Living Groups may be humans’ most useful invention, for they provide us with the means to reach goals that would elude us if we remained alone. Individuals in groups can secure advantages and avoid disadvantages that would plague the lone individual. In his theory of social integration, Moreland concludes that groups tend to form whenever “people become dependent on one another for the satisfaction of their needs” (1987, p. 104). The advantages of group life may be so great that humans are biologically prepared to seek membership and avoid isolation. From an evolutionary psychology perspective, because groups have increased humans’ overall fitness for countless generations, individuals who carried genes that promoted solitude-seeking were less likely to survive and procreate compared to those with genes that prompted them to join groups (Darwin, 1859/1963). This process of natural selection culminated in the creation of a modern human who seeks out membership in groups instinctively, for most of us are descendants of “joiners” rather than “loners.” Motivation and Performance Groups usually exist for a reason.
In groups, we solve problems, create products, create standards, communicate knowledge, have fun, perform arts, create institutions, and even ensure our safety from attacks by other groups. But do groups always outperform individuals? Social Facilitation in Groups Do people perform more effectively when alone or when part of a group? Norman Triplett (1898) examined this issue in one of the first empirical studies in psychology. While watching bicycle races, Triplett noticed that cyclists were faster when they competed against other racers than when they raced alone against the clock. To determine if the presence of others leads to the psychological stimulation that enhances performance, he arranged for 40 children to play a game that involved turning a small reel as quickly as possible (see Figure 11.2.1). When he measured how quickly they turned the reel, he confirmed that children performed slightly better when they played the game in pairs compared to when they played alone (see Stroebe, 2012; Strube, 2005). Triplett succeeded in sparking interest in a phenomenon now known as social facilitation: the enhancement of an individual’s performance when that person works in the presence of other people. However, it remained for Robert Zajonc (1965) to specify when social facilitation does and does not occur. After reviewing prior research, Zajonc noted that the facilitating effects of an audience usually occur only when the task requires the person to perform dominant responses, i.e., ones that are well-learned or based on instinctive behaviors. If the task requires nondominant responses, i.e., novel, complicated, or untried behaviors that the organism has never performed before or has performed only infrequently, then the presence of others inhibits performance. Hence, students write poorer-quality essays on complex philosophical questions when they labor in a group rather than alone (Allport, 1924), but they make fewer mistakes in solving simple, low-level multiplication problems with an audience or a coactor than when they work in isolation (Dashiell, 1930). Social facilitation, then, depends on the task: other people facilitate performance when the task is so simple that it requires only dominant responses, but others interfere when the task requires nondominant responses. However, a number of psychological processes combine to influence when social facilitation, not social interference, occurs. Studies of the challenge-threat response and brain imaging, for example, confirm that we respond physiologically and neurologically to the presence of others (Blascovich, Mendes, Hunter, & Salomon, 1999). Other people also can trigger evaluation apprehension, particularly when we feel that our individual performance will be known to others, and those others might judge it negatively (Bond, Atoum, & VanLeeuwen, 1996). The presence of other people can also cause perturbations in our capacity to concentrate on and process information (Harkins, 2006). Distractions due to the presence of other people have been shown to improve performance on certain tasks, such as the Stroop task, but to undermine performance on more cognitively demanding tasks (Huguet, Galvaing, Monteil, & Dumas, 1999). Social Loafing Groups usually outperform individuals. A single student, working alone on a paper, will get less done in an hour than will four students working on a group project. One person playing a tug-of-war game against a group will lose.
A crew of movers can pack up and transport your household belongings faster than you can by yourself. As the saying goes, “Many hands make light work” (Littlepage, 1991; Steiner, 1972). Groups, though, tend to be underachievers. Studies of social facilitation confirmed the positive motivational benefits of working with other people on well-practiced tasks in which each member’s contribution to the collective enterprise can be identified and evaluated. But what happens when tasks require a truly collective effort? First, when people work together they must coordinate their individual activities and contributions to reach the maximum level of efficiency—but they rarely do (Diehl & Stroebe, 1987). Three people in a tug-of-war competition, for example, invariably pull and pause at slightly different times, so their efforts are uncoordinated. The result is coordination loss: the three-person group is stronger than a single person, but not three times as strong. Second, people just don’t exert as much effort when working on a collective endeavor, nor do they expend as much cognitive effort trying to solve problems as they do when working alone. They display social loafing (Latané, 1981). Bibb Latané, Kip Williams, and Stephen Harkins (1979) examined both coordination losses and social loafing by arranging for students to cheer or clap either alone or in groups of varying sizes. The students cheered alone or in 2- or 6-person groups, or they were led to believe they were in 2- or 6-person groups (those in the “pseudo-groups” wore blindfolds and headsets that played masking sound). As Figure 11.2.2 indicates, groups generated more noise than solitary subjects, but per-person productivity dropped as the groups became larger. In dyads, each subject worked at only 66% of capacity, and in 6-person groups at 36%. (Put differently, a dyad produced about 2 × 66% ≈ 1.3 times one person’s output, and a 6-person group about 6 × 36% ≈ 2.2 times: more than any individual alone, but far less than the sum of the members’ capacities.) Productivity also dropped when subjects merely believed they were in groups. If subjects thought that one other person was shouting with them, they shouted 82% as intensely, and if they thought five other people were shouting, they reached only 74% of their capacity. These losses in productivity were not due to coordination problems; the decline in production could be attributed only to a reduction in effort—to social loafing (Latané et al., 1979, Experiment 2). Teamwork Social loafing is no rare phenomenon. When sales personnel work in groups with shared goals, they tend to “take it easy” if another salesperson is nearby who can do their work (George, 1992). People who are trying to generate new, creative ideas in group brainstorming sessions usually put in less effort and are thus less productive than people who are generating new ideas individually (Paulus & Brown, 2007). Students assigned group projects often complain of inequity in the quality and quantity of each member’s contributions: some people just don’t work as much as they should to help the group reach its learning goals (Neu, 2012). People carrying out all sorts of physical and mental tasks expend less effort when working in groups, and the larger the group, the more they loaf (Karau & Williams, 1993). Groups can, however, overcome this impediment to performance through teamwork. A group may include many talented individuals, but they must learn how to pool their individual abilities and energies to maximize the team’s performance. Team goals must be set, work patterns structured, and a sense of group identity developed.
Individual members must learn how to coordinate their actions, and any strains and stresses in interpersonal relations need to be identified and resolved (Salas, Rosen, Burke, & Goodwin, 2009). Researchers have identified two key ingredients of effective teamwork: a shared mental representation of the task and group unity. Teams improve their performance over time as they develop a shared understanding of the team and the tasks they are attempting. Some semblance of this shared mental model is present nearly from the team’s inception, but as the team practices, differences among the members in terms of their understanding of their situation and their team diminish as a consensus becomes implicitly accepted (Tindale, Stawiski, & Jacobs, 2008). Effective teams are also, in most cases, cohesive groups (Dion, 2000). Group cohesion is the integrity, solidarity, social integration, or unity of a group. In most cases, members of cohesive groups like each other and the group, and they are also united in their pursuit of collective, group-level goals. Members tend to enjoy their groups more when they are cohesive, and cohesive groups usually outperform ones that lack cohesion. This cohesion-performance relationship, however, is a complex one. Meta-analytic studies suggest that cohesion improves teamwork among members, but that performance quality influences cohesion more than cohesion influences performance (Mullen & Copper, 1994; Mullen, Driskell, & Salas, 1998; see Figure 11.2.3). Cohesive groups also can be spectacularly unproductive if the group’s norms stress low productivity rather than high productivity (Seashore, 1954). Group Development In most cases groups do not become smooth-functioning teams overnight. As Bruce Tuckman’s (1965) theory of group development suggests, groups usually pass through several stages of development as they change from a newly formed group into an effective team. As noted in Focus Topic 1, in the forming phase, the members become oriented toward one another. In the storming phase, the group members find themselves in conflict, and some solution is sought to improve the group environment. In the norming phase, standards for behavior and roles develop that regulate behavior. In the performing phase, the group has reached a point where it can work as a unit to achieve desired goals, and the adjourning phase ends the sequence of development; the group disbands. Throughout these stages groups tend to oscillate between task-oriented issues and relationship issues, with members sometimes working hard but at other times strengthening their interpersonal bonds (Tuckman & Jensen, 1977). Focus Topic 1: Group Development Stages and Characteristics Stage 1 – “Forming”. Members disclose information about themselves in polite but tentative interactions. They explore the purposes of the group and gather information about each other’s interests, skills, and personal tendencies. Stage 2 – “Storming”. Disagreements about procedures and purposes surface, so criticism and conflict increase. Much of the conflict stems from challenges between members who are seeking to increase their status and control in the group. Stage 3 – “Norming”. Once the group agrees on its goals, procedures, and leadership, norms, roles, and social relationships develop that increase the group’s stability and cohesiveness. Stage 4 – “Performing”. The group focuses its energies and attention on its goals, displaying higher rates of task-orientation, decision-making, and problem-solving. Stage 5 – “Adjourning”.
The group prepares to disband by completing its tasks, reducing levels of dependency among members, and dealing with any unresolved issues. Sources: Tuckman (1965) and Tuckman & Jensen (1977). We also experience change as we pass through a group, for we don’t become full-fledged members of a group in an instant. Instead, we gradually become a part of the group and remain in the group until we leave it. Richard Moreland and John Levine’s (1982) model of group socialization describes this process, beginning with initial entry into the group and ending when the member exits it. For example, when you are thinking of joining a new group—a social club, a professional society, a fraternity or sorority, or a sports team—you investigate what the group has to offer, but the group also investigates you. During this investigation stage you are still an outsider: interested in joining the group, but not yet committed to it in any way. But once the group accepts you and you accept the group, socialization begins: you learn the group’s norms and take on different responsibilities depending on your role. On a sports team, for example, you may initially hope to be a star who starts every game or plays a particular position, but the team may need something else from you. In time, though, the group will accept you as a full-fledged member and both sides in the process—you and the group itself—increase their commitment to one another. When that commitment wanes, however, your membership may come to an end as well. Making Decisions in Groups Groups are particularly useful when it comes to making a decision, for groups can draw on more resources than can a lone individual. A single individual may know a great deal about a problem and possible solutions, but his or her information is far surpassed by the combined knowledge of a group. Groups not only generate more ideas and possible solutions by discussing the problem, but they can also more objectively evaluate the options that they generate during discussion. Before accepting a solution, a group may require that a certain number of people favor it, or that it meets some other standard of acceptability. People generally feel that a group’s decision will be superior to an individual’s decision. Groups, however, do not always make good decisions. Juries sometimes render verdicts that run counter to the evidence presented. Community groups take radical stances on issues before thinking through all the ramifications. Military strategists concoct plans that seem, in retrospect, ill-conceived and short-sighted. Why do groups sometimes make poor decisions? Group Polarization Let’s say you are part of a group assigned to make a presentation. One of the group members suggests showing a short video that, although amusing, includes some provocative images. Even though initially you think the clip is inappropriate, you begin to change your mind as the group discusses the idea. The group decides, eventually, to throw caution to the wind and show the clip—and your instructor is horrified by your choice. This hypothetical example is consistent with studies of groups making decisions that involve risk. Common sense notions suggest that groups exert a moderating, subduing effect on their members. However, when researchers looked at groups closely, they discovered many groups shift toward more extreme decisions rather than less extreme decisions after group interaction. Discussion, it turns out, doesn’t moderate people’s judgments after all.
Instead, it leads to group polarization: judgments made after group discussion will be more extreme in the same direction as the average of individual judgments made prior to discussion (Myers & Lamm, 1976). If a majority of members feel that taking risks is more acceptable than exercising caution, then the group will become riskier after a discussion. For example, in France, where people generally like their government but dislike Americans, group discussion improved their attitude toward their government but exacerbated their negative opinions of Americans (Moscovici & Zavalloni, 1969). Similarly, prejudiced people who discussed racial issues with other prejudiced individuals became even more negative, but those who were relatively unprejudiced exhibited even more acceptance of diversity when in groups (Myers & Bishop, 1970). Common Knowledge Effect One of the advantages of making decisions in groups is the group’s greater access to information. When seeking a solution to a problem, group members can put their ideas on the table and share their knowledge and judgments with each other through discussions. But all too often groups spend much of their discussion time examining common knowledge—information that two or more group members know in common—rather than unshared information. This common knowledge effect will result in a bad outcome if something known by only one or two group members is very important. Researchers have studied this bias using the hidden profile task. On such tasks, information known to many of the group members suggests that one alternative, say Option A, is best. Option B, however, is actually the better choice, but all the facts that support it are known only to individual group members—they are not common knowledge in the group. As a result, the group will likely spend most of its time reviewing the factors that favor Option A and never discover any of its drawbacks. In consequence, groups often perform poorly when working on problems with nonobvious solutions that can be identified only by extensive information sharing (Stasser & Titus, 1987). Groupthink Groups sometimes make spectacularly bad decisions. In 1961, a special advisory committee to President John F. Kennedy planned and implemented a covert invasion of Cuba at the Bay of Pigs that ended in total disaster. In 1986, NASA carefully, and incorrectly, decided to launch the Challenger space shuttle in temperatures that were too cold. Irving Janis (1982), intrigued by these kinds of blundering groups, carried out a number of case studies of such groups: the military experts who planned the defense of Pearl Harbor; Kennedy’s Bay of Pigs planning group; the presidential team that escalated the war in Vietnam. Each group, he concluded, fell prey to a distorted style of thinking that rendered the group members incapable of making a rational decision. Janis labeled this syndrome groupthink: “a mode of thinking that people engage in when they are deeply involved in a cohesive in-group, when the members’ strivings for unanimity override their motivation to realistically appraise alternative courses of action” (p. 9). Janis identified both the telltale symptoms that signal the group is experiencing groupthink and the interpersonal factors that combine to cause groupthink. To Janis, groupthink is a disease that infects healthy groups, rendering them inefficient and unproductive.
And like the physician who searches for symptoms that distinguish one disease from another, Janis identified a number of symptoms that should serve to warn members that they may be falling prey to groupthink. These symptoms include overestimating the group’s skills and wisdom, biased perceptions and evaluations of other groups and people who are outside of the group, strong conformity pressures within the group, and poor decision-making methods. Janis also singled out four group-level factors that combine to cause groupthink: cohesion, isolation, biased leadership, and decisional stress. • Cohesion: Groupthink only occurs in cohesive groups. Such groups have many advantages over groups that lack unity. People enjoy their membership much more in cohesive groups, they are less likely to abandon the group, and they work harder in pursuit of the group’s goals. But extreme cohesiveness can be dangerous. When cohesiveness intensifies, members become more likely to accept the goals, decisions, and norms of the group without reservation. Conformity pressures also rise as members become reluctant to say or do anything that goes against the grain of the group, and the number of internal disagreements—necessary for good decision making—decreases. • Isolation: Groupthink groups too often work behind closed doors, keeping out of the limelight. They isolate themselves from outsiders and refuse to modify their beliefs to bring them into line with society’s beliefs. They avoid leaks by maintaining strict confidentiality and working only with people who are members of their group. • Biased leadership: A biased leader who exerts too much authority over group members can increase conformity pressures and railroad decisions. In groupthink groups, the leader determines the agenda for each meeting, sets limits on discussion, and can even decide who will be heard. • Decisional stress: Groupthink becomes more likely when the group is stressed, particularly by time pressures. When groups are stressed, they minimize their discomfort by quickly choosing a plan of action with little argument or dissension. Then, through collective discussion, the group members can rationalize their choice by exaggerating the positive consequences, minimizing the possibility of negative outcomes, concentrating on minor details, and overlooking larger issues. You and Your Groups Most of us belong to at least one group that must make decisions from time to time: a community group that needs to choose a fund-raising project; a union or employee group that must ratify a new contract; a family that must discuss your college plans; or the staff of a high school discussing ways to deal with the potential for violence during football games. Could these kinds of groups experience groupthink? Yes, they could, if the symptoms of groupthink discussed above are present, combined with other contributing causal factors, such as cohesiveness, isolation, biased leadership, and stress. To avoid polarization, the common knowledge effect, and groupthink, groups should strive to emphasize open inquiry of all sides of the issue while admitting the possibility of failure. The leaders of the group can also do much to limit groupthink by requiring full discussion of pros and cons, appointing devil’s advocates, and breaking the group up into small discussion groups. If these precautions are taken, your group has a much greater chance of making an informed, rational decision. 
Furthermore, although your group should review its goals, teamwork, and decision-making strategies, the human side of groups—the strong friendships and bonds that make group activity so enjoyable—shouldn’t be overlooked. Groups have instrumental, practical value, but also emotional, psychological value. In groups we find others who appreciate and value us. In groups we gain the support we need in difficult times, but also have the opportunity to influence others. In groups we find evidence of our self-worth, and secure ourselves from the threat of loneliness and despair. For most of us, groups are the secret source of well-being. Outside Resources Audio: This American Life. Episode 109 deals with the motivation and excitement of joining with others at summer camp. http://www.thisamericanlife.org/radi.../notes-on-camp Audio: This American Life. Episode 158 examines how people act when they are immersed in a large crowd. http://www.thisamericanlife.org/radi.../mob-mentality Audio: This American Life. Episode 61 deals with fiascos, many of which are perpetrated by groups. http://www.thisamericanlife.org/radi...sode/61/fiasco Audio: This American Life. Episode 74 examines how individuals act at conventions, when they join with hundreds or thousands of other people who are similar in terms of their avocations or employment. http://www.thisamericanlife.org/radi...74/conventions Forsyth, D. (2011). Group Dynamics. In R. Miller, E. Balcetis, S. Burns, D. Daniel, B. Saville, & W. Woody (Eds.), Promoting student engagement: Volume 2: Activities, exercises and demonstrations for psychology courses (pp. 28-32). Washington, DC: Society for the Teaching of Psychology, American Psychological Association. http://teachpsych.org/ebooks/pse2011/vol2/index.php Forsyth, D.R. (n.d.) Group Dynamics: Instructional Resources. facultystaff.richmond.edu/~d...ources2014.pdf Journal Article: The Dynamogenic Factors in Pacemaking and Competition presents Norman Triplett’s original paper on what would eventually be known as social facilitation. http://psychclassics.yorku.ca/Triplett/ Resources for the Teaching of Social Psychology. http://jfmueller.faculty.noctrl.edu/crow/group.htm Social Psychology Network Student Activities http://www.socialpsychology.org/teac...ent-activities Society for Social and Personality Psychology http://www.spsp.org Tablante, C. B., & Fiske, S. T. (2015). Teaching social class. Teaching of Psychology, 42, 184-190. doi:10.1177/0098628315573148 The abstract to the article can be found at the following link; however, your library will likely provide you access to the full-text version. http://top.sagepub.com/content/42/2/184.abstract Video: Flash mobs illustrate the capacity of groups to organize quickly and complete complex tasks. One well-known example of a pseudo-flash mob is the rendition of “Do Re Mi” from the Sound of Music in the Central Station of Antwerp in 2009. Web: Group Development - This is a website developed by James Atherton that provides detailed information about group development, with application to the lifecycle of a typical college course. www.learningandteaching.info/teaching/group_development.htm Web: Group Dynamics - A general repository of links, short articles, and discussions examining groups and group processes, including such topics as crowd behavior, leadership, group structure, and influence. 
http://donforsythgroups.wordpress.com/ Web: Stanford Crowd Project - This is a rich resource of information about all things related to crowds, with a particular emphasis on crowds and collective behavior in literature and the arts. press-media.stanford.edu/crowds/main.html Working Paper: Law of Group Polarization, by Cass Sunstein, is a wide-ranging application of the concept of polarization to a variety of legal and political decisions. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=199668 Discussion Questions 1. What are the advantages and disadvantages of sociality? Why do people often join groups? 2. Is self-esteem shaped by your personality qualities or by the value and qualities of groups to which you belong? 3. In what ways does membership in a group change a person’s self-concept and social identity? 4. What steps would you take if you were to base a self-esteem enrichment program in schools on the sociometer model of self-worth? 5. If you were a college professor, what would you do to increase the success of in-class learning teams? 6. What are the key ingredients to transforming a working group into a true team? 7. Have you ever been part of a group that made a poor decision and, if so, were any of the symptoms of groupthink present in your group? Vocabulary Collective self-esteem Feelings of self-worth that are based on evaluation of relationships with others and membership in social groups. Common knowledge effect The tendency for groups to spend more time discussing information that all members know (shared information) and less time examining information that only a few members know (unshared). Group cohesion The solidarity or unity of a group resulting from the development of strong and mutual interpersonal bonds among members and group-level forces that unify the group, such as shared commitment to group goals. Group polarization The tendency for members of a deliberating group to move to a more extreme position, with the direction of the shift determined by the majority or average of the members’ predeliberation preferences. Groupthink A set of negative group-level processes, including illusions of invulnerability, self-censorship, and pressures to conform, that occur when highly cohesive groups seek concurrence when making a decision. Ostracism Excluding one or more individuals from a group by reducing or eliminating contact with the person, usually by ignoring, shunning, or explicitly banishing them. Shared mental model Knowledge, expectations, conceptualizations, and other cognitive representations that members of a group have in common pertaining to the group and its members, tasks, procedures, and resources. Social comparison The process of contrasting one’s personal qualities and outcomes, including beliefs, attitudes, values, abilities, accomplishments, and experiences, to those of other people. Social facilitation Improvement in task performance that occurs when people work in the presence of other people. Social identity theory A theoretical analysis of group processes and intergroup relations that assumes groups influence their members’ self-concepts and self-esteem, particularly when individuals categorize themselves as group members and identify with the group. Social loafing The reduction of individual effort exerted when people work in groups compared with when they work alone. 
Sociometer model A conceptual analysis of self-evaluation processes that theorizes that self-esteem functions to psychologically monitor one’s degree of inclusion and exclusion in social groups. Teamwork The process by which members of the team combine their knowledge, skills, abilities, and other resources through a coordinated series of actions to produce an outcome.
By Robert Biswas-Diener and Neil Thin Portland State University, University of Edinburgh Although the most visible elements of culture are dress, cuisine, and architecture, culture is a highly psychological phenomenon. Culture is a pattern of meaning for understanding how the world works. This knowledge is shared among a group of people and passed from one generation to the next. This module defines culture, addresses methodological issues, and introduces the idea that culture is a process. Understanding cultural processes can help people get along better with others and be more socially responsible. learning objectives • Appreciate culture as an evolutionary adaptation common to all humans. • Understand cultural processes as variable patterns rather than as fixed scripts. • Understand the difference between cultural and cross-cultural research methods. • Appreciate cultural awareness as a source of personal well-being, social responsibility, and social harmony. • Explain the difference between individualism and collectivism. • Define “self-construal” and provide a real-life example. Introduction When you think about different cultures, you likely picture their most visible features, such as differences in the way people dress, or in the architectural styles of their buildings. You might consider different types of food, or how people in some cultures eat with chopsticks while people in others use forks. There are differences in body language, religious practices, and wedding rituals. While these are all obvious examples of cultural differences, many distinctions are harder to see because they are psychological in nature. Just as culture can be seen in dress and food, it can also be seen in morality, identity, and gender roles. People from around the world differ in their views of premarital sex, religious tolerance, respect for elders, and even the importance they place on having fun. Similarly, many behaviors that may seem innate are actually products of culture. Approaches to punishment, for example, often depend on cultural norms for their effectiveness. In the United States, people who ride public transportation without buying a ticket face the possibility of being fined. By contrast, in some other societies, people caught dodging the fare are socially shamed by having their photos posted publicly. The reason this campaign of “name and shame” might work in one society but not in another is that members of different cultures differ in how comfortable they are with being singled out for attention. This strategy is less effective for people who are not as sensitive to the threat of public shaming. The psychological aspects of culture are often overlooked because they are largely invisible. The way that gender roles are learned is a cultural process, as is the way that people think about their own sense of duty toward their family members. In this module, you will be introduced to one of the most fascinating aspects of social psychology: the study of cultural processes. You will learn about research methods for studying culture, basic definitions related to this topic, and about the ways that culture affects a person’s sense of self. Social Psychology Research Methods Social psychologists are interested in the ways that cultural forces influence psychological processes. They study culture as a means of better understanding the ways it affects our emotions, identity, relationships, and decisions. 
Social psychologists generally ask different types of questions and use different methods than do anthropologists. Anthropologists are more likely to conduct ethnographic studies. In this type of research, the scientist spends time observing a culture and conducting interviews. In this way, anthropologists often attempt to understand and appreciate culture from the point of view of the people within it. Social psychologists who adopt this approach are often thought to be studying cultural psychology. They are likely to use interviews as a primary research methodology. For example, in a 2004 study, Hazel Markus and her colleagues wanted to explore class culture as it relates to well-being. The researchers adopted a cultural psychology approach and interviewed participants to discover—in the participants’ own words—what “the good life” is for Americans of different social classes. Dozens of participants answered 30 open-ended questions about well-being during recorded, face-to-face interviews. After the interview data were collected, the researchers read the transcripts. From these, they agreed on common themes that appeared important to the participants. These included, among others, “health,” “family,” “enjoyment,” and “financial security.” The Markus team discovered that people with a bachelor’s degree were more likely than high school educated participants to mention “enjoyment” as a central part of the good life. By contrast, those with a high school education were more likely to mention “financial security” and “having basic needs met.” There were similarities as well: participants from both groups placed a heavy emphasis on relationships with others. Their understanding of how these relationships are tied to well-being differed, however. The college educated—especially men—were more likely to list “advising and respecting” as crucial aspects of relationships while their high school educated counterparts were more likely to list “loving and caring” as important. As you can see, cultural psychological approaches place an emphasis on the participants’ own definitions, language, and understanding of their own lives. In addition, the researchers were able to make comparisons between the groups, but these comparisons were based on loose themes created by the researchers. Cultural psychology is distinct from cross-cultural psychology, and this can be confusing. Cross-cultural studies are those that use standard forms of measurement, such as Likert scales, to compare people from different cultures and identify their differences. Both cultural and cross-cultural studies have their own advantages and disadvantages (see Table 1). Interestingly, researchers—and the rest of us!—have as much to learn from cultural similarities as cultural differences, and both require comparisons across cultures. For example, Diener and Oishi (2000) were interested in exploring the relationship between money and happiness. They were specifically interested in cross-cultural differences in levels of life satisfaction between people from different cultures. To examine this question they used international surveys that asked all participants the exact same question, such as “All things considered, how satisfied are you with your life as a whole these days?” and used a standard scale for answers; in this case, one that asked people to respond on a 1-10 scale. They also collected data on average income levels in each nation, and adjusted these for local differences in how many goods and services money can buy. 
The Diener research team discovered that, across more than 40 nations, there was a tendency for money to be associated with higher life satisfaction. People from richer countries such as Denmark, Switzerland, and Canada had relatively high satisfaction, while their counterparts from poorer countries such as India and Belarus had lower levels. There were some interesting exceptions, however. People from Japan—a wealthy nation—reported lower satisfaction than did their peers in nations with similar wealth. In addition, people from Brazil—a poorer nation—had unusually high scores compared to their income counterparts. One problem with cross-cultural studies is that they are vulnerable to ethnocentric bias. This means that the researcher who designs the study might be influenced by personal biases that could affect research outcomes—without even being aware of it. For example, a study on happiness across cultures might investigate the ways that personal freedom is associated with feeling a sense of purpose in life. The researcher might assume that when people are free to choose their own work and leisure, they are more likely to pick options they care deeply about. Unfortunately, this researcher might overlook the fact that in much of the world it is considered important to sacrifice some personal freedom in order to fulfill one’s duty to the group (Triandis, 1995). Because of the danger of this type of bias, social psychologists must continue to improve their methodology. What is Culture? Defining Culture Like the words “happiness” and “intelligence,” the word “culture” can be tricky to define. Culture is a word that suggests social patterns of shared meaning. In essence, it is a collective understanding of the way the world works, shared by members of a group and passed down from one generation to the next. For example, members of the Yanomamö tribe, in South America, share a cultural understanding of the world that includes the idea that there are four parallel levels of reality: an abandoned level, an earthly level, and heavenly and hell-like levels. Similarly, members of surfing culture understand their athletic pastime as being worthwhile and governed by formal rules of etiquette known only to insiders. There are several features of culture that are central to understanding the uniqueness and diversity of the human mind: 1. Versatility: Culture can change and adapt. Someone from the state of Orissa, in India, for example, may have multiple identities. She might see herself as Oriya when at home and speaking her native language. At other times, such as during the national cricket match against Pakistan, she might consider herself Indian. This is known as situational identity. 2. Sharing: Culture is the product of people sharing with one another. Humans cooperate and share knowledge and skills with other members of their networks. The ways they share, and the content of what they share, help make up culture. Older adults, for instance, remember a time when long-distance friendships were maintained through letters that arrived in the mail every few months. Contemporary youth culture accomplishes the same goal through the use of instant text messages on smart phones. 3. Accumulation: Cultural knowledge is cumulative. That is, information is “stored.” This means that a culture’s collective learning grows across generations. We understand more about the world today than we did 200 years ago, but that doesn’t mean the culture from long ago has been erased by the new. 
For instance, members of the Haida culture—a First Nations people in British Columbia, Canada—profit from both ancient and modern experiences. They might employ traditional fishing practices and wisdom stories while also using modern technologies and services. 4. Patterns: There are systematic and predictable ways of behaving or thinking across members of a culture. Patterns emerge from adapting, sharing, and storing cultural information. Patterns can be both similar and different across cultures. For example, in both Canada and India it is considered polite to bring a small gift to a host’s home. In Canada, it is more common to bring a bottle of wine and for the gift to be opened right away. In India, by contrast, it is more common to bring sweets, and often the gift is set aside to be opened later. Understanding the changing nature of culture is the first step toward appreciating how it helps people. Cultural intelligence is the ability to understand why members of other cultures act in the ways they do. Rather than dismissing foreign behaviors as weird, inferior, or immoral, people high in cultural intelligence can appreciate differences even if they do not necessarily share another culture’s views or adopt its ways of doing things. Thinking about Culture One of the biggest problems with understanding culture is that the word itself is used in different ways by different people. When someone says, “My company has a competitive culture,” does it mean the same thing as when another person says, “I’m taking my children to the museum so they can get some culture”? The truth is, there are many ways to think about culture. Here are three ways to parse this concept: 1. Progressive cultivation: This refers to a relatively small subset of activities that are intentional and aimed at “being refined.” Examples include learning to play a musical instrument, appreciating visual art, and attending theater performances, as well as other instances of so-called “high art.” This was the predominant use of the word culture through the mid-19th century. This notion of culture formed the basis, in part, of a superior mindset on the part of people from the upper economic classes. For instance, many tribal groups were seen as lacking cultural sophistication under this definition. In the late 19th century, as global travel began to rise, this understanding of culture was largely replaced with an understanding of it as a way of life. 2. Ways of Life: This refers to distinct patterns of beliefs and behaviors widely shared among members of a culture. The “ways of life” understanding of culture shifts the emphasis to patterns of belief and behavior that persist over many generations. Although cultures can be small—such as “school culture”—they usually describe larger populations, such as nations. People occasionally confuse national identity with culture. There are similarities in culture between Japan, China, and Korea, for example, even though politically they are very different. Indeed, each of these nations also contains a great deal of cultural variation within itself. 3. Shared Learning: In the 20th century, anthropologists and social psychologists developed the concept of enculturation to refer to the ways people learn about and share cultural knowledge. Where “ways of life” is treated as a noun, “enculturation” is a verb: a fluid and dynamic process that emphasizes that culture is something people learn. 
As children are raised in a society, they are taught how to behave according to regional cultural norms. As immigrants settle in a new country, they learn a new set of rules for behaving and interacting. In this way, it is possible for a person to have multiple cultural scripts. The understanding of culture as a learned pattern of views and behaviors is interesting for several reasons. First, it highlights the ways groups can come into conflict with one another. Members of different cultures simply learn different ways of behaving. Modern youth culture, for instance, interacts with technologies such as smart phones using a different set of rules than people who are in their 40s, 50s, or 60s. Older adults might find texting in the middle of a face-to-face conversation rude, while younger people often do not. These differences can sometimes become politicized and a source of tension between groups. One example of this is Muslim women who wear a hijab, or head scarf. Non-Muslims do not follow this practice, so occasional misunderstandings arise about the appropriateness of the tradition. Second, understanding that culture is learned is important because it means that people can adopt an appreciation of patterns of behavior that are different from their own. For example, non-Muslims might find it helpful to learn about the hijab. Where did this tradition come from? What does it mean and what are various Muslim opinions about wearing one? Finally, understanding that culture is learned can be helpful in developing self-awareness. For instance, people from the United States might not even be aware of the fact that their attitudes about public nudity are influenced by their cultural learning. While women often go topless on beaches in Europe and women living a traditional tribal existence in places like the South Pacific also go topless, it is illegal for women in parts of the United States to do so. These cultural norms for modesty—reflected in government laws and policies—also enter the discourse on social issues such as the appropriateness of breast-feeding in public. Understanding that your preferences are—in many cases—the products of cultural learning might empower you to revise them if doing so will lead to a better life for you or others. The Self and Culture Traditionally, social psychologists have thought about how patterns of behavior have an overarching effect on populations’ attitudes. Harry Triandis, a cross-cultural psychologist, has studied culture in terms of individualism and collectivism. Triandis became interested in culture because of his unique upbringing. Born in Greece, he was raised under both the German and Italian occupations during World War II. The Italian soldiers broadcast classical music in the town square and built a swimming pool for the townspeople. Interacting with these foreigners—even though they were an occupying army—sparked Triandis’ curiosity about other cultures. He realized that he would have to learn English if he wanted to pursue academic study outside of Greece, and so he practiced with the only local who knew the language: a mentally ill 70-year-old who was incarcerated for life at the local hospital. He went on to spend decades studying the ways people in different cultures define themselves (Triandis, 2008). So, what exactly were these two patterns of culture Triandis focused on: individualism and collectivism? Individualists, such as most people born and raised in Australia or the United States, define themselves as individuals. 
They seek personal freedom and prefer to voice their own opinions and make their own decisions. By contrast, collectivists—such as most people born and raised in Korea or in Taiwan—are more likely to emphasize their connectedness to others. They are more likely to sacrifice their personal preferences if those preferences come in conflict with the preferences of the larger group (Triandis, 1995). Both individualism and collectivism can further be divided into vertical and horizontal dimensions (Triandis, 1995). Essentially, these dimensions describe social status among members of a society. People in vertical societies differ in status, with some people being more highly respected or having more privileges, while in horizontal societies people are relatively equal in status and privileges. These dimensions are, of course, simplifications. Neither individualism nor collectivism is the “correct way to live.” Rather, they are two separate patterns with slightly different emphases. People from individualistic societies often have more social freedoms, while collectivistic societies often have better social safety nets. There are yet other ways of thinking about culture, as well. The cultural patterns of individualism and collectivism are linked to an important psychological phenomenon: the way that people understand themselves. Known as self-construal, this is the way people define how they “fit” in relation to others. Individualists are more likely to define themselves in terms of an independent self. This means that people A) see themselves as unique individuals with a stable collection of personal traits, and B) believe that these traits drive their behavior. By contrast, people from collectivist cultures are more likely to identify with the interdependent self. This means that people A) see themselves as defined differently in each new social context, and B) believe that social context, rather than internal traits, is the primary driver of behavior (Markus & Kitayama, 1991). What do the independent and interdependent self look like in daily life? One simple example can be seen in the way that people describe themselves. Imagine you had to complete the sentence starting with “I am…”. And imagine that you had to do this 10 times. People with an independent sense of self are more likely to describe themselves in terms of traits such as “I am honest,” “I am intelligent,” or “I am talkative.” On the other hand, people with a more interdependent sense of self are more likely to describe themselves in terms of their relations to others, such as “I am a sister,” “I am a good friend,” or “I am a leader on my team” (Markus, 1977). The psychological consequences of having an independent or interdependent self can also appear in more surprising ways. Take, for example, the emotion of anger. In Western cultures, where people are more likely to have an independent self, anger arises when people’s personal wants, needs, or values are attacked or frustrated (Markus & Kitayama, 1994). Angry Westerners sometimes complain that they have been “treated unfairly.” Simply put, anger—in the Western sense—is the result of violations of the self. By contrast, people from interdependent self cultures, such as Japan, are likely to experience anger somewhat differently. They are more likely to feel that anger is unpleasant not because of some personal insult but because anger represents a lack of harmony between people. In this instance, anger is particularly unpleasant when it interferes with close relationships. 
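Researchers sometimes quantify open-ended self-descriptions like these by coding each “I am…” statement as trait-based or relational. The short Python sketch below is only a toy illustration of that coding idea; the keyword list and responses are invented for the example, and actual studies rely on trained human coders rather than keyword matching.

```python
# Toy sketch: coding "I am..." statements as independent (trait-based)
# or interdependent (relational). Keyword list and responses are
# invented for illustration; this is not a validated coding scheme.

RELATIONAL_WORDS = {"sister", "brother", "friend", "mother", "father",
                    "daughter", "son", "teammate", "member", "leader"}

def classify(statement: str) -> str:
    """Label one 'I am ...' statement via simple keyword matching."""
    words = set(statement.lower().replace(",", "").split())
    return "interdependent" if words & RELATIONAL_WORDS else "independent"

responses = ["I am honest", "I am a sister", "I am talkative",
             "I am a good friend", "I am intelligent"]

counts = {"independent": 0, "interdependent": 0}
for response in responses:
    counts[classify(response)] += 1

print(counts)  # {'independent': 3, 'interdependent': 2}
```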
Culture is Learned It’s important to understand that culture is learned. People aren’t born using chopsticks or being good at soccer simply because they have a genetic predisposition for it. They learn to excel at these activities because they are born in countries like Argentina, where playing soccer is an important part of daily life, or in countries like Taiwan, where chopsticks are the primary eating utensils. So, how are such cultural behaviors learned? It turns out that cultural skills and knowledge are learned in much the same way a person might learn to do algebra or knit. They are acquired through a combination of explicit teaching and implicit learning—by observing and copying. Cultural teaching can take many forms. It begins with parents and caregivers, because they are the primary influence on young children. Caregivers teach kids, both directly and by example, about how to behave and how the world works. They encourage children to be polite, reminding them, for instance, to say “Thank you.” They teach kids how to dress in a way that is appropriate for the culture. They introduce children to religious beliefs and the rituals that go with them. They even teach children how to think and feel! Adult men, for example, often exhibit a certain set of emotional expressions—such as being tough and not crying—that provides a model of masculinity for their children. This is why we see different ways of expressing the same emotions in different parts of the world. In some societies, it is considered appropriate to conceal anger. Instead of expressing their feelings outright, people purse their lips, furrow their brows, and say little. In other cultures, however, it is appropriate to express anger. In these places, people are more likely to bare their teeth, furrow their brows, point or gesture, and yell (Matsumoto, Yoo, & Chung, 2010). Such patterns of behavior are learned. Often, adults are not even aware that they are, in essence, teaching psychology—because the lessons are happening through observational learning. Let’s consider a single example of a way you behave that is learned, which might surprise you. All people gesture when they speak. We use our hands in fluid or choppy motions—to point things out, or to pantomime actions in stories. Consider how you might throw your hands up and exclaim, “I have no idea!” or how you might motion to a friend that it’s time to go. Even people who are born blind use hand gestures when they speak, so to some degree this is a universal behavior, meaning all people naturally do it. However, social researchers have discovered that culture influences how a person gestures. Italians, for example, live in a society full of gestures. In fact, they use about 250 of them (Poggi, 2002)! Some are easy to understand, such as a hand against the belly, indicating hunger. Others, however, are more difficult. For example, pinching the thumb and index finger together and drawing a line backwards at face level means “perfect,” while knocking a fist on the side of one’s head means “stubborn.” Beyond observational learning, cultures also use rituals to teach people what is important. For example, young people who are interested in becoming Buddhist monks often have to endure rituals that help them shed feelings of specialness or superiority—feelings that run counter to Buddhist doctrine. To do this, they might be required to wash their teacher’s feet, scrub toilets, or perform other menial tasks. 
Similarly, many Jewish adolescents go through the process of bar and bat mitzvah. This is a ceremonial reading from scripture that requires the study of Hebrew and, when completed, signals that the youth is ready for full participation in public worship. Cultural Relativism When social psychologists research culture, they try to avoid making value judgments. This is known as value-free research and is considered an important approach to scientific objectivity. But, while such objectivity is the goal, it is a difficult one to achieve. With this in mind, anthropologists have tried to adopt a sense of empathy for the cultures they study. This has led to cultural relativism, the principle of regarding and valuing the practices of a culture from the point of view of that culture. It is a considerate and practical way to avoid hasty judgments. Take, for example, the practice of same-sex friends in India walking in public while holding hands: this is a common behavior and a sign of connectedness between two people. In England, by contrast, holding hands is largely limited to romantically involved couples, and often suggests a sexual relationship. These are simply two different ways of understanding the meaning of holding hands. Someone who does not take a relativistic view might be tempted to see their own understanding of this behavior as superior and, perhaps, the foreign practice as being immoral. Despite the fact that cultural relativism promotes the appreciation for cultural differences, it can also be problematic. At its most extreme it leaves no room for criticism of other cultures, even if certain cultural practices are horrific or harmful. Many practices have drawn criticism over the years. In Madagascar, for example, the famadihana funeral tradition includes bringing bodies out from tombs once every seven years, wrapping them in cloth, and dancing with them. Some people view this practice as disrespectful to the body of a deceased person. Another example can be seen in the historical Indian practice of sati—the burning to death of widows on their deceased husband’s funeral pyre. This practice was outlawed by the British when they colonized India. Today, a debate rages about the ritual cutting of genitals of children in several Middle Eastern and African cultures. To a lesser extent, this same debate arises around the circumcision of baby boys in Western hospitals. When considering harmful cultural traditions, it can be patronizing to the point of racism to use cultural relativism as an excuse for avoiding debate. To assume that people from other cultures are neither mature enough nor responsible enough to consider criticism from the outside is demeaning. Positive cultural relativism is the belief that the world would be a better place if everyone practiced some form of intercultural empathy and respect. This approach offers a potentially important contribution to theories of cultural progress: to better understand human behavior, people should avoid adopting extreme views that block discussions about the basic morality or usefulness of cultural practices. Conclusion We live in a unique moment in history. We are experiencing the rise of a global culture in which people are connected and able to exchange ideas and information better than ever before. International travel and business are on the rise. Instantaneous communication and social media are creating networks of contacts who would never otherwise have had a chance to connect. 
Education is expanding, music and films cross national borders, and state-of-the-art technology affects us all. In this world, an understanding of what culture is and how it happens can set the foundation for acceptance of differences and respectful disagreements. The science of social psychology—along with the other culture-focused sciences, such as anthropology and sociology—can help produce insights into cultural processes. These insights, in turn, can be used to increase the quality of intercultural dialogue, to preserve cultural traditions, and to promote self-awareness. Outside Resources Articles: International Association of Cross-Cultural Psychology (IACCP) [Wolfgang Friedlmeier, ed] Online Readings in Psychology and Culture (ORPC) http://scholarworks.gvsu.edu/orpc/ Database: Human Relations Area Files (HRAF) ‘World Cultures’ database http://hraf.yale.edu/ Organization: Plous, Scott, et al, Social Psychology Network, Cultural Psychology Links by Subtopic https://www.socialpsychology.org/cultural.htm Study: Hofstede, Geert et al, The Hofstede Center: Strategy, Culture, Change geert-hofstede.com/national-culture.html Discussion Questions 1. How do you think the culture you live in is similar to or different from the culture your parents were raised in? 2. What are the risks of associating “culture” mainly with differences between large populations such as entire nations? 3. If you were a social psychologist, what steps would you take to guard against ethnocentricity in your research? 4. Name one value that is important to you. How did you learn that value? 5. In your opinion, has the internet increased or reduced global cultural diversity? 6. Imagine a social psychologist who researches the culture of extremely poor people, such as so-called “rag pickers,” who sort through trash for food or for items to sell. What ethical challenges can you identify in this type of study? Vocabulary Collectivism The cultural trend in which the primary unit of measurement is the group. Collectivists are likely to emphasize duty and obligation over personal aspirations. Cross-cultural psychology (or cross-cultural studies) An approach to researching culture that emphasizes the use of standard scales as a means of making meaningful comparisons across groups. Cross-cultural studies (or cross-cultural psychology) An approach to researching culture that emphasizes the use of standard scales as a means of making meaningful comparisons across groups. Cultural differences An approach to understanding culture primarily by paying attention to unique and distinctive features that set it apart from other cultures. Cultural intelligence The ability and willingness to apply cultural awareness to practical uses. Cultural psychology An approach to researching culture that emphasizes the use of interviews and observation as a means of understanding culture from its own point of view. Cultural relativism The principled objection to passing overly culture-bound (i.e., “ethnocentric”) judgments on aspects of other cultures. Cultural script Learned guides for how to behave appropriately in a given social situation. These reflect cultural norms and widely accepted values. Cultural similarities An approach to understanding culture primarily by paying attention to common features that are the same as or similar to those of other cultures. Culture A pattern of shared meaning and behavior among a group of people that is passed from one generation to the next. 
Enculturation The uniquely human form of learning that is taught by one generation to another. Ethnocentric bias (or ethnocentrism) Being unduly guided by the beliefs of the culture you’ve grown up in, especially when this results in a misunderstanding or disparagement of unfamiliar cultures. Ethnographic studies Research that emphasizes field data collection and that examines questions that attempt to understand culture from its own context and point of view. Independent self The tendency to define the self in terms of stable traits that guide behavior. Individualism The cultural trend in which the primary unit of measurement is the individual. Individualists are likely to emphasize uniqueness and personal aspirations over social duty. Interdependent self The tendency to define the self in terms of social contexts that guide behavior. Observational learning Learning by observing the behavior of others. Open-ended questions Research questions that ask participants to answer in their own words. Ritual Rites or actions performed in a systematic or prescribed way, often for an intended purpose. Example: The exchange of wedding rings during a marriage ceremony in many cultures. Self-construal The extent to which the self is defined as independent or as relating to others. Situational identity Being guided by different cultural influences in different situations, such as home versus workplace, or formal versus informal roles. Standard scale Research method in which all participants use a common scale—typically a Likert scale—to respond to questions. Value judgment An assessment—based on one’s own preferences and priorities—about the basic “goodness” or “badness” of a concept or practice. Value-free research Research that is not influenced by the researchers’ own values, morality, or opinions.
By Rajiv Jhangiani Kwantlen Polytechnic University Social psychologists are interested in the ways that other people affect thought, emotion, and behavior. To explore these concepts requires special research methods. Following a brief overview of traditional research designs, this module introduces how complex experimental designs, field experiments, naturalistic observation, experience sampling techniques, survey research, subtle and nonconscious techniques such as priming, and archival research and the use of big data may each be adapted to address social psychological questions. This module also discusses the importance of obtaining a representative sample along with some ethical considerations that social psychologists face. learning objectives • Describe the key features of basic and complex experimental designs. • Describe the key features of field experiments, naturalistic observation, and experience sampling techniques. • Describe survey research and explain the importance of obtaining a representative sample. • Describe the implicit association test and the use of priming. • Describe the use of archival research techniques. • Explain five principles of ethical research that most concern social psychologists. Introduction Are you passionate about cycling? Norman Triplett certainly was. At the turn of the last century, he studied the lap times of cycling races and noticed a striking fact: riding in competitive races appeared to improve riders’ times by about 20-30 seconds every mile compared to when they rode the same courses alone. Triplett suspected that the riders’ enhanced performance could not be explained simply by the slipstream caused by other cyclists blocking the wind. To test his hunch, he designed what is widely described as the first experimental study in social psychology (published in 1898!)—in this case, having children reel in a length of fishing line as fast as they could. The children were tested alone, then again when paired with another child. The results? The children who performed the task in the presence of others out-reeled those who did so alone. Although Triplett’s research fell short of contemporary standards of scientific rigor (e.g., he eyeballed the data instead of measuring performance precisely; Stroebe, 2012), we now know that this effect, referred to as “social facilitation,” is reliable—performance on simple or well-rehearsed tasks tends to be enhanced when we are in the presence of others (even when we are not competing against them). To put it another way, the next time you think about showing off your pool-playing skills on a date, the odds are you’ll play better than when you practice by yourself. (If you haven’t practiced, maybe you should watch a movie instead!) Research Methods in Social Psychology One of the things Triplett’s early experiment illustrated is scientists’ reliance on systematic observation over opinion, or anecdotal evidence. The scientific method usually begins with observing the world around us (e.g., results of cycling competitions) and thinking of an interesting question (e.g., Why do cyclists perform better in groups?). The next step involves generating a specific testable prediction, or hypothesis (e.g., performance on simple tasks is enhanced in the presence of others). Next, scientists must operationalize the variables they are studying. This means they must figure out a way to define and measure abstract concepts. 
For example, the phrase “perform better” could mean different things in different situations; in Triplett’s experiment it referred to the amount of time (measured with a stopwatch) it took to wind a fishing reel. Similarly, “in the presence of others” in this case was operationalized as another child winding a fishing reel at the same time in the same room. Creating specific operational definitions like this allows scientists to precisely manipulate the independent variable, or “cause” (the presence of others), and to measure the dependent variable, or “effect” (performance)—in other words, to collect data. Clearly described operational definitions also help reveal possible limitations to studies (e.g., Triplett’s study did not investigate the impact of another child in the room who was not also winding a fishing reel) and help later researchers replicate them precisely. Laboratory Research As you can see, social psychologists have always relied on carefully designed laboratory environments to run experiments where they can closely control situations and manipulate variables (see the NOBA module on Research Designs for an overview of traditional methods). However, in the decades since Triplett discovered social facilitation, a wide range of methods and techniques have been devised, uniquely suited to demystifying the mechanics of how we relate to and influence one another. This module provides an introduction to the use of complex laboratory experiments, field experiments, naturalistic observation, survey research, nonconscious techniques, and archival research, as well as more recent methods that harness the power of technology and large data sets, to study the broad range of topics that fall within the domain of social psychology. At the end of this module we will also consider some of the key ethical principles that govern research in this diverse field. The use of complex experimental designs, with multiple independent and/or dependent variables, has grown increasingly popular because such designs permit researchers to study both the individual and joint effects of several factors on a range of related situations. Moreover, thanks to technological advancements and the growth of social neuroscience, an increasing number of researchers now integrate biological markers (e.g., hormones) or use neuroimaging techniques (e.g., fMRI) in their research designs to better understand the biological mechanisms that underlie social processes. We can dissect the fascinating research of Dov Cohen and his colleagues (1996) on “culture of honor” to provide insights into complex lab studies. A culture of honor is one that emphasizes personal or family reputation. In a series of lab studies, the Cohen research team invited dozens of university students into the lab to see how they responded to aggression. Half were from the Southern United States (a culture of honor) and half were from the Northern United States (not a culture of honor; this type of setup constitutes a participant variable of two levels). Region of origin was independent variable #1. Participants also provided a saliva sample immediately upon arriving at the lab (they were given a cover story about how their blood sugar levels would be monitored over a series of tasks). The participants completed a brief questionnaire and were then sent down a narrow corridor to drop it off on a table. En route, they encountered a confederate at an open file cabinet who pushed the drawer in to let them pass. 
When the participant returned a few seconds later, the confederate, who had re-opened the file drawer, slammed it shut and bumped into the participant with his shoulder, muttering “asshole” before walking away. In a manipulation of an independent variable—in this case, the insult—some of the participants were insulted publicly (in view of two other confederates pretending to be doing homework) while others were insulted privately (no one else was around). In a third condition—the control group—participants experienced a modified procedure in which they were not insulted at all. Although this is a fairly elaborate procedure on its face, what is particularly impressive is the number of dependent variables the researchers were able to measure. First, in the public insult condition, the two additional confederates (who observed the interaction, pretending to do homework) rated the participants’ emotional reaction (e.g., anger, amusement, etc.) to being bumped into and insulted. Second, upon returning to the lab, participants in all three conditions were told they would later undergo electric shocks as part of a stress test, and were asked how much of a shock they would be willing to receive (between 10 volts and 250 volts). This decision was made in front of two confederates who had already chosen shock levels of 75 and 25 volts, presumably providing an opportunity for participants to publicly demonstrate their toughness. Third, across all conditions, the participants rated the likelihood of a variety of ambiguously provocative scenarios (e.g., one driver cutting another driver off) escalating into a fight or verbal argument. And fourth, in one of the studies, participants provided saliva samples, one right after returning to the lab, and a final one after completing the questionnaire with the ambiguous scenarios. Later, all three saliva samples were tested for levels of cortisol (a hormone associated with stress) and testosterone (a hormone associated with aggression). The results showed that people from the Northern United States were far more likely to laugh off the incident (only 35% having anger ratings as high as or higher than amusement ratings), whereas the opposite was true for people from the South (85% of whom had anger ratings as high as or higher than amusement ratings). Also, only those from the South experienced significant increases in cortisol and testosterone following the insult (with no difference between the public and private insult conditions). Finally, no regional differences emerged in the interpretation of the ambiguous scenarios; however, the participants from the South were more likely to choose to receive a greater shock in the presence of the two confederates. Field Research Because social psychology is primarily focused on the social context—groups, families, cultures—researchers commonly leave the laboratory to collect data on life as it is actually lived. To do so, they use a variation of the laboratory experiment, called a field experiment. A field experiment is similar to a lab experiment except it uses real-world situations, such as people shopping at a grocery store. One of the major differences between field experiments and laboratory experiments is that the people in field experiments do not know they are participating in research, so—in theory—they will act more naturally. In a classic example from 1972, Alice Isen and Paula Levin wanted to explore the ways emotions affect helping behavior. 
To investigate this, they observed the behavior of people at pay phones (I know! Pay phones!). Half of the unsuspecting participants (determined by random assignment) found a dime planted by researchers (I know! A dime!) in the coin slot, while the other half did not. Presumably, finding a dime felt surprising and lucky and gave people a small jolt of happiness. Immediately after the unsuspecting participant left the phone booth, a confederate walked by and dropped a stack of papers. Almost 100% of those who found a dime helped to pick up the papers. And what about those who didn’t find a dime? Only 1 out of 25 of them bothered to help. In cases where it’s not practical or ethical to randomly assign participants to different experimental conditions, we can use naturalistic observation—unobtrusively watching people as they go about their lives. Consider, for example, a classic demonstration of the “basking in reflected glory” phenomenon: Robert Cialdini and his colleagues used naturalistic observation at seven universities to confirm that students are significantly more likely to wear clothing bearing the school name or logo on days following wins (vs. draws or losses) by the school’s varsity football team (Cialdini et al., 1976). In another study, by Jenny Radesky and her colleagues (2014), 40 out of 55 observations of caregivers eating at fast-food restaurants with children involved a caregiver using a mobile device. The researchers also noted that caregivers who were most absorbed in their device tended at first to ignore the children’s behavior and then to respond by scolding, issuing repeated instructions, or using physical responses, such as kicking the children’s feet or pushing away their hands. A group of techniques collectively referred to as experience sampling methods represents yet another way of conducting naturalistic observation, often by harnessing the power of technology. In some cases, participants are notified several times during the day by a pager, wristwatch, or a smartphone app to record data (e.g., by responding to a brief survey or scale on their smartphone, or in a diary). For example, in a study by Reed Larson and his colleagues (1994), mothers and fathers carried pagers for one week and reported their emotional states when beeped at random times during their daily activities at work or at home. The results showed that mothers reported experiencing more positive emotional states when away from home (including at work), whereas fathers showed the reverse pattern. A more recently developed technique, known as the electronically activated recorder, or EAR, does not even require participants to stop what they are doing to record their thoughts or feelings; instead, a small portable audio recorder or smartphone app is used to automatically record brief snippets of participants’ conversations throughout the day for later coding and analysis. For a more in-depth description of the EAR technique and other experience-sampling methods, see the NOBA module on Conducting Psychology Research in the Real World. Survey Research In this diverse world, survey research offers an invaluable tool for social psychologists to study individual and group differences in people’s feelings, attitudes, or behaviors. For example, the World Values Survey II was based on large representative samples from 19 countries and allowed researchers to determine that the relationship between income and subjective well-being was stronger in poorer countries (Diener & Oishi, 2000). 
In other words, an increase in income has a much larger impact on your life satisfaction if you live in Nigeria than if you live in Canada. In another example, a nationally representative survey in Germany with 16,000 respondents revealed that holding cynical beliefs is related to lower income (e.g., between 2003 and 2012, the income of the least cynical individuals increased by \$300 per month, whereas the income of the most cynical individuals did not increase at all). Furthermore, survey data collected from 41 countries revealed that this negative correlation between cynicism and income is especially strong in countries where people in general engage in more altruistic behavior and tend not to be very cynical (Stavrova & Ehlebracht, 2016). Of course, obtaining large, cross-cultural, and representative samples has become far easier since the advent of the internet and the proliferation of web-based survey platforms—such as Qualtrics—and participant recruitment platforms—such as Amazon’s Mechanical Turk. And although some researchers harbor doubts about the representativeness of online samples, studies have shown that internet samples are in many ways more diverse and representative than samples recruited from human subject pools (e.g., with respect to gender; Gosling et al., 2004). Online samples also compare favorably with traditional samples on attentiveness while completing the survey, reliability of data, and proportion of non-respondents (Paolacci et al., 2010).

Subtle/Nonconscious Research Methods

The methods we have considered thus far—field experiments, naturalistic observation, and surveys—work well when the thoughts, feelings, or behaviors being investigated are conscious and directly or indirectly observable. However, social psychologists often wish to measure or manipulate elements that are involuntary or nonconscious, such as when studying prejudicial attitudes people may be unaware of or embarrassed by. A good example of a technique that was developed to measure people’s nonconscious (and often ugly) attitudes is known as the implicit association test (IAT; Greenwald et al., 1998). This computer-based task requires participants to sort a series of stimuli (as rapidly and accurately as possible) into simple and combined categories while their reaction time is measured (in milliseconds). For example, an IAT might begin with participants sorting the names of relatives (such as “Niece” or “Grandfather”) into the categories “Male” and “Female,” followed by a round of sorting the names of disciplines (such as “Chemistry” or “English”) into the categories “Arts” and “Science.” A third round might combine the earlier two by requiring participants to sort stimuli into either “Male or Science” or “Female or Arts” before the fourth round switches the combinations to “Female or Science” and “Male or Arts.” If, across all of the trials, a person is quicker at accurately sorting incoming stimuli into the compound category “Male or Science” than into “Female or Science,” the authors of the IAT suggest that the participant likely has a stronger association between males and science than between females and science. Incredibly, this specific gender-science IAT has been completed by more than half a million participants across 34 countries, about 70% of whom show an implicit stereotype associating science with males more than with females (Nosek et al., 2009).
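To make the logic of that reaction-time comparison concrete, here is a minimal sketch of how an association score might be computed, assuming hypothetical reaction-time data. The simple mean-difference scoring shown here is an illustration only; the published IAT uses many more trials and a more elaborate scoring algorithm.

    # Minimal sketch of IAT-style scoring (all data are hypothetical).
    # Reaction times (in milliseconds) for correct sorts in the two
    # combined blocks of a gender-science IAT.
    rt_male_science = [615, 580, 642, 598, 630]    # "Male or Science" block
    rt_female_science = [710, 695, 752, 688, 724]  # "Female or Science" block

    def mean(xs):
        return sum(xs) / len(xs)

    # A positive difference means the participant was slower when "Female"
    # and "Science" shared a response key, suggesting a stronger
    # male-science association.
    association_score = mean(rt_female_science) - mean(rt_male_science)
    print(f"Mean difference: {association_score:.0f} ms")

In published work, error trials and individual variability are also handled (the widely used D-score divides by a person's overall response-time variability), but the core logic is this difference in speed.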
What’s more, when the data are grouped by country, national differences in implicit stereotypes predict national differences in the achievement gap between boys and girls in science and math. Our automatic associations, apparently, carry serious societal consequences. Another nonconscious technique, known as priming, is often used to subtly manipulate behavior by activating or making more accessible certain concepts or beliefs. Consider the fascinating example of terror management theory (TMT), whose authors believe that human beings are (unconsciously) terrified of their mortality (i.e., the fact that, some day, we will all die; Pyszczynski et al., 2003). According to TMT, in order to cope with this unpleasant reality (and the possibility that our lives are ultimately essentially meaningless), we cling firmly to systems of cultural and religious beliefs that give our lives meaning and purpose. If this hypothesis is correct, one straightforward prediction would be that people should cling even more firmly to their cultural beliefs when they are subtly reminded of their own mortality. In one of the earliest tests of this hypothesis, actual municipal court judges in Arizona were asked to set a bond for an alleged prostitute immediately after completing a brief questionnaire. For half of the judges, the questionnaire ended with questions about their thoughts and feelings regarding the prospect of their own death. Incredibly, judges in the experimental group who were primed with thoughts about their mortality set a significantly higher bond than those in the control group (\$455 vs. \$50!)—presumably because they were especially motivated to defend their belief system in the face of a violation of the law (Rosenblatt et al., 1989). Although the judges consciously completed the survey, what makes this a study of priming is that the second task (setting the bond) was unrelated, so any influence of the survey on their later judgments would have been nonconscious. Similar results have been found in TMT studies in which participants were primed to think about death even more subtly, such as by having them complete questionnaires just before or after they passed a funeral home (Pyszczynski et al., 1996). To verify that the subtle manipulation (e.g., questions about one’s death) has the intended effect (activating death-related thoughts), priming studies like these often include a manipulation check following the introduction of a prime. For example, right after being primed, participants in a TMT study might be given a word fragment task in which they have to complete words such as COFF_ _ or SK _ _ L. As you might imagine, participants in the mortality-primed experimental group typically complete these fragments as COFFIN and SKULL, whereas participants in the control group complete them as COFFEE and SKILL. The use of priming to unwittingly influence behavior, known as social or behavioral priming (Ferguson & Mann, 2014), has been at the center of the recent “replication crisis” in psychology (see the NOBA module on replication).
Whereas earlier studies showed, for example, that priming people to think about old age makes them walk more slowly (Bargh, Chen, & Burrows, 1996), that priming them to think about a university professor boosts performance on a trivia game (Dijksterhuis & van Knippenberg, 1998), and that reminding them of mating motives (e.g., sex) makes them more willing to engage in risky behavior (Greitemeyer, Kastenmüller, & Fischer, 2013), several recent efforts to replicate these findings have failed (e.g., Harris et al., 2013; Shanks et al., 2013). Such failures to replicate highlight the need to ensure that both the original studies and the replications are carefully designed and have adequate sample sizes, and that researchers pre-register their hypotheses and openly share their results—whether these support the initial hypothesis or not.

Archival Research

Imagine that a researcher wants to investigate how the presence of passengers in a car affects drivers’ performance. She could ask research participants to respond to questions about their own driving habits. Alternatively, she might be able to access police records of the number of speeding tickets issued by automatic camera devices, then count the number of solo drivers versus those with passengers. This would be an example of archival research. The examination of archives, statistics, and other records, such as speeches, letters, or even tweets, provides yet another window into social psychology. Although this method is typically used as a type of correlational research design—due to the lack of control over the relevant variables—archival research shares the higher ecological validity of naturalistic observation. That is, the observations are conducted outside the laboratory and represent real-world behaviors. Moreover, because the archives being examined can be collected at any time and from many sources, this technique is especially flexible and often involves less expenditure of time and other resources during data collection. Social psychologists have used archival research to test a wide variety of hypotheses using real-world data. For example, analyses of major league baseball games played during the 1986, 1987, and 1988 seasons showed that baseball pitchers were more likely to hit batters with a pitch on hot days (Reifman et al., 1991). Another study compared records of race-based lynching in the United States between 1882 and 1930 to the inflation-adjusted price of cotton during that time (a key indicator of the Deep South’s economic health), demonstrating a significant negative correlation between these variables. Simply put, there were significantly more lynchings when the price of cotton dropped, and fewer lynchings when the price of cotton rose (Beck & Tolnay, 1990; Hovland & Sears, 1940). This suggests that race-based violence is associated with the health of the economy. More recently, analyses of social media posts have provided social psychologists with extremely large sets of data (“big data”) to test creative hypotheses. In an example of research on attitudes about vaccinations, Mitra and her colleagues (2016) collected over 3 million tweets sent by more than 32,000 users over four years. Interestingly, they found that those who held (and tweeted) anti-vaccination attitudes were also more likely to tweet about their mistrust of government and beliefs in government conspiracies.
Similarly, Eichstaedt and his colleagues (2015) used the language of 826 million tweets to predict community-level mortality rates from heart disease. That’s right: more anger-related words and fewer positive-emotion words in tweets predicted higher rates of heart disease. In a more controversial example, researchers at Facebook attempted to test whether emotional contagion—the transfer of emotional states from one person to another—would occur if Facebook manipulated the content that showed up in its users’ News Feed (Kramer et al., 2014). And it did. When friends’ posts with positive expressions were concealed, users wrote slightly fewer positive posts (e.g., “Loving my new phone!”). Conversely, when posts with negative expressions were hidden, users wrote slightly fewer negative posts (e.g., “Got to go to work. Ugh.”). This suggests that people’s positivity or negativity can impact their social circles. The controversial part of this study—which included 689,003 Facebook users and involved the analysis of over 3 million posts made over just one week—was the fact that Facebook did not explicitly request permission from users to participate. Instead, Facebook relied on the fine print in their data-use policy. And, although academic researchers who collaborated with Facebook on this study applied for ethical approval from their institutional review board (IRB), they apparently only did so after data collection was complete, raising further questions about the ethicality of the study and highlighting concerns about the ability of large, profit-driven corporations to subtly manipulate people’s social lives and choices.

Research Issues in Social Psychology

The Question of Representativeness

Along with our counterparts in the other areas of psychology, social psychologists have been guilty of largely recruiting samples of convenience from the thin slice of humanity—students—found at universities and colleges (Sears, 1986). This presents a problem when trying to assess the social mechanics of the public at large. Aside from being an overrepresentation of young, middle-class Caucasians, college students may also be more compliant and more susceptible to attitude change, have less stable personality traits and interpersonal relationships, and possess stronger cognitive skills than samples reflecting a wider range of age and experience (Peterson & Merunka, 2014; Visser, Krosnick, & Lavrakas, 2000). Put simply, these traditional samples (college students) may not be sufficiently representative of the broader population. Furthermore, considering that 96% of participants in psychology studies come from western, educated, industrialized, rich, and democratic countries (so-called WEIRD cultures; Henrich, Heine, & Norenzayan, 2010), and that the majority of these are also psychology students, the question of non-representativeness becomes even more serious. Of course, when studying a basic cognitive process (like working memory capacity) or an aspect of social behavior that appears to be fairly universal (e.g., even cockroaches exhibit social facilitation!), a non-representative sample may not be a big deal. However, over time research has repeatedly demonstrated the important role that individual differences (e.g., personality traits, cognitive abilities, etc.) and culture (e.g., individualism vs. collectivism) play in shaping social behavior.
For instance, even if we only consider a tiny sample of research on aggression, we know that narcissists are more likely to respond to criticism with aggression (Bushman & Baumeister, 1998); conservatives, who have a low tolerance for uncertainty, are more likely to prefer aggressive actions against those considered to be “outsiders” (de Zavala et al., 2010); countries where men hold the bulk of power in society have higher rates of physical aggression directed against female partners (Archer, 2006); and males from the southern part of the United States are more likely to react with aggression following an insult (Cohen et al., 1996).

Ethics in Social Psychological Research

For better or worse (but probably for worse), when we think about the most unethical studies in psychology, we think about social psychology. Imagine, for example, encouraging people to deliver what they believe to be a dangerous electric shock to a stranger (with bloodcurdling screams for added effect!). This is considered a “classic” study in social psychology. Or, how about having students play the role of prison guards, deliberately and sadistically abusing other students in the role of prison inmates? Yep, social psychology too. Of course, both Stanley Milgram’s (1963) experiments on obedience to authority and the Stanford prison study (Haney et al., 1973) would be considered unethical by today’s standards, which have progressed with our understanding of the field. Today, we follow a series of guidelines and receive prior approval from our institutional review boards before beginning such experiments. Among the most important principles are the following:

1. Informed consent: In general, people should know when they are involved in research, and understand what will happen to them during the study (at least in general terms that do not give away the hypothesis). They are then given the choice to participate, along with the freedom to withdraw from the study at any time. This is precisely why the Facebook emotional contagion study discussed earlier is considered ethically questionable. Still, it’s important to note that certain kinds of methods—such as naturalistic observation in public spaces, or archival research based on public records—do not require obtaining informed consent.

2. Privacy: Although it is permissible to observe people’s actions in public—even without them knowing—researchers cannot violate their privacy by observing them in restrooms or other private spaces without their knowledge and consent. Researchers also may not identify individual participants in their research reports (we typically report only group means and other statistics). With online data collection becoming increasingly popular, researchers also have to be mindful that they follow local data privacy laws, collect only the data that they really need (e.g., avoiding unnecessary questions in surveys), strictly restrict access to the raw data, and have a plan in place to securely destroy the data after it is no longer needed.

3. Risks and Benefits: People who participate in psychological studies should be exposed to risk only if they fully understand the risks and only if the likely benefits clearly outweigh those risks. The Stanford prison study is a notorious example of a failure to meet this obligation.
It was planned to run for two weeks but had to be shut down after only six days because of the abuse suffered by the “prison inmates.” But even less extreme cases, such as researchers wishing to investigate implicit prejudice using the IAT, need to consider the consequences of providing feedback to participants about their nonconscious biases. Similarly, any manipulations that could potentially provoke serious emotional reactions (e.g., the culture of honor study described above) or relatively permanent changes in people’s beliefs or behaviors (e.g., attitudes towards recycling) need to be carefully reviewed by the IRB.

4. Deception: Social psychologists sometimes need to deceive participants (e.g., using a cover story) to avoid demand characteristics by hiding the true nature of the study. This is typically done to prevent participants from modifying their behavior in unnatural ways, especially in laboratory or field experiments. For example, when Milgram recruited participants for his experiments on obedience to authority, he described it as being a study of the effects of punishment on memory! Deception is typically only permitted (a) when the benefits of the study outweigh the risks, (b) when participants are not reasonably expected to be harmed, (c) when the research question cannot be answered without the use of deception, and (d) when participants are informed about the deception as soon as possible, usually through debriefing.

5. Debriefing: This is the process of informing research participants as soon as possible of the purpose of the study, revealing any deceptions, and correcting any misconceptions they might have as a result of participating. Debriefing also involves minimizing harm that might have occurred. For example, an experiment examining the effects of sad moods on charitable behavior might involve inducing a sad mood in participants by having them think sad thoughts, watch a sad video, or listen to sad music. Debriefing would therefore be the time to return participants’ moods to normal by having them think happy thoughts, watch a happy video, or listen to happy music.

Conclusion

As an immensely social species, we affect and influence each other in many ways, particularly through our interactions and cultural expectations, both conscious and nonconscious. The study of social psychology examines much of the business of our everyday lives, including our thoughts, feelings, and behaviors we are unaware of or ashamed of. The desire to carefully and precisely study these topics, together with advances in technology, has led to the development of many creative techniques that allow researchers to explore the mechanics of how we relate to one another. Consider this your invitation to join the investigation.

Outside Resources

Article: Do research ethics need updating for the digital age? Questions raised by the Facebook emotional contagion study. http://www.apa.org/monitor/2014/10/r...ch-ethics.aspx
Article: Psychology is WEIRD. A commentary on non-representative samples in Psychology. http://www.slate.com/articles/health...n_college.html
Web: Linguistic Inquiry and Word Count. Paste in text from a speech, article, or other archive to analyze its linguistic structure. www.liwc.net/tryonline.php
Web: Project Implicit. Take a demonstration implicit association test. https://implicit.harvard.edu/implicit/
Web: Research Randomizer. An interactive tool for random sampling and random assignment. https://www.randomizer.org/

Discussion Questions
1. What are some pros and cons of experimental research, field research, and archival research?
2. How would you feel if you learned that you had been a participant in a naturalistic observation study (without explicitly providing your consent)? How would you feel if you learned during a debriefing procedure that you have a stronger association between the concept of violence and members of visible minorities? Can you think of other examples of when following principles of ethical research creates challenging situations?
3. Can you think of an attitude (other than those related to prejudice) that would be difficult or impossible to measure by asking people directly?
4. What do you think is the difference between a manipulation check and a dependent variable?

Vocabulary

Anecdotal evidence: An argument that is based on personal experience and not considered reliable or representative.
Archival research: A type of research in which the researcher analyzes records or archives instead of collecting data from live human participants.
Basking in reflected glory: The tendency for people to associate themselves with successful people or groups.
Big data: The analysis of large data sets.
Complex experimental designs: Experiments with two or more independent variables.
Confederate: An actor working with the researcher. Most often, this individual is used to deceive unsuspecting research participants. Also known as a “stooge.”
Correlational research: A type of descriptive research that involves measuring the association between two variables, or how they go together.
Cover story: A fake description of the purpose and/or procedure of a study, used when deception is necessary in order to answer a research question.
Demand characteristics: Subtle cues that make participants aware of what the experimenter expects to find or how participants are expected to behave.
Dependent variable: The variable the researcher measures but does not manipulate in an experiment.
Ecological validity: The degree to which a study finding has been obtained under conditions that are typical for what happens in everyday life.
Electronically activated recorder (EAR): A methodology where participants wear a small, portable audio recorder that intermittently records snippets of ambient sounds around them.
Experience sampling methods: Systematic ways of having participants provide samples of their ongoing behavior. Participants’ reports are dependent (contingent) upon either a signal, pre-established intervals, or the occurrence of some event.
Field experiment: An experiment that occurs outside of the lab and in a real-world situation.
Hypothesis: A logical idea that can be tested.
Implicit association test (IAT): A computer-based categorization task that measures the strength of association between specific concepts over several trials.
Independent variable: The variable the researcher manipulates and controls in an experiment.
Laboratory environments: Settings in which the researcher can carefully control situations and manipulate variables.
Manipulation check: A measure used to determine whether or not the manipulation of the independent variable has had its intended effect on the participants.
Naturalistic observation: Unobtrusively watching people as they go about the business of living their lives.
Operationalize: How researchers specifically measure a concept.
Participant variable: The individual characteristics of research subjects - age, personality, health, intelligence, etc.
Priming: The process by which exposing people to one stimulus makes certain thoughts, feelings or behaviors more salient.
Random assignment: Assigning participants to receive different conditions of an experiment by chance.
Samples of convenience: Participants that have been recruited in a manner that prioritizes convenience over representativeness.
Scientific method: A method of investigation that includes systematic observation, measurement, and experiment, and the formulation, testing, and modification of hypotheses.
Social facilitation: When performance on simple or well-rehearsed tasks is enhanced when we are in the presence of others.
Social neuroscience: An interdisciplinary field concerned with identifying the neural processes underlying social behavior and cognition.
Social or behavioral priming: A field of research that investigates how the activation of one social concept in memory can elicit changes in behavior, physiology, or self-reports of a related social concept without conscious awareness.
Survey research: A method of research that involves administering a questionnaire to respondents in person, by telephone, through the mail, or over the internet.
Terror management theory (TMT): A theory that proposes that humans manage the anxiety that stems from the inevitability of death by embracing frameworks of meaning such as cultural values and beliefs.
WEIRD cultures: Cultures that are western, educated, industrialized, rich, and democratic.
By Yanine D. Hess and Cynthia L. Pickett, University of California, Davis

Social cognition is the area of social psychology that examines how people perceive and think about their social world. This module provides an overview of key topics within social cognition and attitudes, including judgmental heuristics, social prediction, affective and motivational influences on judgment, and explicit and implicit attitudes.

learning objectives

• Learn how we simplify the vast array of information in the world in a way that allows us to make decisions and navigate our environments efficiently.
• Understand some of the social factors that influence how we reason.
• Determine if our reasoning processes are always conscious, and if not, what some of the effects of automatic/nonconscious cognition are.
• Understand the difference between explicit and implicit attitudes, and the implications they have for behavior.

Introduction

Imagine you are walking toward your classroom and you see your teacher and a fellow student you know to be disruptive in class whispering together in the hallway. As you approach, both of them quit talking, nod to you, and then resume their urgent whispers after you pass by. What would you make of this scene? What story might you tell yourself to help explain this interesting and unusual behavior? People know intuitively that we can better understand others’ behavior if we know the thoughts contributing to the behavior. In this example, you might guess that your teacher harbors several concerns about the disruptive student, and therefore you believe their whispering is related to this. The area of social psychology that focuses on how people think about others and about the social world is called social cognition. Researchers of social cognition study how people make sense of themselves and others to make judgments, form attitudes, and make predictions about the future. Much of the research in social cognition has demonstrated that humans are adept at distilling large amounts of information into smaller, more usable chunks, and that we possess many cognitive tools that allow us to efficiently navigate our environments. This research has also illuminated many social factors that can influence these judgments and predictions. Not only can our past experiences, expectations, motivations, and moods impact our reasoning, but many of our decisions and behaviors are driven by unconscious processes and implicit attitudes we are unaware of having. The goal of this module is to highlight the mental tools we use to navigate and make sense of our complex social world, and to describe some of the emotional, motivational, and cognitive factors that affect our reasoning.

Simplifying Our Social World

Consider how much information you come across on any given day; just looking around your bedroom, there are hundreds of objects, smells, and sounds. How do we simplify all this information to attend to what is important and make decisions quickly and efficiently? In part, we do it by forming schemas of the various people, objects, situations, and events we encounter. A schema is a mental model, or representation, of any of the various things we come across in our daily lives. A schema (related to the word schematic) is kind of like a mental blueprint for how we expect something to be or behave. It is an organized body of general information or beliefs we develop from direct encounters, as well as from secondhand sources.
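Because a schema works like a set of default assumptions that get filled in unless experience says otherwise, it can help to picture it the way a programmer might. The sketch below is purely illustrative; the attributes and values are assumptions chosen for demonstration, not a model psychologists actually use:

    # Illustrative sketch: a "dog" schema as a set of default expectations.
    # When we meet a new dog, unobserved attributes are filled in from the
    # schema instead of being learned from scratch.
    dog_schema = {"barks": True, "likes_fetch": True, "enjoys_treats": True}

    def form_beliefs(observed, schema):
        """Combine direct observations with schema-based defaults."""
        beliefs = dict(schema)    # start from the defaults
        beliefs.update(observed)  # direct experience overrides them
        return beliefs

    # We have only seen this particular dog bark; the rest is assumed.
    print(form_beliefs({"barks": True}, dog_schema))

The design point is that defaults are cheap: only what we directly observe needs updating, while everything else comes for free.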
Rather than spending copious amounts of time learning about each new individual object (e.g., each new dog we see), we rely on our schemas to tell us that a newly encountered dog probably barks, likes to fetch, and enjoys treats. In this way, our schemas greatly reduce the amount of cognitive work we need to do and allow us to “go beyond the information given” (Bruner, 1957). We can hold schemas about almost anything—individual people (person schemas), ourselves (self-schemas), and recurring events (event schemas, or scripts). Each of these types of schemas is useful in its own way. For example, event schemas allow us to navigate new situations efficiently and seamlessly. A script for dining at a restaurant would indicate that one should wait to be seated by the host or hostess, that food should be ordered from a menu, and that one is expected to pay the check at the end of the meal. Because the majority of dining situations conform to this general format, most diners just need to follow their mental scripts to know what to expect and how they should behave, greatly reducing their cognitive workload. Another important way we simplify our social world is by employing heuristics, which are mental shortcuts that reduce complex problem-solving to more simple, rule-based decisions. For example, have you ever had a hard time trying to decide on a book to buy, then you see one ranked highly on a book review website? Although selecting a book to purchase can be a complicated decision, you might rely on the “rule of thumb” that a recommendation from a credible source is likely a safe bet—so you buy it. A common instance of using heuristics is when people are faced with judging whether an object belongs to a particular category. For example, you would easily classify a pit bull into the category of “dog.” But what about a coyote? Or a fox? A plastic toy dog? In order to make this classification (and many others), people may rely on the representativeness heuristic to arrive at a quick decision (Kahneman & Tversky, 1972, 1973). Rather than engaging in an in-depth consideration of the object’s attributes, one can simply judge the likelihood of the object belonging to a category, based on how similar it is to one’s mental representation of that category. For example, a perceiver may quickly judge a female to be an athlete based on the fact that the female is tall, muscular, and wearing sports apparel—which fits the perceiver’s representation of an athlete’s characteristics. In many situations, an object’s similarity to a category is a good indicator of its membership in that category, and an individual using the representativeness heuristic will arrive at a correct judgment. However, when base-rate information (e.g., the actual percentage of athletes in the area and therefore the probability that this person actually is an athlete) conflicts with representativeness information, use of this heuristic is less appropriate. For example, if asked to judge whether a quiet, thin man who likes to read poetry is a classics professor at a prestigious university or a truck driver, the representativeness heuristic might lead one to guess he’s a professor. However, considering the base-rates, we know there are far fewer university classics professors than truck drivers. Therefore, although the man fits the mental image of a professor, the actual probability of him being one (considering the number of professors out there) is lower than that of being a truck driver. 
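The base-rate logic behind the professor-versus-truck-driver example can be made concrete with Bayes' rule. All of the numbers below are illustrative assumptions (the module gives none): even if the description fits a professor far better than a driver, the sheer rarity of classics professors keeps the probability low.

    # Illustrative base-rate calculation (every number here is assumed).
    n_professors = 1_000      # classics professors in the population
    n_drivers = 1_500_000     # truck drivers in the population
    p_fits_professor = 0.60   # P(quiet, thin, likes poetry | professor)
    p_fits_driver = 0.01      # P(quiet, thin, likes poetry | driver)

    # Bayes' rule: P(professor | description fits)
    evidence = n_professors * p_fits_professor + n_drivers * p_fits_driver
    posterior = n_professors * p_fits_professor / evidence
    print(f"P(professor | description) = {posterior:.3f}")  # about 0.04

Under these assumptions, the probability is under 4% even though the description is sixty times more typical of professors; this base-rate information is exactly what the representativeness heuristic tends to neglect.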
In addition to judging whether things belong to particular categories, we also attempt to judge the likelihood that things will happen. A commonly employed heuristic for making this type of judgment is called the availability heuristic. People use the availability heuristic to evaluate the frequency or likelihood of an event based on how easily instances of it come to mind (Tversky & Kahneman, 1973). Because more commonly occurring events are more likely to be cognitively accessible (or, they come to mind more easily), use of the availability heuristic can lead to relatively good approximations of frequency. However, the heuristic can be less reliable when judging the frequency of relatively infrequent but highly accessible events. For example, do you think there are more words that begin with “k,” or more that have “k” as the third letter? To figure this out, you would probably make a list of words that start with “k” and compare it to a list of words with “k” as the third letter. Though such a quick test may lead you to believe there are more words that begin with “k,” the truth is that there are 3 times as many words that have “k” as the third letter (Schwarz et al., 1991). In this case, words beginning with “k” are more readily available to memory (i.e., more accessible), so they seem to be more numerous. Another example is the very common fear of flying: dying in a plane crash is extremely rare, but people often overestimate the probability of it occurring because plane crashes tend to be highly memorable and publicized. In summary, despite the vast amount of information we are bombarded with on a daily basis, the mind has an entire kit of “tools” that allows us to navigate that information efficiently. In addition to category and frequency judgments, another common mental calculation we perform is predicting the future. We rely on our predictions about the future to guide our actions. When deciding what entrée to select for dinner, we may ask ourselves, “How happy will I be if I choose this over that?” The answer we arrive at is an example of a future prediction. In the next section, we examine individuals’ ability to accurately predict others’ behaviors, as well as their own future thoughts, feelings, and behaviors, and how these predictions can impact their decisions.

Making Predictions About the Social World

Whenever we face a decision, we predict our future behaviors or feelings in order to choose the best course of action. If you have a paper due in a week and have the option of going out to a party or working on the paper, the decision of what to do rests on a few things: the amount of time you predict you will need to write the paper, your prediction of how you will feel if you do poorly on the paper, and your prediction of how harshly the professor will grade it. In general, we make predictions about others quickly, based on relatively little information. Research on “thin-slice judgments” has shown that perceivers are able to make surprisingly accurate inferences about another person’s emotional state, personality traits, and even sexual orientation based on just snippets of information—for example, a 10-second video clip (Ambady, Bernieri, & Richeson, 2000; Ambady, Hallahan, & Conner, 1999; Ambady & Rosenthal, 1993). Furthermore, these judgments are predictive of the target’s future behaviors.
For example, one study found that students’ ratings of a teacher’s warmth, enthusiasm, and attentiveness from a 30-second video clip strongly predicted that teacher’s final student evaluations after an entire semester (Ambady & Rosenthal, 1993). As might be expected, the more information there is available, the more accurate many of these judgments become (Carney, Colvin, & Hall, 2007). Because we seem to be fairly adept at making predictions about others, one might expect predictions about the self to be foolproof, given the considerable amount of information one has about the self compared to others. To an extent, research has supported this conclusion. For example, our own predictions of our future academic performance are more accurate than peers’ predictions of our performance, and self-expressed interests better predict occupational choice than career inventories (Shrauger & Osberg, 1981). Yet, it is not always the case that we hold greater insight into ourselves. While our own assessment of our personality traits does predict certain behavioral tendencies better than peer assessment of our personality, for certain behaviors, peer reports are more accurate than self-reports (Kolar, Funder, & Colvin, 1996; Vazire, 2010). Similarly, although we are generally aware of our knowledge, abilities, and future prospects, our perceptions of them are often overly positive, and we are overconfident about their accuracy (Metcalfe, 1998). For example, we tend to underestimate how much time it will take us to complete a task, whether it is writing a paper, finishing a project at work, or building a bridge—a phenomenon known as the planning fallacy (Buehler, Griffin, & Ross, 1994). The planning fallacy helps explain why so many college students end up pulling all-nighters to finish writing assignments or study for exams. The tasks simply end up taking longer than expected. On the positive side, the planning fallacy can also lead individuals to pursue ambitious projects that may turn out to be worthwhile. That is, if they had accurately predicted how much time and work the project would have taken, they might never have started it in the first place. The other important factor that affects decision-making is our ability to predict how we will feel about certain outcomes. Not only do we predict whether we will feel positively or negatively, we also make predictions about how strongly and for how long we will feel that way. Research demonstrates that these predictions of one’s future feelings—known as affective forecasting—are accurate in some ways but limited in others (Gilbert & Wilson, 2007). We are adept at predicting whether a future event or situation will make us feel positively or negatively (Wilson & Gilbert, 2003), but we often incorrectly predict the strength or duration of those emotions. For example, you may predict that if your favorite sports team loses an important match, you will be devastated. Although you’re probably right that you will feel negative (and not positive) emotions, will you be able to accurately estimate how negative you’ll feel? What about how long those negative feelings will last? Predictions about future feelings are influenced by the impact bias: the tendency for a person to overestimate the intensity of their future feelings.
For example, by comparing people’s estimates of how they expected to feel after a specific event to their actual feelings after the event, research has shown that people generally overestimate how badly they will feel after a negative event—such as losing a job—and they also overestimate how happy they will feel after a positive event—such as winning the lottery (Brickman, Coates, & Janoff-Bulman, 1978). Another factor in these estimations is the durability bias. The durability bias refers to the tendency for people to overestimate how long (or, the duration) positive and negative events will affect them. This bias is much greater for predictions regarding negative events than positive events, and occurs because people are generally unaware of the many psychological mechanisms that help us adapt to and cope with negative events (Gilbert, Pinel, Wilson, Blumberg, & Wheatley, 1998; Wilson, Wheatley, Meyers, Gilbert, & Axsom, 2000). In summary, individuals form impressions of themselves and others, make predictions about the future, and use these judgments to inform their decisions. However, these judgments are shaped by our tendency to view ourselves in an overly positive light and our inability to appreciate our habituation to both positive and negative events. In the next section, we will discuss how motivations, moods, and desires also shape social judgment.

Hot Cognition: The Influence of Motivations, Mood, and Desires on Social Judgment

Although we may believe we are always capable of rational and objective thinking (for example, when we methodically weigh the pros and cons of two laundry detergents in an unemotional—i.e., “cold”—manner), our reasoning is often influenced by our motivations and mood. Hot cognition refers to the mental processes that are influenced by desires and feelings. For example, imagine you receive a poor grade on a class assignment. In this situation, your ability to reason objectively about the quality of your assignment may be limited by your anger toward the teacher, upset feelings over the bad grade, and your motivation to maintain your belief that you are a good student. In this sort of scenario, we may want the situation to turn out a particular way or our belief to be the truth. When we have these directional goals, we are motivated to reach a particular outcome or judgment and do not process information in a cold, objective manner. Directional goals can bias our thinking in many ways, such as leading to motivated skepticism, whereby we are skeptical of evidence that goes against what we want to believe despite the strength of the evidence (Ditto & Lopez, 1992). For example, individuals trust medical tests less if the results suggest they have a deficiency compared to when the results suggest they are healthy. Through this motivated skepticism, people often continue to believe what they want to believe, even in the face of nearly incontrovertible evidence to the contrary. There are also situations in which we do not have wishes for a particular outcome but our goals bias our reasoning anyway. For example, being motivated to reach an accurate conclusion can influence our reasoning processes by making us more cautious—leading to indecision. In contrast, sometimes individuals are motivated to make a quick decision, without being particularly concerned about the quality of it. Imagine trying to choose a restaurant with a group of friends when you’re really hungry. You may choose whatever’s nearby without caring if the restaurant is the best or not.
This need for closure (the desire to come to a firm conclusion) is often induced by time constraints (when a decision needs to be made quickly) as well as by individual differences in the need for closure (Webster & Kruglanski, 1997). Some individuals are simply more uncomfortable with ambiguity than others, and are thus more motivated to reach clear, decisive conclusions. Just as our goals and motivations influence our reasoning, our moods and feelings also shape our thinking process and ultimate decisions. Many of our decisions are based in part on our memories of past events, and our retrieval of memories is affected by our current mood. For example, when you are sad, it is easier to recall the sad memory of your dog’s death than the happy moment you received the dog. This tendency to recall memories similar in valence to our current mood is known as mood-congruent memory (Blaney, 1986; Bower, 1981, 1991; DeSteno, Petty, Wegener, & Rucker, 2000; Forgas, Bower, & Krantz, 1984; Schwarz, Strack, Kommer, & Wagner, 1987). The mood we were in when the memory was recorded becomes a retrieval cue; our present mood primes these congruent memories, making them come to mind more easily (Fiedler, 2001). Furthermore, because the availability of events in our memory can affect their perceived frequency (the availability heuristic), the biased retrieval of congruent memories can then impact the subsequent judgments we make (Tversky & Kahneman, 1973). For example, if you are retrieving many sad memories, you might conclude that you have had a tough, depressing life. In addition to our moods influencing the specific memories we retrieve, our moods can also influence the broader judgments we make. This sometimes leads to inaccuracies when our current mood is irrelevant to the judgment at hand. In a classic study demonstrating this effect, researchers found that study participants rated themselves as less satisfied with their lives in general if they were asked on a day when it happened to be raining vs. sunny (Schwarz & Clore, 1983). However, this occurred only if the participants were not aware that the weather might be influencing their mood. In essence, participants were in worse moods on rainy days than sunny days, and, if unaware of the weather’s effect on their mood, they incorrectly used their mood as evidence of their overall life satisfaction. In summary, our mood and motivations can influence both the way we think and the decisions we ultimately make. Mood can shape our thinking even when the mood is irrelevant to the judgment, and our motivations can influence our thinking even if we have no particular preference about the outcome. Just as we might be unaware of how our reasoning is influenced by our motives and moods, research has found that our behaviors can be determined by unconscious processes rather than intentional decisions, an idea we will explore in the next section.

Automaticity

Do we actively choose and control all our behaviors or do some of these behaviors occur automatically? A large body of evidence now suggests that many of our behaviors are, in fact, automatic. A behavior or process is considered automatic if it is unintentional, uncontrollable, occurs outside of conscious awareness, or is cognitively efficient (Bargh & Chartrand, 1999). A process may be considered automatic even if it does not have all these features; for example, driving is a fairly automatic process, but is clearly intentional.
Processes can become automatic through repetition, practice, or repeated associations. Staying with the driving example: although it can be very difficult and cognitively effortful at the start, over time driving becomes a relatively automatic process, and aspects of it can occur outside conscious awareness. In addition to practice leading to the learning of automatic behaviors, some automatic processes, such as fear responses, appear to be innate. For example, people quickly detect negative stimuli, such as negative words, even when those stimuli are presented subliminally (Dijksterhuis & Aarts, 2003; Pratto & John, 1991). This may represent an evolutionarily adaptive response that makes individuals more likely to detect danger in their environment. Other innate automatic processes may have evolved due to their pro-social outcomes. The chameleon effect—where individuals nonconsciously mimic the postures, mannerisms, facial expressions, and other behaviors of their interaction partners—is an example of how people may engage in certain behaviors without conscious intention or awareness (Chartrand & Bargh, 1999). For example, have you ever noticed that you’ve picked up some of the habits of your friends? Over time, but also in brief encounters, we will nonconsciously mimic those around us because of the positive social effects of doing so. That is, automatic mimicry has been shown to lead to more positive social interactions and to increase liking between the mimicked person and the mimicking person. When concepts and behaviors have been repeatedly associated with each other, one of them can be primed—i.e., made more cognitively accessible—by exposing participants to the (strongly associated) other one. For example, presenting participants with the concept of a doctor primes associated concepts such as “nurse” or “stethoscope.” As a result, participants recognize a word like “nurse” more quickly (Meyer & Schvaneveldt, 1971). Similarly, stereotypes can automatically prime associated judgments and behaviors. Stereotypes are our general beliefs about a group of people and, once activated, they may guide our judgments outside of conscious awareness. Similar to schemas, stereotypes involve a mental representation of how we expect a person will think and behave. For example, someone’s mental schema for women may be that they’re caring, compassionate, and maternal; however, a stereotype would be that all women are examples of this schema. As you know, assuming all people are a certain way is not only wrong but insulting, especially if negative traits are incorporated into a schema and subsequent stereotype. In a now classic study, Patricia Devine (1989) primed study participants with words typically associated with Blacks (e.g., “blues,” “basketball”) in order to activate the stereotype of Blacks. Devine found that study participants who were primed with the Black stereotype judged a target’s ambiguous behaviors as being more hostile (a trait stereotypically associated with Blacks) than nonprimed participants. Research in this area suggests that our social context—which constantly bombards us with concepts—may prime us to form particular judgments and influence our thoughts and behaviors. In summary, there are many cognitive processes and behaviors that occur outside of our awareness and despite our intentions.
Because automatic thoughts and behaviors do not require the same level of cognitive processing as conscious, deliberate thinking and acting, automaticity provides an efficient way for individuals to process and respond to the social world. However, this efficiency comes at a cost, as unconsciously held stereotypes and attitudes can sometimes influence us to behave in unintended ways. We will discuss the consequences of both consciously and unconsciously held attitudes in the next section.

Attitudes and Attitude Measurement

When we encounter a new object or person, we often form an attitude toward it (him/her). An attitude is a “psychological tendency that is expressed by evaluating a particular entity with some degree of favor or disfavor” (Eagly & Chaiken, 1993, p. 1). In essence, our attitudes are our general evaluations of things (i.e., do you regard this thing positively or negatively?) that can bias us toward having a particular response to them. For example, a negative attitude toward mushrooms would predispose you to avoid them and think negatively of them in other ways. This bias can be long- or short-term and can be overridden by another experience with the object. Thus, if you encounter a delicious mushroom dish in the future, your negative attitude could change to a positive one. Traditionally, attitudes have been measured through explicit attitude measures, in which participants are directly asked to provide their attitudes toward various objects, people, or issues (e.g., a survey). For example, in a semantic-differential scale, respondents are asked to provide evaluations of an attitude object using a series of negative-to-positive response scales—which have something like “unpleasant” at one end of the scale and “pleasant” at the other (Osgood, Suci, & Tannenbaum, 1957). In a Likert scale, respondents are asked to indicate their agreement level with various evaluative statements, such as, “I believe that psychology is the most interesting major” (Likert, 1932). Here, participants mark their selection between something like “strongly disagree” and “strongly agree.” These explicit measures of attitudes can be used to predict people’s actual behavior, but there are limitations to them. For one thing, individuals aren’t always aware of their true attitudes, because they’re either undecided or haven’t given a particular issue much thought. Furthermore, even when individuals are aware of their attitudes, they might not want to admit to them, such as when holding a certain attitude is viewed negatively by their culture. For example, sometimes it can be difficult to measure people’s true opinions on racial issues, because participants fear that expressing their true attitudes will be viewed as socially unacceptable. Thus, explicit attitude measures may be unreliable when asking about controversial attitudes or attitudes that are not widely accepted by society. In order to avoid some of these limitations, many researchers use more subtle or covert ways of measuring attitudes that do not suffer from such self-presentation concerns (Fazio & Olson, 2003). An implicit attitude is an attitude that a person does not verbally or overtly express. For example, someone may have a positive, explicit attitude toward his job; however, nonconsciously, he may have many negative associations with it (e.g., having to wake up early, the long commute, the broken office heating), which results in an implicitly negative attitude.
To learn what a person’s implicit attitude is, you have to use implicit measures of attitudes. These measures infer the participant’s attitude rather than having the participant explicitly report it. Many implicit measures accomplish this by recording the time it takes a participant (i.e., the reaction time) to label or categorize an attitude object (i.e., the person, concept, or object of interest) as positive or negative. For example, the faster someone categorizes his or her job (measured in milliseconds) as negative compared to positive, the more negative the implicit attitude is (i.e., because a faster categorization implies that the two concepts—“work” and “negative”—are closely related in one’s mind). One common implicit measure is the Implicit Association Test (IAT; Greenwald & Banaji, 1995; Greenwald, McGhee, & Schwartz, 1998), which does just what the name suggests, measuring how quickly the participant pairs a concept (e.g., cats) with an attribute (e.g., good or bad). The participant’s response time in pairing the concept with the attribute indicates how strongly the participant associates the two. Another common implicit measure is the evaluative priming task (Fazio, Jackson, Dunton, & Williams, 1995), which measures how quickly the participant labels the valence (i.e., positive or negative) of the attitude object when it appears immediately after a positive or negative image. How much more quickly a participant labels the attitude object after being primed with a positive versus a negative image indicates how positively the participant evaluates the object. Individuals’ implicit attitudes are sometimes inconsistent with their explicitly held attitudes. Hence, implicit measures may reveal biases that participants do not report on explicit measures. As a result, implicit attitude measures are especially useful for examining the pervasiveness and strength of controversial attitudes and stereotypic associations, such as racial biases or associations between race and violence. For example, research using the IAT has shown that about 66% of white respondents have a negative bias toward Blacks (Nosek, Banaji, & Greenwald, 2002), that bias on the IAT against Blacks is associated with more discomfort during interracial interactions (McConnell & Leibold, 2001), and that implicit associations linking Blacks to violence are associated with a greater tendency to shoot unarmed Black targets in a video game (Payne, 2001). Thus, even though individuals are often unaware of their implicit attitudes, these attitudes can have serious implications for their behavior, especially when these individuals do not have the cognitive resources available to override the attitudes’ influence.

Conclusion

Decades of research on social cognition and attitudes have revealed many of the “tricks” and “tools” we use to efficiently process the limitless amounts of social information we encounter. These tools are quite useful for organizing that information to arrive at quick decisions. When you see an individual engage in a behavior, such as seeing a man push an elderly woman to the ground, you form judgments about his personality, predictions about the likelihood of him engaging in similar behaviors in the future, as well as predictions about the elderly woman’s feelings and how you would feel if you were in her position. As the research presented in this module demonstrates, we are adept and efficient at making these judgments and predictions, but they are not made in a vacuum.
Ultimately, our perception of the social world is a subjective experience, and, consequently, our decisions are influenced by our experiences, expectations, emotions, motivations, and current contexts. Being aware of when our judgments are most accurate, and of how they are shaped by social influences, puts us in a much better position to appreciate, and potentially counter, their effects.

Outside Resources

Video: Daniel Gilbert discussing affective forecasting. www.dailymotion.com/video/xeb...e#.UQlwDx3WLm4
Video: Focus on heuristics. http://study.com/academy/lesson/heuristics.html
Web: BBC Horizon documentary How to Make Better Decisions that discusses many module topics (Part 1).
Web: Implicit Attitudes Test. https://implicit.harvard.edu/implicit/

Discussion Questions

1. Describe your event-schema, or script, for an event that you encounter regularly (e.g., dining at a restaurant). Now, attempt to articulate a script for an event that you have encountered only once or a few times. How are these scripts different? How confident are you in your ability to navigate these two events?
2. Think of a time when you made a decision that you thought would make you very happy (e.g., purchasing an item). To what extent were you accurate or inaccurate? In what ways were you wrong, and why do you think you were wrong?
3. What is an issue you feel strongly about (e.g., abortion, death penalty)? How would you react if research demonstrated that your opinion was wrong? What would it take before you would believe the evidence?
4. Take an implicit association test at the Project Implicit website (https://implicit.harvard.edu/implicit). How do your results match or mismatch your explicit attitudes?

Vocabulary

Affective forecasting: Predicting how one will feel in the future after some event or decision.
Attitude: A psychological tendency that is expressed by evaluating a particular entity with some degree of favor or disfavor.
Automatic: A behavior or process that has one or more of the following features: unintentional, uncontrollable, occurring outside of conscious awareness, and cognitively efficient.
Availability heuristic: A heuristic in which the frequency or likelihood of an event is evaluated based on how easily instances of it come to mind.
Chameleon effect: The tendency for individuals to nonconsciously mimic the postures, mannerisms, facial expressions, and other behaviors of one’s interaction partners.
Directional goals: The motivation to reach a particular outcome or judgment.
Durability bias: A bias in affective forecasting in which one overestimates how long one will feel an emotion (positive or negative) after some event.
Evaluative priming task: An implicit attitude task that assesses the extent to which an attitude object is associated with a positive or negative valence by measuring the time it takes a person to label an adjective as good or bad after being presented with an attitude object.
Explicit attitude: An attitude that is consciously held and can be reported on by the person holding the attitude.
Heuristics: A mental shortcut or rule of thumb that reduces complex mental problems to more simple rule-based decisions.
Hot cognition: The mental processes that are influenced by desires and feelings.
Impact bias: A bias in affective forecasting in which one overestimates the strength or intensity of emotion one will experience after some event.
Implicit Association Test: An implicit attitude task that assesses a person’s automatic associations between concepts by measuring the response times in pairing the concepts.
Implicit attitude: An attitude that a person cannot verbally or overtly state.
Implicit measures of attitudes: Measures of attitudes in which researchers infer the participant’s attitude rather than having the participant explicitly report it.
Mood-congruent memory: The tendency to be better able to recall memories that have a mood similar to our current mood.
Motivated skepticism: A form of bias that can result from having a directional goal, in which one is skeptical of evidence, despite its strength, because it goes against what one wants to believe.
Need for closure: The desire to come to a decision that will resolve ambiguity and conclude an issue.
Planning fallacy: A cognitive bias in which one underestimates how long it will take to complete a task.
Primed: A process by which a concept or behavior is made more cognitively accessible or likely to occur through the presentation of an associated concept.
Representativeness heuristic: A heuristic in which the likelihood of an object belonging to a category is evaluated based on the extent to which the object appears similar to one’s mental representation of the category.
Schema: A mental model or representation that organizes the important information about a thing, person, or event (also known as a script).
Social cognition: The study of how people think about the social world.
Stereotypes: Our general beliefs about the traits or behaviors shared by a group of people.
By Jake P. Moskowitz and Paul K. Piff University of California, Irvine

Humans are social animals. This means we work together in groups to achieve goals that benefit everyone. From building skyscrapers to delivering packages to remote island nations, modern life requires that people cooperate with one another. However, people are also motivated by self-interest, which often stands as an obstacle to effective cooperation. This module explores the concept of cooperation and the processes that both help and hinder it.

learning objectives
• Define "cooperation"
• Distinguish between different social value orientations
• List 2 influences on cooperation
• Explain 2 methods psychologists use to research cooperation

Introduction

As far back as the early 1800s, people imagined constructing a tunnel under the sea to connect France and England. But digging under the English Channel—a body of water spanning more than 20 miles (32 km)—would be an enormous and difficult undertaking. It would require a massive amount of resources as well as coordinating the efforts of people from two separate nations, speaking two different languages. Not until 1988 did the idea of the Channel Tunnel (or “Chunnel,” as it is known) change from dream to reality, as construction began. It took ten different construction companies, financed by three separate banks, six years to complete the project. Even today, decades later, the Chunnel is an amazing feat of engineering and collaboration. Seen through the lens of psychological science, it stands as an inspiring example of what is possible when people work together.

Humans need to cooperate with others to survive and to thrive. Cooperation, or the coordination of multiple individuals toward a goal that benefits the entire group, is a fundamental feature of human social life. Whether on the playground with friends, at home with family, or at work with colleagues, cooperation is a natural instinct (Keltner, Kogan, Piff, & Saturn, 2014). Children as young as 14 months cooperate with others on joint tasks (Warneken, Chen, & Tomasello, 2006; Warneken & Tomasello, 2007). Humans’ closest evolutionary relatives, chimpanzees and bonobos, maintain long-term cooperative relationships as well, sharing resources and caring for each other’s young (de Waal & Lanting, 1997; Langergraber, Mitani, & Vigilant, 2007). Ancient animal remains found near early human settlements suggest that our ancestors hunted in cooperative groups (Mithen, 1996). Cooperation, it seems, is embedded in our evolutionary heritage.

Yet cooperation can also be difficult to achieve; there are often breakdowns in people’s ability to work effectively in teams, or in their willingness to collaborate with others. Even with issues that can only be solved through large-scale cooperation, such as climate change and world hunger, people can have difficulties joining forces with others to take collective action. Psychologists have identified numerous individual and situational factors that influence the effectiveness of cooperation across many areas of life. From the trust that people place in others to the lines they draw between “us” and “them,” many different processes shape cooperation. This module will explore these individual, situational, and cultural influences on cooperation.

The Prisoner’s Dilemma

Imagine that you are a participant in a social experiment. As you sit down, you are told that you will be playing a game with another person in a separate room.
The other participant is also part of the experiment, but the two of you will never meet. In the experiment, there is the possibility that you will be awarded some money. Both you and your unknown partner are required to make a choice: either choose to “cooperate,” maximizing your combined reward, or “defect” (not cooperate), thereby maximizing your individual reward. The choice you make, along with that of the other participant, will result in one of three unique outcomes to this task, illustrated below in Figure 11.6.1. If you and your partner both cooperate (1), you will each receive \$5. If you and your partner both defect (2), you will each receive \$2. However, if one partner defects and the other partner cooperates (3), the defector will receive \$8, while the cooperator will receive nothing. Remember, you and your partner cannot discuss your strategy. Which would you choose? Defecting promises the biggest individual reward, but if your partner reasons the same way, you will both end up with little. Cooperating, on the other hand, offers the best benefit for the most people but requires a high level of trust.

This scenario, in which two people independently choose between cooperation and defection, is known as the prisoner’s dilemma. It gets its name from the situation in which two prisoners who have committed a crime must each choose whether to confess or remain silent: if both confess, each receives a moderate sentence; if one rats out his accomplice while the other stays silent, the defector receives a light sentence while the silent partner receives a harsh one; and if both remain silent, each receives only a minor punishment. Psychologists use various forms of the prisoner’s dilemma scenario to study self-interest and cooperation. Whether framed as a monetary game or a prison game, the prisoner’s dilemma illuminates a conflict at the core of many decisions to cooperate: it pits the motivation to maximize personal reward against the motivation to maximize gains for the group (you and your partner combined). For someone trying to maximize his or her own personal reward, the most “rational” choice is to defect (not cooperate), because defecting always results in a larger personal reward, regardless of the partner’s choice. However, when the two participants view their partnership as a joint effort (such as a friendly relationship), cooperating is the best strategy of all, since it provides the largest combined sum of money (\$10—which they share), as opposed to partial cooperation (\$8) or mutual defection (\$4). In other words, although defecting represents the “best” choice from an individual perspective, it is also the worst choice to make for the group as a whole. This divide between personal and collective interests is a key obstacle that prevents people from cooperating. Think back to our earlier definition of cooperation: cooperation is when multiple partners work together toward a common goal that will benefit everyone. As is frequent in these types of scenarios, even though cooperation may benefit the whole group, individuals are often able to earn even larger, personal rewards by defecting—as demonstrated in the prisoner’s dilemma example above.
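The payoff structure described above is compact enough to check directly. The sketch below uses the dollar amounts from the module’s example; the code itself is just an illustration of why defection is individually tempting even though mutual cooperation is best for the pair.

```python
# Payoffs from the module's example, keyed by (your choice, partner's choice);
# each value is (your reward, partner's reward) in dollars.
PAYOFFS = {
    ("cooperate", "cooperate"): (5, 5),
    ("cooperate", "defect"): (0, 8),
    ("defect", "cooperate"): (8, 0),
    ("defect", "defect"): (2, 2),
}

# Whatever the partner does, defecting earns you more than cooperating...
for partner in ("cooperate", "defect"):
    mine_if_coop = PAYOFFS[("cooperate", partner)][0]
    mine_if_defect = PAYOFFS[("defect", partner)][0]
    print(f"Partner {partner}s: cooperate -> ${mine_if_coop}, defect -> ${mine_if_defect}")

# ...yet the combined outcomes rank the other way around: mutual cooperation
# yields $10 for the pair, one-sided defection $8, and mutual defection only $4.
for choices, (you, them) in PAYOFFS.items():
    print(choices, "combined:", you + them)
```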
Do you like music? You can see a small, real-world example of the prisoner’s dilemma phenomenon at live music concerts. At venues with seating, many audience members will choose to stand, hoping to get a better view of the musicians onstage. As a result, the people sitting directly behind those now-standing people are also forced to stand to see the action onstage. This creates a chain reaction in which the entire audience now has to stand, just to see over the heads of the crowd in front of them. While choosing to stand may improve one’s own concert experience, it creates a literal barrier for the rest of the audience, hurting the overall experience of the group.

Simple models of rational self-interest predict 100% defection in cooperative tasks. That is, if people were only interested in benefiting themselves, we would always expect to see selfish behavior. Instead, there is a surprising tendency to cooperate in the prisoner’s dilemma and similar tasks (Batson & Moran, 1999; Oosterbeek, Sloof, & Van De Kuilen, 2004). Given the clear benefits of defecting, why then do some people choose to cooperate, whereas others choose to defect?

Individual Differences in Cooperation

Social Value Orientation

One key factor related to individual differences in cooperation is the extent to which people value not only their own outcomes, but also the outcomes of others. Social value orientation (SVO) describes people’s preferences when dividing important resources between themselves and others (Messick & McClintock, 1968). A person might, for example, generally be competitive with others, or cooperative, or self-sacrificing. People with different social values differ in the importance they place on their own positive outcomes relative to the outcomes of others. For example, you might give your friend gas money because she drives you to school, even though that means you will have less spending money for the weekend. In this example, you are demonstrating a cooperative orientation.

People generally fall into one of three categories of SVO: cooperative, individualistic, or competitive. While most people want to bring about positive outcomes for all (cooperative orientation), certain types of people are less concerned about the outcomes of others (individualistic), or even seek to undermine others in order to get ahead (competitive orientation). Are you curious about your own orientation? One technique psychologists use to sort people into one of these categories is to have them play a series of decomposed games—short laboratory exercises that involve making a choice from various distributions of resources between oneself and an “other.” Consider the example shown in Figure 11.6.2, which offers three different ways to distribute a valuable resource (such as money). People with competitive SVOs, who try to maximize their relative advantage over others, are most likely to pick option A. People with cooperative SVOs, who try to maximize joint gain for both themselves and others, are more likely to split the resource evenly, picking option B. People with individualistic SVOs, who always maximize gains to the self, regardless of how it affects others, will most likely pick option C.
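A decomposed-game item can be written out explicitly. The point values below are hypothetical (chosen to mimic the structure of commonly used items, since Figure 11.6.2 is not reproduced here), but they show how a single choice can separate the three orientations.

```python
# Hypothetical decomposed-game item: each option gives points to you
# ("self") and to an unknown other person ("other").
OPTIONS = {
    "A": {"self": 480, "other": 80},    # largest self-other gap (relative advantage)
    "B": {"self": 480, "other": 480},   # largest joint gain, split evenly
    "C": {"self": 540, "other": 280},   # largest absolute gain for the self
}

def best_option(objective):
    """Return the option an idealized chooser with the given objective picks."""
    return max(OPTIONS, key=lambda name: objective(OPTIONS[name]))

print(best_option(lambda p: p["self"] - p["other"]))   # A: competitive
print(best_option(lambda p: p["self"] + p["other"]))   # B: cooperative
print(best_option(lambda p: p["self"]))                # C: individualistic
```

In real SVO measures, a person answers a series of such items, and a consistent pattern of choices (rather than any single answer) determines the classification.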
Researchers have found that a person’s SVO predicts how cooperative he or she is in both laboratory experiments and the outside world. For example, in one laboratory experiment, groups of participants were asked to play a commons dilemma game. In this game, participants each took turns drawing from a central collection of points to be exchanged for real money at the end of the experiment. These points represented a common-pool resource for the group, like valuable goods or services in society (such as farm land, ground water, and air quality) that are freely accessible to everyone but prone to overuse and degradation. Participants were told that, while the common-pool resource would gradually replenish after the end of every turn, taking too much of the resource too quickly would eventually deplete it. The researchers found that participants with cooperative SVOs withdrew fewer resources from the common pool than those with competitive and individualistic SVOs, indicating a greater willingness to cooperate with others and act in a way that is sustainable for the group (Kramer, McClintock, & Messick, 1986; Roch & Samuelson, 1997).
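The dynamics of such a replenishing resource are easy to simulate. The sketch below is a toy model, not the procedure from the cited experiments; the pool size, regrowth rate, and harvest amounts are all hypothetical.

```python
# Toy commons dilemma: every round, each player harvests a fixed amount,
# and whatever remains regrows by 15%. All parameter values are made up.
def run_commons(harvests, pool=100.0, regrowth=1.15, rounds=10):
    for _ in range(rounds):
        total_take = sum(harvests)
        if total_take >= pool:
            return 0.0                     # the resource collapses
        pool = (pool - total_take) * regrowth
    return pool

print(round(run_commons([3, 3, 3, 3]), 1))  # modest harvesting: the pool is sustained
print(round(run_commons([8, 8, 8, 8]), 1))  # heavy harvesting: the pool collapses to 0.0
```

Even in this toy model, the group-level logic mirrors the experimental finding: restrained harvesting keeps the resource available indefinitely, while individually tempting over-harvesting destroys it for everyone.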
Research has also shown that people with cooperative SVOs are more likely to commute to work using public transportation—an act of cooperation that can help reduce carbon emissions—rather than drive themselves, compared to people with competitive and individualistic SVOs (Van Vugt, Meertens, & Van Lange, 1995; Van Vugt, Van Lange, & Meertens, 1996). People with cooperative SVOs also more frequently engage in behavior intended to help others, such as volunteering and giving money to charity (McClintock & Allison, 1989; Van Lange, Bekkers, Schuyt, & Van Vugt, 2007). Taken together, these findings show that people with cooperative SVOs act with greater consideration for the overall well-being of others and the group as a whole, using resources in moderation and taking more effortful measures (like using public transportation to protect the environment) to benefit the group.

Empathic Ability

Empathy is the ability to feel and understand another’s emotional experience. When we empathize with someone else, we take on that person’s perspective, imagining the world from his or her point of view and vicariously experiencing his or her emotions (Davis, 1994; Goetz, Keltner, & Simon-Thomas, 2010). Research has shown that when people empathize with their partner, they act with greater cooperation and overall altruism—the desire to help the partner, even at a potential cost to the self. People who can experience and understand the emotions of others are better able to work with others in groups, earning higher job performance ratings on average from their supervisors, even after adjusting for different types of work and other aspects of personality (Côté & Miners, 2006).

When empathizing with a person in distress, the natural desire to help is often expressed as a desire to cooperate. In one study, just before playing an economic game with a partner in another room, participants were given a note revealing that their partner had just gone through a rough breakup and needed some cheering up. While half of the subjects were urged by the experimenters to “remain objective and detached,” the other half were told to “try and imagine how the other person feels.” Though both groups received the same information about their partner, those who were encouraged to engage in empathy—by actively experiencing their partner’s emotions—acted with greater cooperation in the economic game (Batson & Moran, 1999). The researchers also found that people who empathized with their partners were more likely to act cooperatively, even after being told that their partner had already made a choice to not cooperate (Batson & Ahmad, 2001)! Evidence of the link between empathy and cooperation has even been found in studies of preschool children (Marcus, Telleen, & Roke, 1979). From a very early age, emotional understanding can foster cooperation.

Although empathizing with a partner can lead to more cooperation between two people, it can also undercut cooperation within larger groups. In groups, empathizing with a single person can lead people to abandon broader cooperation in favor of helping only the target individual. In one study, participants were asked to play a cooperative game with three partners. In the game, participants were asked to (A) donate resources to a central pool, (B) donate resources to a specific group member, or (C) keep the resources for themselves. According to the rules, all donations to the central pool would be increased by 50% and then distributed evenly, resulting in a net gain for the entire group. Objectively, this might seem to be the best option. However, when participants were encouraged to imagine the feelings of one of their partners said to be in distress (rather than remaining detached and objective), they were more likely to donate their resources to that partner and not engage in cooperation with the group (Batson et al., 1995). Though empathy can create strong cooperative bonds between individuals, it can sometimes lead to actions that, despite being well-intentioned, end up undermining the group’s best interests.
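The arithmetic behind that “net gain to the entire group” is worth seeing directly. The sketch below assumes a four-player version with a 10-point endowment per player; both of those numbers are hypothetical, since the module specifies only the 50% increase and the even split.

```python
# Sketch of the donation rule described above: contributions to the central
# pool are increased by 50% and split evenly among all players.
def payoffs(donations, kept, multiplier=1.5):
    """donations[i] and kept[i] are player i's pool contribution and
    withheld amount; returns each player's final total."""
    share = sum(donations) * multiplier / len(donations)
    return [k + share for k in kept]

# All four players donate their full 10-point endowment: everyone nets a gain.
print(payoffs([10, 10, 10, 10], [0, 0, 0, 0]))   # [15.0, 15.0, 15.0, 15.0]

# One player keeps the endowment instead: the pool shrinks, the group's
# shares drop, and the non-donor comes out ahead of everyone else.
print(payoffs([10, 10, 10, 0], [0, 0, 0, 10]))   # [11.25, 11.25, 11.25, 21.25]
```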
Situational Influences on Cooperation

Communication and Commitment

Open communication between people is one of the best ways to promote cooperation (Dawes, McTavish, & Shaklee, 1977; Dawes, 1988). This is because communication provides an opportunity to size up the trustworthiness of others. It also affords us a chance to prove our own trustworthiness, by verbally committing to cooperate with others. Since cooperation requires people to enter a state of vulnerability and trust with partners, we are very sensitive to the social cues and interactions of potential partners before deciding to cooperate with them.

In one line of research, groups of participants were allowed to chat for five minutes before playing a multi-round “public goods” game. During the chats, the players were allowed to discuss game strategies and make verbal commitments about their in-game actions. While some groups were able to reach a consensus on a strategy (e.g., “always cooperate”), other groups failed to reach a consensus within their allotted five minutes or even picked strategies that ensured noncooperation (e.g., “every person for themselves”). The researchers found that when group members made explicit commitments to each other to cooperate, they ended up honoring those commitments and acting with greater cooperation. Interestingly, the effect of face-to-face verbal commitments persisted even when the cooperation game itself was completely anonymous (Kerr & Kaufman-Gilliland, 1994; Kerr, Garst, Lewandowski, & Harris, 1997). This suggests that those who explicitly commit to cooperate are driven not by the fear of external punishment by group members, but by their own personal desire to honor such commitments. In other words, once people make a specific promise to cooperate, they are driven by “that still, small voice”—the voice of their own inner conscience—to fulfill that commitment (Kerr et al., 1997).

Trust

When it comes to cooperation, trust is key (Pruitt & Kimmel, 1977; Parks, Henager, & Scamahorn, 1996; Chaudhuri, Sopher, & Strand, 2002). Working with others toward a common goal requires a level of faith that our partners will repay our hard work and generosity, and not take advantage of us for their own selfish gains. Social trust, or the belief that another person’s actions will be beneficial to one’s own interests (Kramer, 1999), enables people to work together as a single unit, pooling their resources to accomplish more than they could individually. Trusting others, however, depends on their actions and reputation. One common example of the difficulties in trusting others that you might recognize from being a student occurs when you are assigned a group project. Many students dislike group projects because they worry about “social loafing”—the way that one person expends less effort but still benefits from the efforts of the group. Imagine, for example, that you and five other students are assigned to work together on a difficult class project. At first, you and your group members split the work up evenly. As the project continues, however, you notice that one member of your team isn’t doing his “fair share.” He fails to show up to meetings, his work is sloppy, and he seems generally uninterested in contributing to the project. After a while, you might begin to suspect that this student is trying to get by with minimal effort, perhaps assuming others will pick up the slack. Your group now faces a difficult choice: either join the slacker and abandon all work on the project, causing it to collapse, or keep cooperating and allow for the possibility that the uncooperative student may receive a decent grade for others’ work. If this scenario sounds familiar to you, you’re not alone. Economists call this situation the free rider problem—when individuals benefit from the cooperation of others without contributing anything in return (Grossman & Hart, 1980). Although these sorts of actions may benefit the free rider in the short-term, free riding can have a negative impact on a person’s social reputation over time. In the above example, for instance, the “free riding” student may develop a reputation as lazy or untrustworthy, leading others to be less willing to work with him in the future. Indeed, research has shown that a poor reputation for cooperation can serve as a warning sign for others not to cooperate with the person in disrepute. For example, in one experiment involving a group economic game, participants seen as being uncooperative were punished harshly by their fellow participants. According to the rules of the game, individuals took turns being either a “donor” or a “receiver” over the course of multiple rounds. If donors chose to give up a small sum of actual money, receivers would receive a slightly larger sum, resulting in an overall net gain. However, unbeknownst to the group, one participant was secretly instructed never to donate. After just a few rounds of play, this individual was effectively shunned by the rest of the group, receiving almost zero donations from the other members (Milinski, Semmann, Bakker, & Krambeck, 2001). When someone is seen being consistently uncooperative, other people have no incentive to trust him/her, resulting in a collapse of cooperation. On the other hand, people are more likely to cooperate with others who have a good reputation for cooperation and are therefore deemed trustworthy. In one study, people played a group economic game similar to the one described above: over multiple rounds, they took turns choosing whether to donate to other group members. Over the course of the game, donations were more frequently given to individuals who had been generous in earlier rounds of the game (Wedekind & Milinski, 2000).
In other words, individuals seen cooperating with others were afforded a reputational advantage, earning them more partners willing to cooperate and a larger overall monetary reward.

Group Identification

Another factor that can impact cooperation is a person’s social identity, or the extent to which he or she identifies as a member of a particular social group (Tajfel & Turner, 1979/1986). People can identify with groups of all shapes and sizes: a group might be relatively small, such as a local high school class, or very large, such as a national citizenship or a political party. While these groups are often bound together by shared goals and values, they can also form according to seemingly arbitrary qualities, such as musical taste, hometown, or even completely randomized assignment, such as a coin toss (Tajfel, Billig, Bundy, & Flament, 1971; Bigler, Brown, & Markell, 2001; Locksley, Ortiz, & Hepburn, 1980). When members of a group place a high value on their group membership, their identity (the way they view themselves) can be shaped in part by the goals and values of that group.

When people strongly identify with a group, their own well-being becomes bound to the welfare of that group, increasing their willingness to make personal sacrifices for its benefit. We see this with sports fans. When fans heavily identify with a favorite team, they become elated when the team wins and sad when the team loses. Die-hard fans often make personal sacrifices to support their team, such as braving terrible weather, paying high prices for tickets, and standing and chanting during games. Research shows that when people’s group identity is emphasized (for example, when laboratory participants are referred to as “group members” rather than “individuals”), they are less likely to act selfishly in a commons dilemma game. In such experiments, so-called “group members” withdraw fewer resources, thereby promoting the sustainability of the group (Brewer & Kramer, 1986). In one study, students who strongly identified with their university were less likely to leave a cooperative group of fellow students when given an attractive option to exit (Van Vugt & Hart, 2004). In addition, the strength of a person’s identification with a group or organization is a key driver behind participation in large-scale cooperative efforts, such as collective action in political and workers’ groups (Klandersman, 2002), and engaging in organizational citizenship behaviors (Cropanzano & Byrne, 2000).

Emphasizing group identity is not without its costs: although it can increase cooperation within groups, it can also undermine cooperation between groups. Researchers have found that groups interacting with other groups are more competitive and less cooperative than individuals interacting with other individuals, a phenomenon known as interindividual-intergroup discontinuity (Schopler & Insko, 1999; Wildschut, Pinter, Vevea, Insko, & Schopler, 2003). For example, groups interacting with other groups displayed greater self-interest and reduced cooperation in a prisoner’s dilemma game than did individuals completing the same tasks with other individuals (Insko et al., 1987). Such problems with trust and cooperation are largely due to people’s general reluctance to cooperate with members of an outgroup, or those outside the boundaries of one’s own social group (Allport, 1954; Van Vugt, Biel, Snyder, & Tyler, 2000). Outgroups do not have to be explicit rivals for this effect to take place.
Indeed, in one study, simply telling groups of participants that other groups preferred a different style of painting led them to behave less cooperatively than pairs of individuals completing the same task (Insko, Kirchner, Pinter, Efaw, & Wildschut, 2005). Though a strong group identity can bind individuals within the group together, it can also drive divisions between different groups, reducing overall trust and cooperation on a larger scope.

Under the right circumstances, however, even rival groups can be turned into cooperative partners in the presence of superordinate goals. In a classic demonstration of this phenomenon, Muzafer Sherif and colleagues observed the cooperative and competing behaviors of two groups of twelve-year-old boys at a summer camp in Robbers Cave State Park, in Oklahoma (Sherif, Harvey, White, Hood, & Sherif, 1961). The twenty-two boys in the study were all carefully interviewed to determine that none of them knew each other beforehand. Importantly, Sherif and colleagues kept both groups unaware of each other’s existence, arranging for them to arrive at separate times and occupy different areas of the camp. Within each group, the participants quickly bonded and established their own group identity—“The Eagles” and “The Rattlers”—identifying leaders and creating flags decorated with their own group’s name and symbols.

For the next phase of the experiment, the researchers revealed the existence of each group to the other, leading to reactions of anger, territorialism, and verbal abuse between the two. This behavior was further compounded by a series of competitive group activities, such as baseball and tug-of-war, leading the two groups to engage in even more spiteful behavior: The Eagles set fire to The Rattlers’ flag, and The Rattlers retaliated by ransacking The Eagles’ cabin, overturning beds and stealing their belongings. Eventually, the two groups refused to eat together in the same dining hall, and they had to be physically separated to avoid further conflict.

However, in the final phase of the experiment, Sherif and colleagues introduced a dilemma to both groups that could only be solved through mutual cooperation. The researchers told both groups that there was a shortage of drinking water in the camp, supposedly due to “vandals” damaging the water supply. As both groups gathered around the water supply, attempting to find a solution, members from each group offered suggestions and worked together to fix the problem. Since the lack of drinking water affected both groups equally, both were highly motivated to try to resolve the issue. Finally, after 45 minutes, the two groups managed to clear a stuck pipe, allowing fresh water to flow. The researchers concluded that when conflicting groups share a superordinate goal, they are capable of shifting their attitudes and bridging group differences to become cooperative partners. The insights from this study have important implications for group-level cooperation. Since many problems facing the world today, such as climate change and nuclear proliferation, affect individuals of all nations, and are best dealt with through the coordinated efforts of different groups and countries, emphasizing the shared nature of these dilemmas may enable otherwise competing groups to engage in cooperative and collective action.

Culture

Culture can have a powerful effect on people’s beliefs about others and the ways they interact with them. Might culture also affect a person’s tendency toward cooperation?
To answer this question, Joseph Henrich and his colleagues surveyed people from 15 small-scale societies around the world, located in places such as Zimbabwe, Bolivia, and Indonesia. These groups varied widely in the ways they traditionally interacted with their environments: some practiced small-scale agriculture, others foraged for food, and still others were nomadic herders of animals (Henrich et al., 2001). To measure their tendency toward cooperation, individuals of each society were asked to play the ultimatum game, a task similar in nature to the prisoner’s dilemma. The game has two players: Player A (the “allocator”) is given a sum of money (equal to two days’ wages) and allowed to donate any amount of it to Player B (the “responder”). Player B can then either accept or reject Player A’s offer. If Player B accepts the offer, both players keep their agreed-upon amounts. However, if Player B rejects the offer, then neither player receives anything. In this scenario, the responder can use his/her authority to punish unfair offers, even though it requires giving up his or her own reward. In turn, Player A must be careful to propose an acceptable offer to Player B, while still trying to maximize his/her own outcome in the game. According to a model of rational economics, a self-interested Player B should always choose to accept any offer, no matter how small or unfair. As a result, Player A should always try to offer the minimum possible amount to Player B, in order to maximize his/her own reward. Instead, the researchers found that people in these 15 societies donated on average 39% of the sum to their partner (Henrich et al., 2001). This number is almost identical to the amount that people of Western cultures donate when playing the ultimatum game (Oosterbeek et al., 2004). These findings suggest that allocators in the game, instead of offering the least possible amount, try to maintain a sense of fairness and “shared rewards” in the game, in part so that their offers will not be rejected by the responder. Henrich and colleagues (2001) also observed significant variation between cultures in terms of their level of cooperation. Specifically, the researchers found that the extent to which individuals in a culture needed to collaborate with each other to gather resources to survive predicted how likely they were to be cooperative. For example, among the people of the Lamelara in Indonesia, who survive by hunting whales in groups of a dozen or more individuals, donations in the ultimatum game were extremely high—approximately 58% of the total sum. In contrast, the Machiguenga people of Peru, who are generally economically independent at the family level, donated much less on average—about 26% of the total sum. The interdependence of people for survival, therefore, seems to be a key component of why people decide to cooperate with others. Though the various survival strategies of small-scale societies might seem quite remote from your own experiences, take a moment to think about how your life is dependent on collaboration with others. Very few of us in industrialized societies live in houses we build ourselves, wear clothes we make ourselves, or eat food we grow ourselves. Instead, we depend on others to provide specialized resources and products, such as food, clothing, and shelter that are essential to our survival. 
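The ultimatum game’s rules, described a few paragraphs above, are simple enough to state in code. The sketch below is only an illustration of the payoff logic; the total and the offers are hypothetical, and real experiments (like Henrich and colleagues’) tied the stakes to local wages.

```python
# Payoff rule for the ultimatum game: if the responder accepts, the split
# stands; if the responder rejects, both players get nothing.
def ultimatum(total, offer, responder_accepts):
    assert 0 <= offer <= total, "offer must be between 0 and the total"
    return (total - offer, offer) if responder_accepts else (0, 0)

# A purely self-interested responder would accept even a token offer...
print(ultimatum(100, 1, responder_accepts=True))    # (99, 1)

# ...but real responders often reject offers they consider unfair,
# giving up their own reward in order to punish the allocator.
print(ultimatum(100, 1, responder_accepts=False))   # (0, 0)

# Offers near the observed cross-cultural average of roughly 40%
# are rarely rejected.
print(ultimatum(100, 40, responder_accepts=True))   # (60, 40)
```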
Studies show that Americans give about 40% of their sum in the ultimatum game—less than the Lamelara give, but on par with most of the small-scale societies sampled by Henrich and colleagues (Oosterbeek et al., 2004). While living in an industrialized society might not require us to hunt in groups like the Lamelara do, we still depend on others to supply the resources we need to survive.

Conclusion

Cooperation is an important part of our everyday lives. Practically every feature of modern social life, from the taxes we pay to the street signs we follow, involves multiple parties working together toward shared goals. There are many factors that help determine whether people will successfully cooperate, from their culture of origin and the trust they place in their partners, to the degree to which they empathize with others. Although cooperation can sometimes be difficult to achieve, certain diplomatic practices, such as emphasizing shared goals and engaging in open communication, can promote teamwork and even break down rivalries. Though choosing not to cooperate can sometimes achieve a larger reward for an individual in the short term, cooperation is often necessary to ensure that the group as a whole—including all members of that group—achieves the optimal outcome.

Outside Resources

Article: Weber, J. M., Kopelman, S., & Messick, D. M. (2004). A conceptual review of decision making in social dilemmas: Applying a logic of appropriateness. Personality and Social Psychology Review, 8(3), 281-307. http://psr.sagepub.com/content/8/3/281.abstract
Book: Harvey, O. J., White, B. J., Hood, W. R., & Sherif, C. W. (1961). Intergroup conflict and cooperation: The Robbers Cave experiment. Norman, OK: University Book Exchange. http://psychclassics.yorku.ca/Sherif/index.htm
Experiment: Intergroup Conflict and Cooperation: The Robbers Cave Experiment - An online version of Sherif, Harvey, White, Hood, and Sherif’s (1954/1961) study, which includes photos. http://psychclassics.yorku.ca/Sherif/
Video: A clip from a reality TV show, “Golden Balls,” that pits players against each other in a high-stakes Prisoner’s Dilemma situation.
Video: Describes recent research showing how chimpanzees naturally cooperate with each other to accomplish tasks.
Video: The Empathic Civilization - A 10 minute, 39 second animated talk that explores the topic of empathy.
Video: Tragedy of the Commons, Part 1 - What happens when many people seek to share the same, limited resource?
Video: Tragedy of the Commons, Part 2 - This video (1 minute, 27 seconds) discusses how cooperation can be a solution to the commons dilemma.
Video: Understanding the Prisoner’s Dilemma.
Video: Why Some People are More Altruistic Than Others - A 12 minute, 21 second TED talk about altruism, in which the psychologist Abigail Marsh discusses the research on altruism.
Web: Take an online test to determine your Social Value Orientation (SVO). vlab.ethz.ch/svo/index-normal.html
Web: What is Social Identity? - A brief explanation of social identity, which includes specific examples. http://people.howstuffworks.com/what...l-identity.htm

Discussion Questions

1. Which groups do you identify with? Consider sports teams, home towns, and universities. How does your identification with these groups make you feel about other members of these groups? What about members of competing groups?
2. Thinking of all the accomplishments of humanity throughout history, which do you believe required the greatest amounts of cooperation? Why?
3. In your experience working on group projects—such as group projects for a class—what have you noticed regarding the themes presented in this module (e.g., competition, free riding, cooperation, trust)? How could you use the material you have just learned to make group projects more effective?

Vocabulary

Altruism: A desire to improve the welfare of another person, at a potential cost to the self and without any expectation of reward.
Common-pool resource: A collective product or service that is freely available to all individuals of a society, but is vulnerable to overuse and degradation.
Commons dilemma game: A game in which members of a group must balance their desire for personal gain against the deterioration and possible collapse of a resource.
Cooperation: The coordination of multiple partners toward a common goal that will benefit everyone involved.
Decomposed games: A task in which an individual chooses from multiple allocations of resources to distribute between him- or herself and another person.
Empathy: The ability to vicariously experience the emotions of another person.
Free rider problem: A situation in which one or more individuals benefit from a common-pool resource without paying their share of the cost.
Interindividual-intergroup discontinuity: The tendency for relations between groups to be less cooperative than relations between individuals.
Outgroup: A social category or group with which an individual does not identify.
Prisoner’s dilemma: A classic paradox in which two individuals must independently choose between defection (maximizing reward to the self) and cooperation (maximizing reward to the group).
Rational self-interest: The principle that people will make logical decisions based on maximizing their own gains and benefits.
Social identity: A person’s sense of who they are, based on their group membership(s).
Social value orientation (SVO): An assessment of how an individual prefers to allocate resources between him- or herself and another person.
State of vulnerability: When a person places him- or herself in a position in which he or she might be exploited or harmed. This is often done out of trust that others will not exploit the vulnerability.
Ultimatum game: An economic game in which a proposer (Player A) can offer a subset of resources to a responder (Player B), who can then either accept or reject the given proposal.
By Joel A. Muraco University of Wisconsin, Green Bay

Each and every one of us has a family. However, these families exist in many variations around the world. In this module, we discuss definitions of family, family forms, the developmental trajectory of families, and commonly used theories to understand families. We also cover factors that influence families, such as culture and societal expectations, while incorporating the latest family-relevant statistics.

learning objectives
• Understand the various family forms.
• Describe attachment theory.
• Identify different parenting styles.
• Know the typical developmental trajectory of families.
• Understand cultural differences in dating, marriage, and divorce.
• Explain the influence of children and aging parents on families.
• Know concrete tips for increasing happiness within your family.

It is often said that humans are social creatures. We make friends, live in communities, and connect to acquaintances through shared interests. In recent times, social media has become a new way for people to connect with childhood peers, friends of friends, and even strangers. Perhaps nothing is more central to the social world than the concept of family. Our families represent our earliest relationships and—often—our most enduring ones. In this module, you will learn about the psychology of families. Our discussion will begin with a basic definition of family and how this has changed across time and place. Next, we move on to a discussion of family roles and how families evolve across the lifespan. Finally, we conclude with issues such as divorce and abuse that are important factors in the psychological health of families.

What is Family?

In J.K. Rowling’s famous Harry Potter novels, the boy magician lives in a cupboard under the stairs. His unfortunate situation is the result of his wizarding parents having been killed in a duel, causing the young Potter to be subsequently shipped off to live with his cruel aunt and uncle. Although family may not be the central theme of these wand and sorcery novels, Harry’s example raises a compelling question: what, exactly, counts as family? The definition of family changes across time and across culture. Traditional family has been defined as two or more people who are related by blood, marriage, and—occasionally—adoption (Murdock, 1949). Historically, the most standard version of the traditional family has been the two-parent family. Are there people in your life you consider family who are not necessarily related to you in the traditional sense? Harry Potter would undoubtedly call his schoolmates Ron Weasley and Hermione Granger family, even though they do not fit the traditional definition. Likewise, Harry might consider Hedwig, his snowy owl, a family member, and he would not be alone in doing so. Research from the US (Harris, 2015) and Japan (Veldkamp, 2009) finds that many pet owners consider their pets to be members of the family. Another traditional form of family is the joint family, in which three or more generations of blood relatives live in a single household or compound. Joint families often include cousins, aunts and uncles, and other relatives from the extended family. Versions of the joint family system exist around the globe, including in South Asia, Southern Europe, the South Pacific, and other locations. In more modern times, the traditional definition of family has been criticized as being too narrow.
Modern families—especially those in industrialized societies—exist in many forms, including the single-parent family, foster families, same-sex couples, childfree families, and many other variations from traditional norms. Common to each of these family forms is commitment, caring, and close emotional ties—which are increasingly the defining characteristics of family (Benokraitis, 2015). The changing definition of family has come about, in part, because of factors such as divorce and re-marriage. In many cases, people do not grow up with their family of orientation, but become part of a stepfamily or blended family. Whether a single-parent, joint, or two-parent family, a person’s family of orientation, or the family into which he or she is born, generally acts as the social context for young children learning about relationships.

According to Bowen (1978), each person has a role to play in his or her family, and each role comes with certain rules and expectations. This view of the family as a system of rules and roles is known as family systems theory. The goal for the family is stability: rules and expectations that work for all. When the role of one member of the family changes, so do the rules and expectations. Such changes ripple through the family and cause each member to adjust his or her own role and expectations to compensate for the change. Take, for example, the classic story of Cinderella. Cinderella’s initial role is that of a child. Her parents’ expectations of her are what would be expected of a growing and developing child. But, by the time Cinderella reaches her teen years, her role has changed considerably. Both of her biological parents have died and she has ended up living with her stepmother and stepsisters. Cinderella’s role shifts from being an adored child to acting as the household servant. The stereotype of stepfamilies as being emotionally toxic is, of course, not true. You might even say there are often-overlooked instructive elements in the Cinderella story: her role in the family has become not only that of servant but also that of caretaker, with the others expecting her to cook and clean while, in return, they treat her with spite and cruelty. When Cinderella finds her prince and leaves to start her own family—known as a family of procreation—it is safe to assume that the roles of her stepmother and stepsisters will change—suddenly having to cook and clean for themselves.

Gender has been one factor by which family roles have long been assigned. Traditional roles have historically placed housekeeping and childrearing squarely in the realm of women’s responsibilities. Men, by contrast, have been seen as protectors and as providers of resources, including money. Increasingly, families are crossing these traditional roles, with women working outside the home and men contributing more to domestic and childrearing responsibilities. Despite this shift toward more egalitarian roles, women still tend to do more housekeeping and childrearing tasks than their husbands (a phenomenon known as the second shift) (Hochschild & Machung, 2012). Interestingly, parental roles have an impact on the ambitions of their children. Croft and her colleagues (2014) examined the beliefs of more than 300 children. The researchers discovered that when fathers endorsed a more equal sharing of household duties, and when mothers were more workplace-oriented, their daughters’ own ambitions were affected.
In both cases, daughters were more likely to have ambitions toward working outside the home and working in less gender-stereotyped professions.

How Families Develop

Our families are so familiar to us that we can sometimes take for granted the idea that families develop over time. Nuclear families, those core units of parents and children, do not simply pop into being. The parents meet one another, they court or date one another, and they make the decision to have children. Even then the family does not quit changing. Children grow up and leave home and the roles shift yet again.

Intimacy

In a psychological sense, families begin with intimacy. The need for intimacy, or close relationships with others, is universal. We seek out close and meaningful relationships over the course of our lives. What our adult intimate relationships look like actually stems from infancy and our relationship with our primary caregiver (historically our mother)—a process of development described by attachment theory. According to attachment theory, different styles of caregiving result in different relationship “attachments.” For example, responsive mothers—mothers who soothe their crying infants—produce infants who have secure attachments (Ainsworth, 1973; Bowlby, 1969). About 60% of all children are securely attached. As adults, secure individuals rely on their working models—concepts of how relationships operate—that were created in infancy, as a result of their interactions with their primary caregiver (mother), to foster happy and healthy adult intimate relationships. Securely attached adults feel comfortable being depended on and depending on others.

As you might imagine, inconsistent or dismissive parents also impact the attachment style of their infants (Ainsworth, 1973), but in a different direction. In early studies on attachment style, infants were observed interacting with their caregivers, followed by being separated from them, then finally reunited. About 20% of the observed children were “resistant,” meaning they were anxious even before, and especially during, the separation; and 20% were “avoidant,” meaning they actively avoided their caregiver after separation (i.e., ignoring the mother when they were reunited). These early attachment patterns can affect the way people relate to one another in adulthood. Anxious-resistant adults worry that others don’t love them, and they often become frustrated or angry when their needs go unmet. Anxious-avoidant adults appear not to care much about their intimate relationships, and are uncomfortable being depended on or depending on others themselves.

The good news is that our attachment style can be changed. It isn’t easy, but it is possible for anyone to “recover” a secure attachment. The process often requires the help of a supportive and dependable other, and for the insecure person to achieve coherence—the realization that his or her upbringing is not a permanent reflection of character or a reflection of the world at large, nor does it bar him or her from being worthy of love or others from being trustworthy (Treboux, Crowell, & Waters, 2004).

Dating, Courtship, and Cohabitation

Over time, the process of finding a mate has changed dramatically. In Victorian England, for instance, young women in high society trained for years in the arts—to sing, play music, dance, compose verse, etc. These skills were thought to be vital to the courtship ritual—a demonstration of feminine worthiness.
Once a woman was of marriageable age, she would attend dances and other public events as a means of displaying her availability. A young couple interested in one another would find opportunities to spend time together, such as taking a walk. That era had very different dating practices from today, in which teenagers have more freedom, more privacy, and can date more people.

One major difference in the way people find a mate these days is the way we use technology to both expand and restrict the marriage market—the process by which potential mates compare the assets and liabilities of available prospects and choose the best option (Benokraitis, 2015). Comparing marriage to a market might sound unromantic, but think of it as a way to illustrate how people seek out attractive qualities in a mate. Modern technology has allowed us to expand our “market” by allowing us to search for potential partners all over the world—as opposed to the days when people mostly relied on local dating pools. Technology also allows us to filter out undesirable (albeit available) prospects at the outset, based on factors such as shared interests, age, and other features. The use of filters to find the most desirable partner is a common practice, resulting in people marrying others very similar to themselves—a concept called homogamy; the opposite is known as heterogamy (Burgess & Wallin, 1943). In his comparison of educational homogamy in 55 countries, Smits (2003) found strong support for highly educated people marrying other highly educated people. As such, education appears to be a strong filter people use to help them select a mate. The most common filters we use—or, put another way, the characteristics we focus on most in potential mates—are age, race, social status, and religion (Regan, 2008). Other filters we use include compatibility, physical attractiveness (we tend to pick people who are as attractive as we are), and proximity (for practical reasons, we often pick people close to us) (Klenke-Hamel & Janda, 1980).

In many countries, technology is increasingly used to help single people find each other, and this may be especially true of older adults who are divorced or widowed, as there are few societally structured activities for older singles. For example, younger people in school are usually surrounded by many potential dating partners of a similar age and background. As we get older, this is less true, as we focus on our careers and find ourselves surrounded by co-workers of various ages, marital statuses, and backgrounds.

In some cultures, however, it is not uncommon for the families of young people to do the work of finding a mate for them. For example, the Shanghai Marriage Market refers to the People’s Park in Shanghai, China—a place where parents of unmarried adults meet on weekends to trade information about their children in attempts to find suitable spouses for them (Bolsover, 2011). In India, the marriage market refers to the use of marriage brokers or marriage bureaus to pair eligible singles together (Trivedi, 2013). To many Westerners, the idea of arranged marriage can seem puzzling. It can appear to take the romance out of the equation and violate values about personal freedom. On the other hand, some people in favor of arranged marriage argue that parents are able to make more mature decisions than young people. While such intrusions may seem inappropriate based on your upbringing, for many people of the world such help is expected, even appreciated.
In India, for example, “parental arranged marriages are largely preferred to other forms of marital choices” (Ramsheena & Gundemeda, 2015, p. 138). Of course, one’s religious and social caste plays a role in determining how involved family may be.

In terms of other notable shifts in attitude seen around the world, an increase in cohabitation has been documented. Cohabitation is defined as an arrangement in which two people who are romantically involved live together even though they are not married (Prinz, 1995). Cohabitation is common in many countries, with the Scandinavian nations of Iceland, Sweden, and Norway reporting the highest percentages, and more traditional countries like India, China, and Japan reporting low percentages (DeRose, 2011). In countries where cohabitation is increasingly common, there has been speculation as to whether or not cohabitation is now part of the natural developmental progression of romantic relationships: dating and courtship, then cohabitation, engagement, and finally marriage. However, while many cohabiting arrangements ultimately lead to marriage, many do not.

Engagement and Marriage

Most people will marry in their lifetime. In the majority of countries, 80% of men and women have been married by the age of 49 (United Nations, 2013). Despite how common marriage remains, it has undergone some interesting shifts in recent times. Around the world, people are tending to get married later in life or, increasingly, not at all. People in more developed countries (e.g., Nordic and Western Europe), for instance, marry later in life—at an average age of 30 years. This is very different from, for example, the economically developing country of Afghanistan, which has one of the lowest average ages of marriage—20.2 years (United Nations, 2013). Another shift seen around the world is a gender gap in terms of age when people get married. In every country, men marry later than women. Since the 1970s, the average age of marriage for women has increased from 21.8 to 24.7 years. Men have seen a similar increase in age at first marriage.

As illustrated, the courtship process can vary greatly around the world. So too can an engagement—a formal agreement to get married. Some of these differences are small, such as on which hand an engagement ring is worn. In many countries it is worn on the left, but in Russia, Germany, Norway, and India, women wear their ring on the right. There are also more overt differences, such as who makes the proposal. In India and Pakistan, it is not uncommon for the family of the groom to propose to the family of the bride, with little to no involvement from the bride and groom themselves. In most Western industrialized countries, it is traditional for the male to propose to the female. What types of engagement traditions, practices, and rituals are common where you are from? How are they changing?

Children?

Do you want children? Do you already have children? Increasingly, families are postponing or not having children. Families that choose to forego having children are known as childfree families, while families that want but are unable to conceive are referred to as childless families. As more young people pursue their education and careers, age at first marriage has increased; similarly, so has the age at which people become parents. The average age for first-time mothers is 25 in the United States (up from 21 in 1970), 29.4 in Switzerland, and 29.2 in Japan (Matthews & Hamilton, 2014).
The decision to become a parent should not be taken lightly. There are positives and negatives associated with parenting that should be considered. Many parents report that having children increases their well-being (White & Dolan, 2009). Researchers have also found that parents, compared to their non-parent peers, are more positive about their lives (Nelson, Kushlev, English, Dunn, & Lyubomirsky, 2013). On the other hand, researchers have also found that parents, compared to non-parents, are more likely to be depressed, report lower levels of marital quality, and feel like their relationship with their partner is more businesslike than intimate (Walker, 2011).

If you do become a parent, your parenting style will impact your child’s future success in romantic and parenting relationships. Authoritative parenting, arguably the best parenting style, is both demanding and supportive of the child (Maccoby & Martin, 1983). Support refers to the amount of affection, acceptance, and warmth a parent provides. Demandingness refers to the degree to which a parent controls his or her child’s behavior. Children who have authoritative parents are generally happy, capable, and successful (Maccoby, 1992). Other, less advantageous parenting styles include authoritarian (in contrast to authoritative), permissive, and uninvolved (Tavassolie, Dudding, Madigan, Thorvardarson, & Winsler, 2016). Authoritarian parents are low in support and high in demandingness. Arguably, this is the parenting style used by Harry Potter’s harsh aunt and uncle, and Cinderella’s vindictive stepmother. Children who receive authoritarian parenting are more likely to be obedient and proficient, but score lower in happiness, social competence, and self-esteem. Permissive parents are high in support and low in demandingness. Their children rank low in happiness and self-regulation, and are more likely to have problems with authority. Uninvolved parents are low in both support and demandingness. Children of these parents tend to rank lowest across all life domains, lack self-control, have low self-esteem, and are less competent than their peers.

Support for the benefits of authoritative parenting has been found in countries as diverse as the Czech Republic (Dmitrieva, Chen, Greenberger, & Gil-Rivas, 2004), India (Carson, Chowdhurry, Perry, & Pati, 1999), China (Pilgrim, Luo, Urberg, & Fang, 1999), Israel (Mayseless, Scharf, & Sholt, 2003), and Palestine (Punamaki, Qouta, & Sarraj, 1997). In fact, authoritative parenting appears to be superior in Western, individualistic societies—so much so that some people have argued that there is no longer a need to study it (Steinberg, 2001). Other researchers are less certain about the superiority of authoritative parenting and point to differences in cultural values and beliefs. For example, while many European-American children do poorly with too much strictness (authoritarian parenting), Chinese children often do well, especially academically. The reason for this likely stems from Chinese culture viewing strictness in parenting as related to training, which is not central to American parenting (Chao, 1994).

Parenting in Later Life

Just because children grow up does not mean their family stops being a family. The concept of family persists across the entire lifespan, but the specific roles and expectations of its members change over time. One major change comes when a child reaches adulthood and moves away.
When exactly children leave home varies greatly depending on societal norms and expectations, as well as on economic conditions such as employment opportunities and affordable housing options. Some parents may experience sadness when their adult children leave the home—a situation known as Empty Nest. Many parents are also finding that their grown children are struggling to launch into independence. It's an increasingly common story: a child goes off to college and, upon graduation, is unable to find steady employment. In such instances, a frequent outcome is for the child to return home, becoming a "boomerang kid." The boomerang generation, as the phenomenon has come to be known, refers to young adults, mostly between the ages of 25 and 34, who return home to live with their parents while they strive for stability in their lives—often in terms of finances, living arrangements, and sometimes romantic relationships. These boomerang kids can be both good and bad for families. Within American families, 48% of boomerang kids report having paid rent to their parents, and 89% say they help out with household expenses—a win for everyone (Parker, 2012). On the other hand, 24% of boomerang kids report that returning home hurts their relationship with their parents (Parker, 2012). For better or for worse, the number of children returning home has been increasing around the world.

In addition to middle-aged parents spending more time, money, and energy taking care of their adult children, they are also increasingly taking care of their own aging and ailing parents. Middle-aged people in this set of circumstances are commonly referred to as the sandwich generation (Dukhovnov & Zagheni, 2015). Of course, cultural norms and practices again come into play. In some Asian and Hispanic cultures, the expectation is that adult children take care of aging parents and parents-in-law. In other Western cultures—cultures that emphasize individuality and self-sustainability—the expectation has historically been that elders either age in place, modifying their home and receiving services to allow them to continue to live independently, or enter long-term care facilities. However, given financial constraints, many families find themselves taking in and caring for their aging parents, increasing the number of multigenerational homes around the world.

Family Issues and Considerations

Divorce

Divorce refers to the legal dissolution of a marriage. Depending on societal factors, divorce may be more or less of an option for married couples. Despite popular belief, divorce rates in the United States actually declined for many years during the 1980s and 1990s, and only recently started to climb back up—landing at just below 50% of marriages ending in divorce today (Marriage & Divorce, 2016). It should be noted, however, that divorce rates increase for each subsequent marriage, and there is considerable debate about the exact divorce rate. Are there specific factors that can predict divorce? Are certain types of people or certain types of relationships more or less at risk for breaking up? Indeed, there are several factors that appear to be either risk factors or protective factors. Pursuing education decreases the risk of divorce. So too does waiting until we are older to marry. Likewise, if our parents are still married, we are less likely to divorce.
Factors that increase our risk of divorce include having a child before marriage and living with multiple partners before marriage, known as serial cohabitation (cohabitation with one's expected marital partner does not appear to have the same effect). And, of course, societal and religious attitudes must also be taken into account. In societies that are more accepting of divorce, divorce rates tend to be higher. Likewise, in religions that are less accepting of divorce, divorce rates tend to be lower. See Lyngstad & Jalovaara (2010) for a more thorough discussion of divorce risk.

If a couple does divorce, there are specific considerations they should take into account to help their children cope. Parents should reassure their children that both parents will continue to love them and that the divorce is in no way the children's fault. Parents should also encourage open communication with their children and be careful not to bias them against their "ex" or use them as a means of hurting their "ex" (Denham, 2013; Harvey & Fine, 2004; Pescosolido, 2013).

Abuse

Abuse can occur in multiple forms and across all family relationships. Breiding, Basile, Smith, Black, & Mahendra (2015) define the forms of abuse as:

• Physical abuse, the use of intentional physical force to cause harm. Scratching, pushing, shoving, throwing, grabbing, biting, choking, shaking, slapping, punching, and hitting are common forms of physical abuse;
• Sexual abuse, the act of forcing someone to participate in a sex act against his or her will. Such abuse is often referred to as sexual assault or rape. A marital relationship does not grant anyone the right to demand sex or sexual activity from anyone, even a spouse;
• Psychological abuse, aggressive behavior that is intended to control someone else. Such abuse can include threats of physical or sexual abuse, manipulation, bullying, and stalking.

Abuse between partners is referred to as intimate partner violence; however, such abuse can also occur between a parent and child (child abuse), adult children and their aging parents (elder abuse), and even between siblings. The most common form of abuse between parents and children is actually neglect. Neglect refers to a family's failure to provide for a child's basic physical, emotional, medical, or educational needs (DePanfilis, 2006). Harry Potter's aunt and uncle, as well as Cinderella's stepmother, could all be prosecuted for neglect in the real world.

Abuse is a complex issue, especially within families. There are many reasons people become abusers: poverty, stress, and substance abuse are common characteristics shared by abusers, although abuse can happen in any family. There are also many reasons adults stay in abusive relationships: (a) learned helplessness (the abused person believing he or she has no control over the situation); (b) the belief that the abuser can/will change; (c) shame, guilt, self-blame, and/or fear; and (d) economic dependence. All of these factors can play a role.

Children who experience abuse may "act out" or otherwise respond in a variety of unhealthy ways. These include acts of self-destruction, withdrawal, and aggression, as well as struggles with depression, anxiety, and academic performance. Researchers have found that abused children's brains may produce higher levels of stress hormones. These hormones can lead to decreased brain development, lower stress thresholds, suppressed immune responses, and lifelong difficulties with learning and memory (Middlebrooks & Audage, 2008).
Adoption

Divorce and abuse are important concerns, but not all family hurdles are negative. One example of a positive family issue is adoption. Adoption has long historical roots (it is even mentioned in the Bible) and involves taking in and raising someone else's child legally as one's own. Becoming a parent is one of the most fulfilling things a person can do (Gallup & Newport, 1990), but even with modern reproductive technologies, not all couples who would like to have children (which is still most couples) are able to. For these families, adoption often allows them to feel whole by completing their family. In 2013, in the United States, there were over 100,000 children in foster care (where children go when their biological families are unable to adequately care for them) available for adoption (Soronen, 2013). In total, about 2% of the U.S. child population is adopted, either through foster care or through private domestic or international adoption (Adopted Children, 2012). Adopting a child from the foster care system is relatively inexpensive, costing \$0-\$2,500, with many families qualifying for state-subsidized support (Soronen, 2013).

For years, international adoptions have been popular. In the United States, between 1999 and 2014, 256,132 international adoptions occurred, with the largest number of children coming from China (73,672) and Russia (46,113) (Intercountry Adoption, 2016). People in the United States, Spain, France, Italy, and Canada adopt the largest numbers of children (Selman, 2009). More recently, however, international adoptions have begun to decrease. One significant complication is that each country has its own set of requirements for adoption, as does each country from which an adopted child originates. As such, the adoption process can vary greatly, especially in terms of cost, and countries are able to police who adopts their children. For example, single, obese, or over-50 individuals are not allowed to adopt a child from China (Bartholet, 2007).

Regardless of why a family chooses to adopt, traits such as flexibility, patience, strong problem-solving skills, and a willingness to identify local community resources are valuable for prospective parents to possess. Additionally, it may be helpful for adoptive parents to recognize that they do not have to be "perfect" parents as long as they are loving and willing to meet the unique challenges their adopted child may pose.

Happy Healthy Families

Our families play a crucial role in our overall development and happiness. They can support and validate us, but they can also criticize and burden us. For better or worse, we all have a family. In closing, here are strategies you can use to increase the happiness of your family:

• Teach morality—fostering a sense of moral development in children can promote well-being (Damon, 2004).
• Savor the good—celebrate each other's successes (Gable, Gonzaga, & Strachman, 2006).
• Use the extended family network—family members of all ages, including older siblings and grandparents, can act as caregivers and promote family well-being (Armstrong, Birnie-Lefcovitch, & Ungar, 2005).
• Create family identity—share inside jokes, fond memories, and frame the story of the family (McAdams, 1993).
• Forgive—don't hold grudges against one another (McCullough, Worthington, & Rachal, 1997).
Outside Resources

Article: Social Trends Institute: The Sustainable Demographic Dividend
http://sustaindemographicdividend.org/articles/international-family-indicators/global-family-culture

Video: TED Talk: What Makes a Good Life? Lessons from the Longest Study on Happiness

Web: Child Trends and Social Trends Institute: Mapping Family Change and Well-Being Outcomes
http://worldfamilymap.ifstudies.org/2015/

Web: Pew Research Center: Family and Relationships
http://www.pewresearch.org/topics/fa...relationships/

Web: PSYCHALIVE: Psychology for Everyday Life: Relationships
http://www.psychalive.org/category/alive-to-intimacy/

Web: United States Census Bureau: Families and Living Arrangements
http://www.census.gov/topics/families.html

Discussion Questions

1. Throughout this module many "shifts" are mentioned—shifts in division of labor, family roles, marital expectations, divorce, and societal and cultural norms, among others, were discussed. What shift do you find most interesting and why? What types of shifts do you think we might see in the future?
2. In the reading we discuss different parenting practices. Much of the literature suggests that authoritative parenting is best. Do you agree? Why or why not? Are there times when you think another parenting style would be better?
3. The section on divorce discusses specific factors that increase or decrease the chances of divorce. Based on your background, are you more or less at risk for divorce? Consider things about your family of orientation, culture, religious practices and beliefs, age, and educational goals. How does this risk make you feel?
4. The module ends with some tips for happy, healthy families. Are there specific things you could be doing in your own life to make for a happier, healthier family? What are some concrete things you could start doing immediately to increase happiness in your family?

Vocabulary

Adoption
To take in and raise a child of other parents legally as one's own.

Age in place
The trend toward making accommodations to ensure that aging people can stay in their homes and live independently.

Anxious-avoidant
Attachment style that involves suppressing one's own feelings and desires, and a difficulty depending on others.

Anxious-resistant
Attachment style that is self-critical, insecure, and fearful of rejection.

Attachment theory
Theory that describes the enduring patterns of relationships from birth to death.

Authoritarian parenting
Parenting style that is high in demandingness and low in support.

Authoritative parenting
A parenting style that is high in demandingness and high in support.

Blended family
A family consisting of an adult couple and their children from previous relationships.

Boomerang generation
Term used to describe young adults, primarily between the ages of 25 and 34, who return home after previously living on their own.

Child abuse
Injury, death, or emotional harm to a child caused by a parent or caregiver, either intentionally or unintentionally.

Childfree
Term used to describe people who purposefully choose not to have children.

Childless
Term used to describe people who would like to have children but are unable to conceive.

Cohabitation
Arrangement in which two unmarried adults live together.

Coherence
Within attachment theory, the gaining of insight into and reconciling of one's childhood experiences.

Elder abuse
Any form of mistreatment that results in harm to an elder person, often caused by his or her adult child.
Empty Nest
Feelings of sadness and loneliness that parents may feel when their adult children leave the home for the first time.

Engagement
Formal agreement to get married.

Family of orientation
The family one is born into.

Family of procreation
The family one creates, usually through marriage.

Family systems theory
Theory that says a person cannot be understood on their own, but rather as a member of a unit.

Foster care
Care provided by alternative families to children whose families of orientation cannot adequately care for them; often arranged through the government or a social service agency.

Heterogamy
Partnering with someone who is unlike you in a meaningful way.

Homogamy
Partnering with someone who is like you in a meaningful way.

Intimate partner violence
Physical, sexual, or psychological abuse inflicted by a partner.

Joint family
A family composed of at least three generations living together. Joint families often include many members of the extended family.

Learned helplessness
The belief, held by someone who is abused, that one has no control over his or her situation.

Marriage market
The process through which prospective spouses compare the assets and liabilities of available partners and choose the best available mate.

Modern family
A family based on commitment, caring, and close emotional ties.

Multigenerational homes
Homes with more than one adult generation.

Neglect
Failure to care for someone properly.

Nuclear families
A core family unit composed of only the parents and children.

Permissive parenting
Parenting style that is low in demandingness and high in support.

Physical abuse
The use of intentional physical force to cause harm.

Psychological abuse
Aggressive behavior intended to control a partner.

Sandwich generation
Generation of people responsible for taking care of their own children as well as their aging parents.

Second shift
Term used to describe the unpaid work a parent, usually a mother, does in the home in terms of housekeeping and childrearing.

Secure attachments
Attachment style that involves being comfortable with depending on your partner and having your partner depend on you.

Sexual abuse
The act of forcing a partner to take part in a sex act against his or her will.

Single parent family
An individual parent raising a child or children.

Stepfamily
A family formed, after divorce or widowhood, through remarriage.

Traditional family
Two or more people related by blood, marriage, and, occasionally, by adoption.

Two-parent family
A family consisting of two parents (typically both of the biological parents) and their children.

Uninvolved parenting
Parenting that is low in demandingness and low in support.

Working models
An understanding of how relationships operate; viewing oneself as worthy of love and others as trustworthy.
By Debi Brannan and Cynthia D. Mohr, Western Oregon University and Portland State University

Friendship and love, and more broadly, the relationships that people cultivate in their lives, are some of the most valuable treasures a person can own. This module explores ways in which we try to understand how friendships form, what attracts one person to another, and how love develops. It also explores how the Internet influences how we meet people and develop deep relationships. Finally, this module will examine social support and how this can help many through the hardest times and help make the best times even better.

learning objectives

• Understand what attracts us to others.
• Review research that suggests that friendships are important for our health and well-being.
• Examine the influence of the Internet on friendship and developing relationships.
• Understand what happens to our brains when we are in love.
• Consider the complexity of love.
• Examine the construct and components of social support.

Introduction

The importance of relationships has been examined by researchers for decades. Many researchers point to sociologist Émile Durkheim's classic study of suicide and social ties (1951) as a starting point for this work. Durkheim argued that being socially connected is imperative to achieving personal well-being. In fact, he argued that a person who has no close relationships is likely a person who is at risk for suicide. It is those relationships that give a person meaning in their life. In other words, suicide tends to be higher among those who become disconnected from society. What is interesting about that notion is that when people are asked to describe the basic necessities for life, they will most often say food, water, and shelter; seldom do people list "close relationships" in the top three. Yet time and time again, research has demonstrated that we are social creatures and we need others to survive and thrive. Another way of thinking about it is that close relationships are the psychological equivalent of food and water; in other words, these relationships are necessary for survival. Baumeister and Leary (1995) maintain that humans have basic needs and one of them is the need to belong; these needs are what make us human and give a sense of purpose and identity to our lives (Brissette, Cohen, & Seeman, 2000; Ryff, 1989).

Given that close relationships are so vital to well-being, it is important to ask how interpersonal relationships begin. What makes us like or love one person but not another? Why is it that when bad things happen, we frequently want to talk to our friends or family about the situation? Though these are difficult questions to answer because relationships are complicated and unique, this module will examine how relationships begin; the impact of technology on relationships; and why coworkers, acquaintances, friends, family, and intimate partners are so important in our lives.

Attraction: The Start of Friendship and Love

Why do some people hit it off immediately? Or decide that the friend of a friend was not likable? Using scientific methods, psychologists have investigated factors influencing attraction and have identified a number of variables, such as similarity, proximity (physical or functional), familiarity, and reciprocity, that influence with whom we develop relationships.

Proximity

Often we "stumble upon" friends or romantic partners; this happens partly due to how close in proximity we are to those people.
Specifically, proximity, or physical nearness, has been found to be a significant factor in the development of relationships. For example, when college students go away to a new school, they will make friends consisting of classmates, roommates, and teammates (i.e., people close in proximity). Proximity allows people the opportunity to get to know one another and discover their similarities—all of which can result in a friendship or intimate relationship. Proximity is not just about geographic distance, but rather functional distance, or the frequency with which we cross paths with others. For example, college students are more likely to become closer and develop relationships with people on their dorm-room floors because they see them (i.e., cross paths) more often than they see people on a different floor. How does the notion of proximity apply in terms of online relationships? Deb Levine (2000) argues that in terms of developing online relationships and attraction, functional distance refers to being at the same place at the same time in a virtual world (i.e., a chat room or Internet forum)—crossing virtual paths.

Familiarity

One of the reasons why proximity matters to attraction is that it breeds familiarity; people are more attracted to that which is familiar. Just being around someone or being repeatedly exposed to them increases the likelihood that we will be attracted to them. We also tend to feel safe with familiar people, as it is likely we know what to expect from them. Dr. Robert Zajonc (1968) labeled this phenomenon the mere-exposure effect. More specifically, he argued that the more often we are exposed to a stimulus (e.g., sound, person), the more likely we are to view that stimulus positively. Moreland and Beach (1992) demonstrated this by exposing a college class to four women (similar in appearance and age) who attended different numbers of classes, revealing that the more classes a woman attended, the more familiar, similar, and attractive she was considered by the other students. There is a certain comfort in knowing what to expect from others; consequently, research suggests that we like what is familiar. While this is often on a subconscious level, research has found this to be one of the most basic principles of attraction (Zajonc, 1980). For example, a young man growing up with an overbearing mother may be attracted to other overbearing women not because he likes being dominated but rather because it is what he considers normal (i.e., familiar).

Similarity

When you hear about couples such as Sandra Bullock and Jesse James, or Kim Kardashian and Kanye West, do you shake your head thinking "this won't last"? It is probably because they seem so different. While many make the argument that opposites attract, research has found that is generally not true; similarity is key. Sure, there are times when couples can appear fairly different, but overall we like others who are like us. Ingram and Morris (2007) examined this phenomenon by inviting business executives to a cocktail mixer, 95% of whom reported that they wanted to meet new people. Using electronic name tag tracking, researchers revealed that the executives did not mingle or meet new people; instead, they only spoke with those they already knew well (i.e., people who were similar).
When it comes to marriage, research has found that couples tend to be very similar, particularly when it comes to age, social class, race, education, physical attractiveness, values, and attitudes (McCann Hamilton, 2007; Taylor, Fiore, Mendelsohn, & Cheshire, 2011). This phenomenon is known as the matching hypothesis (Feingold, 1988; McKillip & Redel, 1983). We like others who validate our points of view and who are similar in thoughts, desires, and attitudes.

Reciprocity

Another key component in attraction is reciprocity; this principle is based on the notion that we are more likely to like someone if they feel the same way toward us. In other words, it is hard to be friends with someone who is not friendly in return. Another way to think of it is that relationships are built on give and take; if one side is not reciprocating, then the relationship is doomed. Basically, we feel obliged to give what we get and to maintain equity in relationships. Researchers have found that this is true across cultures (Gouldner, 1960).

Friendship

"In poverty and other misfortunes of life, true friends are a sure refuge. They keep the young out of mischief; they comfort and aid the old in their weakness, and they incite those in the prime of life to noble deeds." —Aristotle

Research has found that close friendships can protect our mental and physical health when times get tough. For example, Adams, Santo, and Bukowski (2011) asked fifth- and sixth-graders to record their experiences and self-worth, and to provide saliva samples, for 4 days. Children whose best friend was present during or shortly after a negative experience had significantly lower levels of the stress hormone cortisol in their saliva compared to those who did not have a best friend present. Having a best friend also seemed to protect their feelings of self-worth. Children who did not identify a best friend, or did not have an available best friend during distress, experienced a drop in self-esteem over the course of the study.

Workplace friendships

Friendships often take root in the workplace, because people spend as much time, or more, at work as they do with their family and friends (Kaufman & Hotchkiss, 2003). Often, it is through these relationships that people receive mentoring and obtain social support and resources, but they can also experience conflicts and the potential for misinterpretation when sexual attraction is an issue. Indeed, Elsesser and Peplau (2006) found that many workers reported that friendships grew out of collaborative work projects, and these friendships made their days more pleasant. In addition to those benefits, Riordan and Griffeth (1995) found that people who worked in an environment where friendships could develop and be maintained were more likely to report higher levels of job satisfaction, job involvement, and organizational commitment, and they were less likely to leave that job. Similarly, a Gallup poll revealed that employees who had "close friends" at work were almost 50% more satisfied with their jobs than those who did not (Armour, 2007).

Internet friendships

What influence does the Internet have on friendships? It is not surprising that people use the Internet with the goal of meeting and making new friends (Fehr, 2008; McKenna, 2008). Researchers have wondered whether the lack of face-to-face contact reduces the authenticity of relationships, or whether the Internet really allows people to develop deep, meaningful connections.
Interestingly, research has demonstrated that virtual relationships are often as intimate as in-person relationships; in fact, Bargh and colleagues found that online relationships are sometimes more intimate (Bargh et al., 2002). This can be especially true for individuals who are more socially anxious and lonely, as they are more likely to turn to the Internet to find new and meaningful relationships (McKenna, Green, & Gleason, 2002). McKenna et al. (2002) suggest that for people who have a hard time meeting and maintaining relationships, due to shyness, anxiety, or a lack of face-to-face social skills, the Internet provides a safe, nonthreatening place to develop and maintain relationships. Similarly, Penny Benford (2008) found that for high-functioning autistic individuals, the Internet facilitated communication and relationship development with others, which would have been more difficult in face-to-face contexts, leading to the conclusion that Internet communication could be empowering for those who feel frustrated when communicating face to face.

Love

Is all love the same? Are there different types of love? Examining these questions more closely, Robert Sternberg's (2004; 2007) work has focused on the notion that all types of love are composed of three distinct areas: intimacy, passion, and commitment. Intimacy includes caring, closeness, and emotional support. The passion component of love consists of physiological and emotional arousal; these can include physical attraction, emotional responses that promote physiological changes, and sexual arousal. Lastly, commitment refers to the cognitive process and decision to commit to love another person, and the willingness to work to keep that love over the course of your life.

The elements involved in intimacy (caring, closeness, and emotional support) are generally found in all types of close relationships—for example, a mother's love for a child or the love that friends share. Interestingly, this is not true for passion. Passion is unique to romantic love, differentiating friends from lovers. In sum, depending on the type of love and the stage of the relationship (i.e., newly in love), different combinations of these elements are present.

Taking this theory a step further, anthropologist Helen Fisher explained that she scanned the brains (using fMRI) of people who had just fallen in love and observed that their brain chemistry was "going crazy," similar to the brain of an addict on a drug high (Cohen, 2007). Specifically, serotonin production increased by as much as 40% in newly in-love individuals. Further, those newly in love tended to show obsessive-compulsive tendencies. Conversely, when a person experiences a breakup, the brain processes it in a similar way to quitting a heroin habit (Fisher, Brown, Aron, Strong, & Mashek, 2009). Thus, those who believe that breakups are physically painful are correct! Another interesting point is that long-term love and sexual desire activate different areas of the brain. More specifically, sexual needs activate the part of the brain that is particularly sensitive to innately pleasurable things such as food, sex, and drugs (i.e., the striatum—a rather simplistic reward system), whereas love requires conditioning—it is more like a habit. When sexual needs are rewarded consistently, then love can develop. In other words, love grows out of positive rewards, expectancies, and habit (Cacioppo, Bianchi-Demicheli, Hatfield, & Rapson, 2012).
Love and the Internet

The ways people find love have changed with the advent of the Internet. In a poll, 49% of all American adults reported that either they or someone they knew had dated a person they met online (Madden & Lenhart, 2006). As Finkel and colleagues (2007) found, social networking sites, and the Internet generally, perform three important tasks. Specifically, sites provide individuals with access to a database of other individuals who are interested in meeting someone. Dating sites generally reduce issues of proximity, as individuals do not have to be close in proximity to meet. Also, they provide a medium in which individuals can communicate with others. Finally, some Internet dating websites advertise special matching strategies, based on factors such as personality, hobbies, and interests, to identify the "perfect match" for people looking for love online. In general, scientific questions about the effectiveness of Internet matching or online dating compared to face-to-face dating remain to be answered.

It is important to note that social networking sites have opened the doors for many to meet people that they might not have ever had the opportunity to meet; unfortunately, it now appears that social networking sites can also be forums for unsuspecting people to be duped. In 2010, the documentary Catfish focused on the personal experience of a man who met a woman online and carried on an emotional relationship with this person for months. As he later came to discover, though, the person he thought he was talking and writing with did not exist. As Dr. Aaron Ben-Ze'ev stated, online relationships leave room for deception; thus, people have to be cautious.

Social Support

When bad things happen, it is important for people to know that others care about them and can help them out. Unsurprisingly, research has found that this is a common thread across cultures (Markus & Kitayama, 1991; Triandis, 1995) and over time (Reis, Sheldon, Gable, Roscoe, & Ryan, 2000); in other words, social support is the active ingredient that makes our relationships particularly beneficial. But what is social support? One way of thinking about social support is that it consists of three discrete conceptual components.

Perceived Social Support

Have you ever thought that when things go wrong, you know you have friends or family members who are there to help you? This is what psychologists call perceived social support, or "a psychological sense of support" (Gottlieb, 1985). How powerful is this belief that others will be available in times of need? To examine this question, Dr. Arnberg and colleagues asked 4,600 survivors of the tragic 2004 Indian Ocean (or Boxing Day) tsunami about their perception of the social support provided by friends and family after the event. Those who experienced the most stress found the most benefit from just knowing others were available if they needed anything (i.e., perceived support). In other words, the magnitude of the benefits depended on the extent of the stress, but the bottom line was that, for these survivors, knowing that they had people around to support them if they needed it helped them all to some degree.

Perceived support has also been linked to well-being. Brannan and colleagues (2012) found that perceived support predicted each component of well-being (high positive affect, low negative affect, high satisfaction with life) among college students in Iran, Jordan, and the United States.
Similarly, Cohen and McKay (1984) found that a high level of perceived support can serve as a buffer against stress. Interestingly enough, Dr. Cohen found that those with higher levels of social support were less likely to catch the common cold. The research is clear—perceived social support increases happiness and well-being and makes our lives better in general (Diener & Seligman, 2002; Emmons & Colby, 1995).

Received Social Support

Received support is the actual receipt of support or helping behaviors from others (Cohen & Wills, 1985). Interestingly, unlike perceived support, research on the benefits of received support has produced mixed findings (Stroebe & Stroebe, 1996). Similar to perceived support, receiving support can buffer people from stress and positively influence some individuals—however, others might not want support or think they need it. For example, dating advice from a friend may be considered more helpful than such advice from your mom! Interestingly, research has indicated that, regardless of the support-provider's intentions, support may not be considered helpful by the person receiving it if it is unwanted (Dunkel-Schetter, Blasband, Feinstein, & Herbert, 1992; Cutrona, 1986). Indeed, mentor support was viewed negatively by novice ESOL teachers (those teaching English as a second language in other countries; Brannan & Bleistein, 2012). Yet received support from family was perceived as very positive—the teachers said that their family members cared enough to ask about their jobs and told them how proud they were. Conversely, received mentor support did not meet the teachers' needs, instead making them feel afraid and embarrassed.

Quality or Quantity?

With so many mixed findings, psychologists have asked whether it is the quality of social support that matters or the quantity (e.g., more people in my support network). Interestingly, research by Friedman and Martin (2011) examining 1,500 Californians over 8 decades found that while quality does matter, individuals with larger social networks lived significantly longer than those with smaller networks. This research suggests we should count the number of our friends/family members—the more, the better, right? Not necessarily: Dunbar (1992; 1993) argued that we have a cognitive limit on the number of people with whom we can maintain social relationships. The general consensus is about 150—we can only "really" know (maintain contact with and relate to) about 150 people. Finally, research shows that diversity also matters in terms of one's network, such that individuals with more diverse social networks (i.e., different types of relationships, including friends, parents, neighbors, and classmates) were less likely to get the common cold compared to those with fewer and less diverse networks (Cohen, Doyle, Turner, Alper, & Skoner, 2003). In sum, it is important to have quality relationships as well as quantity—and, as the Beatles said, "all you need is love—love is all you need."

Outside Resources

Movie: Official Website of Catfish the Movie
www.iamrogue.com/catfish

Video: TED Talk from Helen Fisher on the brain in love
http://www.ted.com/talks/helen_fishe...n_in_love.html

Video: The Science of Heartbreak

Web: Groundbreaking longitudinal study on longevity from Howard S. Friedman and Leslie R. Martin
http://www.howardsfriedman.com/longevityproject/

Discussion Questions

1. What is more important—perceived social support or received social support? Why?
2. We understand how the Internet has changed the dating scene—how might it further change how we become romantically involved?
3. Can you love someone whom you have never met?
4. Do you think it is the quality or quantity of your relationships that really matters most?

Vocabulary

Functional distance
The frequency with which we cross paths with others.

Mere-exposure effect
The notion that people like people/places/things merely because they are familiar with them.

Perceived social support
A person's perception that others are there to help them in times of need.

Proximity
Physical nearness.

Received social support
The actual act of receiving support (e.g., informational, functional).

Support network
The people who care about and support a person.
By Kenneth Tan and Louis Tay, Purdue University

The relationships we cultivate in our lives are essential to our well-being—namely, happiness and health. Why is that so? We begin to answer this question by exploring the types of relationships—family, friends, colleagues, and lovers—we have in our lives and how they are measured. We also explore the different aspects of happiness and health, and show how the quantity and quality of relationships can affect our happiness and health.

learning objectives

• Understand why relationships are key to happiness and health.
• Define and list different forms of relationships.
• List different aspects of well-being.
• Explain how relationships can enhance well-being.
• Explain how relationships might not enhance well-being.

Introduction

In Daniel Defoe's classic novel Robinson Crusoe (1719), the main character is shipwrecked. For years he lives alone, creating a shelter for himself and marking the passage of time on a wooden calendar. It is a lonely existence, and Crusoe describes climbing a hilltop in the hopes of seeing a passing ship and possible rescue. He scans the horizon until, in his own words, he is "almost blind." Then, without hope, he sits and weeps.

Although it is a work of fiction, Robinson Crusoe contains themes we can all relate to. One of these is the idea of loneliness. Humans are social animals and we prefer living together in groups. We cluster in families, in cities, and in groups of friends. In fact, most people spend relatively few of their waking hours alone. Even introverts report feeling happier when they are with others! Yes, being surrounded by people and feeling connected to others appears to be a natural impulse.

In this module we will discuss relationships in the context of well-being. We will begin by defining well-being and then presenting research about different types of relationships. We will explore how both the quantity and quality of our relationships affect us, as well as take a look at a few popular conceptions (or misconceptions) about relationships and happiness.

The Importance of Relationships

If you were to reflect on the best moments of your life, chances are they involved other people. We feel good sharing our experiences with others, and our desire for high quality relationships may be connected to a deep-seated psychological impulse: the need to belong (Baumeister & Leary, 1995). Aristotle commented that humans are fundamentally social in nature. Modern society is full of evidence that Aristotle was right. For instance, people often hold strong opinions about single-child families—usually concerning what are often viewed as problematic "only child" characteristics—and most parents choose to have multiple kids. People join book clubs to make a solitary activity—reading—into a social activity. Prisons often punish offenders by putting them in solitary confinement, depriving them of the company of others. Perhaps the most obvious expression of the need to belong in contemporary life is the prevalence of social media. We live in an era when, for the first time in history, people effectively have two overlapping sets of social relationships: those in the real world and those in the virtual world. It may seem intuitive that our strong urge to connect with others has to do with the boost we receive to our own well-being from relationships.
After all, we derive considerable meaning from our relational bonds—as seen in the joy a newborn brings to its parents, the happiness of a wedding, and the good feelings of having reliable, supportive friendships. In fact, this intuition is borne out by research suggesting that relationships can be sources of intimacy and closeness (Reis, Clark, & Holmes, 2004), comfort and relief from stress (Collins & Feeney, 2000), and accountability—all of which help toward achieving better health outcomes (Tay, Tan, Diener, & Gonzalez, 2013; Taylor, 2010). Indeed, scholars have long considered social relationships to be fundamental to happiness and well-being (Argyle, 2001; Myers, 2000). If the people in our lives are as important to our happiness as the research suggests, it only makes sense to investigate how relationships affect us.

The Question of Measurement

Despite the intuitive appeal of the idea that good relationships translate to more happiness, researchers must collect and analyze data to arrive at reliable conclusions. This is particularly difficult with the concepts of relationships and happiness, because both can be difficult to define. What counts as a relationship? A pet? An old friend from childhood you haven't seen in ten years? Similarly, it is difficult to pinpoint exactly what qualifies as happiness. It is vital to define these terms, because their definitions serve as the guidelines by which they can be measured, a process called operationalization. Scientifically speaking, the two major questions any researcher needs to answer before he or she can begin to understand how relationships and well-being interact are, "How do I best measure relationships?" and "How do I best measure well-being?"

Let's begin with relationships. There are both objective and subjective ways to measure social relationships. Objective social variables are factors that are based on evidence rather than opinions. They focus on the presence and frequency of different types of relationships, and the degree of contact and amount of shared activities between people. Examples of these measures include participants' marital status, their number of friends and work colleagues, and the size of their social networks. Each of these variables is factually based (e.g., you have a specific number of coworkers). Another objective social variable is social integration, or one's degree of integration into social networks. This can be measured by looking at the frequency and amount of social activity or contact one has with others (see Okun, Stock, Haring, & Witter, 1984; Pinquart & Sorensen, 2000). The strength of objective measures is that they generally have a single correct answer. For example, a person is either married or not; there is no in-between.

Subjective social variables, as the name suggests, are those that focus on the subjective qualities of social relationships. These are the products of personal opinions and feelings rather than facts. A key subjective variable is social support—the extent to which individuals feel cared for, can receive help from others, and are part of a supportive network. Measures of social support ask people to report on their perceived levels of support as well as their satisfaction with the support they receive (see Cohen, Underwood, & Gottlieb, 2000). Other subjective social variables assess the nature and quality of social relationships themselves—that is, what types of relationships people have, and whether these social relationships are good or bad.
These can include measures that ask about the quality of a marriage (e.g., Dyadic Adjustment Scale; Spanier, 1976), the amount of conflict in a relationship (e.g., Conflict Tactics Scale; Straus, 1979), or the quality of each relationship in one's social network (e.g., Network of Relationships Inventory (NRI); Furman & Buhrmester, 1985). The strength of subjective measures is that they provide insight into people's personal experience. A married person, for example, might love or hate his/her marriage; subjective measures tell us which of these is the case.

Objective and subjective measures are often administered in a way that asks individuals to make a global assessment of their relationships (i.e., "How much social support do you receive?"). However, scientists have more recently begun to study social relationships and activity using methods such as daily diary methodology (Bolger, Davis, & Rafaeli, 2003), whereby individuals report on their relationships on a regular basis (e.g., three times a day). This allows researchers to examine in-the-moment instances and/or day-to-day trends of how social relationships affect happiness and well-being, compared to more global measures. Many researchers try to include multiple types of measurement—objective, subjective, and daily diaries—to overcome the weaknesses associated with any one measurement technique.

Just as researchers must consider how to best measure relationships, they must also face the issue of measuring well-being. Well-being is a topic many people have an opinion about. If you and nine other people were to write down your own definitions of happiness, or of well-being, there's a good chance you'd end up with ten unique answers. Some folks define happiness as a sense of peace, while others think of it as being healthy. Some people equate happiness with a sense of purpose, while others think of it as just another word for joy. Modern researchers have wrestled with this topic for decades. They acknowledge that both psychological and physical approaches are relevant to defining well-being, and that many dimensions—satisfaction, joy, meaning—are all important.

One prominent psychological dimension of well-being is happiness. In psychology, the scientific term for happiness is subjective well-being, which is defined by three different components: high life satisfaction, which refers to positive evaluations of one's life in general (e.g., "Overall, I am satisfied with my life"); positive feelings, which refers to the amount of positive emotions one experiences in life (e.g., peace, joy); and low negative feelings, which refers to the amount of negative emotions one experiences in life (e.g., sadness, anger) (Diener, 1984). These components are commonly measured using subjective self-report scales.

The physical dimension of well-being is best thought of as one's health. Health is a broad concept and includes, at least in part, being free of illness or infirmity. There are several aspects of physical health that researchers commonly consider when thinking about well-being and relationships. For example, health can be defined in terms of (A) injury, (B) disease, and (C) mortality. Health can also include physiological indicators, such as blood pressure or the strength of a person's immune system. Finally, there are health behaviors to be considered, such as dietary consumption, exercise, and smoking. Researchers often examine a variety of health variables in order to better understand the possible benefits of good relationships.
Presence and Quality of Relationships and Well-Being

If you wanted to investigate the connection between social relationships and well-being, where would you start? Would you focus on teenagers? Married couples? Would you interview religious people who have taken a vow of silence? These are the types of considerations well-being researchers face. It is impossible for a single study to look at all types of relationships across all age groups and cultures. Instead, researchers narrow their focus to specific variables. They tend to consider two major elements: the presence of relationships, and the quality of relationships.

Presence of relationships

The first consideration when trying to understand how relationships influence well-being is the presence of relationships. Simply put, researchers need to know whether or not people have relationships. Are they married? Do they have many friends? Are they a member of a club? Finding this out can be accomplished by looking at objective social variables, such as the size of a person's social network, or the number of friends they have. Researchers have discovered that, in general, the more social relationships people have, the more positively their sense of well-being is impacted (Lucas, Dyrenforth, & Diener, 2008). In one study of more than 200 undergraduate students, psychologists Ed Diener and Martin Seligman (2002) compared the happiest 10% to the unhappiest 10%. The researchers were curious to see what differentiated these two groups. Was it gender? Exercise habits? Religion? The answer turned out to be relationships! The happiest students were much more satisfied with their relationships, including with close friends, family, and romantic partnerships, than the unhappiest. They also spent less time alone.

Some people might be inclined to dismiss the research findings above because they focused primarily on college students. However, in a worldwide study of people of all ages from 123 nations, results showed that having even a few high quality social relationships was consistently linked with subjective well-being (Tay & Diener, 2011). This is an important finding because it means that a person doesn't have to be a social butterfly in order to be happy. Happiness doesn't necessarily depend on having dozens of friends, but rather on having at least a few close connections.

Another way of gaining an understanding of the presence of relationships is by looking at the absence of relationships. A lack of social connections can lead to loneliness and depression. People lose well-being when social relationships are denied—as in cases of ostracism. In many societies, withholding social relationships is used as a form of punishment. For example, in some Western high schools, people form social groups known as "cliques," in which people share interests and a sense of identity. Unlike clubs, cliques do not have explicit rules for membership but tend to form organically, as exclusive group friendships. When one member of a clique conflicts with the others, the offending member may be socially rejected. Similarly, some small societies practice shunning, a temporary period during which members withhold emotion, communication, and other forms of social contact as a form of punishment for wrongdoing. The Amish—a group of traditional Christian communities in North America who reject modern conveniences such as electricity—occasionally practice shunning (Hostetler, 1993).
Members who break important social rules, for example, are made to eat alone rather than with their family. This typically lasts for one to two weeks. Individuals' well-being has been shown to suffer dramatically when they are ostracized in such a way (Williams, 2009). Research has even shown that the areas of the brain that process physical pain when we are injured are the same areas that process emotional pain when we are ostracized (Eisenberger, Lieberman, & Williams, 2003).

Quality of relationships

Simply having a relationship is not, in itself, sufficient to produce well-being. We're all familiar with instances of awful relationships: Cinderella and her step-sisters, loveless marriages, friends who have frequent falling-outs (giving birth to the word "frenemy"). In order for a relationship to improve well-being, it has to be a good one. Researchers have found that higher friendship quality is associated with increased happiness (Demir & Weitekamp, 2007). Friendships aren't the only relationships that help, though. Researchers have found that high quality relationships between parents and children are associated with increased happiness, both for teenagers (Gohm, Oishi, Darlington, & Diener, 1998) and adults (Amato & Afifi, 2006).

Finally, an argument can be made for looking at relationships' effects on each of the distinct components of subjective well-being. Walen and Luchman (2000) investigated a mix of relationships, including family, friends, and romantic partners. They found that social support and conflict were associated with all three aspects of subjective well-being (life satisfaction, positive affect, and negative affect). Similarly, in a cross-cultural study comparing college students in Iran, Jordan, and the United States, researchers found that social support was linked to higher life satisfaction, higher positive affect, and lower negative affect (Brannan, Biswas-Diener, Mohr, Mortazavi, & Stein, 2012).

It may seem like common sense that good relationships translate to more happiness. You may be surprised to learn, however, that good relationships also translate to better health. Interestingly, both the quality and quantity of social relationships can affect a person's health (Cohen, 1988; House, Landis, & Umberson, 1988). Research has shown that having a larger social network and high quality relationships can be beneficial for health, whereas having a small social network and poor quality relationships can actually be detrimental to health (Uchino, 2006). Why might it be the case that good relationships are linked to health? One reason is that friends and romantic partners might share health behaviors, such as wearing seat belts, exercising, or abstaining from heavy alcohol consumption. Another reason is that people who experience social support might feel less stress. Stress, it turns out, is associated with a variety of health problems. Other discussions on social relationships and health can also be found in Noba (http://noba.to/4tm85z2x).

Types of Relationships

Intimate relationships

It makes sense to consider the various types of relationships in our lives when trying to determine just how relationships impact our well-being. For example, would you expect a person to derive the exact same happiness from an ex-spouse as from a child or coworker? Among the most important relationships for most people is their long-time romantic partner.
Most researchers begin their investigation of this topic by focusing on intimate relationships, because they are the closest form of social bond. Intimacy is more than just physical in nature; it also entails psychological closeness. Research findings suggest that having a single confidante—a person with whom you can be authentic and trust not to exploit your secrets and vulnerabilities—is more important to happiness than having a large social network (see Taylor, 2010, for a review).

Another important aspect of relationships is the distinction between formal and informal. Formal relationships are those that are bound by the rules of politeness. In most cultures, for instance, young people treat older people with formal respect, avoiding profanity and slang when interacting with them. Similarly, workplace relationships tend to be more formal, as do relationships with new acquaintances. Formal connections are generally less relaxed because they require a bit more work, demanding that we exert more self-control. Contrast these connections with informal relationships—friends, lovers, siblings, or others with whom you can relax. We can express our true feelings and opinions in these informal relationships, using the language that comes most naturally to us, and generally being more authentic. Because of this, it makes sense that more intimate relationships—those that are more comfortable and in which you can be more vulnerable—might be the most likely to translate to happiness.

The most common way researchers investigate intimacy is by examining marital status. Although marriage is just one type of intimate relationship, it is by far the most common type. In some research, the well-being of married people is compared to that of people who are single or have never been married, and in other research, married people are compared to people who are divorced or widowed (Lucas & Dyrenforth, 2005). Researchers have found that the transition from singlehood to marriage brings about an increase in subjective well-being (Haring-Hidore, Stock, Okun, & Witter, 1985; Lucas, 2005; Williams, 2003). Research has also shown that progress through the stages of relationship commitment (i.e., from singlehood to dating to marriage) is associated with an increase in happiness (Dush & Amato, 2005). On the other hand, experiencing divorce, or the death of a spouse, leads to adverse effects on subjective well-being and happiness, and these effects are stronger than the positive effects of being married (Lucas, 2005).

Although research frequently points to marriage being associated with higher rates of happiness, this does not guarantee that getting married will make you happy! The quality of one's marriage matters greatly. When a person remains in a problematic marriage, it takes an emotional toll. Indeed, a large body of research shows that people's overall life satisfaction is affected by their satisfaction with their marriage (Carr, Freedman, Cornman, & Schwarz, 2014; Dush, Taylor, & Kroeger, 2008; Karney, 2001; Luhmann, Hofmann, Eid, & Lucas, 2012; Proulx, Helms, & Buehler, 2007). The lower a person's self-reported level of marital quality, the more likely he or she is to report depression (Bookwala, 2012). In fact, longitudinal studies—those that follow the same people over a period of time—show that as marital quality declines, depressive symptoms increase (Fincham, Beach, Harold, & Osborne, 1997; Karney, 2001).
Proulx and colleagues (2007) arrived at this same conclusion after a systematic review of 66 cross-sectional and 27 longitudinal studies. What is it about bad marriages, or bad relationships in general, that takes such a toll on well-being? Research has pointed to conflict between partners as a major factor leading to lower subjective well-being (Gere & Schimmack, 2011). This makes sense. Negative relationships are linked to ineffective social support (Reblin, Uchino, & Smith, 2010) and are a source of stress (Holt-Lunstad, Uchino, Smith, & Hicks, 2007). In more extreme cases, physical and psychological abuse can be detrimental to well-being (Follingstad, Rutledge, Berg, Hause, & Polek, 1990). Victims of abuse sometimes feel shame, lose their sense of self, and become less happy and prone to depression and anxiety (Arias & Pape, 1999). However, the unhappiness and dissatisfaction that occur in abusive relationships tend to dissipate once the relationships end (Arriaga, Capezza, Goodfriend, Rayl, & Sands, 2013).

Work Relationships and Well-Being

Working adults spend a large part of their waking hours in relationships with coworkers and supervisors. Because these relationships are forced upon us by work, researchers focus less on their presence or absence and instead focus on their quality. High quality work relationships can make jobs enjoyable and less stressful, because workers experience mutual trust and support in the workplace to overcome work challenges. Liking the people we work with can also translate to more humor and fun on the job. Research has shown that supervisors who are more supportive have employees who are more likely to thrive at work (Paterson, Luthans, & Jeung, 2014; Monnot & Beehr, 2014; Winkler, Busch, Clasen, & Vowinkel, 2015). On the other hand, poor quality work relationships can make a job feel like drudgery. Everyone knows that horrible bosses can make the workday unpleasant. Supervisors who are sources of stress have a negative impact on the subjective well-being of their employees (Monnot & Beehr, 2014). Specifically, research has shown that employees who rated their supervisors high on the so-called “dark triad”—psychopathy, narcissism, and Machiavellianism—reported greater psychological distress at work, as well as less job satisfaction (Mathieu, Neumann, Hare, & Babiak, 2014).

In addition to the direct benefits or costs of work relationships on our well-being, we should also consider how these relationships can impact our job performance. Research has shown that feeling engaged in our work and having high job performance predict better health and greater life satisfaction (Shimazu, Schaufeli, Kamiyama, & Kawakami, 2015). Given that so many of our waking hours are spent on the job—about ninety thousand hours across a lifetime, or roughly 40 hours a week, 50 weeks a year, over a 45-year career—it makes sense that we should seek out and invest in positive relationships at work.

Fact or Myth: Are Social Relationships the Secret to Happiness?

If you read pop culture magazines or blogs, you’ve likely come across many supposed “secrets” to happiness. Some articles point to exercise as a sure route to happiness, while others point to gratitude as a crucial piece of the puzzle. Perhaps the most written about “secret” to happiness is having high quality social relationships. Some researchers argue that social relationships are central to subjective well-being (Argyle, 2001), but others contend that social relationships’ effects on happiness have been exaggerated.
This is because the correlations—the size of the associations—between social relationships and well-being are typically small (Lucas & Dyrenforth, 2006; Lucas et al., 2008). Does this mean that social relationships are not actually important for well-being? It would be premature to draw such conclusions, because even though the effects are small, they are robust and reliable across different studies, as well as across other domains of well-being. There may be no single secret to happiness, but there may be a recipe, and, if so, good social relationships would be one ingredient.

Outside Resources

Article: The New Yorker Magazine—“Hellhole” article on solitary confinement
http://www.newyorker.com/magazine/2009/03/30/hellhole

Blog: The Gottman Relationship Blog
https://www.gottman.com/blog/

Video: Helen Fisher on Millennials' Dating Trends
https://www.theatlantic.com/video/index/504626/tinder-wont-change-love/

Web: Science of Relationship’s website on social relationships and health
http://www.scienceofrelationships.co...rceived-p.html

Web: Science of Relationship’s website on social relationships and well-being
www.scienceofrelationships.co...ell-being.html

Discussion Questions

1. What is more important to happiness: the quality or quantity of your social relationships?
2. What do you think has more influence on happiness: friends or family relationships? Do you think that the effect of friends and family on happiness will change with age? What about relationship duration?
3. Do you think that single people are likely to be unhappy?
4. Do you think that same-sex couples who get married will have the same benefits, in terms of happiness and well-being, compared to heterosexual couples?
5. What elements of subjective well-being do you think social relationships have the largest impact on: life satisfaction, positive affect, or negative affect?
6. Do you think that if you are unhappy you can have good quality relationships?
7. Do you think that social relationships are more important for the happiness of women than for that of men?

Vocabulary

Confidante
A trusted person with whom secrets and vulnerabilities can be shared.

Correlation
A measure of the association between two variables, or how they go together.

Health
The complete state of physical, mental, and social well-being—not just the absence of disease or infirmity.

Health behaviors
Behaviors that are associated with better health. Examples include exercising, not smoking, and wearing a seat belt while in a vehicle.

Machiavellianism
Being cunning, strategic, or exploitative in one’s relationships. Named after Machiavelli, who outlined this way of relating in his book, The Prince.

Narcissism
A pervasive pattern of grandiosity (in fantasy or behavior), a need for admiration, and a lack of empathy.

Objective social variables
Targets of research interest that are factual and not subject to personal opinions or feelings.

Operationalization
The process of defining a concept so that it can be measured. In psychology, this often happens by identifying related concepts or behaviors that can be more easily measured.

Ostracism
Being excluded and ignored by others.

Psychopathy
A pattern of antisocial behavior characterized by an inability to empathize, egocentricity, and a desire to use relationships as tools for personal gain.

Shunning
The act of avoiding or ignoring a person, and withholding all social interaction for a period of time. Shunning generally occurs as a punishment and is temporary.
Social integration
Active engagement and participation in a broad range of social relationships.

Social support
A social network’s provision of psychological and material resources that benefit an individual.

Subjective social variables
Targets of research interest that are not necessarily factual but are related to personal opinions or feelings.

Subjective well-being
The scientific term used to describe how people experience the quality of their lives in terms of life satisfaction and emotional judgments of positive and negative affect.
By Nathaniel M. Lambert
Brigham Young University

Most research in the realm of relationships has examined that which can go wrong in relationships (e.g., conflict, infidelity, intimate partner violence). I summarize much of what has been examined about what goes right in a relationship and call these positive relationship deposits. Some research indicates that relationships need five positive interactions for every negative interaction. Active-constructive responding, gratitude, forgiveness, and time spent together are some sources of positive deposits in one’s relational bank account. These kinds of deposits can reduce the negative effects of conflict on marriage and strengthen relationships.

learning objectives

• Understand some of the challenges that plague close relationships today.
• Become familiar with the concept of positive emotional deposits.
• Review some of the research that is relevant to positive emotional deposits.
• Describe several ways people make positive emotional deposits.

Introduction

The status of close relationships in America can sometimes look a bit grim. More than half of marriages now end in divorce in the United States (Pinsof, 2002). Infidelity is the leading cause of divorce (Previti & Amato, 2004) and is on the rise across all age groups (Allen et al., 2008). Cybersex has likely contributed to the increased rates of infidelity, with some 65% of those who look for sex online having intercourse with their “Internet” partner offline as well. Research on intimate partner violence indicates that it occurs at alarmingly high rates, with over one-fifth of couples reporting at least one episode of violence over the course of a year (Schafer, Caetano, & Clark, 1998). These and other issues that arise in relationships (e.g., substance abuse, conflict) represent significant obstacles to close relationships. With so many problems that plague relationships, how can a positive relationship be cultivated? Is there some magic bullet or ratio? Yes, kind of.

The Magic Formula

Of course, no research is perfect, and there really is no panacea that will cure any relationship. However, research does suggest that long-term, stable marriages display a particular ratio of positive to negative interactions. That ratio is not 1:1; in fact, 1:1 is approximately the ratio observed among couples heading toward divorce. Thus, in a couple where a spouse gives one compliment for each criticism, the likely outcome is divorce. Happier couples have five positive interactions for every one negative interaction (Gottman, 1994). What can you do to increase the ratio of positive interactions on a regular basis? Make positive relationship deposits. Naturally, making positive relationship deposits will boost your overall positive emotions—so by making positive relationships a priority in your life, you can boost your positive emotions and become a flourishing individual.

Positive Relationship Deposits

In The Seven Habits of Highly Effective People, Covey (1989) compared human relationships to actual bank accounts—suggesting that every day we make deposits or withdrawals from our relationship accounts with each person in our lives. He recommended that to keep an overall positive balance, we need to make regular positive deposits. This will ultimately help buffer the negatives that are bound to occur in relationships. Keeping this metaphor of emotional capital in mind could be beneficial for promoting the well-being of the relationships in one’s life.
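To make the bank-account metaphor and the 5:1 benchmark concrete, here is a minimal sketch in Python. The function name, the interaction counts, and the category labels are all hypothetical illustrations for this module, not anything specified by Gottman (1994) or Covey (1989):

```python
# A toy model of the "relationship bank account" and Gottman's (1994)
# 5:1 positive-to-negative interaction ratio. All names and thresholds
# here are illustrative only.

def classify_relationship(positive: int, negative: int) -> str:
    """Compare the ratio of positive to negative interactions
    against the 5:1 benchmark reported for stable couples."""
    if negative == 0:
        return "stable"  # no withdrawals observed at all
    ratio = positive / negative
    if ratio >= 5.0:
        return "stable"               # at or above the 5:1 benchmark
    elif ratio > 1.0:
        return "below the benchmark"  # positive balance, but a thin one
    else:
        return "heading toward divorce"  # roughly 1:1 or worse

# One compliment per criticism (1:1) versus five per criticism (5:1):
print(classify_relationship(positive=4, negative=4))   # heading toward divorce
print(classify_relationship(positive=20, negative=4))  # stable
```

The point of the sketch is simply that what matters is not the absolute number of deposits but their balance against withdrawals, which is why the sections below focus on everyday opportunities to add to the positive side of the ledger.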
Some research suggests that people, on average, have more positive than negative experiences (Gable & Haidt, 2005). Thus, there are far more opportunities for deposits than for withdrawals. Conversely, even though there may be fewer negatives, Baumeister, Bratslavsky, Finkenauer, and Vohs (2001) argue quite persuasively that bad events overpower good events in one’s life, which suggests that the negative withdrawals are more salient and more impactful. This further accentuates the need to ensure that we have a healthy store of positive deposits that can help to counteract these more impactful account withdrawals. Positive deposits that accumulate over time should provide a buffer against the withdrawals that happen in every relationship. In other words, the inevitable occasional conflict is not nearly so bad for the relationship when it occurs in a partnership that is otherwise highly positive. What daily opportunities does relationship science suggest are effective for making positive relationship deposits?

Common Opportunities for Daily Positive Deposits

An individual’s general sentiment toward his or her partner depends on ongoing interactions, and these interactions provide many opportunities for deposits or withdrawals. To illustrate how daily interaction provides opportunities to make deposits in relationships, I will describe research that has been done on capitalization and active-constructive responding, gratitude, forgiveness, and spending time together in meaningful ways. Although there are several other ways by which positive relationship deposits can be made, these four have received quite a bit of attention from researchers. Then I will discuss some evidence on how an accumulation of such daily relationship deposits seems to provide a safeguard against the impact of conflict.

Building Intimacy Through Capitalization and Active-Constructive Responding

Intimacy has been defined as a close and familiar bond with another person. Intimacy is positively related to satisfaction in marriage (Patrick, Sells, Giordano, & Tollerud, 2007) and well-being in general (e.g., Waltz & Badura, 1987; Prager & Buhrmester, 1998). On the other hand, lacking marital intimacy is related to more severe depression (Waring & Patton, 1984). Thus, achieving intimacy with one’s partner is essential for a happy marriage and happiness in general, and it is something worth seeking. Given that people disclose their most positive daily experiences to their partner 60% to 80% of the time (Gable et al., 2004), such disclosures are a regular opportunity for intimacy building. When we disclose certain private things about ourselves, we increase the potential intimacy that we can have with another person; however, we also make ourselves vulnerable to getting hurt by the other person. What if they do not like what I have disclosed or react negatively? It can be a double-edged sword. Disclosing positive news from one’s day is a great opportunity for a daily deposit if the response from the other person is positive. What constitutes a positive response? To achieve intimacy, we must respond positively to remarks our partner makes. When a person responds enthusiastically to a partner’s good news, this fosters higher levels of intimacy (Gable, Reis, Impett, & Asher, 2004). Thus, responding in a positive manner to a relationship partner’s good news provides frequent opportunities to make deposits in the relationship bank account.
In fact, most people are presented with the chance to make this kind of relationship deposit almost every day. Most research has focused on support (partners’ responses to negative events); however, one study found that responses to positive events tend to be better predictors of relationship well-being than responses to negative events (Gable, Gonzaga, & Strachman, 2006). When one person seeks out another person with the intent to share positive news, it has been called capitalization (Gable et al., 2004). The best, most supportive response to someone who shares good news has been termed active-constructive and is characterized by enthusiastic support. These active-constructive responses are positively associated with trust, satisfaction, commitment, and intimacy. On the other hand, when the listener points out something negative about what is said, it is called active-destructive responding. Ignoring what is said is termed passive-destructive, and understating support is called passive-constructive. All of these types of responses (see Figure 11.10.1) have been related to adverse relationship outcomes (Gable et al., 2004). If partners listen and are enthusiastic about the good news of the other, they build a stronger relationship. If they ignore the good news, change the subject, devalue the good news, or refocus the good news to be about themselves, they may make a withdrawal from the account. Being aware of these research findings can help individuals focus on providing more helpful responses to those they care about.

Gratitude

Relationship researchers report that expressing gratitude on a regular basis is an important means by which positive deposits may be made into relationship bank accounts. In a recent study, participants were randomly assigned to write about daily events, express gratitude to a friend, discuss a positive memory with a friend, or think grateful thoughts about a friend twice a week for three weeks. At the conclusion of the three weeks, those who were randomly assigned to express gratitude to their friend reported higher positive regard for their friend and more comfort voicing relationship concerns than did those in the two control conditions (Lambert & Fincham, 2011). Also, those who expressed gratitude to a close relationship partner reported greater perceived communal strength (e.g., caring, willingness to sacrifice) than participants in all control conditions (Lambert, Clark, Durtschi, Fincham, & Graham, 2010). Similarly, Algoe, Fredrickson, and Gable (2013) found that benefactors’ positive perceptions of beneficiaries increased when gratitude was expressed for the benefit, and these perceptions enhanced relationship quality. These studies suggest that expressing gratitude to someone you are close to is an important way of making positive relationship deposits.

Forgiveness

Forgiveness is something else you can do regularly to aid relationship satisfaction (e.g., Fincham, 2000; Paleari, Regalia, & Fincham, 2003) and commitment (e.g., Finkel, Rusbult, Kumashiro, & Hannon, 2002; Karremans & Van Lange, 2008). Unresolved conflict can put couples at risk of developing the negative cycle of interaction that causes further harm to relationships. For instance, one study found that lack of forgiveness is linked to ineffective conflict resolution (Fincham, Beach, & Davila, 2004): if Cindy cannot forgive Joe, Cindy will struggle to effectively resolve other disagreements in their relationship.
Yet those who do forgive report much better conflict resolution a year later (Fincham, Beach, & Davila, 2007). It appears that forgiveness can be an important way of building emotional capital in the relationship, whereas not forgiving the people in your life can block positive deposits to the relationship bank account.

Spending Time in Meaningful Ways

Some suggest that the best way to spell love is T-I-M-E. In our fast-paced society, many relationships are time deprived. In the beginning phases of a relationship, this rarely seems to be an issue, given the novelty and excitement of the relationship; over time, however, discovering new things about one’s partner declines, and couples can slump into relationship boredom. The self-expansion model (Aron & Aron, 1996) suggests that people naturally seek to expand their capacity and that intimate relationships are an important way by which they accomplish self-expansion. Researchers have found that couples who engaged in more challenging and novel activities felt more satisfied with their relationship immediately afterward than did control couples (Aron et al., 2000). The takeaway message here is that simply watching TV with one’s romantic partner will not make nearly as large a deposit in the relational bank account as a more engaging or challenging joint activity would.

Accumulated Positive Deposits and Conflict Management

A positive balance of relationship deposits can help the overall relationship in times of conflict. For instance, some research indicates that a husband’s level of enthusiasm in everyday marital interactions was related to a wife’s affection in the midst of conflict (Driver & Gottman, 2004), showing that being pleasant and making deposits can change the nature of conflict. Also, Gottman and Levenson (1992) found that couples rated as having more pleasant interactions (compared with couples with less pleasant interactions) reported less severe marital problems, higher marital satisfaction, better physical health, and less risk for divorce. Finally, Janicki, Kamarck, Shiffman, and Gwaltney (2006) showed that the intensity of conflict with a spouse predicted marital satisfaction unless there was a record of positive partner interactions, in which case the conflict did not matter as much. Again, it seems as though having a positive balance through prior positive deposits helps to keep relationships strong even in the midst of conflict.

Relationships today are riddled with problems, including divorce, infidelity, intimate partner violence, and chronic conflict. If you want to avoid some of these common pitfalls and build a good relationship with a partner or with your friends, it is crucial to make daily positive deposits in your relationship bank accounts. Doing so will help you enjoy each other more and also help you weather the inevitable conflicts that pop up over time. The ways of building your positive relationship bank account that researchers have explored most are building intimacy through active-constructive responding, expressing gratitude to others, forgiving, and spending time in engaging joint activities. Although these are not the only ways to make positive deposits in your relationship bank accounts, they are some of the best examined. Consider how you might do more to make positive relationship deposits through these or other means for the survival and improvement of your relationships.
Outside Resources

A Primer on Teaching Positive Psychology
http://www.apa.org/monitor/oct03/primer.aspx

An Experiment in Gratitude

Positive Psychology Center
www.ppc.sas.upenn.edu/videolectures.htm

Relationship Matters Podcast Series
http://spr.sagepub.com/site/podcast/podcast_dir.xhtml

Understanding Forgiveness
http://www.pbs.org/thisemotionallife...ng-forgiveness

Discussion Questions

1. What are some of the main challenges that face relationships today?
2. How would you describe the concept of an emotional bank account?
3. What are some ways people can make deposits to their relationship bank accounts?
4. What do you think are the most effective ways for making positive relationship deposits?
5. What are some of the most powerful relationship deposits that others have made into your relationship bank account?
6. What are some challenging or engaging activities that you would consider doing more of with a close relationship partner?
7. Are there relationships of yours that have gotten into a negative spiral and could profit from positive relationship deposits?

Vocabulary

Active-constructive responding
Demonstrating sincere interest and enthusiasm for the good news of another person.

Capitalization
Seeking out someone else with whom to share your good news.

Relationship bank account
An account you hold with every person, in which a positive deposit or a negative withdrawal can be made during every interaction you have with that person.

Self-expansion model
Seeking to increase one’s capacity, often through an intimate relationship.
• 12.1: Industrial/Organizational (I/O) Psychology
This module provides an introduction to industrial and organizational (I/O) psychology. I/O psychology is an area of psychology that specializes in the scientific study of behavior in organizational settings and the application of psychology to understand work behavior. The key individuals and studies in the history of I/O psychology are addressed in this module. Further, professional I/O associations are discussed, as are the key areas of competence developed in I/O master’s programs.

• 12.2: Helping and Prosocial Behavior
The focus of this module is on helping—prosocial acts in dyadic situations in which one person is in need and another provides the necessary assistance to eliminate the other’s need. Although people are often in need, help is not always given. Why not? In this module, we will try to understand how the decision to help is made by answering the question: Who helps when and why?

• 12.3: Conformity and Obedience
We often change our attitudes and behaviors to match the attitudes and behaviors of the people around us. One reason for this conformity is a concern about what other people think of us. This process was demonstrated in a classic study in which college students deliberately gave wrong answers to a simple visual judgment task rather than go against the group. Another reason we conform to the norm is because other people often have information we do not.

• 12.4: Persuasion: So Easily Fooled
This module introduces several major principles in the process of persuasion. It offers an overview of the different paths to persuasion. It then describes how mindless processing makes us vulnerable to undesirable persuasion and some of the “tricks” that may be used against us.

• 12.5: Attraction and Beauty
More attractive people elicit more positive first impressions. This effect is called the attractiveness halo, and it is shown when judging those with more attractive faces, bodies, or voices. Moreover, it yields significant social outcomes, including advantages to attractive people in domains as far-reaching as romance, friendships, family relations, education, work, and criminal justice.

• 12.6: Prejudice, Discrimination, and Stereotyping
People are often biased against others outside of their own social group, showing prejudice (emotional bias), stereotypes (cognitive bias), and discrimination (behavioral bias). In the past, people used to be more explicit with their biases, but during the 20th century, when it became less socially acceptable to exhibit bias, such things as prejudice, stereotypes, and discrimination became more subtle (automatic, ambiguous, and ambivalent).

• 12.7: Social Comparison
When athletes compete in a race, they are able to observe and compare their performance against those of their competitors. In the same way, all people naturally engage in mental comparisons with the people around them during the course of daily life. These evaluations can impact our motivation and feelings. In this module, you will learn about the process of social comparison: its definition, consequences, and the factors that affect it.

• 12.8: Aggression and Violence
This module discusses the causes and consequences of human aggression and violence. Both internal and external causes are considered. Effective and ineffective techniques for reducing aggression are also discussed.
• 12.9: Social Neuroscience
This module provides an overview of the new field of social neuroscience, which combines the use of neuroscience methods and theories to understand how other people influence our thoughts, feelings, and behavior. The module reviews research measuring neural and hormonal responses to understand how we make judgments about other people and react to stress.

Thumbnail: The Scream by Edvard Munch.

Chapter 12: Social Part II

By Berrin Erdogan and Talya N. Bauer
Portland State University, Koç University

This module provides an introduction to industrial and organizational (I/O) psychology. I/O psychology is an area of psychology that specializes in the scientific study of behavior in organizational settings and the application of psychology to understand work behavior. The U.S. Department of Labor estimates that I/O psychology, as a field, will grow 26% by the year 2018. I/O psychologists typically have advanced degrees such as a Ph.D. or master’s degree and may work in academic, consulting, government, military, or private for-profit and not-for-profit organizational settings. Depending on the state in which they work, I/O psychologists may be licensed. They might ask and answer questions such as “What makes people happy at work?” “What motivates employees at work?” “What types of leadership styles result in better performance of employees?” “Who are the best applicants to hire for a job?” One hallmark of I/O psychology is its basis in data and evidence to answer such questions, and I/O psychology is based on the scientist-practitioner model. The key individuals and studies in the history of I/O psychology are addressed in this module. Further, professional I/O associations are discussed, as are the key areas of competence developed in I/O master’s programs.

learning objectives

• Define industrial and organizational (I/O) psychology.
• Describe what an I/O psychologist does.
• List the professional associations of I/O psychologists.
• Identify major milestones in the history of I/O psychology.

What is Industrial and Organizational (I/O) Psychology?

Psychology as a field is composed of many different areas. When thinking of psychology, the person on the street probably imagines the clinical psychologist who studies and treats dysfunctional behavior, or maybe the criminal psychologist who has become familiar due to popular TV shows such as Law & Order. I/O psychology may be underrepresented on TV, but it is a fast-growing and influential branch of psychology. What is I/O psychology? Briefly, it can be defined as the scientific study of behavior in organizational settings and the application of psychology to understand work behavior. In other words, while general psychology concerns itself with the behavior of individuals in general, I/O psychology focuses on understanding employee behavior in work settings. For example, I/O psychologists ask questions such as: How can organizations recruit and select the people they need in order to remain productive? How can organizations assess and improve the performance of their employees? What work and non-work factors contribute to the happiness, effectiveness, and well-being of employees in the workplace? How does work influence non-work behavior and happiness? What motivates employees at work? All of these important queries fall within the domain of I/O psychology. Table 1 presents a list of tasks I/O psychologists may perform in their work. This is an extensive list, and one person will not be responsible for all these tasks.
The I/O psychology field prepares and trains individuals to be more effective in performing the tasks listed in this table. At this point you may be asking yourself: Does psychology really need a special field to study work behaviors? In other words, wouldn’t the findings of general psychology be sufficient to understand how individuals behave at work? The answer is an emphatic no. Employees behave differently at work compared with how they behave in general. While some fundamental principles of psychology definitely explain how employees behave at work (such as selective perception or the desire to relate to those who are similar to us), organizational settings are unique. To begin with, organizations have a hierarchy. They have job descriptions for employees. Individuals go to work not only to seek fulfillment and to remain active, but also to receive a paycheck and satisfy their financial needs. Even when they dislike their jobs, many stay and continue to work until a better alternative comes along. All these constraints suggest that how we behave at work may be somewhat different from how we would behave without them.

According to the U.S. Bureau of Labor Statistics, in 2011, more than 149 million individuals worked at least part time and spent many hours of the week working—see Figure 12.1.1 for a breakdown (U.S. Department of Labor, 2011). In other words, we spend a large portion of our waking hours at work. How happy we are with our jobs and our careers is a primary predictor of how happy and content we are with our lives in general (Erdogan, Bauer, Truxillo, & Mansfield, 2012). Therefore, the I/O psychology field has much to offer to individuals and organizations interested in increasing employee productivity, retention, and effectiveness while at the same time ensuring that employees are happy and healthy.

It seems that I/O psychology is useful for organizations, but how is it helpful to you? Findings of I/O psychology are useful and relevant to everyone who is planning to work in an organizational setting. Note that we are not necessarily talking about a business setting. Even if you are planning to form your own band, or write a novel, or work in a not-for-profit organization, you will likely be working in, or interacting with, organizations. Understanding why people behave the way they do will be useful to you by helping you motivate and influence your coworkers and managers, communicate your message more effectively, negotiate a contract, and manage your own work life and career in a way that fits your life and career goals.

What Does an I/O Psychologist Do?

I/O psychology is a scientific discipline. Similar to other scientific fields, it uses research methods and approaches, and it tests hypotheses. However, I/O psychology is a social science. This means that its findings will always be less exact than those of the physical sciences. Physical sciences study natural matter in closed systems and in controlled conditions. Social sciences study human behavior in its natural setting, with multiple factors that can affect behavior, so their predictive ability will never be perfect. While we can expect that two hydrogen atoms and one oxygen atom will always make water when combined, combining job satisfaction with fair treatment will not always result in high performance. There are many influences on employee behaviors at work, and how employees behave depends on the person interacting with a given situation on a given day.
Despite this lack of precision, I/O psychology uses scientific principles to study organizational phenomena. Many of those who conduct these studies are located at universities, in psychology or management departments, but there are also many in private, government, or military organizations who conduct studies about I/O-related topics. These scholars conduct studies to understand topics such as “What makes people happy at work?” “What motivates employees at work?” “What types of leadership styles result in better performance of employees?” I/O psychology researchers tend to have a Ph.D. degree, and they develop hypotheses, find ways of reasonably testing those hypotheses in organizational settings, and distribute their findings by publishing in academic journals.

I/O psychology is based on the scientist-practitioner model. In other words, while the science side deals with understanding how and why things happen at work, the practitioner side takes a data-driven approach to understanding organizational problems and applies these findings to solve the specific problems facing the organization. While practitioners may learn about the most recent research findings by reading the journals that publish these results, some conduct their own research in their own companies, and some companies employ many I/O psychologists. Google is one company that collects and analyzes data to deal with talent-related issues. Google uses an annual Googlegeist (roughly translating to “the spirit of Google”) survey to keep tabs on how happy employees are. When survey results as well as turnover data showed that new mothers were twice as likely to leave the company as the average employee, the company made changes in its maternity leave policy and mitigated the problem (Manjoo, 2013). In other words, I/O psychologists both contribute to the science of workplace behavior by generating knowledge and solve actual problems organizations face by using this knowledge to design workplace recruitment, selection, and workforce management policies.

While the scientist-practitioner model is the hoped-for ideal, not everyone agrees that it captures the reality. Some argue that practitioners are not always up to date about what scientists know and, conversely, that scientists do not study what practitioners really care about often enough (Briner & Rousseau, 2011). At the same time, consumers of research should be wary, as there is some pseudo-science out there. The issues related to I/O psychology are important to organizations, which are sometimes willing to pay a lot of money for solutions to their problems, and some people try to sell organizations their most recent inventions in employee testing, training, performance appraisal, and coaching. Many of these claims are not valid, and there is very little evidence that some of these products, in fact, improve the performance or retention of employees. Therefore, organizations and consumers of I/O-related knowledge and interventions need to be selective and ask to see such evidence (which is not the same as asking to see the list of other clients who purchased the products!).

Careers in I/O Psychology

The U.S. Department of Labor estimates that I/O psychology as a field is expected to grow 26% by the year 2018 (American Psychological Association, 2011), so the job outlook for I/O psychologists is good.
Helping organizations understand and manage their workforce more effectively using science-based tools is important regardless of the shape of the economy, and I/O psychology as a field remains a desirable career option for those who have an interest in psychology in a work-related context coupled with an affinity for research methods and statistics. If you would like to refer to yourself as a psychologist in the United States, you need to be licensed, and this requirement also applies to I/O psychologists. Licensing requirements vary by state (see www.siop.org for details) and usually include a doctoral degree in psychology. However, it is possible to pursue a career relating to I/O psychology without holding the title of psychologist, and there are many job opportunities for those with a master’s degree in I/O psychology or in related fields such as organizational behavior and human resource management.

Academics and practitioners who work in I/O psychology or related fields are often members of the Society for Industrial and Organizational Psychology (SIOP). Students with an interest in I/O psychology are eligible to become affiliated members of this organization, even if they are not pursuing a degree related to I/O psychology. SIOP membership brings benefits including networking opportunities and subscriptions to an academic journal of I/O research and a newsletter detailing current issues in I/O. The organization supports its members by providing forums for information and idea exchange, as well as monitoring developments in the field for its membership. SIOP is an independent organization but also a subdivision of the American Psychological Association (APA), which is the scientific organization that represents psychologists in the United States. Different regions of the world have their own associations for I/O psychologists. For example, the European Association for Work and Organizational Psychology (EAWOP) is the premier organization for I/O psychologists in Europe, where I/O psychology is typically referred to as work and organizational psychology. A global federation of I/O psychology organizations, named the Alliance for Organizational Psychology, was recently established. It currently has three member organizations (SIOP, EAWOP, and the Organizational Psychology Division of the International Association for Applied Psychology, or Division 1), with plans to expand in the future. The Association for Psychological Science (APS) is another association to which many I/O psychologists belong.

Those who work in the I/O field may be based at a university, teaching and researching I/O-related topics. Some private organizations employing I/O psychologists include DDI, HUMRRO, Corporate Executive Board (CEB), and IBM Smarter Workforce. These organizations engage in services such as testing, performance management, and administering attitude surveys. Many organizations also hire in-house employees with expertise in I/O psychology–related fields to work in departments including human resource management or “people analytics.” According to a 2011 membership survey of SIOP, the largest percentage of members were employed in academic institutions, followed by those in consulting or independent practice, private-sector organizations, and public-sector organizations (Society for Industrial and Organizational Psychology, 2011). Moreover, the majority of respondents (86%) were not licensed.
History of I/O Psychology

The field of I/O psychology is almost as old as the field of psychology itself. In order to understand any field, it helps to understand how it started and evolved. Let’s look at the pioneers of I/O psychology and some defining studies and developments in the field (see Koppes, 1997; Landy, 1997).

The title of “founding father” of I/O psychology is usually associated with Hugo Munsterberg of Harvard University. His 1913 book, Psychology and Industrial Efficiency, is considered to be the first textbook in I/O psychology. The book was the first to discuss topics such as how to find the best person for the job and how to design jobs to maintain efficiency by dealing with fatigue. One of his contemporaries, Frederick Taylor, was not a psychologist and is considered to be a founding father not of I/O psychology but of scientific management. Despite his non-psychology background, his ideas were important to the development of the I/O psychology field, because they evolved at around the same time, and some of his innovations, such as job analysis, later became critically important aspects of I/O psychology. Taylor was an engineer and management consultant who pioneered time studies, in which management observed how work was being performed and how it could be performed better. For example, after analyzing how workers shoveled coal, he decided that the optimum weight of coal to be lifted was 21 pounds, and he designed a shovel to be distributed to workers for this purpose. He instituted mandatory breaks to prevent fatigue, which increased the efficiency of workers. His book Principles of Scientific Management was highly influential in pointing out how management could play a role in increasing the efficiency of human factors.

Lillian Gilbreth was an engineer and I/O psychologist, arguably completing the first Ph.D. in I/O psychology. She and her husband, Frank Gilbreth, developed Taylor’s ideas by conducting time and motion studies, while also bringing more humanism to these efforts. Gilbreth underlined the importance of how workers felt about their jobs, in addition to how they could perform their jobs more efficiently. She was also the first to bring attention to the value of observing job candidates while they performed their jobs, which is the foundation behind work sample tests. The Gilbreths ran a successful consulting business based on these ideas. Her advising of GE on kitchen redesign resulted in foot-pedal trash cans and shelves in refrigerator doors. Her life with her husband and their 12 children is detailed in Cheaper by the Dozen, a book authored by two of her children and later made into a 1950 movie.

World War I was a turning point for the field of I/O psychology, as it popularized the notion of testing for placement purposes. During and after the war, more than 1 million Americans were tested, which exposed a generation of men to the idea of using tests as part of selection and placement. Following the war, the idea of testing started to take root in private industry. American Psychological Association President Robert Yerkes, as well as Walter Dill Scott and Walter Van Dyke Bingham of the Carnegie Institute of Technology’s (later Carnegie Mellon University) division of applied psychology, were influential in popularizing the idea of testing by offering their services to the U.S. Army.
Another major development in the field was the Hawthorne Studies, conducted under the leadership of Harvard University researchers Elton Mayo and Fritz Roethlisberger at the Western Electric Co. in the late 1920s. Originally planned as a study of the effects of lighting on productivity, this series of studies revealed unexpected and surprising findings. For example, one study showed that regardless of the level of change in lighting, productivity remained high and started worsening only when lighting was reduced to the level of moonlight. Further exploration resulted in the hypothesis that employees were responding to being paid attention to and being observed, rather than to the level of lighting (called the “Hawthorne effect”). Another study revealed the phenomenon of group pressure on individuals to limit production to below their capacity. These studies are considered to be classics in I/O psychology because they underlined the importance of understanding employee psychology to make sense of employee behavior in the workplace.

Since then, thousands of articles have been published on topics relating to I/O psychology, and it is one of the influential subfields of psychology. I/O psychologists generate scholarly knowledge and have a role in recruitment, selection, assessment and development of talent, and design and improvement of the workplace. One of the major projects I/O psychologists contributed to is O*Net, a vast database of occupational information sponsored by the U.S. government. It contains information on hundreds of jobs, listing the tasks, knowledge, skill, and ability requirements of each job; the work activities and contexts under which work is performed; and the personality traits and values that are critical to effectiveness in those jobs. This database is free and a useful resource for students, job seekers, and HR professionals.

Findings of I/O psychology have the potential to contribute to the health and happiness of people around the world. When people are asked how happy they are with their lives, their feelings about the work domain are a big part of how they answer this question. I/O psychology research uncovers the secrets of a happy workplace (see Table 2). Organizations designed around these principles will see direct benefits, in the form of employee happiness, well-being, motivation, effectiveness, and retention. We have now reviewed what I/O psychology is, what I/O psychologists do, the history of I/O, associations related to I/O psychology, and accomplishments of I/O psychologists. Those interested in finding out more about I/O psychology are encouraged to visit the outside resources below to learn more.
Outside Resources

Careers: Occupational information via O*Net's database containing information on hundreds of standardized and occupation-specific descriptors
http://www.onetonline.org/

Organization: Society for Industrial/Organizational Psychology (SIOP)
http://www.siop.org

Organization: Alliance for Organizational Psychology (AOP)
www.allianceorgpsych.org

Organization: American Psychological Association (APA)
http://www.apa.org

Organization: Association for Psychological Science (APS)
http://www.psychologicalscience.org/

Organization: European Association of Work and Organizational Psychology (EAWOP)
http://www.eawop.org

Organization: International Association for Applied Psychology (IAAP)
www.iaapsy.org/division1/

Training: For more about graduate training programs in I/O psychology and related fields
www.siop.org/gtp/

Video: An introduction to I/O Psychology produced by the Society for Industrial and Organizational Psychology.

Discussion Questions

1. If your organization is approached by a company stating that it has an excellent training program in leadership, how would you assess whether the program is good or not? What information would you seek before making a decision?
2. After reading this module, what topics in I/O psychology seemed most interesting to you?
3. How would an I/O psychologist go about establishing whether a selection test is better than an alternative?
4. What would be the advantages and downsides of pursuing a career in I/O psychology?

Vocabulary

Hawthorne Effect
An effect in which individuals change or improve some facet of their behavior as a result of their awareness of being observed.

Hawthorne Studies
A series of well-known studies conducted under the leadership of Harvard University researchers, which changed the perspective of scholars and practitioners about the role of human psychology in relation to work behavior.

Industrial/Organizational psychology
Scientific study of behavior in organizational settings and the application of psychology to understand work behavior.

O*Net
A vast database of occupational information containing data on hundreds of jobs.

Scientist-practitioner model
The dual focus of I/O psychology, which entails practical questions motivating scientific inquiry to generate knowledge about the work–person interface, with the practitioner side applying this scientific knowledge to organizational problems.

Society for Industrial and Organizational Psychology (SIOP)
A professional organization bringing together academics and practitioners who work in I/O psychology and related areas. It is Division 14 of the American Psychological Association (APA).

Work and organizational psychology
Preferred name for I/O psychology in Europe.
By Dennis L. Poepsel and David A. Schroeder
Truman State University, University of Arkansas

People often act to benefit other people, and these acts are examples of prosocial behavior. Such behaviors may come in many guises: helping an individual in need; sharing personal resources; volunteering time, effort, and expertise; cooperating with others to achieve some common goals. The focus of this module is on helping—prosocial acts in dyadic situations in which one person is in need and another provides the necessary assistance to eliminate the other’s need. Although people are often in need, help is not always given. Why not? The decision of whether or not to help is not as simple and straightforward as it might seem, and many factors need to be considered by those who might help. In this module, we will try to understand how the decision to help is made by answering the question: Who helps when and why?

learning objectives

• Learn which situational and social factors affect when a bystander will help another in need.
• Understand which personality and individual difference factors make some people more likely to help than others.
• Discover whether we help others out of a sense of altruistic concern for the victim, for more self-centered and egoistic motives, or both.

Introduction

Go to YouTube and search for episodes of “Primetime: What Would You Do?” You will find video segments in which apparently innocent individuals are victimized, while onlookers typically fail to intervene. The events are all staged, but they are very real to the bystanders on the scene. The entertainment offered is the nature of the bystanders’ responses, and viewers are outraged when bystanders fail to intervene. They are convinced that they would have helped. But would they? Viewers are overly optimistic in their beliefs that they would play the hero. Helping may occur frequently, but help is not always given to those in need. So when do people help, and when do they not? Not all people are equally helpful—who helps? Why would a person help another in the first place? Many factors go into a person’s decision to help—a fact that the viewers do not fully appreciate. This module will answer the question: Who helps when and why?

When Do People Help?

Social psychologists began trying to answer this question following the murder of Kitty Genovese in 1964 (Dovidio, Piliavin, Schroeder, & Penner, 2006; Penner, Dovidio, Piliavin, & Schroeder, 2005). A knife-wielding assailant attacked Kitty repeatedly as she was returning to her apartment early one morning. At least 38 people may have been aware of the attack, but no one came to save her. More recently, in 2010, Hugo Alfredo Tale-Yax was stabbed when he apparently tried to intervene in an argument between a man and woman. As he lay dying in the street, only one man checked his status, but many others simply glanced at the scene and continued on their way. (One passerby did stop to take a cellphone photo, however.) Unfortunately, failures to come to the aid of someone in need are not unique, as the segments on “What Would You Do?” show. Help is not always forthcoming for those who may need it the most. Trying to understand why people do not always help became the focus of bystander intervention research (e.g., Latané & Darley, 1970). To answer the question of when people help, researchers have focused on 1) how bystanders come to define emergencies, 2) when they decide to take responsibility for helping, and 3)
how the costs and benefits of intervening affect their decisions of whether to help.

Defining the situation: The role of pluralistic ignorance

The decision to help is not a simple yes/no proposition. In fact, a series of questions must be addressed before help is given—even in emergencies in which time may be of the essence. Sometimes help comes quickly; an onlooker recently jumped from a Philadelphia subway platform to help a stranger who had fallen on the track. Help was clearly needed and was quickly given. But some situations are ambiguous, and potential helpers may have to decide whether a situation is one in which help, in fact, needs to be given. To define ambiguous situations (including many emergencies), potential helpers may look to the actions of others to decide what should be done. But those others are looking around too, also trying to figure out what to do. Everyone is looking, but no one is acting! Relying on others to define the situation, and then erroneously concluding that no intervention is necessary when help is actually needed, is called pluralistic ignorance (Latané & Darley, 1970). When people use the inaction of others to define their own course of action, the resulting pluralistic ignorance leads to less help being given.

Do I have to be the one to help?: Diffusion of responsibility

Simply being with others may facilitate or inhibit whether we get involved in other ways as well. In situations in which help is needed, the presence or absence of others may affect whether a bystander will assume personal responsibility to give the assistance. If the bystander is alone, personal responsibility to help falls solely on the shoulders of that person. But what if others are present? Although it might seem that having more potential helpers around would increase the chances of the victim getting help, the opposite is often the case. Knowing that someone else could help seems to relieve bystanders of personal responsibility, so bystanders do not intervene. This phenomenon is known as diffusion of responsibility (Darley & Latané, 1968). On the other hand, watch the video of the race officials following the 2013 Boston Marathon after two bombs exploded as runners crossed the finish line. Despite the presence of many spectators, the yellow-jacketed race officials immediately rushed to give aid and comfort to the victims of the blast. Each one no doubt felt a personal responsibility to help by virtue of their official capacity in the event; fulfilling the obligations of their roles overrode the influence of the diffusion of responsibility effect.

There is an extensive body of research showing the negative impact of pluralistic ignorance and diffusion of responsibility on helping (Fisher et al., 2011), in both emergencies and everyday need situations. These studies show the tremendous importance potential helpers place on the social situation in which unfortunate events occur, especially when it is not clear what should be done and who should do it. Other people provide important social information about how we should act and what our personal obligations might be. But does knowing a person needs help and accepting responsibility to provide that help mean the person will get assistance? Not necessarily.

The costs and rewards of helping

The nature of the help needed plays a crucial role in determining what happens next. Specifically, potential helpers engage in a cost–benefit analysis before getting involved (Dovidio et al., 2006).
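The paragraphs that follow spell out what those costs and rewards can look like. As a minimal sketch of the underlying decision rule—with an invented function and entirely hypothetical weights, not anything specified by Dovidio et al. (2006)—the logic might be expressed like this:

```python
# A toy rendering of the "economics of helping": help becomes likely
# when anticipated rewards exceed anticipated costs. The weights below
# are invented for illustration only.

def likely_to_help(costs: dict, rewards: dict) -> bool:
    """Return True when summed anticipated rewards outweigh summed costs."""
    return sum(rewards.values()) > sum(costs.values())

# Lending a classmate a pencil: trivial cost, modest social reward.
print(likely_to_help(costs={"time": 1},
                     rewards={"thanks": 2, "avoided_guilt": 1}))   # True

# Confronting an armed assailant: potentially fatal cost dominates.
print(likely_to_help(costs={"risk_of_injury": 100},
                     rewards={"praise": 10}))                      # False
```

The sketch simply makes explicit that the same act of helping can fall on either side of the threshold depending on how costly the intervention is, which is exactly the contrast the module draws next.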
If the needed help is of relatively low cost in terms of time, money, resources, or risk, then help is more likely to be given. Lending a classmate a pencil is easy; confronting the knife-wielding assailant who attacked Kitty Genovese is an entirely different matter. As the unfortunate case of Hugo Alfredo Tale-Yax demonstrates, intervening may cost the life of the helper. The potential rewards of helping someone will also enter into the equation, perhaps offsetting the cost of helping. Thanks from the recipient of help may be a sufficient reward. If helpful acts are recognized by others, helpers may receive the social reward of praise or even monetary rewards. Even avoiding feelings of guilt if one does not help may be considered a benefit. Potential helpers consider how much helping will cost and compare those costs to the rewards that might be realized; it is the economics of helping. If the costs outweigh the rewards, helping is less likely. If the rewards are greater than the costs, helping is more likely.

Who Helps?

Do you know someone who always seems to be ready, willing, and able to help? Do you know someone who never helps out? It seems there are personality and individual differences in the helpfulness of others. To answer the question of who chooses to help, researchers have examined 1) the role that sex and gender play in helping, 2) what personality traits are associated with helping, and 3) the characteristics of the “prosocial personality.”

Who are more helpful—men or women?

In terms of individual differences that might matter, one obvious question is whether men or women are more likely to help. In one of the “What Would You Do?” segments, a man takes a woman’s purse from the back of her chair and then leaves the restaurant. Initially, no one responds, but as soon as the woman asks about her missing purse, a group of men immediately rush out the door to catch the thief. So, are men more helpful than women? The quick answer is “not necessarily.” It all depends on the type of help needed. To be very clear, the general level of helpfulness may be pretty much equivalent between the sexes, but men and women help in different ways (Becker & Eagly, 2004; Eagly & Crowley, 1986). What accounts for these differences? Two factors help to explain sex and gender differences in helping. The first is related to the cost–benefit analysis process discussed previously. Physical differences between men and women may come into play (e.g., Wood & Eagly, 2002); the fact that men tend to have greater upper body strength than women makes the cost of intervening in some situations less for a man. Confronting a thief is a risky proposition, and some strength may be needed in case the perpetrator decides to fight. A bigger, stronger bystander is less likely to be injured and more likely to be successful. The second explanation is simple socialization. Men and women have traditionally been raised to play different social roles that prepare them to respond differently to the needs of others, and people tend to help in ways that are most consistent with their gender roles. Female gender roles encourage women to be compassionate, caring, and nurturing; male gender roles encourage men to take physical risks, to be heroic and chivalrous, and to be protective of those less powerful.
As a consequence of social training and the gender roles that people have assumed, men may be more likely to jump onto subway tracks to save a fallen passenger, but women are more likely to give comfort to a friend with personal problems (Diekman & Eagly, 2000; Eagly & Crowley, 1986). There may be some specialization in the types of help given by the two sexes, but it is nice to know that there is someone out there—man or woman—who is able to give you the help that you need, regardless of what kind of help it might be. A trait for being helpful: Agreeableness Graziano and his colleagues (e.g., Graziano & Tobin, 2009; Graziano, Habashi, Sheese, & Tobin, 2007) have explored how agreeableness—one of the Big Five personality dimensions (e.g., Costa & McCrae, 1988)—plays an important role in prosocial behavior. Agreeableness is a core trait that includes such dispositional characteristics as being sympathetic, generous, forgiving, and helpful, and behavioral tendencies toward harmonious social relations and likeability. At the conceptual level, a positive relationship between agreeableness and helping may be expected, and research by Graziano et al. (2007) has found that those higher on the agreeableness dimension are, in fact, more likely than those low on agreeableness to help siblings, friends, strangers, or members of some other group. Agreeable people seem to expect that others will be similarly cooperative and generous in interpersonal relations, and they, therefore, act in helpful ways that are likely to elicit positive social interactions. Searching for the prosocial personality Rather than focusing on a single trait, Penner and his colleagues (Penner, Fritzsche, Craiger, & Freifeld, 1995; Penner & Orom, 2010) have taken a somewhat broader perspective and identified what they call the prosocial personality orientation. Their research indicates that two major characteristics are related to the prosocial personality and prosocial behavior. The first characteristic is called other-oriented empathy: People high on this dimension have a strong sense of social responsibility, empathize with and feel emotionally tied to those in need, understand the problems the victim is experiencing, and have a heightened sense of moral obligation to be helpful. This factor has been shown to be highly correlated with the trait of agreeableness discussed previously. The second characteristic, helpfulness, is more behaviorally oriented. Those high on the helpfulness factor have been helpful in the past, and because they believe they can be effective with the help they give, they are more likely to be helpful in the future. Why Help? Finally, the question of why a person would help needs to be asked. What motivation is there for that behavior? Psychologists have suggested that 1) evolutionary forces may serve to predispose humans to help others, 2) egoistic concerns may determine if and when help will be given, and 3) selfless, altruistic motives may also promote helping in some cases. Evolutionary roots for prosocial behavior Our evolutionary past may provide clues about why we help (Buss, 2004). Our very survival was no doubt promoted by the prosocial relations with clan and family members, and, as a hereditary consequence, we may now be especially likely to help those closest to us—blood-related relatives with whom we share a genetic heritage.
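As a brief formal aside (our addition, not part of the original module): Hamilton (1964), the source cited just below for kin selection, captured this logic in what is now known as Hamilton's rule. A tendency to help can be favored by selection whenever

$$ r\,b > c, $$

where \(r\) is the genetic relatedness between helper and recipient (1/2 for a full sibling, 1/8 for a first cousin), \(b\) is the reproductive benefit to the recipient, and \(c\) is the reproductive cost to the helper. On this account, helping a full sibling "pays" genetically whenever the benefit to the sibling is more than twice the cost to the helper.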
According to evolutionary psychology, we are helpful in ways that increase the chances that our DNA will be passed along to future generations (Burnstein, Crandall, & Kitayama, 1994)—the goal of the “selfish gene” (Dawkins, 1976). Our personal DNA may not always move on, but we can still be successful in getting some portion of our DNA transmitted if our daughters, sons, nephews, nieces, and cousins survive to produce offspring. The favoritism shown for helping our blood relatives is called kin selection (Hamilton, 1964). But, we do not restrict our relationships just to our own family members. We live in groups that include individuals who are unrelated to us, and we often help them too. Why? Reciprocal altruism (Trivers, 1971) provides the answer. Because of reciprocal altruism, we are all better off in the long run if we help one another. If helping someone now increases the chances that you will be helped later, then your overall chances of survival are increased. There is the chance that someone will take advantage of your help and not return your favors. But people seem predisposed to identify those who fail to reciprocate, and punishments including social exclusion may result (Buss, 2004). Cheaters will not enjoy the benefit of help from others, reducing the likelihood of the survival of themselves and their kin. Evolutionary forces may provide a general inclination for being helpful, but they may not be as good an explanation for why we help in the here and now. What factors serve as proximal influences for decisions to help? Egoistic motivation for helping Most people would like to think that they help others because they are concerned about the other person’s plight. In truth, the reasons why we help may be more about ourselves than others: Egoistic or selfish motivations may make us help. Implicitly, we may ask, “What’s in it for me?” There are two major theories that explain what types of reinforcement helpers may be seeking. The negative state relief model (e.g., Cialdini, Darby, & Vincent, 1973; Cialdini, Kenrick, & Baumann, 1982) suggests that people sometimes help in order to make themselves feel better. Whenever we are feeling sad, we can use helping someone else as a positive mood boost to feel happier. Through socialization, we have learned that helping can serve as a secondary reinforcement that will relieve negative moods (Cialdini & Kenrick, 1976). The arousal: cost–reward model provides an additional way to understand why people help (e.g., Piliavin, Dovidio, Gaertner, & Clark, 1981). This model focuses on the aversive feelings aroused by seeing another in need. If you have ever heard an injured puppy yelping in pain, you know that feeling, and you know that the best way to relieve that feeling is to help and to comfort the puppy. Similarly, when we see someone who is suffering in some way (e.g., injured, homeless, hungry), we vicariously experience a sympathetic arousal that is unpleasant, and we are motivated to eliminate that aversive state. One way to do that is to help the person in need. By eliminating the victim’s pain, we eliminate our own aversive arousal. Helping is an effective way to alleviate our own discomfort. As an egoistic model, the arousal: cost–reward model explicitly includes the cost/reward considerations that come into play. Potential helpers will find ways to cope with the aversive arousal that will minimize their costs—maybe by means other than direct involvement. 
For example, the costs of directly confronting a knife-wielding assailant might stop a bystander from getting involved, but the cost of some indirect help (e.g., calling the police) may be acceptable. In either case, the victim’s need is addressed. Unfortunately, if the costs of helping are too high, bystanders may reinterpret the situation to justify not helping at all. We now know that the attack of Kitty Genovese was a murderous assault, but it may have been misperceived as a lover’s spat by someone who just wanted to go back to sleep. For some, fleeing the situation causing their distress may do the trick (Piliavin et al., 1981). The egoistically based negative state relief model and the arousal: cost–reward model see the primary motivation for helping as being the helper’s own outcome. Recognize that the victim’s outcome is of relatively little concern to the helper—benefits to the victim are incidental byproducts of the exchange (Dovidio et al., 2006). The victim may be helped, but the helper’s real motivation according to these two explanations is egoistic: Helpers help to the extent that it makes them feel better. Altruistic help Although many researchers believe that egoism is the only motivation for helping, others suggest that altruism—helping that has as its ultimate goal the improvement of another’s welfare—may also be a motivation for helping under the right circumstances. Batson (2011) has offered the empathy–altruism model to explain altruistically motivated helping for which the helper expects no benefits. According to this model, the key for altruism is empathizing with the victim, that is, putting oneself in the shoes of the victim and imagining how the victim must feel. When taking this perspective and having empathic concern, potential helpers become primarily interested in increasing the well-being of the victim, even if the helper must incur some costs that might otherwise be easily avoided. The empathy–altruism model does not dismiss egoistic motivations; helpers not empathizing with a victim may experience personal distress and have an egoistic motivation, not unlike the feelings and motivations explained by the arousal: cost–reward model. Because egoistically motivated individuals are primarily concerned with their own cost–benefit outcomes, they are less likely to help if they think they can escape the situation with no costs to themselves. In contrast, altruistically motivated helpers are willing to accept the cost of helping to benefit a person with whom they have empathized—this “self-sacrificial” approach to helping is the hallmark of altruism (Batson, 2011). Although there is still some controversy about whether people can ever act for purely altruistic motives, it is important to recognize that, while helpers may derive some personal rewards by helping another, the help that has been given is also benefitting someone who was in need. The residents who offered food, blankets, and shelter to stranded runners who were unable to get back to their hotel rooms because of the Boston Marathon bombing undoubtedly received positive rewards because of the help they gave, but those stranded runners who were helped got what they badly needed as well. “In fact, it is quite remarkable how the fates of people who have never met can be so intertwined and complementary. Your benefit is mine; and mine is yours” (Dovidio et al., 2006, p. 143).
Conclusion We started this module by asking the question, “Who helps when and why?” As we have shown, the question of when help will be given is not quite as simple as the viewers of “What Would You Do?” believe. The power of the situation that operates on potential helpers in real time is not fully appreciated by those viewers. What might appear to be a split-second decision to help is actually the result of consideration of multiple situational factors (e.g., the helper’s interpretation of the situation, the presence and ability of others to provide the help, the results of a cost–benefit analysis) (Dovidio et al., 2006). We have found that men and women tend to help in different ways—men are more impulsive and physically active, while women are more nurturing and supportive. Personality characteristics such as agreeableness and the prosocial personality orientation also affect people’s likelihood of giving assistance to others. And, why would people help in the first place? In addition to evolutionary forces (e.g., kin selection, reciprocal altruism), there is extensive evidence to show that helping and prosocial acts may be motivated by selfish, egoistic desires; by selfless, altruistic goals; or by some combination of egoistic and altruistic motives. (For a fuller consideration of the field of prosocial behavior, we refer you to Dovidio et al. [2006].) Outside Resources Article: Alden, L. E., & Trew, J. L. (2013). If it makes you happy: Engaging in kind acts increases positive affect in socially anxious individuals. Emotion, 13, 64-75. doi:10.1037/a0027761 Review available at: http://nymag.com/scienceofus/2015/07...y-be-nice.html Book: Batson, C. D. (2011). Altruism in humans. New York, NY: Oxford University Press. Book: Dovidio, J. F., Piliavin, J. A., Schroeder, D. A., & Penner, L. A. (2006). The social psychology of prosocial behavior. Mahwah, NJ: Erlbaum. Book: Mikulincer, M., & Shaver, P. R. (2010). Prosocial motives, emotions, and behavior: The better angels of our nature. Washington, DC: American Psychological Association. Book: Schroeder, D. A., & Graziano, W. G. (forthcoming). The Oxford handbook of prosocial behavior. New York, NY: Oxford University Press. Institution: Center for Generosity, University of Notre Dame, 936 Flanner Hall, Notre Dame, IN 46556. http://www.generosityresearch.nd.edu Institution: The Greater Good Science Center, University of California, Berkeley. www.greatergood.berkeley.edu News Article: Bystanders Stop Suicide Attempt http://jfmueller.faculty.noctrl.edu/crow/bystander.pdf Social Psychology Network (SPN) http://www.socialpsychology.org/social.htm#prosocial Video: Episodes (individual) of “Primetime: What Would You Do?” http://www.YouTube.com Video: Episodes of “Primetime: What Would You Do?” that often include some commentary from experts in the field may be available at http://www.abc.com Video: From The Inquisitive Mind website, a great overview of different aspects of helping and prosocial behavior, including pluralistic ignorance, diffusion of responsibility, the bystander effect, and empathy. Discussion Questions 1. Pluralistic ignorance suggests that inactions by other observers of an emergency will decrease the likelihood that help will be given. What do you think will happen if even one other observer begins to offer assistance to a victim? 2. In addition to those mentioned in the module, what other costs and rewards might affect a potential helper’s decision of whether to help?
Receiving help to solve some problem is an obvious benefit for someone in need; are there any costs that a person might have to bear as a result of receiving help from someone? 3. What are the characteristics possessed by your friends who are most helpful? By your friends who are least helpful? What has made your helpful friends and your unhelpful friends so different? What kinds of help have they given to you, and what kind of help have you given to them? Are you a helpful person? 4. Do you think that sex and gender differences in the frequency of helping and the kinds of helping have changed over time? Why? Do you think that we might expect more changes in the future? 5. What do you think is the primary motive for helping behavior: egoism or altruism? Are there any professions in which people are being “pure” altruists, or are some egoistic motivations always playing a role? 6. There are other prosocial behaviors in addition to the kind of helping discussed here. People volunteer to serve many different causes and organizations. People come together to cooperate with one another to achieve goals that no one individual could reach alone. How do you think the factors that affect helping might affect prosocial actions such as volunteering and cooperating? Do you think that there might be other factors that make people more or less likely to volunteer their time and energy or to cooperate in a group? Vocabulary Agreeableness A core personality trait that includes such dispositional characteristics as being sympathetic, generous, forgiving, and helpful, and behavioral tendencies toward harmonious social relations and likeability. Altruism A motivation for helping that has the improvement of another’s welfare as its ultimate goal, with no expectation of any benefits for the helper. Arousal: cost–reward model An egoistic theory proposed by Piliavin et al. (1981) that claims that seeing a person in need leads to the arousal of unpleasant feelings, and observers are motivated to eliminate that aversive state, often by helping the victim. A cost–reward analysis may lead observers to react in ways other than offering direct assistance, including indirect help, reinterpretation of the situation, or fleeing the scene. Bystander intervention The phenomenon whereby people intervene to help others in need even if the other is a complete stranger and the intervention puts the helper at risk. Cost–benefit analysis A decision-making process that compares the cost of an action or thing against the expected benefit to help determine the best course of action. Diffusion of responsibility When deciding whether to help a person in need, knowing that there are others who could also provide assistance relieves bystanders of some measure of personal responsibility, reducing the likelihood that bystanders will intervene. Egoism A motivation for helping that has the improvement of the helper’s own circumstances as its primary goal. Empathic concern According to Batson’s empathy–altruism hypothesis, observers who empathize with a person in need (that is, put themselves in the shoes of the victim and imagine how that person feels) will experience empathic concern and have an altruistic motivation for helping. Empathy–altruism model An altruistic theory proposed by Batson (2011) that claims that people who put themselves in the shoes of a victim and imagine how the victim feels will experience empathic concern that evokes an altruistic motivation for helping.
Helpfulness A component of the prosocial personality orientation; describes individuals who have been helpful in the past and, because they believe they can be effective with the help they give, are more likely to be helpful in the future. Helping Prosocial acts that typically involve situations in which one person is in need and another provides the necessary assistance to eliminate the other’s need. Kin selection According to evolutionary psychology, the favoritism shown for helping our blood relatives, with the goal of increasing the likelihood that some portion of our DNA will be passed on to future generations. Negative state relief model An egoistic theory proposed by Cialdini et al. (1982) that claims that people have learned through socialization that helping can serve as a secondary reinforcement that will relieve negative moods such as sadness. Other-oriented empathy A component of the prosocial personality orientation; describes individuals who have a strong sense of social responsibility, empathize with and feel emotionally tied to those in need, understand the problems the victim is experiencing, and have a heightened sense of moral obligation to be helpful. Personal distress According to Batson’s empathy–altruism hypothesis, observers who take a detached view of a person in need will experience feelings of being “worried” and “upset” and will have an egoistic motivation for helping to relieve that distress. Pluralistic ignorance Relying on the actions of others to define an ambiguous need situation and then erroneously concluding that no help or intervention is necessary. Prosocial behavior Social behavior that benefits another person. Prosocial personality orientation A measure of individual differences that identifies two sets of personality characteristics (other-oriented empathy, helpfulness) that are highly correlated with prosocial behavior. Reciprocal altruism According to evolutionary psychology, a genetic predisposition for people to help those who have previously helped them.
By Jerry M. Burger Santa Clara University We often change our attitudes and behaviors to match the attitudes and behaviors of the people around us. One reason for this conformity is a concern about what other people think of us. This process was demonstrated in a classic study in which college students deliberately gave wrong answers to a simple visual judgment task rather than go against the group. Another reason we conform to the norm is because other people often have information we do not, and relying on norms can be a reasonable strategy when we are uncertain about how we are supposed to act. Unfortunately, we frequently misperceive how the typical person acts, which can contribute to problems such as the excessive binge drinking often seen in college students. Obeying orders from an authority figure can sometimes lead to disturbing behavior. This danger was illustrated in a famous study in which participants were instructed to administer painful electric shocks to another person in what they believed to be a learning experiment. Despite vehement protests from the person receiving the shocks, most participants continued the procedure when instructed to do so by the experimenter. The findings raise questions about the power of blind obedience in deplorable situations such as atrocities and genocide. They also raise concerns about the ethical treatment of participants in psychology experiments. learning objectives • Become aware of how widespread conformity is in our lives and some of the ways each of us changes our attitudes and behavior to match the norm. • Understand the two primary reasons why people often conform to perceived norms. • Appreciate how obedience to authority has been examined in laboratory studies and some of the implications of the findings from these investigations. • Consider some of the remaining issues and sources of controversy surrounding Milgram’s obedience studies. Introduction When he was a teenager, my son often enjoyed looking at photographs of me and my wife taken when we were in high school. He laughed at the hairstyles, the clothing, and the kind of glasses people wore “back then.” And when he was through with his ridiculing, we would point out that no one is immune to fashions and fads and that someday his children will probably be equally amused by his high school photographs and the trends he found so normal at the time. Everyday observation confirms that we often adopt the actions and attitudes of the people around us. Trends in clothing, music, foods, and entertainment are obvious. But our views on political issues, religious questions, and lifestyles also reflect to some degree the attitudes of the people we interact with. Similarly, decisions about behaviors such as smoking and drinking are influenced by whether the people we spend time with engage in these activities. Psychologists refer to this widespread tendency to act and think like the people around us as conformity. Conformity What causes all this conformity? To start, humans may possess an inherent tendency to imitate the actions of others. Although we usually are not aware of it, we often mimic the gestures, body posture, language, talking speed, and many other behaviors of the people we interact with. Researchers find that this mimicking increases the connection between people and allows our interactions to flow more smoothly (Chartrand & Bargh, 1999). Beyond this automatic tendency to imitate others, psychologists have identified two primary reasons for conformity. 
The first of these is normative influence. When normative influence is operating, people go along with the crowd because they are concerned about what others think of them. We don’t want to look out of step or become the target of criticism just because we like different kinds of music or dress differently than everyone else. Fitting in also brings rewards such as camaraderie and compliments. How powerful is normative influence? Consider a classic study conducted many years ago by Solomon Asch (1956). The participants were male college students who were asked to engage in a seemingly simple task. An experimenter standing several feet away held up a card that depicted one line on the left side and three lines on the right side. The participant’s job was to say aloud which of the three lines on the right was the same length as the line on the left. Sixteen cards were presented one at a time, and the correct answer on each was so obvious as to make the task a little boring. Except for one thing. The participant was not alone. In fact, there were six other people in the room who also gave their answers to the line-judgment task aloud. Moreover, although they pretended to be fellow participants, these other individuals were, in fact, confederates working with the experimenter. The real participant was seated so that he always gave his answer after hearing what five other “participants” said. Everything went smoothly until the third trial, when inexplicably the first “participant” gave an obviously incorrect answer. The mistake might have been amusing, except the second participant gave the same answer. As did the third, the fourth, and the fifth participant. Suddenly the real participant was in a difficult situation. His eyes told him one thing, but five out of five people apparently saw something else. It’s one thing to wear your hair a certain way or like certain foods because everyone around you does. But, would participants intentionally give a wrong answer just to conform with the other participants? The confederates uniformly gave incorrect answers on 12 of the 16 trials, and 76 percent of the participants went along with the norm at least once and also gave the wrong answer. On average, they conformed with the group on one-third of the 12 test trials (about four trials per participant). Although we might be impressed that the majority of the time participants answered honestly, most psychologists find it remarkable that so many college students caved in to the pressure of the group rather than do the job they had volunteered to do. In almost all cases, the participants knew they were giving an incorrect answer, but their concern for what these other people might be thinking about them overpowered their desire to do the right thing. Variations of Asch’s procedures have been conducted numerous times (Bond, 2005; Bond & Smith, 1996). We now know that the findings are easily replicated, that there is an increase in conformity with more confederates (up to about five), that teenagers are more prone to conforming than are adults, and that people conform significantly less often when they believe the confederates will not hear their responses (Berndt, 1979; Bond, 2005; Crutchfield, 1955; Deutsch & Gerard, 1955). This last finding is consistent with the notion that participants change their answers because they are concerned about what others think of them.
Finally, although we see the effect in virtually every culture that has been studied, more conformity is found in collectivist countries such as Japan and China than in individualistic countries such as the United States (Bond & Smith, 1996). Compared with individualistic cultures, people who live in collectivist cultures place a higher value on the goals of the group than on individual preferences. They also are more motivated to maintain harmony in their interpersonal relations. The other reason we sometimes go along with the crowd is that people are often a source of information. Psychologists refer to this process as informational influence. Most of us, most of the time, are motivated to do the right thing. If society deems that we put litter in a proper container, speak softly in libraries, and tip our waiter, then that’s what most of us will do. But sometimes it’s not clear what society expects of us. In these situations, we often rely on descriptive norms (Cialdini, Reno, & Kallgren, 1990). That is, we act the way most people—or most people like us—act. This is not an unreasonable strategy. Other people often have information that we do not, especially when we find ourselves in new situations. Perhaps you have been part of a conversation that went something like this: “Do you think we should?” “Sure. Everyone else is doing it.” If so, you have experienced the power of informational influence. However, it’s not always easy to obtain good descriptive norm information, which means we sometimes rely on a flawed notion of the norm when deciding how we should behave. A good example of how misperceived norms can lead to problems is found in research on binge drinking among college students. Excessive drinking is a serious problem on many campuses (Mita, 2009). There are many reasons why students binge drink, but one of the most important is their perception of the descriptive norm. How much students drink is highly correlated with how much they believe the average student drinks (Neighbors, Lee, Lewis, Fossos, & Larimer, 2007). Unfortunately, students aren’t very good at making this assessment. They notice the boisterous heavy drinker at the party but fail to consider all the students not attending the party. As a result, students typically overestimate the descriptive norm for college student drinking (Borsari & Carey, 2003; Perkins, Haines, & Rice, 2005). Most students believe they consume significantly less alcohol than the norm, a miscalculation that creates a dangerous push toward more and more excessive alcohol consumption. On the positive side, providing students with accurate information about drinking norms has been found to reduce overindulgent drinking (Burger, LaSalvia, Hendricks, Mehdipour, & Neudeck, 2011; Neighbors, Lee, Lewis, Fossos, & Walter, 2009). Researchers have demonstrated the power of descriptive norms in a number of areas. Homeowners reduced the amount of energy they used when they learned that they were consuming more energy than their neighbors (Schultz, Nolan, Cialdini, Goldstein, & Griskevicius, 2007). Undergraduates selected the healthy food option when led to believe that other students had made this choice (Burger et al., 2010). Hotel guests were more likely to reuse their towels when a hanger in the bathroom told them that this is what most guests did (Goldstein, Cialdini, & Griskevicius, 2008).
And more people began using the stairs instead of the elevator when informed that the vast majority of people took the stairs to go up one or two floors (Burger & Shelton, 2011). Obedience Although we may be influenced by the people around us more than we recognize, whether we conform to the norm is up to us. But sometimes decisions about how to act are not so easy. Sometimes we are directed by a more powerful person to do things we may not want to do. Researchers who study obedience are interested in how people react when given an order or command from someone in a position of authority. In many situations, obedience is a good thing. We are taught at an early age to obey parents, teachers, and police officers. It’s also important to follow instructions from judges, firefighters, and lifeguards. And a military would fail to function if soldiers stopped obeying orders from superiors. But, there is also a dark side to obedience. In the name of “following orders” or “just doing my job,” people can violate ethical principles and break laws. More disturbingly, obedience often is at the heart of some of the worst of human behavior—massacres, atrocities, and even genocide. It was this unsettling side of obedience that led to some of the most famous and most controversial research in the history of psychology. Milgram (1963, 1965, 1974) wanted to know why so many otherwise decent German citizens went along with the brutality of the Nazi leaders during the Holocaust. “These inhumane policies may have originated in the mind of a single person,” Milgram (1963, p. 371) wrote, “but they could only be carried out on a massive scale if a very large number of persons obeyed orders.” To understand this obedience, Milgram conducted a series of laboratory investigations. In all but one variation of the basic procedure, participants were men recruited from the community surrounding Yale University, where the research was carried out. These citizens signed up for what they believed to be an experiment on learning and memory. In particular, they were told the research concerned the effects of punishment on learning. Three people were involved in each session. One was the participant. Another was the experimenter. The third was a confederate who pretended to be another participant. The experimenter explained that the study consisted of a memory test and that one of the men would be the teacher and the other the learner. Through a rigged drawing, the real participant was always assigned the teacher’s role and the confederate was always the learner. The teacher watched as the learner was strapped into a chair and had electrodes attached to his wrist. The teacher then moved to the room next door where he was seated in front of a large metal box the experimenter identified as a “shock generator.” The front of the box displayed gauges and lights and, most noteworthy, a series of 30 levers across the bottom. Each lever was labeled with a voltage figure, starting with 15 volts and moving up in 15-volt increments to 450 volts. Labels also indicated the strength of the shocks, starting with “Slight Shock” and moving up to “Danger: Severe Shock” toward the end. The last two levers were simply labeled “XXX” in red. Through a microphone, the teacher administered a memory test to the learner in the next room. The learner responded to the multiple-choice items by pressing one of four buttons that were barely within reach of his strapped-down hand. 
If the teacher saw the correct answer light up on his side of the wall, he simply moved on to the next item. But if the learner got the item wrong, the teacher pressed one of the shock levers and, thereby, delivered the learner’s punishment. The teacher was instructed to start with the 15-volt lever and move up to the next highest shock for each successive wrong answer. In reality, the learner received no shocks. But he did make a lot of mistakes on the test, which forced the teacher to administer what he believed to be increasingly strong shocks. The purpose of the study was to see how far the teacher would go before refusing to continue. The teacher’s first hint that something was amiss came after pressing the 75-volt lever and hearing through the wall the learner say “Ugh!” The learner’s reactions became stronger and louder with each lever press. At 150 volts, the learner yelled out, “Experimenter! That’s all. Get me out of here. I told you I had heart trouble. My heart’s starting to bother me now. Get me out of here, please. My heart’s starting to bother me. I refuse to go on. Let me out.” The experimenter’s role was to encourage the participant to continue. If at any time the teacher asked to end the session, the experimenter responded with phrases such as, “The experiment requires that you continue,” and “You have no other choice, you must go on.” The experimenter ended the session only after the teacher stated four successive times that he did not want to continue. All the while, the learner’s protests became more intense with each shock. After 300 volts, the learner refused to answer any more questions, which led the experimenter to say that no answer should be considered a wrong answer. After 330 volts, despite vehement protests from the learner following previous shocks, the teacher heard only silence, suggesting that the learner was now physically unable to respond. If the teacher reached 450 volts—the end of the generator—the experimenter told him to continue pressing the 450 volt lever for each wrong answer. It was only after the teacher pressed the 450-volt lever three times that the experimenter announced that the study was over. If you had been a participant in this research, what would you have done? Virtually everyone says he or she would have stopped early in the process. And most people predict that very few if any participants would keep pressing all the way to 450 volts. Yet in the basic procedure described here, 65 percent of the participants continued to administer shocks to the very end of the session. These were not brutal, sadistic men. They were ordinary citizens who nonetheless followed the experimenter’s instructions to administer what they believed to be excruciating if not dangerous electric shocks to an innocent person. The disturbing implication from the findings is that, under the right circumstances, each of us may be capable of acting in some very uncharacteristic and perhaps some very unsettling ways. Milgram conducted many variations of this basic procedure to explore some of the factors that affect obedience. He found that obedience rates decreased when the learner was in the same room as the experimenter and declined even further when the teacher had to physically touch the learner to administer the punishment. 
Participants also were less willing to continue the procedure after seeing other teachers refuse to press the shock levers, and they were significantly less obedient when the instructions to continue came from a person they believed to be another participant rather than from the experimenter. Finally, Milgram found that women participants followed the experimenter’s instructions at exactly the same rate the men had. Milgram’s obedience research has been the subject of much controversy and discussion. Psychologists continue to debate the extent to which Milgram’s studies tell us something about atrocities in general and about the behavior of German citizens during the Holocaust in particular (Miller, 2004). Certainly, there are important features of that time and place that cannot be recreated in a laboratory, such as a pervasive climate of prejudice and dehumanization. Another issue concerns the relevance of the findings. Some people have argued that today we are more aware of the dangers of blind obedience than we were when the research was conducted back in the 1960s. However, findings from partial and modified replications of Milgram’s procedures conducted in recent years suggest that people respond to the situation today much like they did half a century ago (Burger, 2009). Another point of controversy concerns the ethical treatment of research participants. Researchers have an obligation to look out for the welfare of their participants. Yet, there is little doubt that many of Milgram’s participants experienced intense levels of stress as they went through the procedure. In his defense, Milgram was not unconcerned about the effects of the experience on his participants. And in follow-up questionnaires, the vast majority of his participants said they were pleased they had been part of the research and thought similar experiments should be conducted in the future. Nonetheless, in part because of Milgram’s studies, guidelines and procedures were developed to protect research participants from these kinds of experiences. Although Milgram’s intriguing findings left us with many unanswered questions, conducting a full replication of his experiment remains out of bounds by today’s standards. Social psychologists are fond of saying that we are all influenced by the people around us more than we recognize. Of course, each person is unique, and ultimately each of us makes choices about how we will and will not act. But decades of research on conformity and obedience make it clear that we live in a social world and that—for better or worse—much of what we do is a reflection of the people we encounter. Outside Resources Student Video: Christine N. Winston and Hemali Maher's 'The Milgram Experiment' gives an excellent 3-minute overview of one of the most famous experiments in the history of psychology. It was one of the winning entries in the 2015 Noba Student Video Award. Video: An example of informational influence in a field setting Video: Scenes from a recent partial replication of Milgram’s obedience studies Video: Scenes from a recent replication of Asch’s conformity experiment Web: Website devoted to scholarship and research related to Milgram’s obedience studies http://www.stanleymilgram.com Discussion Questions 1. In what ways do you see normative influence operating among you and your peers? How difficult would it be to go against the norm? What would it take for you to not do something just because all your friends were doing it? 2.
What are some examples of how informational influence helps us do the right thing? How can we use descriptive norm information to change problem behaviors? 3. Is conformity more likely or less likely to occur when interacting with other people through social media as compared to face-to-face encounters? 4. When is obedience to authority a good thing and when is it bad? What can be done to prevent people from obeying commands to engage in truly deplorable behavior such as atrocities and massacres? 5. In what ways do Milgram’s experimental procedures fall outside the guidelines for research with human participants? Are there ways to conduct relevant research on obedience to authority without violating these guidelines? Vocabulary Conformity Changing one’s attitude or behavior to match a perceived social norm. Descriptive norm The perception of what most people do in a given situation. Informational influence Conformity that results from a concern to act in a socially approved manner as determined by how others act. Normative influence Conformity that results from a concern for what other people think of us. Obedience Responding to an order or command from a person in a position of authority.
By Robert V. Levine California State University, Fresno This module introduces several major principles in the process of persuasion. It offers an overview of the different paths to persuasion. It then describes how mindless processing makes us vulnerable to undesirable persuasion and some of the “tricks” that may be used against us. learning objectives • Recognize the difference between the central and peripheral routes to persuasion. • Understand the concepts of trigger features, fixed action patterns, heuristics, and mindless thinking, and how these processes are essential to our survival but, at the same time, leave us vulnerable to exploitation. • Understand some common “tricks” persuasion artists may use to take advantage of us. • Use this knowledge to make yourself less susceptible to unwanted persuasion. Introduction Have you ever tried to swap seats with a stranger on an airline? Ever negotiated the price of a car? Ever tried to convince someone to recycle, quit smoking, or make a similar change in health behaviors? If so, you are well versed with how persuasion can show up in everyday life. Persuasion has been defined as “the process by which a message induces change in beliefs, attitudes, or behaviors” (Myers, 2011). Persuasion can take many forms. It may, for example, differ in whether it targets public compliance or private acceptance, whether its effects are short-term or long-term, whether it involves slowly escalating commitments or sudden interventions, and, most of all, in the benevolence of its intentions. When persuasion is well-meaning, we might call it education. When it is manipulative, it might be called mind control (Levine, 2003). Whatever the content, however, there is a similarity to the form of the persuasion process itself. As the advertising commentator Sid Bernstein once observed, “Of course, you sell candidates for political office the same way you sell soap or sealing wax or whatever; because, when you get right down to it, that’s the only way anything is sold” (Levine, 2003). Persuasion is one of the most studied of all social psychology phenomena. This module provides an introduction to several of its most important components. Two Paths to Persuasion Persuasion theorists distinguish between the central and peripheral routes to persuasion (Petty & Cacioppo, 1986). The central route employs direct, relevant, logical messages. This method rests on the assumption that the audience is motivated, will think carefully about what is presented, and will react on the basis of your arguments. The central route is intended to produce enduring agreement. For example, you might decide to vote for a particular political candidate after hearing her speak and finding her logic and proposed policies to be convincing. The peripheral route, on the other hand, relies on superficial cues that have little to do with logic. The peripheral approach is the salesman’s way of thinking. It requires a target who isn’t thinking carefully about what you are saying. It requires low effort from the target and often exploits rule-of-thumb heuristics that trigger mindless reactions (see below). It may be intended to persuade you to do something you do not want to do and might later be sorry you did. Advertisements, for example, may show celebrities, cute animals, beautiful scenery, or provocative sexual images that have nothing to do with the product. The peripheral approach is also common in the darkest of persuasion programs, such as those of dictators and cult leaders.
Returning to the example of voting, you can experience the peripheral route in action when you see a provocative, emotionally charged political advertisement that tugs at you to vote a particular way. Triggers and Fixed Action Patterns The central route emphasizes objective communication of information. The peripheral route relies on psychological techniques. These techniques may take advantage of a target’s not thinking carefully about the message. The process mirrors a phenomenon in animal behavior known as fixed action patterns (FAPs). These are sequences of behavior that occur in exactly the same fashion, in exactly the same order, every time they’re elicited. Cialdini (2008) compares it to a prerecorded tape that is turned on and, once it is, always plays to its finish. He describes it as if the animal were turning on a tape recorder (Cialdini, 2008). There is the feeding tape, the territorial tape, the migration tape, the nesting tape, the aggressive tape—each sequence ready to be played when a situation calls for it. In humans, fixed action patterns include many of the activities we engage in while mentally on "auto-pilot." These behaviors are so automatic that it is very difficult to control them. Nearly everyone who feeds a baby, for instance, mimics each bite the baby takes by opening and closing their own mouth! If two people near you look up and point, you will automatically look up yourself. We also operate in a reflexive, non-thinking way when we make many decisions. We are, for example, less critical about medical advice dispensed by a doctor than by a friend who read an interesting article on the topic in a popular magazine. A notable characteristic of fixed action patterns is how they are activated. At first glance, it appears the animal is responding to the overall situation. For example, the maternal tape appears to be set off when a mother sees her hungry baby, or the aggressive tape seems to be activated when an enemy invades the animal’s territory. It turns out, however, that the on/off switch may actually be controlled by a specific, minute detail of the situation—maybe a sound or shape or patch of color. These are the hot buttons of the biological world—what Cialdini refers to as “trigger features” and biologists call “releasers.” Humans are not so different. Take the example of a study conducted on various ways to promote a campus bake sale for charity (Levine, 2003). Simply displaying the cookies and other treats to passersby did not generate many sales (only 2 out of 30 potential customers made a purchase). In an alternate condition, however, when potential customers were asked to "buy a cookie for a good cause," the number rose to 12 out of 30. It seems that the phrase "a good cause" triggered a willingness to act. In fact, when the phrase "a good cause" was paired with a locally recognized charity (known for its food-for-the-homeless program), the number held steady at 14 out of 30. When a fictional good cause was used instead (the make-believe "Levine House"), still 11 out of 30 potential customers made purchases, and not one asked about the purpose or nature of the cause. The phrase "for a good cause" was an influential enough hot button that the exact cause didn't seem to matter. The effectiveness of peripheral persuasion depends on our frequent reliance on these sorts of fixed action patterns and trigger features.
These mindless rules of thumb are generally effective shortcuts for coping with the overload of information we all must confront. They serve as heuristics, mental shortcuts that enable us to make decisions and solve problems quickly and efficiently. They also, however, make us vulnerable to uninvited exploitation through the peripheral route of persuasion. The Source of Persuasion: The Triad of Trustworthiness Effective persuasion requires trusting the source of the communication. Studies have identified three characteristics that lead to trust: perceived authority, honesty, and likability. When the source appears to have any or all of these characteristics, people not only are more willing to agree to their request but are willing to do so without carefully considering the facts. We assume we are on safe ground and are happy to shortcut the tedious process of informed decision making. As a result, we are more susceptible to messages and requests, no matter their particular content or how peripheral they may be. Authority From earliest childhood, we learn to rely on authority figures for sound decision making because their authority signifies status and power, as well as expertise. These two facets often work together. Authorities such as parents and teachers are not only our primary sources of wisdom while we grow up, but they control us and our access to the things we want. In addition, we have been taught to believe that respect for authority is a moral virtue. As adults, it is natural to transfer this respect to society’s designated authorities, such as judges, doctors, bosses, and religious leaders. We assume their positions give them special access to information and power. Usually we are correct, so that our willingness to defer to authorities becomes a convenient shortcut to sound decision making. Uncritical trust in authority may, however, lead to bad decisions. Perhaps the most famous study ever conducted in social psychology demonstrated that, when conditions were set up just so, two-thirds of a sample of psychologically normal men were willing to administer potentially lethal shocks to a stranger when an apparent authority in a laboratory coat ordered them to do so (Milgram, 1974; Burger, 2009). Uncritical trust in authority can be problematic for several reasons. First, even if the source of the message is a legitimate, well-intentioned authority, they may not always be correct. Second, when respect for authority becomes mindless, expertise in one domain may be confused with expertise in general. To assume there is credibility when a successful actor promotes a cold remedy, or when a psychology professor offers his views about politics, can lead to problems. Third, the authority may not be legitimate. It is not difficult to fake a college degree or professional credential or to buy an official-looking badge or uniform. Honesty Honesty is the moral dimension of trustworthiness. Persuasion professionals have long understood how critical it is to their efforts. Marketers, for example, dedicate exorbitant resources to developing and maintaining an image of honesty. A trusted brand or company name becomes a mental shortcut for consumers. It is estimated that some 50,000 new products come out each year. Forrester Research, a marketing research company, calculates that children have seen almost six million ads by the age of 16. An established brand name helps us cut through this volume of information. It signals we are in safe territory.
“The real suggestion to convey,” advertising leader Theodore MacManus observed in 1910, “is that the man manufacturing the product is an honest man, and the product is an honest product, to be preferred above all others” (Fox, 1997). Likability If we know that celebrities aren’t really experts, and that they are being paid to say what they’re saying, why do their endorsements sell so many products? Ultimately, it is because we like them. More than any single quality, we trust people we like. Roger Ailes, a public relations adviser to Presidents Reagan and George H.W. Bush, observed: “If you could master one element of personal communication that is more powerful than anything . . . it is the quality of being likable. I call it the magic bullet, because if your audience likes you, they’ll forgive just about everything else you do wrong. If they don’t like you, you can hit every rule right on target and it doesn’t matter.” The mix of qualities that make a person likable is complex and often does not generalize from one situation to another. One clear finding, however, is that physically attractive people tend to be liked more. In fact, we prefer them to a disturbing extent: Various studies have shown we perceive attractive people as smarter, kinder, stronger, more successful, more socially skilled, better poised, better adjusted, more exciting, more nurturing, and, most important, of higher moral character. All of this is based on no other information than their physical appearance (e.g., Dion, Berscheid, & Walster, 1972). Manipulating the Perception of Trustworthiness The perception of trustworthiness is highly susceptible to manipulation. Levine (2003) lists some of the most common psychological strategies that are used to achieve this effect: Testimonials and Endorsement This technique employs someone who people already trust to testify about the product or message being sold. The technique goes back to the earliest days of advertising when satisfied customers might be shown describing how a patent medicine cured their life-long battle with “nerves” or how Dr. Scott’s Electric Hair Brush healed their baldness (“My hair (was) falling out, and I was rapidly becoming bald, but since using the brush a thick growth of hair has made its appearance, quite equal to that I had before previous to its falling out,” reported a satisfied customer in an 1884 ad for the product). Similarly, Kodak had Prince Henri D’Orleans and others endorse the superior quality of their camera (“The results are marvellous [sic]. The enlargements which you sent me are superb,“ stated Prince Henri D’Orleans in an 1888 ad). Celebrity endorsements are a frequent feature in commercials aimed at children. The practice has aroused considerable ethical concern, and research shows the concern is warranted. In a study funded by the Federal Trade Commission, more than 400 children ages 8 to 14 were shown one of various commercials for a model racing set. Some of the commercials featured an endorsement from a famous race car driver, some included real racing footage, and others included neither. Children who watched the celebrity endorser not only preferred the toy cars more but were convinced the endorser was an expert about the toys. This held true for children of all ages. In addition, they believed the toy race cars were bigger, faster, and more complex than real race cars they saw on film. They were also less likely to believe the commercial was staged (Ross et al., 1984).
Presenting the Message as Education The message may be framed as objective information. Salespeople, for example, may try to convey the impression they are less interested in selling a product than helping you make the best decision. The implicit message is that being informed is in everyone’s best interest, because they are confident that when you understand what their product has to offer, you will conclude it is the best choice. Levine (2003) describes how, during training for a job as a used car salesman, he was instructed: “If the customer tells you they do not want to be bothered by a salesperson, your response is ‘I’m not a salesperson, I’m a product consultant. I don’t give prices or negotiate with you. I’m simply here to show you our inventory and help you find a vehicle that will fit your needs.’” Word of Mouth Imagine you read an ad that claims a new restaurant has the best food in your city. Now, imagine a friend tells you this new restaurant has the best food in the city. Who are you more likely to believe? Surveys show we turn to people around us for many decisions. A 1995 poll found that 70% of Americans rely on personal advice when selecting a new doctor. The same poll found that 53% of moviegoers are influenced by the recommendation of a person they know. In another survey, 91% said they’re likely to use another person’s recommendation when making a major purchase. Persuasion professionals may exploit these tendencies. Often, in fact, they pay for the surveys. Using this data, they may try to disguise their message as word of mouth from your peers. For example, Cornerstone Promotion, a leading marketing firm that advertises itself as an under-the-radar marketing specialist, sometimes hires children to log into chat rooms and pretend to be fans of one of its clients, or pays students to throw parties where they subtly circulate marketing material among their classmates. The Maven More persuasive yet, however, is to involve peers face-to-face. Rather than over-investing in formal advertising, businesses and organizations may plant seeds at the grassroots level, hoping that consumers themselves will then spread the word to each other. The seeding process begins by identifying so-called information hubs—individuals the marketers believe can and will reach the most other people. The seeds may be planted with established opinion leaders. Software companies, for example, give advance copies of new computer programs to professors they hope will recommend them to students and colleagues. Pharmaceutical companies regularly provide travel expenses and speaking fees to researchers willing to lecture to health professionals about the virtues of their drugs. Hotels give travel agents free weekends at their resorts in the hope they’ll later recommend them to clients seeking advice. There is a Yiddish word, maven, which refers to a person who’s an expert or a connoisseur, as in a friend who knows where to get the best price on a sofa or the co-worker you can turn to for advice about where to buy a computer. Mavens (a) know a lot of people, (b) communicate a great deal with people, (c) are more likely than others to be asked for their opinions, and (d) enjoy spreading the word about what they know and think. Most important of all, they are trusted. As a result, mavens are often targeted by persuasion professionals to help spread their message.
Other Tricks of Persuasion There are many other mindless mental shortcuts—heuristics and fixed action patterns—that leave us susceptible to persuasion. A few examples: • "Free Gifts" & Reciprocity • Social Proof • Getting a Foot-in-the-Door • A Door-in-the-Face • "And That's Not All" • The Sunk Cost Trap • Scarcity & Psychological Reactance Reciprocity “There is no duty more indispensable than that of returning a kindness,” wrote Cicero. Humans are motivated by a sense of equity and fairness. When someone does something for us or gives us something, we feel obligated to return the favor in kind. Receiving a favor triggers one of the most powerful of social norms, the reciprocity rule, whereby we feel compelled to repay, in equitable value, what another person has given to us. Gouldner (1960), in his seminal study of the reciprocity rule, found that it appears in every culture. It lays the basis for virtually every type of social relationship, from the legalities of business arrangements to the subtle exchanges within a romance. A salesperson may offer free gifts, concessions, or their valuable time in order to get us to do something for them in return. For example, if a colleague helps you when you’re busy with a project, you might feel obliged to support her ideas for improving team processes. You might decide to buy more from a supplier if they have offered you an aggressive discount. Or, you might give money to a charity fundraiser who has given you a flower in the street (Cialdini, 2008; Levine, 2003). Social Proof If everyone is doing it, it must be right. People are more likely to work late if others on their team are doing the same, to put a tip in a jar that already contains money, or to eat in a restaurant that is busy. This principle derives from two extremely powerful social forces—social comparison and conformity. We compare our behavior to what others are doing and, if there is a discrepancy between the other person and ourselves, we feel pressure to change (Cialdini, 2008). The principle of social proof is so common that it easily passes unnoticed. Advertisements, for example, often consist of little more than attractive social models appealing to our desire to be one of the group. For example, the German candy company Haribo suggests that when you purchase their products you are joining a larger society of satisfied customers: “Kids and grown-ups love it so—the happy world of Haribo”. Sometimes social cues are presented with such specificity that it is as if the target is being manipulated by a puppeteer—for example, the laugh tracks on situation comedies that instruct one not only when to laugh but how to laugh. Studies find these techniques work. Fuller and Sheehy-Skeffington (1974), for example, found that audiences laughed longer and more often when a laugh track accompanied the show than when it did not, even though respondents knew the laughs they heard were contrived by a technician from old tapes that had nothing to do with the show they were watching. People are particularly susceptible to social proof (a) when they are feeling uncertain, and (b) if the people in the comparison group seem to be similar to themselves. As P.T. Barnum once said, “Nothing draws a crowd like a crowd.” Commitment and Consistency Westerners have a desire both to act consistently and to be perceived as acting consistently. Once we have made an initial commitment, it is more likely that we will agree to subsequent commitments that follow from the first. 
Knowing this, a clever persuasion artist might induce someone to agree to a difficult-to-refuse small request and follow this with progressively larger requests that were his target from the beginning. The process is known as getting a foot in the door and then slowly escalating the commitments. Paradoxically, we are less likely to say “No” to a large request than we are to a small request when it follows this pattern. This can have costly consequences. Levine (2003), for example, found that ex-cult members tend to agree with the statement: “Nobody ever joins a cult. They just postpone the decision to leave.” A Door in the Face Some techniques bring a paradoxical approach to the escalation sequence by pushing a request to or beyond its acceptable limit and then backing off. In the door-in-the-face (sometimes called the reject-then-compromise) procedure, the persuader begins with a large request they expect will be rejected. They want the door to be slammed in their face. Looking forlorn, they now follow this with a smaller request, which, unknown to the customer, was their target all along. In one study, for example, Mowen and Cialdini (1980), posing as representatives of the fictitious “California Mutual Insurance Co.,” asked university students walking on campus if they’d be willing to fill out a survey about safety in the home or dorm. The survey, students were told, would take about 15 minutes. Not surprisingly, most of the students declined—only one out of four complied with the request. In another condition, however, the researchers door-in-the-faced them by beginning with a much larger request. “The survey takes about two hours,” students were told. Then, after the subject declined to participate, the experimenters retreated to the target request: “. . . look, one part of the survey is particularly important and is fairly short. It will take only 15 minutes to administer.” Almost twice as many now complied. And That’s Not All! The that’s-not-all technique also begins with the salesperson asking a high price. This is followed by several seconds’ pause during which the customer is kept from responding. The salesperson then offers a better deal by either lowering the price or adding a bonus product. That’s-not-all is a variation on door-in-the-face. Whereas the latter begins with a request that will be rejected, that’s-not-all gains its influence by putting the customer on the fence, allowing them to waver, and then offering them a comfortable way off. Burger (1986) demonstrated the technique in a series of field experiments. In one study, for example, an experimenter-salesman told customers at a student bake sale that cupcakes cost 75 cents. As this price was announced, another salesman held up his hand and said, “Wait a second,” briefly consulted with the first salesman, and then announced (“that’s-not-all”) that the price today included two cookies. In a control condition, customers were offered the cupcake and two cookies as a package for 75 cents right at the onset. The bonus worked magic: Almost twice as many people bought cupcakes in the that’s-not-all condition (73%) as in the control group (40%). The Sunk Cost Trap Sunk cost is a term used in economics referring to nonrecoverable investments of time or money. The trap occurs when a person’s aversion to loss impels them to throw good money after bad, because they don’t want to waste their earlier investment. This tendency leaves us vulnerable to manipulation. 
The more time and energy a cult recruit can be persuaded to spend with the group, the more “invested” they will feel, and, consequently, the more of a loss it will feel to leave that group. Consider the advice of billionaire investor Warren Buffett: “When you find yourself in a hole, the best thing you can do is stop digging” (Levine, 2003). Scarcity and Psychological Reactance People tend to perceive things as more attractive when their availability is limited, or when they stand to lose the opportunity to acquire them on favorable terms (Cialdini, 2008). Anyone who has encountered a willful child is familiar with this principle. In a classic study, Brehm & Weinraub (1977), for example, placed 2-year-old boys in a room with a pair of equally attractive toys. One of the toys was placed next to a plexiglass wall; the other was set behind the plexiglass. For some boys, the wall was 1 foot high, which allowed the boys to easily reach over and touch the distant toy. Given this easy access, they showed no particular preference for one toy or the other. For other boys, however, the wall was a formidable 2 feet high, which required them to walk around the barrier to touch the toy. When confronted with this wall of inaccessibility, the boys headed directly for the forbidden fruit, touching it three times as quickly as the accessible toy. Research shows that much of that 2-year-old remains in adults, too. People resent being controlled. When a person seems too pushy, we get suspicious, annoyed, often angry, and yearn to retain our freedom of choice more than before. Brehm (1966) labeled this the principle of psychological reactance. The most effective way to circumvent psychological reactance is to first get a foot in the door and then escalate the demands so gradually that there is seemingly nothing to react against. Hassan (1988), who spent many years as a higher-up in the “Moonies” cult, describes how they would shape behaviors subtly at first, then more forcefully. The material that would make up the new identity of a recruit was doled out gradually, piece by piece, only as fast as the person was deemed ready to assimilate it. The rule of thumb was to “tell him only what he can accept.” He continues: “Don’t sell them [the converts] more than they can handle . . . . If a recruit started getting angry because he was learning too much about us, the person working on him would back off and let another member move in . . . .” Defending Against Unwelcome Persuasion The most commonly used approach to help people defend against unwanted persuasion is known as the “inoculation” method. Research has shown that people who are subjected to weak versions of a persuasive message are less vulnerable to stronger versions later on, in much the same way that being exposed to small doses of a virus immunizes you against full-blown attacks. In a classic study by McGuire (1964), subjects were asked to state their opinion on an issue. They were then mildly attacked for their position and then given an opportunity to refute the attack. When later confronted by a powerful argument against their initial opinion, these subjects were more resistant than were those in a control group. In effect, they developed defenses that rendered them immune. Sagarin and his colleagues have developed a more aggressive version of this technique that they refer to as “stinging” (Sagarin, Cialdini, Rice, & Serna, 2002). 
Their studies focused on the popular advertising tactic whereby well-known authority figures are employed to sell products they know nothing about, for example, ads showing a famous astronaut pontificating on Rolex watches. In a first experiment, they found that simply forewarning people about the deviousness of these ads had little effect on people’s inclination to buy the product later. Next, they stung the subjects. This time, participants were immediately confronted with their gullibility. “Take a look at your answer to the first question. Did you find the ad to be even somewhat convincing? If so, then you got fooled. ... Take a look at your answer to the second question. Did you notice that this ‘stockbroker’ was a fake?” They were then asked to evaluate a new set of ads. The sting worked. These subjects were not only more likely to recognize the manipulativeness of deceptive ads; they were also less likely to be persuaded by them. Anti-vulnerability training such as this can be helpful. Ultimately, however, the most effective defense against unwanted persuasion is to accept just how vulnerable we are. One must, first, accept that it is normal to be vulnerable and, second, learn to recognize the danger signs when we are falling prey. To be forewarned is to be forearmed. Conclusion This module has provided a brief introduction to the psychological processes and subsequent “tricks” involved in persuasion. It has emphasized the peripheral route of persuasion because this is when we are most vulnerable to psychological manipulation. These vulnerabilities are side effects of “normal” and usually adaptive psychological processes. Mindless heuristics offer shortcuts for coping with a hopelessly complicated world. They are necessities for human survival. All of them, however, underscore the dangers that accompany any mindless thinking. Outside Resources Book: Ariely, D. (2008). Predictably irrational. New York, NY: Harper. Book: Cialdini, R. B. (2008). Influence: Science and practice (5th ed.). Boston, MA: Allyn and Bacon. Book: Gass, R., & Seiter, J. (2010). Persuasion, social influence, and compliance gaining (4th ed.). Boston, MA: Pearson. Book: Kahneman, D. (2012). Thinking fast and slow. New York, NY: Farrar, Straus & Giroux. Book: Levine, R. (2006). The power of persuasion: How we're bought and sold. Hoboken, NJ: Wiley. www.amazon.com/The-Power-Pers.../dp/0471763179 Book: Tavris, C., & Aronson, E. (2011). Mistakes were made (but not by me). New York, NY: Farrar, Straus & Giroux. Student Video 1: Kyle Ball and Brandon Do's 'Principles of Persuasion'. This is a student-made video highlighting 6 key principles of persuasion that we encounter in our everyday lives. It was one of the winning entries in the 2015 Noba Student Video Award. Student Video 2: 'Persuasion', created by Jake Teeny and Ben Oliveto, compares the central and peripheral routes to persuasion and also looks at how techniques of persuasion such as Scarcity and Social Proof influence our consumer choices. It was one of the winning entries in the 2015 Noba Student Video Award. Student Video 3: 'Persuasion in Advertising' is a humorous look at the techniques used by companies to try to convince us to buy their products. The video was created by the team of Edward Puckering, Chris Cameron, and Kevin Smith. It was one of the winning entries in the 2015 Noba Student Video Award. Video: A brief, entertaining interview with a celebrity pickpocket shows how easily we can be fooled. 
See A Pickpocket’s Tale at http://www.newyorker.com/online/blogs/culture/2013/01/video-the-art-of-pickpocketing.html Video: Cults employ extreme versions of many of the principles in this module. An excellent documentary tracing the history of the Jonestown cult is the PBS “American Experience” production, Jonestown: The Life and Death of Peoples Temple at www.pbs.org/wgbh/americanexpe...-introduction/ Video: Philip Zimbardo’s now-classic video, Quiet Rage, offers a powerful, insightful description of his famous Stanford prison study. www.prisonexp.org/documentary.htm Video: The documentary Outfoxed provides an excellent example of how persuasion can be masked as news and education. http://www.outfoxed.org/ Video: The video, The Science of Countering Terrorism: Psychological Perspectives, a talk by psychologist Fathali Moghaddam, is an excellent introduction to the process of terrorist recruitment and thinking. sciencestage.com/v/32330/fath...spectives.html Discussion Questions 1. Imagine you are commissioned to create an ad to sell a new beer. Can you give an example of an ad that would rely on the central route? Can you give an example of an ad that would rely on the peripheral route? 2. The reciprocity principle can be exploited in obvious ways, such as giving a customer a free sample of a product. Can you give an example of a less obvious way it might be exploited? What is a less obvious way that a cult leader might use it to get someone under his or her grip? 3. Which “trick” in this module are you, personally, most prone to? Give a personal example of this. How might you have avoided it? Vocabulary Central route to persuasion Persuasion that employs direct, relevant, logical messages. Fixed action patterns (FAPs) Sequences of behavior that occur in exactly the same fashion, in exactly the same order, every time they are elicited. Foot in the door Obtaining a small, initial commitment. Gradually escalating commitments A pattern of small, progressively escalating demands is less likely to be rejected than a single large demand made all at once. Heuristics Mental shortcuts that enable people to make decisions and solve problems quickly and efficiently. Peripheral route to persuasion Persuasion that relies on superficial cues that have little to do with logic. Psychological reactance A reaction to people, rules, requirements, or offerings that are perceived to limit freedoms. Social proof The mental shortcut based on the assumption that, if everyone is doing it, it must be right. The norm of reciprocity The normative pressure to repay, in equitable value, what another person has given to us. The rule of scarcity People tend to perceive things as more attractive when their availability is limited, or when they stand to lose the opportunity to acquire them on favorable terms. The triad of trust We are most vulnerable to persuasion when the source is perceived as an authority, as honest, and as likable. Trigger features Specific, sometimes minute, aspects of a situation that activate fixed action patterns.
By Robert G. Franklin and Leslie Zebrowitz Anderson University, Brandeis University More attractive people elicit more positive first impressions. This effect is called the attractiveness halo, and it is shown when judging those with more attractive faces, bodies, or voices. Moreover, it yields significant social outcomes, including advantages to attractive people in domains as far-reaching as romance, friendships, family relations, education, work, and criminal justice. Physical qualities that increase attractiveness include youthfulness, symmetry, averageness, masculinity in men, and femininity in women. Positive expressions and behaviors also raise evaluations of a person’s attractiveness. Cultural, cognitive, evolutionary, and overgeneralization explanations have been offered to explain why we find certain people attractive. Whereas the evolutionary explanation predicts that the impressions associated with the halo effect will be accurate, the other explanations do not. Although the research evidence does show some accuracy, it is too weak to satisfactorily account for the positive responses shown to more attractive people. learning objectives • Learn the advantages of attractiveness in social situations. • Know what features are associated with facial, body, and vocal attractiveness. • Understand the universality and cultural variation in attractiveness. • Learn about the mechanisms proposed to explain positive responses to attractiveness. We are ambivalent about attractiveness. We are enjoined not to “judge a book by its cover,” and told that “beauty is only skin deep.” Just as these warnings indicate, our natural tendency is to judge people by their appearance and to prefer those who are beautiful. The attractiveness of peoples’ faces, as well as their bodies and voices, not only influences our choice of romantic partners, but also our impressions of people’s traits and important social outcomes in areas that have nothing to do with romance. This module reviews these effects of attractiveness and examines what physical qualities increase attractiveness and why. The Advantages of Attractiveness Attractiveness is an asset. Although it may be no surprise that attractiveness is important in romantic settings, its benefits are found in many other social domains. More attractive people are perceived more positively on a wide variety of traits, being seen as more intelligent, healthy, trustworthy, and sociable. Although facial attractiveness has received the most research attention (Eagly, Ashmore, Makhijani, & Longo, 1991), people higher in body or vocal attractiveness also create more positive impressions (Riggio, Widaman, Tucker, & Salinas, 1991; Zuckerman & Driver, 1989). This advantage is termed the attractiveness halo effect, and it is widespread. Not only are attractive adults judged more positively than their less attractive peers, but even attractive babies are viewed more positively by their own parents, and strangers consider them more healthy, affectionate, attached to mother, cheerful, responsive, likeable, and smart (Langlois et al., 2000). Teachers not only like attractive children better but also perceive them as less likely to misbehave, more intelligent, and even more likely to get advanced degrees. More positive impressions of those judged facially attractive are shown across many cultures, even within an isolated indigenous tribe in the Bolivian rainforest (Zebrowitz et al., 2012). 
Attractiveness not only elicits positive trait impressions, but it also provides advantages in a wide variety of social situations. In a classic study, attractiveness, rather than measures of personality or intelligence, predicted whether individuals randomly paired on a blind date wanted to contact their partner again (Walster, Aronson, Abrahams, & Rottman, 1966). Although attractiveness has a greater influence on men’s romantic preferences than women’s (Feingold, 1990), it has significant effects for both sexes. Attractive men and women become sexually active earlier than their less attractive peers. Also, attractiveness in men is positively related to the number of short-term, but not long-term, sexual partners, whereas the reverse is true for women (Rhodes, Simmons, & Peters, 2005). These results suggest that attractiveness in both sexes is associated with greater reproductive success, since success for men depends more on short-term mating opportunities—having more mates increases the probability of offspring—and success for women depends more on long-term mating opportunities—a committed mate increases the probability of offspring survival. Of course, not everyone can win the most attractive mate, and research shows a “matching” effect. More attractive people expect to date individuals higher in attractiveness than do unattractive people (Montoya, 2008), and actual romantic couples are similar in attractiveness (Feingold, 1988). The appeal of attractive people extends to platonic friendships. More attractive people are more popular with their peers, and this is shown even in early childhood (Langlois et al., 2000). The attractiveness halo is also found in situations where one would not expect it to make such a difference. For example, research has shown that strangers are more likely to help an attractive than an unattractive person by mailing a lost letter containing a graduate school application with an attached photograph (Benson, Karabenick, & Lerner, 1976). More attractive job applicants are preferred in hiring decisions for a variety of jobs, and attractive people receive higher salaries (Dipboye, Arvey, & Terpstra, 1977; Hamermesh & Biddle, 1994; Hosoda, Stone-Romero, & Coats, 2003). Facial attractiveness also affects political and judicial outcomes. More attractive congressional candidates are more likely to be elected, and more attractive defendants convicted of crimes receive lighter sentences (Stewart, 1980; Verhulst, Lodge, & Lavine, 2010). Body attractiveness also contributes to social outcomes. A smaller percentage of overweight than normal-weight college applicants is admitted despite similar high school records (Canning & Mayer, 1966), parents are less likely to pay for the education of their heavier-weight children (Crandall, 1991), and overweight people are less highly recommended for jobs despite equal qualifications (Larkin & Pines, 1979). Voice qualities also influence social outcomes. College undergraduates express a greater desire to affiliate with other students who have more attractive voices (Miyake & Zuckerman, 1993), and politicians with more attractive voices are more likely to win elections (Gregory & Gallagher, 2002; Tigue, Borak, O’Connor, Schandl, & Feinberg, 2012). These are but a few of the research findings clearly demonstrating that we are unable to adhere to the conventional wisdom not to judge a book by its cover. What Makes a Person Attractive? Most research investigating what makes a person attractive has focused on sexual attraction. 
However, attraction is a multifaceted phenomenon. We are attracted to infants (nurturant attraction), to friends (communal attraction), and to leaders (respectful attraction). Although some facial qualities may be universally attractive, others depend on the individual being judged as well as the “eye of the beholder.” For example, babyish facial qualities are essential to the facial attractiveness of infants, but detract from the charisma of male leaders (Hildebrandt & Fitzgerald, 1979; Sternglanz, Gray, & Murakami, 1977; Mueller & Mazur, 1996), and the sexual attractiveness of particular facial qualities depends on whether the viewer is evaluating someone as a short-term or a long-term mate (Little, Jones, Penton-Voak, Burt, & Perrett, 2002). The fact that attractiveness is multifaceted is highlighted in research suggesting that attraction is a dual process, combining sexual and aesthetic preferences. More specifically, women’s overall ratings of men’s attractiveness are explained both by their ratings of how appealing a man is for a sexual situation, such as a potential date, and also by their ratings of how appealing he is for a nonsexual situation, such as a potential lab partner (Franklin & Adams, 2009). The dual process is further revealed in the finding that different brain regions are involved in judging sexual versus nonsexual attractiveness (Franklin & Adams, 2010). More attractive facial features include youthfulness, unblemished skin, symmetry, a facial configuration that is close to the population average, and femininity in women or masculinity in men, with smaller chins, higher eyebrows, and smaller noses being some of the features that are more feminine/less masculine. Similarly, more feminine, higher-pitched voices are more attractive in women and more masculine, lower-pitched voices are more attractive in men (Collins, 2000; Puts, Barndt, Welling, Dawood, & Burriss, 2011). In the case of bodies, features that increase attractiveness include a more sex-typical waist-to-hip ratio—narrower waist than hips for women but not for men—as well as a physique that is not emaciated or grossly obese. Negative reactions to obesity are present from a young age. For example, a classic study found that when children were asked to rank-order their preferences for children with various disabilities who were depicted in pictures, the overweight child was ranked the lowest, even lower than a child who was missing a hand, one who was seated in a wheelchair, and one with a facial scar (Richardson, Goodman, Hastorf, & Dornbusch, 1961). Although there are many physical qualities that influence attractiveness, no single quality seems to be a necessary or sufficient condition for high attractiveness. A person with a perfectly symmetrical face may not be attractive if the eyes are too close together or too far apart. One can also imagine a woman with beautiful skin or a man with masculine facial features who is not attractive. Even a person with a perfectly average face may not be attractive if the face is the average of a population of 90-year-olds. These examples suggest that a combination of features is required for high attractiveness. In the case of men’s attraction to women, a desirable combination appears to include perceived youthfulness, sexual maturity, and approachability (Cunningham, 1986). In contrast, a single quality, like extreme distance from the average face, is sufficient for low attractiveness. 
Although certain physical qualities are generally viewed as more attractive, anatomy is not destiny. Attractiveness is positively related to smiling and facial expressivity (Riggio & Friedman, 1986), and there also is some truth to the maxim “pretty is as pretty does.” Research has shown that students are more likely to judge an instructor’s physical appearance as appealing when his behavior is warm and friendly than when it is cold and distant (Nisbett & Wilson, 1977), and people rate a woman as more physically attractive when they have a favorable description of her personality (Gross & Crofton, 1977). Why Are Certain People Attractive? Cultural, cognitive, evolutionary, and overgeneralization explanations have been offered to account for why certain people are deemed attractive. Early explanations suggested that attractiveness was based on what a culture preferred. This is supported by the many variations in ornamentation, jewelry, and body modification that different cultures use to convey attractiveness. For example, the long neck on the woman shown in Figure 12.5.1 is unlikely to be judged attractive by Westerners. Yet, long necks have been preferred in a traditional Myanmar tribe, because they are thought to resemble a mythological dragon who spawned them. Despite cultural variations like this, research has provided strong evidence against the claim that attractiveness is due only to social learning. Indeed, young infants prefer to look at faces that adults have judged to be highly attractive rather than those judged to be less attractive (Kramer, Zebrowitz, San Giovanni, & Sherak, 1995; Langlois et al., 1987). Moreover, 12-month-olds are less likely to smile at or play with a stranger who is wearing a lifelike mask judged unattractive by adults than one wearing a mask judged attractive (Langlois, Roggman, & Rieser-Danner, 1990). In addition, people across many cultures, including individuals in the Amazon rainforest who are isolated from Western culture, view the same faces as attractive (Cunningham, Roberts, Barbee, Druen, & Wu, 1995; Zebrowitz et al., 2012). On the other hand, there are more cultural variations in body attractiveness. In particular, whereas people from diverse cultures agree that very thin, emaciated-looking bodies are unattractive, they differ more in their appraisal of heavier bodies. Larger bodies are viewed more negatively in Western European cultures than in other countries, especially in countries with lower socioeconomic status (Swami et al., 2010). There also is evidence that African Americans judge overweight women less harshly than do European Americans (Hebl & Heatherton, 1997). Although cultural learning makes some contribution to who we find attractive, the universal elements of attractiveness require a culturally universal explanation. One suggestion is that attractiveness is a by-product of a more general cognitive mechanism that leads us to recognize and prefer familiar stimuli. People prefer category members that are closer to a category prototype, or the average member of the category, over those that are at the extremes of a category. Thus, people find average stimuli more attractive whether they are human faces, cars, or animals (Halberstadt, 2006). Indeed, a face morph that is the average of many individuals’ faces is more attractive than the individual faces used to create it (Langlois & Roggman, 1990). 
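To make concrete what it means to "average" many faces into a single morph, here is a minimal sketch in Python, assuming a set of same-sized grayscale face images that have already been aligned. It illustrates only the averaging idea: published morphing methods also mark facial landmarks and warp each face into a common shape before blending, and the random arrays below are hypothetical stand-ins for real photographs.

```python
# A toy illustration of an "average" face, assuming pre-aligned,
# same-sized grayscale images. Real morphing software also warps
# facial landmarks; this pixel-wise mean only conveys the concept.
import numpy as np

def average_face(aligned_faces):
    """Pixel-wise mean of a list of aligned, same-sized grayscale images."""
    stack = np.stack(aligned_faces, axis=0)  # shape: (n_faces, height, width)
    return stack.mean(axis=0)                # one composite "morph"

# Hypothetical stand-ins for 32 aligned 128x128 face images.
faces = [np.random.rand(128, 128) for _ in range(32)]
prototype = average_face(faces)
print(prototype.shape)  # (128, 128)
```

The more faces that go into the composite, the more idiosyncratic features cancel out, which is one intuition for why such morphs approach the category prototype described above.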
Also, individual faces that have been morphed toward an average face are more attractive than those that have been morphed away from average (see Figure 12.5.2; face from Martinez & Benavente, 1998). The preference for stimuli closer to a category prototype is also consistent with the fact that we prefer men with more masculine physical qualities and women with more feminine ones. This preference would further predict that which people are most attractive depends on our learning experiences, since what is average or prototypical in a face, voice, or body will depend on the people we have seen. Consistent with an effect of learning experiences, young infants prefer face morphs that are an average of faces they have previously seen over morphs that are an average of novel faces (Rubenstein, Kalakanis, & Langlois, 1999). Short-term perceptual experiences can influence judgments of attractiveness even in adults. Brief exposure to a series of faces with the same distortion increases the rated attractiveness of new faces with that distortion (Rhodes, Jeffery, Watson, Clifford, & Nakayama, 2003), and exposure to morphs of human and chimpanzee faces increases the rated attractiveness of new human faces morphed with a small degree of chimpanzee face (Principe & Langlois, 2012). One reason average stimuli, including faces, may be preferred is that they are easy to categorize, and when a stimulus is easy to categorize, it elicits positive emotion (Winkielman, Halberstadt, Fazendeiro, & Catty, 2006). Another possible reason average stimuli may be preferred is that we may be less apprehensive about familiar-looking stimuli (Zajonc, 2001). All other things being equal, we prefer stimuli we have seen before over novel ones, a mere-exposure effect, and we also prefer stimuli that are similar to those we have seen before, a generalized mere-exposure effect. Consistent with a reduced-apprehensiveness mechanism, exposure to other-race faces reduced neural activation in a region that responds to negatively valenced stimuli, not only for the faces the participants saw but also for new faces from the familiarized other-race category (Zebrowitz & Zhang, 2012). Such a generalized mere-exposure effect also could explain the preference for average stimuli, which look more familiar, although the effect may be more reliable for judgments of likeability than attractiveness (Rhodes, Halberstadt, & Brajkovich, 2001; Rhodes, Halberstadt, Jeffery, & Palermo, 2005). Whether due to ease of categorization or less apprehensiveness, the cognitive explanation holds that certain people are more attractive because perceptual learning has rendered them more familiar. In contrast to the cognitive explanation for why we find particular people attractive, the evolutionary explanation argues that preferences developed because it was adaptive to prefer those individuals. More specifically, the good genes hypothesis proposes that people with physical qualities like averageness, symmetry, sex prototypicality, and youthfulness are more attractive because they are better-quality mates. Mate quality may reflect better health, greater fertility, or better genetic traits that lead to better offspring and hence greater reproductive success (Thornhill & Gangestad, 1999). Theoretically, averageness and symmetry provide evidence of genetic fitness because they show the ability to develop normally despite environmental stressors (Scheib, Gangestad, & Thornhill, 1999). 
Averageness also signals genetic diversity (Thornhill & Gangestad, 1999), which is associated with a strong immune system (Penn, Damjanovich, & Potts, 2002). High masculinity in male faces may indicate fitness because it shows an ability to withstand the stress that testosterone places on the immune system (Folstad & Karter, 1992). High femininity in female faces may signal fitness by indicating sexual maturity and fertility. The evolutionary account also can explain the attractiveness of youthfulness, since aging is often associated with declines in cognitive and physical functioning and decreased fertility. Some researchers have investigated whether attractiveness actually does signal mate quality by examining the relationship between facial attractiveness and health (see Rhodes, 2006, for a review). Support for such a relationship is weak. In particular, people rated very low in attractiveness, averageness, or masculinity (in the case of men) tend to have poorer health than those who are average in these qualities. However, people rated high in attractiveness, averageness, or masculinity do not differ in health from those who are average (Zebrowitz & Rhodes, 2004). Low body attractiveness, as indexed by overweight or a sex-atypical waist-to-hip ratio, also may be associated with poorer health or lower fertility in women (Singh & Singh, 2011). Others have assessed whether attractiveness signals mate quality by examining the relationship with intelligence, since more intelligent mates may increase reproductive success. In particular, more intelligent mates may provide better parental care. Also, since intelligence is heritable, more intelligent mates may yield more intelligent offspring, who have a better chance of passing genes on to the next generation (Miller & Todd, 1998). The evidence indicates that attractiveness is positively correlated with intelligence. However, as in the case of health, the relationship is weak, and it appears to be largely due to lower-than-average intelligence among those who are very low in attractiveness rather than higher-than-average intelligence among those who are highly attractive (Zebrowitz & Rhodes, 2004). These results are consistent with the fact that subtle negative deviations from average attractiveness can signal low fitness. For example, minor facial anomalies that are too subtle for the layperson to recognize as a genetic anomaly are associated with lower intelligence (Foroud et al., 2012). Although the level of attractiveness provides a valid cue to low, but not high, intelligence or health, it is important to bear in mind that attractiveness is only a weak predictor of these traits, even in the range where it has some validity. The finding that low, but not high, attractiveness can be diagnostic of actual traits is consistent with another explanation for why we find particular people attractive. This has been dubbed anomalous face overgeneralization, but it could equally apply to anomalous voices or bodies. The evolutionary account has typically assumed that as attractiveness increases, so does fitness, and it has emphasized the greater fitness of highly attractive individuals, a good genes effect (Buss, 1989). In contrast, the overgeneralization hypothesis argues that the level of attractiveness provides an accurate index only of low fitness. On this account, the attractiveness halo effect is a by-product of reactions to low fitness. 
More specifically, we overgeneralize the adaptive tendency to use low attractiveness as an indication of lower-than-average health and intelligence, and we mistakenly use higher-than-average attractiveness as an indication of higher-than-average health and intelligence (Zebrowitz & Rhodes, 2004). The overgeneralization hypothesis differs from the evolutionary hypothesis in another important respect. It is concerned with the importance of detecting low fitness not only when choosing a mate, but also in other social interactions. This is consistent with the fact that the attractiveness halo effect is present in many domains. Whereas the cultural, cognitive, and overgeneralization accounts of attractiveness do not necessarily predict that the halo effect in impressions will be accurate, the evolutionary “good genes” account does. As we have seen, there is some support for this prediction, but the effects are too weak and circumscribed to fully explain the strong halo effect in response to highly attractive people. In addition, it is important to recognize that whatever accuracy there is does not necessarily imply a genetic link between attractiveness and adaptive traits, such as health or intelligence. One non-genetic mechanism is an influence of environmental factors. For example, the quality of nutrition that a person receives may have an impact on the development of both attractiveness and health (Whitehead, Ozakinci, Stephen, & Perrett, 2012). Another non-genetic explanation is a self-fulfilling prophecy effect (Snyder, Tanke, & Berscheid, 1977). For example, the higher expectations that teachers have for more attractive students may nurture higher intelligence, an effect that has been shown when teachers have high expectations for reasons other than appearance (Rosenthal, 2003). Conclusions Although it may seem unfair, attractiveness confers many advantages. More attractive people are favored not only as romantic partners but, more surprisingly, by their parents, peers, teachers, employers, and even judges and voters. Moreover, there is substantial agreement about who is attractive, with infants and perceivers from diverse cultures showing similar responses. Although this suggests that cultural influences cannot completely explain attractiveness, experience does have an influence. There is controversy about why certain people are attractive to us. The cognitive account attributes higher attractiveness to the ease of processing prototypes or the safety associated with familiar stimuli. The evolutionary account attributes higher attractiveness to the adaptive value of preferring physical qualities that signal better health or genetic fitness when choosing mates. The overgeneralization account attributes higher attractiveness to the overgeneralization of an adaptive avoidance of physical qualities that signal poor health or low genetic fitness. Although there is debate as to which explanation is best, it is important to realize that all of the proposed mechanisms may have some validity. Outside Resources Article: For Couples, Time Can Upend the Laws of Attraction - This is an accessible New York Times article, summarizing research findings that show romantic couples’ level of attractiveness is correlated if they started dating soon after meeting (predicted by the matching hypothesis). However, if they knew each other or were friends for a while before dating, they were less likely to match on physical attractiveness. 
This research highlights that while attractiveness is important, other factors, such as how long people knew each other before dating, also matter. http://nyti.ms/1HtIkFt Article: Is Faceism Spoiling Your Life? - This is an accessible article that describes faceism, as well as how our expectations of people (based on their facial features) influence our reactions to them. It presents the findings from a few studies, such as how participants making snap judgments of political candidates’ faces predicted who won the election with almost 70% accuracy. It includes example photos of faces we would consider more or less competent, dominant, extroverted, or trustworthy. http://www.bbc.com/future/story/20150707-is-faceism-spoiling-your-life Video: Is Your Face Attractive? - This is a short video. The researcher in the video discusses and shows examples of face morphs, and then manipulates pictures of faces, making them more or less masculine or feminine. We tend to prefer women with more feminized faces and men with more masculine faces, and the video briefly correlates these characteristics to good health. www.discovery.com/tv-shows/other-shows/videos/science-of-sex-appeal-is-your-face-attractive/ Video: Multiple videos related to the science of beauty http://dsc.discovery.com/search.htm?...ence+of+beauty Video: Multiple videos related to the science of sex appeal http://dsc.discovery.com/search.htm?...+of+sex+appeal Video: The Beauty of Symmetry - A short video about facial symmetry. It describes facial symmetry, and explains why our faces aren’t always symmetrical. The video shows a demonstration of a researcher photographing a man and a woman and then manipulating the photos. www.discovery.com/tv-shows/other-shows/videos/science-of-sex-appeal-the-beauty-of-symmetry/ Video: The Economic Benefits of Being Beautiful - Less than 2-minute video with cited statistics about the advantages of being beautiful. The video starts with information about how babies are treated differently, and it quickly cites 14 facts about the advantages of being attractive, including the halo effect. Discussion Questions 1. Why do you think the attractiveness halo exists even though there is very little evidence that attractive people are more intelligent or healthy? 2. What cultural influences affect whom you perceive as attractive? Why? 3. How do you think evolutionary theories of why faces are attractive apply in a modern world, where people are much more likely to survive and reproduce, regardless of how intelligent or healthy they are? 4. Which of the theories do you think provides the most compelling explanation for why we find certain people attractive? Vocabulary Anomalous face overgeneralization hypothesis Proposes that the attractiveness halo effect is a by-product of reactions to low fitness. People overgeneralize the adaptive tendency to use low attractiveness as an indicator of negative traits, like low health or intelligence, and mistakenly use higher-than-average attractiveness as an indicator of high health or intelligence. Attractiveness halo effect The tendency to associate attractiveness with a variety of positive traits, such as being more sociable, intelligent, competent, and healthy. Good genes hypothesis Proposes that certain physical qualities, like averageness, are attractive because they advertise mate quality—either greater fertility or better genetic traits that lead to better offspring and hence greater reproductive success. 
Mere-exposure effect The tendency to prefer stimuli that have been seen before over novel ones. There also is a generalized mere-exposure effect shown in a preference for stimuli that are similar to those that have been seen before. Morph A face or other image that has been transformed by a computer program so that it is a mixture of multiple images. Prototype A typical, or average, member of a category. Averageness increases attractiveness.
By Susan T. Fiske Princeton University People are often biased against others outside of their own social group, showing prejudice (emotional bias), stereotypes (cognitive bias), and discrimination (behavioral bias). People used to be more explicit about their biases, but during the 20th century, when it became less socially acceptable to exhibit bias, prejudice, stereotypes, and discrimination became more subtle (automatic, ambiguous, and ambivalent). In the 21st century, however, with social group categories even more complex, biases may be transforming once again. learning objectives • Distinguish prejudice, stereotypes, and discrimination. • Distinguish old-fashioned, blatant biases from contemporary, subtle biases. • Understand old-fashioned biases such as social dominance orientation and right-wing authoritarianism. • Understand subtle, unexamined biases that are automatic, ambiguous, and ambivalent. • Understand 21st century biases that may break down as identities get more complicated. Introduction Even in one’s own family, everyone wants to be seen for who they are, not as “just another typical X.” But still, people put other people into groups, using that label to inform their evaluation of the person as a whole—a process that can result in serious consequences. This module focuses on biases against social groups, which social psychologists sort into emotional prejudices, mental stereotypes, and behavioral discrimination. These three aspects of bias are related, but they each can occur separately from the others (Dovidio & Gaertner, 2010; Fiske, 1998). For example, sometimes people have a negative, emotional reaction to a social group (prejudice) without knowing even the most superficial reasons to dislike them (stereotypes). This module shows that today’s biases are not yesterday’s biases in many ways, but at the same time, they are troublingly similar. First, we’ll discuss old-fashioned biases that might have belonged to our grandparents and great-grandparents—or even the people nowadays who have yet to leave those wrongful times. Next, we will discuss late 20th century biases that affected our parents and still linger today. Finally, we will talk about today’s 21st century biases that challenge fairness and respect for all. Old-fashioned Biases: Almost Gone You would be hard pressed to find someone today who openly admits they don’t believe in equality. Regardless of one’s demographics, most people believe everyone is entitled to the same natural rights. However, as much as we now collectively believe this, not too far back in our history, this ideal of equality was an unpracticed sentiment. Of all the countries in the world, only a few have equality in their constitution, and those that do originally defined it for a select group of people. At the time, old-fashioned biases were simple: people openly put down those not from their own group. For example, just 80 years ago, American college students unabashedly thought Turkish people were “cruel, very religious, and treacherous” (Katz & Braly, 1933). So where did they get those ideas, assuming that most of them had never met anyone from Turkey? Old-fashioned stereotypes were overt, unapologetic, and expected to be shared by others—what we now call “blatant biases.” Blatant biases are conscious beliefs, feelings, and behavior that people are perfectly willing to admit, which mostly express hostility toward other groups (outgroups) while unduly favoring one’s own group (in-group). 
For example, organizations that preach contempt for other races (and praise for their own) exhibit blatant bias. And scarily, these blatant biases tend to run in packs: People who openly hate one outgroup also hate many others. To illustrate this pattern, we turn to two personality scales next. Social Dominance Orientation Social dominance orientation (SDO) describes a belief that group hierarchies are inevitable in all societies and are even a good idea to maintain order and stability (Sidanius & Pratto, 1999). Those who score high on SDO believe that some groups are inherently better than others, and because of this, there is no such thing as group “equality.” At the same time, though, SDO is not just about being personally dominant and controlling of others; SDO describes a preferred arrangement of groups with some on top (preferably one’s own group) and some on the bottom. For example, someone high in SDO would likely be upset if someone from an outgroup moved into his or her neighborhood. It’s not that the person high in SDO wants to “control” what this outgroup member does; it’s that moving into this “nice neighborhood” disrupts the social hierarchy the person high in SDO believes in (i.e., living in a nice neighborhood denotes one’s place in the social hierarchy—a place reserved for one’s in-group members). Although research has shown that people higher in SDO are more likely to be politically conservative, there are other traits that more strongly predict one’s SDO. For example, researchers have found that those who score higher on SDO are usually lower than average on tolerance, empathy, altruism, and community orientation. In general, those high in SDO have a strong belief in work ethic—that hard work always pays off and leisure is a waste of time. People higher on SDO tend to choose and thrive in occupations that maintain existing group hierarchies (police, prosecutors, business), compared to those lower in SDO, who tend to pick more equalizing occupations (social work, public defense, psychology). The point is that SDO—a preference for inequality as normal and natural—also predicts endorsing the superiority of certain groups: men, native-born residents, heterosexuals, and believers in the dominant religion. This means seeing women, minorities, homosexuals, and non-believers as inferior. Understandably, the first list of groups tends to score higher on SDO, while the second list tends to score lower. For example, the SDO gender difference (men higher, women lower) appears all over the world. At its heart, SDO rests on a fundamental belief that the world is tough and competitive with only a limited number of resources. Thus, those high in SDO see groups as battling each other for these resources, with winners at the top of the social hierarchy and losers at the bottom (see Table 1). Right-wing Authoritarianism Right-wing authoritarianism (RWA) focuses on value conflicts, whereas SDO focuses on economic ones. That is, RWA endorses respect for obedience and authority in the service of group conformity (Altemeyer, 1988). Returning to an example from earlier, the homeowner who’s high in SDO may dislike the outgroup member moving into his or her neighborhood because it “threatens” his or her economic resources (e.g., lowering the value of one’s house; fewer openings in the school; etc.). Those high in RWA may equally dislike the outgroup member moving into the neighborhood but for different reasons. 
Here, it’s because this outgroup member brings in values or beliefs that the person high in RWA disagrees with, thus “threatening” the collective values of his or her group. RWA respects group unity over individual preferences, wanting to maintain group values in the face of differing opinions. Despite its name, though, RWA is not necessarily limited to people on the right (conservatives). As with SDO, there does appear to be an association between this personality scale (i.e., the preference for order, clarity, and conventional values) and conservative beliefs. However, regardless of political ideology, RWA focuses on groups’ competing frameworks of values. Extreme scores on RWA predict biases against outgroups while demanding in-group loyalty and conformity. Notably, the combination of high RWA and high SDO predicts joining hate groups that openly endorse aggression against minority groups, immigrants, homosexuals, and believers in non-dominant religions (Altemeyer, 2004). 20th Century Biases: Subtle but Significant Fortunately, old-fashioned biases have diminished over the 20th century and into the 21st century. Openly expressing prejudice is like blowing second-hand cigarette smoke in someone’s face: It’s just not done anymore in most circles, and if it is, people are readily criticized for their behavior. Still, these biases exist in people; they’re just less in view than before. These subtle biases are unexamined and sometimes unconscious but real in their consequences. They are automatic, ambiguous, and ambivalent, but nonetheless biased, unfair, and disrespectful to the belief in equality. Automatic Biases Most people like themselves well enough, and most people identify themselves as members of certain groups but not others. Logic suggests, then, that because we like ourselves, we also like the groups we associate with more, whether those groups are our hometown, school, religion, gender, or ethnicity. Liking yourself and your groups is human nature. The larger issue, however, is that own-group preference often results in liking other groups less. And whether or not you recognize this “favoritism” as wrong, this trade-off is relatively automatic, that is, unintended, immediate, and irresistible. Social psychologists have developed several ways to measure this relatively automatic own-group preference, the most famous being the Implicit Association Test (IAT; Greenwald, Banaji, Rudman, Farnham, Nosek, & Mellott, 2002; Greenwald, McGhee, & Schwartz, 1998). The test itself is rather simple and you can experience it yourself if you Google “implicit” or go to understandingprejudice.org. Essentially, the IAT is done on the computer and measures how quickly you can sort words or pictures into different categories. For example, if you were asked to categorize “ice cream” as good or bad, you would quickly categorize it as good. However, imagine if every time you ate ice cream, you got a brain freeze. When it comes time to categorize ice cream as good or bad, you may still categorize it as “good,” but you will likely be a little slower in doing so compared to someone who has nothing but positive thoughts about ice cream. Related to group biases, people may explicitly claim they don’t discriminate against outgroups—and this is very likely true. However, when they’re given this computer task to categorize people from these outgroups, that automatic or unconscious hesitation (a result of having mixed evaluations about the outgroup) will show up in the test. 
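Because the measure boils down to comparing average sorting speeds across blocks, the logic of the IAT can be sketched in a few lines. The sketch below is only an illustration: the latencies are invented, and the scoring procedure researchers actually use is more involved than a simple difference of means.

```python
# A toy illustration of the logic behind the IAT, with made-up latencies
# (in milliseconds). This simple difference of block means is NOT the
# official scoring algorithm, which is considerably more involved.

def mean_latency(latencies_ms):
    """Average response latency for one sorting block."""
    return sum(latencies_ms) / len(latencies_ms)

# Block where in-group words share a response key with "good" words.
congruent_ms = [612, 588, 654, 601, 630]

# Block where in-group words share a response key with "bad" words.
incongruent_ms = [780, 742, 810, 765, 798]

# A positive difference means the "in-group + good" pairing was easier
# (faster), reflecting the automatic own-group preference described above.
iat_effect = mean_latency(incongruent_ms) - mean_latency(congruent_ms)
print(f"Responses were {iat_effect:.0f} ms slower when the pairings conflicted")
```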
And as countless studies have revealed, people are mostly faster at pairing their own group with good categories, compared to pairing others’ groups. In fact, this finding generally holds regardless of whether one’s group is defined by race, age, religion, nationality, or even temporary, insignificant memberships. This all-too-human tendency would remain a merely interesting discovery except that people’s reaction time on the IAT predicts actual feelings about individuals from other groups, decisions about them, and behavior toward them, especially nonverbal behavior (Greenwald, Poehlman, Uhlmann, & Banaji, 2009). For example, although a job interviewer may not be “blatantly biased,” his or her “automatic or implicit biases” may result in unconsciously acting distant and indifferent, which can have devastating effects on the hopeful interviewee’s ability to perform well (Word, Zanna, & Cooper, 1973). Although this is unfair, sometimes the automatic associations—often driven by society’s stereotypes—trump our own explicit values (Devine, 1989). And sadly, this can result in consequential discrimination, such as allocating fewer resources to disliked outgroups (Rudman & Ashmore, 2009). See Table 2 for a summary of this section and the next two sections on subtle biases. Ambiguous Biases As the IAT indicates, people’s biases often stem from the spontaneous tendency to favor their own, at the expense of the other. Social identity theory (Tajfel, Billig, Bundy, & Flament, 1971) describes this tendency to favor one’s own in-group over another’s outgroup. And as a result, outgroup disliking stems from this in-group liking (Brewer & Brown, 1998). For example, if two classes of children want to play on the same soccer field, the classes will come to dislike each other not because of any real, objectionable traits about the other group. The dislike originates from each class’s favoritism toward itself and the fact that only one group can play on the soccer field at a time. With this preferential perspective for one’s own group, people are not punishing the other one so much as neglecting it in favor of their own. However, to justify this preferential treatment, people will often exaggerate the differences between their in-group and the outgroup. In turn, people see members of the outgroup as more similar to one another in personality than they really are. This results in the perception that “they” really differ from us, and “they” are all alike. Spontaneously, people categorize people into groups just as we categorize furniture or food into one type or another. The difference is that we inhabit the categories ourselves, as self-categorization theory points out (Turner, 1975). Because the attributes of group categories can be either good or bad, we tend to favor the groups with people like us and incidentally disfavor the others. In-group favoritism is an ambiguous form of bias because it disfavors the outgroup by exclusion. For example, if a politician has to decide between funding one program or another, he or she may be more likely to give resources to the group that more closely represents his or her in-group. And this life-changing decision stems from the simple, natural human tendency to be more comfortable with people like yourself. A specific case of comfort with the in-group is called aversive racism, so-called because people do not like to admit their own racial biases to themselves or others (Dovidio & Gaertner, 2010). 
Tensions between, say, a White person's own good intentions and discomfort with the perhaps novel situation of interacting closely with a Black person may cause the White person to feel uneasy, behave stiffly, or be distracted. As a result, the White person may find a good excuse to avoid the situation altogether and prevent any awkwardness that could have come from it. However, such a reaction will be ambiguous to both parties and hard to interpret. That is, was the White person right to avoid the situation so that neither person would feel uncomfortable? Indicators of aversive racism correlate with discriminatory behavior, despite being the ambiguous result of good intentions gone bad.
Bias Can Be Complicated - Ambivalent Biases
Not all stereotypes of outgroups are bad. For example, ethnic Asians living in the United States are commonly referred to as the "model minority" because of their perceived success in areas such as education, income, and social stability. Another example includes people who feel benevolent toward traditional women but hostile toward nontraditional women, or ageist people who feel respect toward older adults but, at the same time, worry about the burden they place on public welfare programs. A simple way to understand these mixed feelings, across a variety of groups, is provided by the Stereotype Content Model (Fiske, Cuddy, & Glick, 2007). When people learn about a new group, they first want to know whether the intentions of the people in this group are good or ill. Like the guard at night: "Who goes there, friend or foe?" If the other group has good, cooperative intentions, we view them as warm and trustworthy and often consider them part of "our side." However, if the other group is cold and competitive or full of exploiters, we often view them as a threat and treat them accordingly. After learning the group's intentions, though, we also want to know whether they are competent enough to act on them (if they are incompetent, or unable, their intentions matter less). These two simple dimensions—warmth and competence—together map how groups relate to each other in society. There are common stereotypes of people from all sorts of categories and occupations that lead them to be classified along these two dimensions. For example, a stereotypical "housewife" would be seen as high in warmth but lower in competence. This is not to suggest that actual housewives are not competent, of course, but that they are not widely admired for their competence in the same way as scientific pioneers, trendsetters, or captains of industry. At the other end of the spectrum are homeless people and drug addicts, stereotyped as not having good intentions (perhaps exploitative for not trying to play by the rules) and likewise as being incompetent (unable) to do anything useful. These groups reportedly elicit more disgust from society than any other groups do. Some group stereotypes are mixed, high on one dimension and low on the other. Groups stereotyped as competent but not warm, for example, include rich people and outsiders good at business. Groups seen as "competent but cold" make people feel some envy; people admit that these others may have some talent but resent them for not being "people like us." The "model minority" stereotype mentioned earlier reflects this perceived combination of high competence but deficient sociability. The other mixed combination is high warmth but low competence. Groups who fit this combination include older people and disabled people.
Others report pitying them, but only so long as they stay in their place. In an effort to combat this negative stereotype, disability- and elderly-rights activists try to eliminate that pity, hopefully gaining respect in the process. Altogether, these four kinds of stereotypes and their associated emotional prejudices (pride, disgust, envy, pity) occur all over the world for each of society's own groups. These maps of the group terrain predict specific types of discrimination for specific kinds of groups, underlining how bias is not exactly equal opportunity.
Conclusion: 21st Century Prejudices
As the world becomes more interconnected—more collaborations between countries, more intermarrying between different groups—more and more people are encountering greater diversity of others in everyday life. Just ask yourself if you've ever been asked, "What are you?" Such a question would be preposterous if you were only surrounded by members of your own group. Categories, then, are becoming more and more uncertain, unclear, volatile, and complex (Bodenhausen & Peery, 2009). People's identities are multifaceted, intersecting across gender, race, class, age, region, and more. Identities are not so simple, but maybe as the 21st century unfurls, we will recognize each other by the content of our character instead of the cover on our outside.
Outside Resources
Web: Website exploring the causes and consequences of prejudice.
http://www.understandingprejudice.org/
Discussion Questions
1. Do you know more people from different kinds of social groups than your parents did?
2. How often do you hear people criticizing groups without knowing anything about them?
3. Take the IAT. Could you feel that some associations are easier than others?
4. What groups illustrate ambivalent biases, seemingly competent but cold, or warm but incompetent?
5. Do you or someone you know believe that group hierarchies are inevitable? Desirable?
6. How can people learn to get along with people who seem different from them?
Vocabulary
Automatic bias
Automatic biases are unintended, immediate, and irresistible.
Aversive racism
Aversive racism is unexamined racial bias that the person does not intend and would reject, but that avoids inter-racial contact.
Blatant biases
Blatant biases are conscious beliefs, feelings, and behavior that people are perfectly willing to admit, are mostly hostile, and openly favor their own group.
Discrimination
Discrimination is behavior that advantages or disadvantages people merely based on their group membership.
Implicit Association Test
Implicit Association Test (IAT) measures relatively automatic biases that favor own group relative to other groups.
Prejudice
Prejudice is an evaluation or emotion toward people merely based on their group membership.
Right-wing authoritarianism
Right-wing authoritarianism (RWA) focuses on value conflicts but endorses respect for obedience and authority in the service of group conformity.
Self-categorization theory
Self-categorization theory develops social identity theory's point that people categorize themselves, along with each other, into groups, favoring their own group.
Social dominance orientation
Social dominance orientation (SDO) describes a belief that group hierarchies are inevitable in all societies and even good, to maintain order and stability.
Social identity theory
Social identity theory notes that people categorize each other into groups, favoring their own group.
Stereotype Content Model
Stereotype Content Model shows that social groups are viewed according to their perceived warmth and competence.
Stereotypes
Stereotype is a belief that characterizes people based merely on their group membership.
Subtle biases
Subtle biases are automatic, ambiguous, and ambivalent, but real in their consequences.
By Stephen Garcia and Arnor Halldorsson University of Michigan
When athletes compete in a race, they are able to observe and compare their performance against those of their competitors. In the same way, all people naturally engage in mental comparisons with the people around them during the course of daily life. These evaluations can impact our motivation and feelings. In this module, you will learn about the process of social comparison: its definition, consequences, and the factors that affect it.
learning objectives
• Understand the reasons people make social comparisons.
• Identify consequences of social comparison.
• Understand the Self-Evaluation Maintenance Model.
• Explain situational factors that can affect social comparison.
Introduction: Social Comparison
One pleasant Saturday afternoon, Mr. Jones arrives home from the car dealership in a brand-new Mercedes-Benz C-Class, the entry-level sedan in the Mercedes family of cars. Although Mercedes-Benzes are common in Europe, they are often viewed as status symbols in Mr. Jones' neighborhood in North America. This new car is a huge upgrade from his previous car. Excited, Mr. Jones immediately drives around the block and into town to show it off. He is thrilled with his purchase for a full week—that is, until he sees his neighbor across the street, Mr. Smith, driving a brand-new Mercedes S-Class, the highest tier of Mercedes sedans. Mr. Smith notices Mr. Jones from a distance and waves to him with a big smile. Climbing into his C-Class, Mr. Jones suddenly feels disappointed with his purchase and even feels envious of Mr. Smith. Now his C-Class feels just as uncool as his old car. Mr. Jones is experiencing the effects of social comparison. Occurring frequently in our lives, social comparison shapes our perceptions, memory, and behavior—even regarding the most trivial of issues. In this module, we will take a closer look at the reasons we make social comparisons and the consequences of the social comparison process.
Social Comparison: Basics
In 1954, psychologist Leon Festinger hypothesized that people compare themselves to others in order to fulfill a basic human desire: the need for self-evaluation. He described this process in his social comparison theory. At the core of the theory is the idea that people come to know about themselves—their own abilities, successes, and personality—by comparing themselves with others. These comparisons can be divided into two basic categories. In one category, we consider social norms and the opinions of others. Specifically, we compare our own opinions and values to those of others when our own self-evaluation is unclear. For example, you might not be certain about your position on a hotly contested issue, such as the legality of abortion. Or, you might not be certain about which fork to use first in a multi-course place setting. In these types of instances, people are prone to look toward others—to make social comparisons—to help fill in the gaps. Imagine an American exchange student arriving in India for the first time, a country where the culture is drastically different from his own. He notices quickly through observing others—i.e., social comparison—that when greeting a person, it is normal to place one's own palms together rather than shaking the other person's hand. This comparison informs him of how he should behave in the surrounding social context. The second category of social comparison pertains to our abilities and performance.
In these cases, the need for self-evaluation is driven by another fundamental desire: to perform better and better—as Festinger (1954) put it, "a unidirectional drive upward." In essence, we compare our performance not only to evaluate ourselves but also to benchmark our performance relative to another person. If we observe or even anticipate that a specific person is doing better than us at some ability, then we may be motivated to boost our performance level. Take, for example, a realistic scenario where Olivia uses social comparison to gauge her abilities: Olivia is a high school student who often spends a few hours in her backyard shooting a soccer ball at her homemade goal. A friend of hers suggests she try out for the school's soccer team. Olivia accepts her friend's suggestion, although nervously, doubting she's good enough to make the team. On the day of tryouts, Olivia gets her gear ready and starts walking towards the soccer field. As she approaches, she feels butterflies in her stomach and her legs get wobbly. But, glancing towards the other candidates who have arrived early to take a few practice shots at the goal, she notices that their aim is inconsistent and they frequently miss the goal. Seeing this, Olivia feels more relaxed, and she confidently marches onto the field, ready to show everyone her skills.
Relevance and Similarity
There are important factors, however, that determine whether people will engage in social comparison. First, the performance dimension has to be relevant to the self (Festinger, 1954). For example, if excelling in academics is more important to you than excelling in sports, you are more likely to compare yourself with others in terms of academic rather than athletic performance. Relevance is also important when assessing opinions. If the issue at hand is relevant to you, you will compare your opinion to others'; if not, you most likely won't even bother. Relevance is thus a necessary precondition for social comparison. A secondary question is, "To whom do people compare themselves?" Generally speaking, people compare themselves to those who are similar (Festinger, 1954; Goethals & Darley, 1977), whether similar in personal characteristics (e.g., gender, ethnic background, hair color) or in terms of performance (e.g., both being of comparable ability or both being neck-and-neck in a race). For example, a casual tennis player will not compare her performance to that of a professional, but rather to that of another casual tennis player. The same is true of opinions. People will cross-reference their own opinions on an issue with others who are similar to them rather than dissimilar (e.g., in ethnic background or economic status).
Direction of Comparison
Social comparison is a bi-directional phenomenon: we can compare ourselves to people who are better than us—"upward comparisons"—or worse than us—"downward comparisons." Engaging in either of these two comparisons on a performance dimension can affect our self-evaluation. On one hand, upward comparisons on relevant dimensions can threaten our self-evaluation and jeopardize self-esteem (Tesser, 1988). On the other hand, they can also lead to joy and admiration for others' accomplishments on dimensions that are not relevant to the self, where one's self-evaluation is not under threat.
For example, an academic overachiever who distinguishes himself by having two advanced degrees, both a PhD and a law degree, may not enjoy meeting another individual with a PhD, a law degree, and an MBA, but may well enjoy meeting a fellow overachiever in a domain that is not self-relevant, such as a famous NASCAR racer or professional hockey player. Downward comparisons may boost our self-evaluation on relevant dimensions, leading to a self-enhancement effect (Wills, 1981), such as when an individual suffering from an illness makes downward comparisons with those suffering even more. A person enduring treatment for cancer, for instance, might feel better about his own side effects if he learns that an acquaintance suffered worse side effects from the same treatment. More recent findings have shown that downward comparisons can also lead to feelings of scorn (Fiske, 2011), such as when those of a younger generation look down upon the elderly. In these cases, the boost to self-evaluation is so strong that it leads to an exaggerated sense of pride. Interestingly, the direction of comparison and a person's emotional response can also depend on the counterfactual—"what might have been"—that comes most easily to mind. For example, one might think that an Olympic silver medalist would feel happier than a bronze medalist. After all, placing second is more prestigious than placing third. However, a classic study by Victoria Medvec, Scott Madey, and Thomas Gilovich (1995) found the opposite effect: bronze medalists were actually happier than silver medalists. The reason for this effect is that silver medalists focus on having fallen short of achieving the gold (so close!), essentially turning a possible downward comparison into an upward comparison; whereas bronze medalists recognize they came close to not winning any medal, essentially turning a possible upward comparison (to another medalist) into a downward comparison to those who did not even receive a medal.
Consequences of Social Comparison
The social comparison process has been associated with numerous consequences. For one, social comparison can impact self-esteem (Tesser, 1988), especially when doing well relative to others. For example, having the best final score in a class can increase your self-esteem quite a bit. Social comparison can also lead to feelings of regret (White, Langer, Yariv, & Welch, 2006), as when comparing the negative outcome of one's investment strategy to the positive outcome of a different strategy taken by a neighbor. Social comparison can also lead to feelings of envy (Fiske, 2011; Salovey & Rodin, 1984), as when someone with thinning hair envies the thick hair of a colleague. Social comparison can also have interesting behavioral consequences. If you were to observe a discrepancy in performance between yourself and another person, then you might behave more competitively (Garcia, Tor, & Schiff, 2013), as you attempt to minimize the discrepancy. If, for example, you are among the top 10% on your class midterm, you might feel competitive with the other top students. Although competition can raise performance, it can also take more problematic forms, from inflicting actual harm to making a hurtful comment to another person. These kinds of behaviors are likely to arise when the situation following the social comparison does not provide the opportunity to self-repair, such as another chance to compete in a race or retake a test (Johnson, 2012).
However, when later opportunities to self-repair do exist, a more positive form of competitive motivation arises, whether that means running harder in a race or striving to earn a higher test score.
Self-Evaluation Maintenance Model
The self-evaluation maintenance (SEM; Tesser, 1988) model builds on social comparison theory. SEM points to a range of psychological forces that help maintain our self-evaluation and self-esteem. In addition to relevance and similarity, SEM reveals the importance of relationship closeness. It turns out that relationship closeness—where two people stand on the continuum from being complete strangers to being intimate friends—affects self-evaluations. For example, in one study, Tesser and Smith (1980) asked people to play a verbal game in which they were given the opportunity to receive clues from a partner. These clues could be used to help them guess the correct word in a word game. Half the participants were told the game was related to intelligence, whereas the other half were not. Additionally, half the participants were paired with a close friend, but the other half played with a stranger. Results showed that participants who were led to believe the task was self-relevant (i.e., having to do with intelligence) provided more difficult clues when their partner was a friend versus a stranger—suggesting a competitive uptick associated with relationship closeness. However, when performance was implied to be irrelevant to the self, partners gave easier clues to friends than strangers. SEM can predict which of our friends and which of our comparison dimensions are self-relevant (Tesser & Campbell, 2006; Zuckerman & Jost, 2001). For example, suppose playing chess is highly self-relevant for you. In this case, you will naturally compare yourself to other chess players. Now, suppose that your chess-playing friend consistently beats you. In fact, each time you play she beats you by a wider and wider margin. SEM would predict that one of two things will likely happen: (1) winning at chess will no longer be self-relevant to you, or (2) you will no longer be friends with this individual. In fact, if the first option occurs—you lose interest in competing—you will begin to bask in the glory of your chess-playing friend as her performance approaches perfection. These psychological processes have real-world implications! They may determine who is hired in an organization or who is promoted at work. For example, suppose you are a faculty member of a university law school. Your work performance is appraised based on your teaching and on your academic publications. Although you do not have the most publications in your law school, you do have the most publications in prestigious journals. Now, suppose that you are chairing a committee to hire a new faculty member. One candidate has even more top-tier publications than you, while another candidate has the most publications in general of all the faculty members. How do you think social comparison might influence your choice of applicants? Research suggests that someone in your hypothetical shoes would likely favor the second candidate over the first candidate: people will actively champion the candidate who does not threaten their standing on a relevant dimension in an organization (Garcia, Song, & Tesser, 2010). In other words, the SEM forces are so powerful that people will essentially advocate for a candidate who they feel is inferior!
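The qualitative logic of SEM can be summarized as a simple decision rule over the three factors the model emphasizes. The following toy sketch is an illustrative restatement of the model's predictions, not Tesser's formal specification; the function name and labels are hypothetical.

```python
# Toy formalization of self-evaluation maintenance (SEM) predictions
# (an illustrative sketch of the model's qualitative logic,
# not Tesser's formal specification).

def sem_prediction(other_outperforms_me, domain_self_relevant, other_is_close):
    """Predict the dominant psychological response to a comparison target."""
    if not other_outperforms_me:
        return "no threat: self-evaluation is safe"
    if domain_self_relevant and other_is_close:
        # Comparison process: a close other's superior performance stings most,
        # so reduce the domain's relevance or distance yourself from the person.
        return "threat: reduce domain relevance or reduce closeness"
    if other_is_close:
        # Reflection process: bask in the reflected glory of a close other
        # who excels in a domain that is not self-relevant.
        return "reflection: pride in the other's success"
    return "mild threat: distant others matter less"

print(sem_prediction(True, True, True))   # chess-playing friend keeps beating you
print(sem_prediction(True, False, True))  # friend excels where you don't compete
```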
Individual Differences
It is also worth mentioning that social comparison and its effects on self-evaluation will often depend on personality and individual differences. For example, people with mastery goals (Poortvliet, Janssen, Van Yperen, & Van de Vliert, 2007) may not interpret an upward comparison as a threat to the self but more as a challenge, and a hopeful sign that one can achieve a certain level of performance. Another individual difference is whether one has a "fixed mindset" or "growth mindset" (Dweck, 2007). People with fixed mindsets think that their abilities and talents cannot change; thus, an upward comparison will likely threaten their self-evaluation and prompt them to experience negative consequences of social comparison, such as competitive behavior, envy, or unhappiness. People with growth mindsets, however, are likely to interpret an upward comparison as a challenge, and an opportunity to improve themselves.
Situational factors
Social comparison researchers are actively exploring situational factors that can likewise influence degrees of social comparison.
Number
As the number of comparison targets (i.e., the number of people with whom you can compare) increases, social comparison tends to decrease. For example, imagine you are running a race with competitors of similar ability as your own, and the top 20% will receive a prize. Do you think you would try harder if there were only 10 people in the race, or if there were 100? The findings on the N-Effect (Garcia & Tor, 2009; Tor & Garcia, 2010) suggest the answer is 10. Even though the expected value of winning is the same in both cases (with the top 20% rewarded, 2 of 10 runners win a prize in the small race and 20 of 100 in the large one, identical odds), people will try harder when there are fewer people. In fact, findings suggest that as the number of SAT test-takers at a particular venue increases, the average SAT score for that venue decreases (Garcia & Tor, 2009). One of the mechanisms behind the N-Effect is social comparison. As the number of competitors increases, social comparison—one of the engines behind competitive motivation—becomes less important. Perhaps you have experienced this if you have had to give class presentations. As the number of presenters increases, you feel a decreasing amount of comparison pressure.
Local
Research on the local dominance effect (Zell & Alicke, 2010) also provides insights about social comparison. People are more influenced by social comparison when the comparison is more localized rather than being broad and general. For example, if you wanted to evaluate your height by using social comparison, you could compare your height to a good friend, a group of friends, people in your workplace, or even the average height of people living in your city. Although any of these comparisons is hypothetically possible, people generally rely on more local comparisons. They are more likely to compare with friends or co-workers than with industry or national averages. So, if you are among the tallest in your group of friends, it may very well give you a bigger boost to your self-esteem, even if you're still among the shortest individuals at the national level.
Proximity to a Standard
Research suggests that social comparison concerns intensify in the proximity of a standard, such as the #1 ranking or another qualitative threshold. One consequence of this is an increase in competitive behavior. For example, in childhood games, if someone shouts, "First one to the tree is the coolest person in the world!" then the children who are nearest the tree will tug and pull at each other for the lead.
However, if someone shouts, "Last one there is a rotten egg!" then the children who are in last place will be the ones tugging and pulling at each other to get ahead. In the proximity of a standard, social comparison concerns increase. We also see this in rankings. Rivals ranked #2 and #3, for instance, are less willing to maximize joint gains (in which they both benefit) if it means their opponent will benefit more, compared to rivals ranked #202 and #203 (Garcia, Tor, & Gonzalez, 2006; Garcia & Tor, 2007). These latter rivals are so far from the #1 rank (i.e., the standard) that it does not bother them if their opponent benefits more than they do. Thus, social comparison concerns are only important in the proximity of a standard.
Social Category Lines
Social comparison can also happen between groups. This is especially the case when groups come from different social categories rather than the same social category. For example, if students were deciding what kind of music to play at the high school prom, one option would be to simply flip a coin—say, heads for hip-hop, tails for pop. In this case, everyone represents the same social category—high school seniors—and social comparison isn't an issue. However, if all the boys wanted hip-hop and all the girls wanted pop, flipping a coin is not such an easy solution, as it privileges one social category over another (Garcia & Miller, 2007). For more on this, consider looking into the research literature about the difficulties of win-win scenarios between different social categories (Tajfel, Billig, Bundy, & Flament, 1971; Turner, Brown, & Tajfel, 1979).
Related Phenomena
Frog Pond Effect
One interesting phenomenon of social comparison is the Frog Pond Effect. As the name suggests, its premise can be illustrated using the simple analogy of a frog in a pond: as a frog, would you rather be in a small pond where you're a big frog, or a large pond where you're a small frog? According to Marsh, Trautwein, Ludtke, and Koller (2008), people in general had a better academic self-concept if they were a big frog in a small pond (e.g., the top student in their local high school) rather than a small frog in a large one (e.g., one of many good students at an Ivy League university). In a large study of students, they found that school-average ability can have a negative impact on a student's academic self-concept when the school's average ability is 1 standard deviation higher than normal (i.e., a big pond). In other words, average students have a higher academic self-concept when attending a below-average school (big fish in a small pond), and they have a lower academic self-concept when attending an above-average school (small fish in a big pond) (Marsh, 1987; Marsh & Parker, 1984).
The Dunning-Kruger Effect
Another topic related to social comparison is the Dunning-Kruger Effect. The Dunning-Kruger effect, as explained by Dunning, Johnson, Ehrlinger, and Kruger (2003), addresses the fact that unskilled people often think they are on par with or superior to their peers in tasks such as test-taking. That is, they are overconfident. Basically, they fail to accurately compare themselves or their skills within their surroundings. For example, Dunning et al. (2003) asked students to disclose how well they thought they had done on an exam they'd just taken. The 25% of students with the lowest test scores overestimated their performance by approximately 30%, thinking their performance was above the 50th percentile.
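To make the notion of miscalibration concrete, here is a minimal sketch of how over- and underestimation can be summarized by performance group. The data and function name are hypothetical illustrations, not Dunning et al.'s actual materials or analysis.

```python
# Hypothetical sketch of summarizing percentile miscalibration
# (illustrative data; not Dunning et al.'s actual analysis).

# (actual_percentile, estimated_percentile) for each student
students = [
    (5, 55), (15, 60), (20, 48),   # low scorers guessing "above average"
    (45, 50), (55, 52),            # middling scorers, roughly calibrated
    (90, 75), (95, 80),            # top scorers underestimating
]

def mean_miscalibration(pairs):
    """Average of (estimated - actual); positive means overestimation."""
    return sum(est - act for act, est in pairs) / len(pairs)

bottom = [p for p in students if p[0] <= 25]
top = [p for p in students if p[0] >= 75]
print("bottom quartile:", round(mean_miscalibration(bottom), 1))  # large positive
print("top quartile:", round(mean_miscalibration(top), 1))        # negative
```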
This estimation problem doesn't only apply to poor performers, however. According to Dunning et al. (2003), top performers tend to underestimate their skills or percentile ranking in their surrounding context. Dunning et al. (2003) provide explanations for this effect among both the good and the poor performers. The poor performers, compared to their more capable peers, lack the very skills that the tasks and tests in these studies require, and those same skills are needed to evaluate performance; as such, they cannot really distinguish which questions they are getting right or wrong. This is known as the double-curse explanation. The good performers, however, do not have this particular problem and are actually quite good at estimating their raw scores. Ironically, the good performers usually overestimate how well the people around them are doing and therefore undervalue their own relative performance. As a result, most people tend to think they are above average in what they do, when in actuality not everyone can be above average.
Conclusion
Social comparison is a natural psychological tendency and one that can exert a powerful influence on the way we feel and behave. Many people act as if social comparison is an ugly phenomenon and one to be avoided. This sentiment is at the heart of phrases like "keeping up with the Joneses" and "the rat race," in which it is assumed that people are primarily motivated by a desire to beat others. In truth, social comparison has many positive aspects. Just think about it: how could you ever gauge your skills in chess without having anyone to compare yourself to? It would be nearly impossible to ever know just how good your chess skills are, or even what criteria determine "good" vs. "bad" chess skills. In addition, the engine of social comparison can provide the push you need to rise to the occasion, increase your motivation, and make progress toward your goals.
Outside Resources
Video: Downward Comparison
Video: Dunning-Kruger Effect
Video: Social Comparison overview
Video: Social Media and Comparison
Video: Upward Comparison
Web: Self-Compassion to counter the negative effects of social comparison
http://self-compassion.org/the-three-elements-of-self-compassion-2/
Discussion Questions
1. On what do you compare yourself with others? Qualities such as attractiveness and intelligence? Skills such as school performance or athleticism? Do others also make these same types of comparisons, or does each person make a unique set? Why do you think this is?
2. How can making comparisons to others help you?
3. One way to make comparisons is to compare yourself with your own past performance. Discuss a time you did this. Could this example be described as an "upward" or "downward" comparison? How did this type of comparison affect you?
Vocabulary
Counterfactual thinking
Mentally comparing actual events with fantasies of what might have been possible in alternative scenarios.
Downward comparison
Making mental comparisons with people who are perceived to be inferior on the standard of comparison.
Dunning-Kruger Effect
The tendency for unskilled people to be overconfident in their ability and highly skilled people to underestimate their ability.
Fixed mindset
The belief that personal qualities such as intelligence are traits that cannot be developed. People with fixed mindsets often underperform compared to those with "growth mindsets."
Frog Pond Effect
The theory that a person's comparison group can affect their evaluations of themselves.
Specifically, people have a tendency to have lower self-evaluations when comparing themselves to higher-performing groups.
Growth mindset
The belief that personal qualities, such as intelligence, can be developed through effort and practice.
Individual differences
Psychological traits, abilities, aptitudes, and tendencies that vary from person to person.
Local dominance effect
People are generally more influenced by social comparison when that comparison is local rather than broad and general.
Mastery goals
Goals that are focused primarily on learning, competence, and self-development. These are contrasted with "performance goals" that are focused on the quality of a person's performance.
N-Effect
The finding that increasing the number of competitors generally decreases one's motivation to compete.
Personality
A person's relatively stable patterns of thought, feeling, and behavior.
Proximity
The relative closeness or distance from a given comparison standard. The further from the standard a person is, the less important he or she considers the standard. When a person is closer to the standard, he or she is more likely to be competitive.
Self-enhancement effect
The finding that people can boost their own self-evaluations by comparing themselves to others who rank lower on a particular comparison standard.
Self-esteem
The feeling of confidence in one's own abilities or worth.
Self-evaluation maintenance (SEM)
A model of social comparison that emphasizes one's closeness to the comparison target, the relative performance of that target person, and the relevance of the comparison behavior to one's self-concept.
Social category
Any group in which membership is defined by similarities between its members. Examples include religious, ethnic, and athletic groups.
Social comparison
The process by which people understand their own ability or condition by mentally comparing themselves to others.
Upward comparisons
Making mental comparisons to people who are perceived to be superior on the standard of comparison.
By Brad J. Bushman The Ohio State University
This module discusses the causes and consequences of human aggression and violence. Both internal and external causes are considered. Effective and ineffective techniques for reducing aggression are also discussed.
learning objectives
• Explain the important components of the definition of aggression, and explain how aggression differs from violence.
• Explain whether people think the world is less violent now than in the past, and whether it actually is less violent. If there is a discrepancy between perception and reality, how can it be resolved?
• Identify the internal causes and external causes of aggression. Compare and contrast how the internal and external causes differ.
• Identify effective and ineffective approaches to reducing aggression.
Introduction
"Beware of the dark side. Anger, fear, aggression; the dark side of the Force are they." -Yoda, renowned Jedi master in the Star Wars universe
Aggression is indeed the dark side of human nature. Although aggression may have been adaptive in our ancient past, it hardly seems adaptive today. For example, on 14 December 2012 Adam Lanza, age 20, first killed his mother in their home, and then went to an elementary school in Newtown, Connecticut and began shooting, killing 20 children and 6 school employees, before killing himself. When incidents such as these happen, we want to know what caused them. Although it is impossible to know what motivated a particular individual such as Lanza to commit the Newtown school shooting, for decades researchers have studied the internal and external factors that influence aggression and violence. We consider some of these factors in this module. Before we get too far, let's begin by defining the term "aggression." Laypeople and researchers often use the term "aggression" differently. Laypeople might describe a salesperson who tries really hard to sell them something as "aggressive." The salesperson does not, however, want to harm potential customers. Most researchers define aggression as any behavior intended to harm another person who does not want to be harmed (Baron & Richardson, 1994). This definition includes three important features. First, aggression is a behavior—you can see it. Aggression is not an internal response, such as having angry feelings or aggressive thoughts (although such internal responses can increase the likelihood of actual aggression). Second, aggression is intentional rather than accidental. For example, a dentist might intentionally give a patient a shot of Novocain (which hurts!), but the goal is to help rather than harm the patient. Third, the victim wants to avoid the harm. Thus, suicide and sadomasochistic sex play would not be called aggression because the victim actively seeks to be harmed. Researchers and laypeople also differ in their use of the term violence. A meteorologist might call a storm "violent" if it has intense winds, rain, thunder, lightning, or hail. Researchers define violence as aggression intended to cause extreme physical harm (e.g., injury, death). Thus, all violent acts are aggressive, but not all aggressive acts are violent. For example, screaming and swearing at another person is aggressive, but not violent. The good news is that the level of violence in the world is decreasing over time—by millennium, century, and even decade (Pinker, 2011).
Studies of body counts, such as the proportion of prehistoric skeletons with axe and arrowhead wounds, suggest that prehistoric societies were far more violent than those today. Estimates show that if the wars of the 20th century had killed the same proportion of the population as ancient tribal wars did, then the death toll would have been 20 times higher—2 billion rather than 100 million. More recent data show that murder rates in Europe have decreased dramatically since the Middle Ages. For example, estimated murders in England dropped from 24 per 100,000 in the 14th century to 0.6 per 100,000 by the early 1960s. The major decline in violence occurred in the 17th century during the "Age of Reason," which began in the Netherlands and England and then spread to other European countries. Global violence has also steadily decreased since the middle of the 20th century. For example, the number of battle deaths in interstate wars has declined from more than 65,000 per year in the 1950s to fewer than 2,000 per year in the 2000s. There have also been global declines in the number of armed conflicts and combat deaths, the number of military coups, and the number of deadly violence campaigns waged against civilians. For example, Figure 12.8.1 shows the number of battle deaths per 100,000 people per year over 60 years (see Pinker, 2011, p. 301). As can be seen, battle deaths of all types (civil, colonial, interstate, internationalized civil) have decreased over time. The claim that violence has decreased dramatically over time may seem hard to believe in today's digital age when we are constantly bombarded by scenes of violence in the media. In the news media, the top stories are the most violent ones—"If it bleeds, it leads," so the saying goes. Citizen journalists around the world also use social media to "show and tell" the world about unjustified acts of violence. Because violent images are more available to us now than ever before, we incorrectly assume that violence levels are also higher. Our tendency to overestimate the amount of violence in the world is due to the availability heuristic, which is the tendency to judge the frequency or likelihood of an event by the ease with which relevant instances come to mind. Because we are frequently exposed to scenes of violence in the mass media, acts of violence are readily accessible in memory and come to mind easily, so we assume violence is more common than it actually is. Human aggression is very complex and is caused by multiple factors. We will consider a few of the most important internal and external causes of aggression. Internal causes include anything the individual brings to the situation that increases the probability of aggression. External causes include anything in the environment that increases the probability of aggression. Finally, we will consider a few strategies for reducing aggression.
Internal Factors
Age
At what age are people most aggressive? You might be surprised to learn that toddlers 1 to 3 years old are most aggressive. Toddlers often rely on physical aggression to resolve conflict and get what they want. In free play situations, researchers have found that 25 percent of their interactions are aggressive (Tremblay, 2000). No other group of individuals (e.g., Mafia, street gangs) resorts to aggression 25 percent of the time. Fortunately for the rest of us, most toddler aggression isn't severe enough to qualify as violence because they don't use weapons, such as guns and knives.
As children grow older, they learn to inhibit their aggressive impulses and resolve conflict using nonaggressive means, such as compromise and negotiation. Although most people become less aggressive over time, a small subset of people becomes more aggressive over time. The most dangerous years for this small subset of people (and for society as a whole) are late adolescence and early adulthood. For example, 18- to 24-year-olds commit most murders in the U.S. (U.S. Federal Bureau of Investigation, 2012).
Gender
At all ages, males tend to be more physically aggressive than females. However, it would be wrong to think that females are never physically aggressive. Females do use physical aggression, especially when they are provoked by other females (Collins, Quigley, & Leonard, 2007). Among heterosexual partners, women are actually slightly more likely than men to use physical aggression (Archer, 2000). However, when men do use physical aggression, they are more likely than women to cause serious injuries and even death to their partners. When people are strongly provoked, gender differences in aggression shrink (Bettencourt & Miller, 1996). Females are much more likely than males to engage in relational aggression, defined as intentionally harming another person's social relationships, feelings of acceptance, or inclusion within a group (Crick & Grotpeter, 1995). Examples of relational aggression include gossiping, spreading rumors, withdrawing affection to get what you want, excluding someone from your circle of friends, and giving someone the "silent treatment."
Personality Traits Related to Aggression
Some people seem to be cranky and aggressive almost all the time. Aggressiveness is almost as stable as intelligence over time (Olweus, 1979). Individual differences in aggressiveness are often assessed using self-report questionnaires such as the "Aggression Questionnaire" (Buss & Perry, 1992), which includes items such as "I get into fights a little more than the average person" and "When frustrated, I let my irritation show." Scores on these questionnaires are positively related to actual aggressive and violent behaviors (Anderson & Bushman, 1997). The components of the "Dark Triad of Personality"—narcissism, psychopathy, and Machiavellianism—are also related to aggression (Paulhus & Williams, 2002). The term "narcissism" comes from the mythical Greek character Narcissus, who fell in love with his own image reflected in the water. Narcissists have inflated egos, and they lash out aggressively against others when their inflated egos are threatened (e.g., Bushman & Baumeister, 1998). It is a common myth that aggressive people have low self-esteem (Bushman et al., 2009). Psychopaths are callous individuals who lack empathy for others. One of the strongest deterrents of aggression is empathy, which psychopaths lack. The term "Machiavellianism" comes from the Italian philosopher and writer Niccolò Machiavelli, who advocated using any means necessary to gain raw political power, including aggression and violence.
Hostile Cognitive Biases
One key to keeping aggression in check is to give people the benefit of the doubt. Some people, however, do just the opposite. There are three hostile cognitive biases. The hostile attribution bias is the tendency to perceive ambiguous actions by others as hostile actions (Dodge, 1980). For example, if a person bumps into you, a hostile attribution would be that the person did it on purpose and wants to hurt you.
The hostile perception bias is the tendency to perceive social interactions in general as being aggressive (Dill et al., 1997). For example, if you see two people talking in an animated fashion, a hostile perception would be that they are fighting with each other. The hostile expectation bias is the tendency to expect others to react to potential conflicts with aggression (Dill et al., 1997). For example, if you bump into another person, a hostile expectation would be that the person will assume that you did it on purpose and will attack you in return. People with hostile cognitive biases view the world as a hostile place.
External Factors
Frustration and Other Unpleasant Events
One of the earliest theories of aggression proposed that aggression is caused by frustration, which was defined as blocking goal-directed behavior (Dollard et al., 1939). For example, if you are standing in a long line to purchase a ticket, it is frustrating when someone crowds in front of you. This theory was later expanded to say that all unpleasant events, not just frustrations, cause aggression (Berkowitz, 1989). Unpleasant events such as frustrations, provocations, social rejections, hot temperatures, loud noises, bad air (e.g., pollution, foul odors, secondhand smoke), and crowding can all cause aggression. Unpleasant events automatically trigger a fight–flight response.
Weapons
Obviously, using a weapon can increase aggression and violence, but can just seeing a weapon increase aggression? To find out, researchers sat angry participants at a table that had a shotgun and a revolver on it—or, in the control condition, badminton racquets and shuttlecocks (Berkowitz & LePage, 1967). The items on the table were supposedly part of a different study, but the researcher had forgotten to put them away. The participant was supposed to decide what level of electric shock to deliver to a person pretending to be another participant, and the electric shocks were used to measure aggression. The experimenter told participants to ignore the items on the table, but apparently they could not. Participants who saw the guns gave more shocks than did participants who saw the sports items. Several other studies have replicated this so-called weapons effect, including some conducted outside the lab (Carlson, Marcus-Newhall, & Miller, 1990). For example, one study found that motorists were more likely to honk their horns at another driver stalled in a pickup truck with a rifle visible in his rear window than in response to the same delay from the same truck, but with no gun (Turner, Layton, & Simons, 1975). When you think about it, you would have to be pretty stupid to honk your horn at a driver with a rifle in his truck. However, drivers were probably responding in an automatic rather than a deliberate manner. Other research has shown that drivers who have guns in their vehicles are more aggressive drivers than those without guns in their vehicles (Hemenway, Vriniotis, & Miller, 2006).
Violent Media
There are plenty of aggressive cues in the mass media, such as in TV programs, films, and video games. In the U.S., the Surgeon General warns the public about threats to their physical and mental health. Most Americans know that the U.S. Surgeon General issued a warning about cigarettes in 1964: "Warning: The Surgeon General Has Determined That Cigarette Smoking Is Dangerous to Your Health." However, most Americans do not know that the U.S.
Surgeon General issued a warning regarding violent TV programs in 1972: "It is clear to me that the causal relationship between televised violence and antisocial behavior is sufficient to warrant appropriate and immediate remedial action. . . . There comes a time when the data are sufficient to justify action. That time has come" (Steinfeld, 1972). Since then, hundreds of additional studies have shown that all forms of violent media can increase aggression (e.g., Anderson & Bushman, 2002). Violent video games might even be more harmful than violent TV programs, for at least three reasons. First, playing a video game is active, whereas watching a TV program is passive. Active involvement enhances learning. One study found that boys who played a violent video game were more aggressive afterward than were boys who merely watched the same game (Polman, Orobio de Castro, & van Aken, 2008). Second, video game players are more likely to identify with a violent character than TV watchers. If the game involves a first-person shooter, players have the same visual perspective as the killer. If the game is third person, the player controls the character's actions from a more distant visual perspective. In either case, the player is linked to a violent character. Research has shown that people are more aggressive when they identify with a violent character (e.g., Konijn, Nije Bijvank, & Bushman, 2007). Third, violent games directly reward players for violent behavior by awarding points or by allowing them to advance in the game. In some games, players are also rewarded through verbal praise, such as hearing "Impressive!" after killing an enemy. In TV programs, reward is not directly tied to the viewer's behavior. It is well known that rewarding behavior increases its frequency. One study found that players were more aggressive after playing a violent game that rewarded violent actions than after playing the same game that punished violent actions (Carnagey & Anderson, 2005). The evidence linking violent video games to aggression is compelling. A comprehensive review found that violent games increase aggressive thoughts, angry feelings, and aggressive behaviors and decrease empathic feelings and prosocial behaviors (Anderson et al., 2010). Similar effects were obtained for males and females, regardless of their age, and regardless of what country they were from.
Alcohol
Alcohol has long been associated with aggression and violence. In fact, sometimes alcohol is deliberately used to promote aggression. It has been standard practice for many centuries to issue soldiers some alcohol before they went into battle, both to increase aggression and reduce fear (Keegan, 1993). There is ample evidence of a link between alcohol and aggression, including evidence from experimental studies showing that consuming alcohol can cause an increase in aggression (e.g., Lipsey, Wilson, Cohen, & Derzon, 1997). Most theories of intoxicated aggression fall into one of two categories: (a) pharmacological theories that focus on how alcohol disrupts cognitive processes, and (b) expectancy theories that focus on how social attitudes about alcohol facilitate aggression. Normally, people have strong inhibitions against behaving aggressively, and pharmacological models focus on how alcohol reduces these inhibitions. To use a car analogy, alcohol increases aggression by cutting the brake line rather than by stepping on the gas. How does alcohol cut the brake line?
Alcohol disrupts cognitive executive functions that help us organize, plan, achieve goals, and inhibit inappropriate behaviors (Giancola, 2000). Alcohol also reduces glucose, which provides energy to the brain for self-control (Gailliot & Baumeister, 2007). Alcohol has a "myopic" effect on attention—it causes people to focus attention only on the most salient features of a situation and not pay attention to more subtle features (Steele & Josephs, 1990). In some places where alcohol is consumed (e.g., a crowded bar), provocations can be salient. Alcohol also reduces self-awareness, which decreases attention to internal standards against behaving aggressively (Hull, 1981). According to expectancy theories, alcohol increases aggression because people expect it to. In our brains, alcohol and aggression are strongly linked together. Indeed, research shows that subliminally exposing people to alcohol-related words (e.g., vodka) can make them more aggressive, even though they do not drink one drop of alcohol (Subra et al., 2010). In many cultures, drinking occasions are culturally agreed-on "time out" periods where people are not held responsible for their actions (MacAndrew & Edgerton, 1969). Those who behave aggressively when intoxicated sometimes "blame the bottle" for their aggressive actions. Does this research evidence mean that aggression is somehow contained in alcohol? No. Alcohol increases rather than causes aggressive tendencies. Factors that normally increase aggression (e.g., frustrations and other unpleasant events, aggressive cues) have a stronger effect on intoxicated people than on sober people (Bushman, 1997). In other words, alcohol mainly seems to increase aggression in combination with other factors. If someone insults or attacks you, your response will probably be more aggressive if you are drunk than sober. When there is no provocation, however, the effect of alcohol on aggression may be negligible. Plenty of people enjoy an occasional drink without becoming aggressive.
Reducing Aggression
Most people are greatly concerned about the amount of aggression in society. Aggression directly interferes with our basic needs of safety and security. Thus, it is urgent to find ways to reduce aggression. Because there is no single cause for aggression, it is difficult to design effective treatments. A treatment that works for one individual may not work for another individual. And some extremely aggressive people, such as psychopaths, are considered to be untreatable. Indeed, many people have started to accept the fact that aggression and violence have become an inevitable, intrinsic part of our society. This being said, there certainly are things that can be done to reduce aggression and violence. Before discussing some effective methods for reducing aggression, two ineffective methods need to be debunked: catharsis and punishment.
Catharsis
The term catharsis dates back to Aristotle and means to cleanse or purge. Aristotle taught that viewing tragic plays gave people emotional release from negative emotions. In Greek tragedy, the heroes didn't just grow old and retire—they were often murdered. Sigmund Freud revived the ancient notion of catharsis by proposing that people should express their bottled-up anger. Freud believed that if people repressed their anger, the negative emotions would build up inside them and surface as psychological disorders. According to catharsis theory, acting aggressively or even viewing aggression purges angry feelings and aggressive impulses into harmless channels.
Unfortunately for catharsis theory, research shows the opposite often occurs (e.g., Geen & Quanty, 1977). If venting anger doesn't get rid of it, what does? All emotions, including anger, consist of bodily states (e.g., arousal) and mental meanings. To get rid of anger, you can focus on either of those. Anger can be reduced by getting rid of the arousal state, such as by relaxing, listening to calming music, or counting to 10 before responding. Mental tactics can also reduce anger, such as reframing the situation or distracting oneself by turning one's attention to more pleasant topics. Incompatible behaviors can also help get rid of anger. For example, petting a puppy, watching a comedy, kissing your lover, or helping someone in need can all reduce anger, because those acts are incompatible with anger and, therefore, make the angry state impossible to sustain (e.g., Baron, 1976). Viewing the provocative situation from a more distant perspective, such as that of a fly on the wall, also helps (Mischkowski, Kross, & Bushman, 2012).
Punishment
Most cultures assume that punishment is an effective way to deter aggression and violence. Punishment is defined as inflicting pain or removing pleasure for a misdeed. Punishment can range in intensity from spanking a child to executing a convicted killer. Parents use it, organizations use it, and governments use it, but does it work? Today, aggression researchers have their doubts. Punishment is most effective when it is: (a) intense, (b) prompt, (c) applied consistently and with certainty, (d) perceived as justified, and (e) accompanied by the possibility of replacing the undesirable punished behavior with a desirable alternative behavior (Berkowitz, 1993). Even if punishment occurs under these ideal conditions, it may only suppress aggressive behavior temporarily, and it has several undesirable long-term consequences. Most important, punishment models the aggressive behavior it seeks to prevent. Longitudinal studies have shown that children who are physically punished by their parents at home are more aggressive outside the home, such as in school (e.g., Lefkowitz, Huesmann, & Eron, 1978). Because punishment is unpleasant, it can also trigger aggression just like other unpleasant events.
Successful Interventions
Although specific aggression intervention strategies cannot be discussed in any detail here, there are two important general points to be made. First, successful interventions target as many causes of aggression as possible and attempt to tackle them collectively. Interventions that are narrowly focused on removing a single cause of aggression, however well conducted, are bound to fail. In general, external causes are easier to change than internal causes. For example, one can reduce exposure to violent media or alcohol consumption, and make unpleasant situations more tolerable (e.g., use air conditioners when it is hot, reduce crowding in stressful environments such as prisons and psychiatric wards). Second, aggression problems are best treated in early development, when people are still malleable. As was mentioned previously, aggression is very stable over time, almost as stable as intelligence. If young children display excessive levels of aggression (often in the form of hitting, biting, or kicking), it places them at high risk for becoming violent adolescents and even violent adults. It is much more difficult to alter aggressive behaviors when they are part of an adult personality than when they are still in development.
Yoda warned that anger, fear, and aggression are the dark side of the Force. They are also the dark side of human nature. Fortunately, aggression and violence are decreasing over time, and this trend should continue. We also know a lot more now than ever before about what factors increase aggression and how to treat aggressive behavior problems. When Luke Skywalker was about to enter the dark cave on Dagobah (the fictional Star Wars planet), Yoda said, “Your weapons, you will not need them.” Hopefully, there will come a time in the not-too-distant future when people all over the world will no longer need weapons.

Outside Resources

Book: Bushman, B. J., & Huesmann, L. R. (2010). Aggression. In S. T. Fiske, D. T. Gilbert, & G. Lindzey (Eds.), Handbook of social psychology (5th ed., pp. 833-863). New York: John Wiley & Sons.
TED Talk: Zak Ebrahim https://www.ted.com/talks/zak_ebrahim_i_am_the_son_of_a_terrorist_here_s_how_i_chose_peace?language=en#t-528075
Video: From the Inquisitive Mind website, Brad Bushman conducts a short review of terminology and important research concerning aggression and violence.

Discussion Questions

1. Discuss whether different examples (hypothetical and real) meet the definition of aggression and the definition of violence.
2. Why do people deny the harmful effects of violent media when the research evidence linking violent media to aggression is so conclusive?
3. Consider the various causes of aggression described in this module and elsewhere, and discuss whether they can be changed to reduce aggression, and if so, how.

Vocabulary

Aggression: Any behavior intended to harm another person who does not want to be harmed.
Availability heuristic: The tendency to judge the frequency or likelihood of an event by the ease with which relevant instances come to mind.
Catharsis: Greek term that means to cleanse or purge. Applied to aggression, catharsis is the belief that acting aggressively or even viewing aggression purges angry feelings and channels aggressive impulses into harmless outlets.
Hostile attribution bias: The tendency to perceive ambiguous actions by others as aggressive.
Hostile expectation bias: The tendency to assume that people will react to potential conflicts with aggression.
Hostile perception bias: The tendency to perceive social interactions in general as being aggressive.
Punishment: Inflicting pain or removing pleasure for a misdeed. Punishment decreases the likelihood that a behavior will be repeated.
Relational aggression: Intentionally harming another person’s social relationships, feelings of acceptance, or inclusion within a group.
Violence: Aggression intended to cause extreme physical harm, such as injury or death.
Weapons effect: The increase in aggression that occurs as a result of the mere presence of a weapon.
By Tiffany A. Ito and Jennifer T. Kubota University of Colorado Boulder, New York University

This module provides an overview of the new field of social neuroscience, which combines the use of neuroscience methods and theories to understand how other people influence our thoughts, feelings, and behavior. The module reviews research measuring neural and hormonal responses to understand how we make judgments about other people and react to stress. Through these examples, it illustrates how social neuroscience addresses three different questions: (1) how our understanding of social behavior can be expanded when we consider neural and physiological responses, (2) what the actual biological systems are that implement social behavior (e.g., what specific brain areas are associated with specific social tasks), and (3) how biological systems are impacted by social processes.

learning objectives

• Define social neuroscience and describe its three major goals.
• Describe how measures of brain activity such as EEG and fMRI are used to make inferences about social processes.
• Discuss how social categorization occurs.
• Describe how simulation may be used to make inferences about others.
• Discuss the ways in which other people can cause stress and also protect us against stress.

Psychology has a long tradition of using our brains and body to better understand how we think and act. For example, in 1939 Heinrich Klüver and Paul Bucy removed (i.e., lesioned) the temporal lobes in some rhesus monkeys and observed the effect on behavior. Included in these lesions was a subcortical area of the brain called the amygdala. After surgery, the monkeys experienced profound behavioral changes, including loss of fear. These results provided initial evidence that the amygdala plays a role in emotional responses, a finding that has since been confirmed by subsequent studies (Phelps & LeDoux, 2005; Whalen & Phelps, 2009).

What Is Social Neuroscience?

Social neuroscience similarly uses the brain and body to understand how we think and act, with a focus on how we think about and act toward other people. More specifically, we can think of social neuroscience as an interdisciplinary field that uses a range of neuroscience measures to understand how other people influence our thoughts, feelings, and behavior. As such, social neuroscience studies the same topics as social psychology, but does so from a multilevel perspective that includes the study of the brain and body. Figure 12.9.1 shows the scope of social neuroscience with respect to the older fields of social psychology and neuroscience. Although the field is relatively new – the term first appeared in 1992 (Cacioppo & Berntson, 1992) – it has grown rapidly, thanks to technological advances that have made measures of the brain and body cheaper and more powerful than ever before, and to the recognition that neural and physiological information is critical to understanding how we interact with other people. Social neuroscience can be thought of as both a methodological approach (using measures of the brain and body to study social processes) and a theoretical orientation (seeing the benefits of integrating neuroscience into the study of social psychology). The overall approach in social neuroscience is to understand the psychological processes that underlie our social behavior.
Because those psychological processes are intrapsychic phenomena that cannot be directly observed, social neuroscientists rely on a combination of measurable or observable neural and physiological responses as well as actual overt behavior to make inferences about psychological states (see Figure 12.9.1). Using this approach, social neuroscientists have been able to pursue three different types of questions: (1) What more can we learn about social behavior when we consider neural and physiological responses? (2) What are the actual biological systems that implement social behavior (e.g., what specific brain areas are associated with specific social tasks)? and (3) How are biological systems impacted by social processes? In this module, we review three research questions that have been addressed with social neuroscience that illustrate the different goals of the field. These examples also expose you to some of the frequently used measures.

How Automatically Do We Judge Other People?

Social categorization is the act of mentally classifying someone as belonging in a group. Why do we do this? It is an effective mental shortcut. Rather than effortfully thinking about every detail of every person we encounter, social categorization allows us to rely on information we already know about the person’s group. For example, by classifying your restaurant server as a man, you can quickly activate all the information you have stored about men and use it to guide your behavior. But this shortcut comes with potentially high costs. The stored group beliefs might not be very accurate, and even when they do accurately describe some group members, they are unlikely to be true for every member you encounter. In addition, many beliefs we associate with groups – called stereotypes – are negative. This means that relying on social categorization can often lead people to make negative assumptions about others. The potential costs of social categorization make it important to understand how it occurs. Is it rare or does it occur often? Is it something we can easily stop, or is it hard to override? One difficulty in answering these questions is that people are not always consciously aware of what they are doing. In this case, we might not always realize when we are categorizing someone. Another concern is that even when people are aware of their behavior, they can be reluctant to report it accurately to an experimenter. In the case of social categorization, subjects might worry they will look bad if they accurately report classifying someone into a group associated with negative stereotypes. For instance, many racial groups are associated with some negative stereotypes, and subjects may worry that admitting to classifying someone into one of those groups means they believe and use those negative stereotypes. Social neuroscience has been useful for studying how social categorization occurs without having to rely on self-report measures, instead measuring brain activity differences that occur when people encounter members of different social groups. Much of this work has used the electroencephalogram, or EEG. EEG is a measure of electrical activity generated by the brain’s neurons. Comparing this electrical activity at a given point in time against what a person is thinking and doing at that same time allows us to make inferences about brain activity associated with specific psychological states.
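As described in the next paragraph, a standard analysis step is to average the EEG response over many trials so that random noise cancels out and the stimulus-locked signal remains. The following sketch illustrates that averaging logic in Python with NumPy; the trial counts, time window, and simulated noise values are invented for illustration and are not taken from the studies cited in this module.

```python
import numpy as np

# Hypothetical recording: 100 trials per face category, 600 samples per
# epoch, sampled at 1000 Hz, so each sample is 1 ms. The epoch spans
# -100 ms to +499 ms around the moment a face appears on screen.
rng = np.random.default_rng(0)
times_ms = np.arange(-100, 500)

# Simulated single-trial EEG in microvolts: on any one trial the
# event-related signal is buried in noise.
epochs_group_a = rng.normal(0.0, 10.0, size=(100, 600))
epochs_group_b = rng.normal(0.0, 10.0, size=(100, 600))

# Averaging across trials cancels random noise and leaves the
# event-related potential (ERP) for each category of face.
erp_a = epochs_group_a.mean(axis=0)
erp_b = epochs_group_b.mean(axis=0)

# Compare the averaged waveforms in an early time window, in the same
# spirit as the ~200 ms category effects discussed in this module.
window = (times_ms >= 150) & (times_ms <= 250)
difference_uv = erp_a[window].mean() - erp_b[window].mean()
print(f"Mean ERP difference, 150-250 ms: {difference_uv:.2f} microvolts")
```

Real ERP pipelines add filtering, artifact rejection, and baseline correction, but the core inference rests on this kind of trial averaging.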
One particularly nice feature of EEG is that it provides very precise timing information about when brain activity occurs. EEG is measured non-invasively with small electrodes that rest on the surface of the scalp. This is often done with a stretchy elastic cap, like the one shown in Figure 12.9.2, into which the small electrodes are sewn. Researchers simply pull the cap onto the subject’s head to get the electrodes into place; wearing it is similar to wearing a swim cap. The subject can then be asked to think about different topics or engage in different tasks as brain activity is measured. To study social categorization, subjects have been shown pictures of people who belong to different social groups. Brain activity recorded from many individual trials (e.g., looking at lots of different Black individuals) is then averaged together to get an overall idea of how the brain responds when viewing individuals who belong to a particular social group. These studies suggest that social categorization is an automatic process – something that happens with little conscious awareness or control – especially for dimensions like gender, race, and age (Ito & Urland, 2003; Mouchetant-Rostaing & Giard, 2003). The studies specifically show that brain activity differs when subjects view members of different social groups (e.g., men versus women, Blacks versus Whites), suggesting that the group differences are being encoded and processed by the perceiver. One interesting finding is that these brain changes occur both when subjects are purposely asked to categorize the people into social groups (e.g., to judge whether the person is Black or White), and also when they are asked to do something that draws attention away from group classifications (e.g., making a personality judgment about the person) (Ito & Urland, 2005). This tells us that we do not have to intend to make group classifications in order for them to happen. It is also very interesting to consider how quickly the changes in brain responses occur. Brain activity is altered by viewing members of different groups within 200 milliseconds of seeing a person’s face. That is just two-tenths of a second. Such a fast response lends further support to the idea that social categorization occurs automatically and may not depend on conscious intention. Overall, this research suggests that we engage in social categorization very frequently. In fact, it appears to happen automatically (i.e., without us consciously intending for it to happen) in most situations for dimensions like gender, age, and race. Since classifying someone into a group is the first step to activating a group stereotype, this research provides important information about how easily stereotypes can be activated. And because it is hard for people to accurately report on things that happen so quickly, this issue has been difficult to study using more traditional self-report measures. Using EEGs has, therefore, been helpful in providing interesting new insights into social behavior.

Do We Use Our Own Behavior to Help Us Understand Others?

Classifying someone into a social group then activating the associated stereotype is one way to make inferences about others. However, it is not the only method. Another strategy is to imagine what our own thoughts, feelings, and behaviors would be in a similar situation. Then we can use our simulated reaction as a best guess about how someone else will respond (Goldman, 2005). After all, we are experts in our own feelings, thoughts, and tendencies.
It might be hard to know what other people are feeling and thinking, but we can always ask ourselves how we would feel and act if we were in their shoes. There has been some debate about whether simulation is used to get into the minds of others (Carruthers & Smith, 1996; Gallese & Goldman, 1998). Social neuroscience research has addressed this question by looking at the brain areas used when people think about themselves and others. If the same brain areas are active for the two types of judgments, it lends support to the idea that the self may be used to make inferences about others via simulation. We know that an area in the prefrontal cortex called the medial prefrontal cortex (mPFC) – located in the middle of the frontal lobe – is active when people think about themselves (Kelley, Macrae, Wyland, Caglar, Inati, & Heatherton, 2002). This conclusion comes from studies using functional magnetic resonance imaging, or fMRI. While EEG measures the brain’s electrical activity, fMRI measures changes in the oxygenation of blood flowing in the brain. When neurons become more active, blood flow to the area increases to bring more oxygen and glucose to the active cells. fMRI allows us to image these changes in oxygenation by placing people in an fMRI machine or scanner (Figure 12.9.3), which consists of large magnets that create strong magnetic fields. The magnets affect the alignment of the oxygen molecules within the blood (i.e., how they are tilted). As the oxygen molecules move in and out of alignment with the magnetic fields, their nuclei produce energy that can be detected with special sensors placed close to the head. Recording fMRI involves having the subject lie on a small bed that is then rolled into the scanner. While fMRI does require subjects to lie still within the small scanner and the large magnets involved are noisy, the scanning itself is safe and painless. As with EEG, the subject can then be asked to think about different topics or engage in different tasks as brain activity is measured. If we know what a person is thinking or doing when fMRI detects a blood flow increase to a particular brain area, we can infer that part of the brain is involved with the thought or action. fMRI is particularly useful for identifying which particular brain areas are active at a given point in time. The conclusion that the mPFC is associated with the self comes from studies measuring fMRI while subjects think about themselves (e.g., saying whether traits are descriptive of themselves). Using this knowledge, other researchers have looked at whether the same brain area is active when people make inferences about others. Mitchell, Macrae, and Banaji (2005) showed subjects pictures of strangers and had them judge either how pleased the person was to have his or her picture taken or how symmetrical the face appeared. Judging whether someone is pleased about being photographed requires making an inference about someone’s internal feelings – we call this mentalizing. By contrast, facial symmetry judgments are based solely on physical appearance and do not involve mentalizing. A comparison of brain activity during the two types of judgments shows more activity in the mPFC when making the mental versus physical judgments, suggesting this brain area is involved when inferring the internal beliefs of others. There are two other notable aspects of this study.
First, mentalizing about others also increased activity in a variety of regions important for many aspects of social processing, including a region important in representing biological motion (superior temporal sulcus or STS), an area critical for emotional processing (amygdala), and a region also involved in thinking about the beliefs of others (temporal parietal junction, TPJ) (Gobbini & Haxby, 2007; Schultz, Imamizu, Kawato, & Frith, 2004) (Figure 12.9.4). This finding shows that a distributed and interacting set of brain areas is likely to be involved in social processing. Second, activity in the most ventral part of the mPFC (the part closer to the belly rather than toward the top of the head), which has been most consistently associated with thinking about the self, was particularly active when subjects mentalized about people they rated as similar to themselves. Simulation is thought to be most likely for similar others, so this finding lends support to the conclusion that we use simulation to mentalize about others. After all, if you encounter someone who has the same musical taste as you, you will probably assume you have other things in common with him. By contrast, if you learn that someone loves music that you hate, you might expect him to differ from you in other ways (Srivastava, Guglielmo, & Beer, 2010). Using a simulation of our own feelings and thoughts will be most accurate if we have reason to think the person’s internal experiences are like our own. Thus, we may be most likely to use simulation to make inferences about others if we think they are similar to us. This research is a good example of how social neuroscience is revealing the functional neuroanatomy of social behavior. That is, it tells us which brain areas are involved with social behavior. The mPFC (as well as other areas such as the STS, amygdala, and TPJ) is involved in making judgments about the self and others. This research also provides new information about how inferences are made about others. Whereas some have doubted the widespread use of simulation as a means for making inferences about others, the activation of the mPFC when mentalizing about others, and the sensitivity of this activation to similarity between self and other, provides evidence that simulation occurs.

What Is the Cost of Social Stress?

Stress is an unfortunately frequent experience for many of us. Stress – which can be broadly defined as a threat or challenge to our well-being – can result from everyday events like a course exam or more extreme events such as experiencing a natural disaster. When faced with a stressor, sympathetic nervous system activity increases in order to prepare our body to respond to the challenge. This produces what Selye (1950) called a fight or flight response. The release of hormones, which act as messengers from one part of an organism (e.g., a cell or gland) to another part of the organism, is part of the stress response. A small amount of stress can actually help us stay alert and active. In comparison, sustained stressors, or chronic stress, detrimentally affect our health and impair performance (Al’Absi, Hugdahl, & Lovallo, 2002; Black, 2002; Lazarus, 1974). This happens in part through the chronic secretion of stress-related hormones (e.g., Davidson, Pizzagalli, Nitschke, & Putnam, 2002; Dickerson, Gable, Irwin, Aziz, & Kemeny, 2009). In particular, stress activates the hypothalamic-pituitary-adrenal (HPA) axis to release cortisol (see Figure 12.9.5 for a discussion).
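In the studies described next, stress is typically quantified as the change in cortisol from a baseline sample to a post-task sample. The short sketch below illustrates that change-score logic in Python; the condition labels and the nmol/L values are entirely made up for the example and do not come from the studies cited here.

```python
# Hypothetical salivary cortisol values (nmol/L): one sample taken at
# baseline and one after the task, for two made-up conditions.
pre_post_samples = {
    "spoke_alone":     [(4.1, 4.3), (3.8, 4.0), (5.0, 4.9)],
    "spoke_to_judges": [(4.0, 7.2), (4.4, 8.1), (3.9, 6.5)],
}

for condition, samples in pre_post_samples.items():
    # Stress reactivity = post-task cortisol minus baseline cortisol,
    # averaged over the participants in the condition.
    changes = [post - pre for pre, post in samples]
    mean_change = sum(changes) / len(changes)
    print(f"{condition}: mean cortisol change = {mean_change:+.2f} nmol/L")
```

An actual analysis would also account for individual differences and time of day, since cortisol follows a strong daily rhythm, but the pre/post comparison is the core of the design.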
Chronic stress, by way of increases in cortisol, impairs attention, memory, and self-control (Arnsten, 2009). Cortisol levels can be measured non-invasively in bodily fluids, including blood and saliva. Researchers often collect a cortisol sample before and after a potentially stressful task. In one common collection method, subjects place polymer swabs under their tongue for 1 to 2 minutes to soak up saliva. The saliva samples are then stored and analyzed later to determine the level of cortisol present at each time point. Whereas early stress researchers studied the effects of physical stressors like loud noises, social neuroscientists have been instrumental in studying how our interactions with other people can cause stress. This question has been addressed through neuroendocrinology, or the study of how the brain and hormones act in concert to coordinate the physiology of the body. One contribution of this work has been in understanding the conditions under which other people can cause stress. In one study, Dickerson, Mycek, and Zaldivar (2008) asked undergraduates to deliver a speech either alone or to two other people. When the students gave the speech in front of others, there was a marked increase in cortisol compared with when they were asked to give a speech alone. This suggests that, like chronic physical stress, everyday social stressors, such as having your performance judged by others, induce a stress response. Interestingly, simply giving a speech in the same room with someone who is doing something else did not induce a stress response. This suggests that the mere presence of others is not stressful, but rather it is the potential for them to judge us that induces stress. Worrying about what other people think of us is not the only source of social stress in our lives. Other research has shown that interacting with people who belong to different social groups than we do – what social psychologists call outgroup members – can increase physiological stress responses. For example, cardiovascular responses associated with stress, like contractility of the heart ventricles and the amount of blood pumped by the heart (what is called cardiac output), are increased when interacting with outgroup as compared with ingroup members (i.e., people who belong to the same social group we do) (Mendes, Blascovich, Lickel, & Hunter, 2002). This stress may derive from the expectation that interactions with dissimilar others will be uncomfortable (Stephan & Stephan, 1985) or from concern about being judged as unfriendly and prejudiced if the interaction goes poorly (Plant & Devine, 2003). The research just reviewed shows that events in our social lives can be stressful, but are social interactions always bad for us? No. In fact, while others can be the source of much stress, they are also a major buffer against stress. Research on social support shows that relying on a network of individuals in tough times gives us tools for dealing with stress and can ward off loneliness (Cacioppo & Patrick, 2008). For instance, people who report greater social support show a smaller increase in cortisol when performing a speech in front of two evaluators (Eisenberger, Taylor, Gable, Hilmert, & Lieberman, 2007). What determines whether others will increase or decrease stress? What matters is the context of the social interaction.
When it has the potential to reflect badly on the self, social interaction can be stressful, but when it provides support and comfort, social interaction can protect us from the negative effects of stress. Using neuroendocrinology by measuring hormonal changes in the body has helped researchers better understand how social factors impact our body and ultimately our health.

Conclusions

Human beings are intensely social creatures – our lives are intertwined with other people and our health and well-being depend on others. Social neuroscience helps us to understand the critical function of how we make sense of and interact with other people. This module provides an introduction to what social neuroscience is and what we have already learned from it, but there is much still to understand. As we move forward, one exciting future direction will be to better understand how different parts of the brain and body interact to produce the numerous and complex patterns of social behavior that humans display. We hinted at some of this complexity when we reviewed research showing that while the mPFC is involved in mentalizing, other areas such as the STS, amygdala, and TPJ are as well. There are likely additional brain areas involved, interacting in ways we do not yet fully understand. These brain areas in turn control other aspects of the body to coordinate our responses during social interactions. Social neuroscience will continue to investigate these questions, revealing new information about how social processes occur, while also increasing our understanding of basic neural and physiological processes.

Outside Resources

Society for Social Neuroscience http://www.s4sn.org
Video: See a demonstration of fMRI data being collected.
Video: See an example of EEG data being collected.
Video: View two tasks frequently used in the lab to create stress – giving a speech in front of strangers, and doing math computations out loud in front of others. Notice how some subjects show obvious signs of stress, but in some situations, cortisol changes suggest that even people who appear calm are experiencing a physiological response associated with stress.
Video: Watch a video used by Fritz Heider and Marianne Simmel in a landmark study on social perception published in 1944. Their goal was to investigate how we perceive other people, and they studied it by seeing how readily we apply people-like interpretations to non-social stimuli. intentionperception.org/wp-co...ider_Flash.swf

Discussion Questions

1. Categorizing someone as a member of a social group can activate group stereotypes. EEG research suggests that social categorization occurs quickly and often automatically. What does this tell us about the likelihood of stereotyping occurring? How can we use this information to develop ways to stop stereotyping from happening?
2. Watch this video, similar to what was used by Fritz Heider and Marianne Simmel in a landmark study on social perception published in 1944, and imagine telling a friend what happened in the video. intentionperception.org/wp-co...ider_Flash.swf. After watching the video, think about the following: Did you describe the motion of the objects solely in geometric terms (e.g., a large triangle moved from the left to the right), or did you describe the movements as actions of animate beings, maybe even of people (e.g., the circle goes into the house and shuts the door)? In the original research, 33 of 34 subjects described the action of the shapes using human terms.
What does this tell us about our tendency to mentalize?
3. Consider the types of things you find stressful. How many of them are social in nature (e.g., are related to your interactions with other people)? Why do you think our social relations have such potential for stress? In what ways can social relations be beneficial and serve as a buffer for stress?

Vocabulary

Amygdala: A region located deep within the brain in the medial area (toward the center) of the temporal lobes (parallel to the ears). If you could draw a line through your eye sloping toward the back of your head and another line between your two ears, the amygdala would be located at the intersection of these lines. The amygdala is involved in detecting relevant stimuli in our environment and has been implicated in emotional responses.
Automatic process: When a thought, feeling, or behavior occurs with little or no mental effort. Typically, automatic processes are described as involuntary or spontaneous, often resulting from a great deal of practice or repetition.
Cortisol: A hormone made by the adrenal glands, within the adrenal cortex. Cortisol helps the body maintain blood pressure and immune function. Cortisol increases when the body is under stress.
Electroencephalogram: A measure of electrical activity generated by the brain’s neurons.
Fight or flight response: The physiological response that occurs in response to a perceived threat, preparing the body for actions needed to deal with the threat.
Functional magnetic resonance imaging: A measure of changes in the oxygenation of blood flow as areas in the brain become active.
Functional neuroanatomy: Classifying how regions within the nervous system relate to psychology and behavior.
Hormones: Chemicals released by cells in the brain or body that affect cells in other parts of the brain or body.
Hypothalamic-pituitary-adrenal (HPA) axis: A system that involves the hypothalamus (within the brain), the pituitary gland (within the brain), and the adrenal glands (at the top of the kidneys). This system helps maintain homeostasis (keeping the body’s systems within normal ranges) by regulating digestion, immune function, mood, temperature, and energy use. Through this, the HPA regulates the body’s response to stress and injury.
Ingroup: A social group to which an individual identifies or belongs.
Lesions: Damage or tissue abnormality due, for example, to an injury, surgery, or a vascular problem.
Medial prefrontal cortex: An area of the brain located in the middle of the frontal lobes (at the front of the head), active when people mentalize about the self and others.
Mentalizing: The act of representing the mental states of oneself and others. Mentalizing allows humans to interpret the intentions, beliefs, and emotional states of others.
Neuroendocrinology: The study of how the brain and hormones act in concert to coordinate the physiology of the body.
Outgroup: A social group to which an individual does not identify or belong.
Simulation: Imaginary or real imitation of other people’s behavior or feelings.
Social categorization: The act of mentally classifying someone into a social group (e.g., as female, elderly, a librarian).
Social support: A subjective feeling of psychological or physical comfort provided by family, friends, and others.
Stereotypes: The beliefs or attributes we associate with a specific social group. Stereotyping refers to the act of assuming that because someone is a member of a particular group, he or she possesses the group’s attributes.
For example, stereotyping occurs when we assume someone is unemotional just because he is a man, or particularly athletic just because she is African American.
Stress: A threat or challenge to our well-being. Stress can have both a psychological component, which consists of our subjective thoughts and feelings about being threatened or challenged, as well as a physiological component, which consists of our body’s response to the threat or challenge (see “fight or flight response”).
Superior temporal sulcus: The sulcus (a fissure in the surface of the brain) that separates the superior temporal gyrus from the middle temporal gyrus. Located in the temporal lobes (parallel to the ears), it is involved in the perception of biological motion, or the movement of animate objects.
Sympathetic nervous system: A branch of the autonomic nervous system that controls many of the body’s internal organs. Activity of the SNS generally mobilizes the body’s fight or flight response.
Temporal parietal junction: The area where the temporal lobes (parallel to the ears) and parietal lobes (at the top of the head toward the back) meet. This area is important in mentalizing and distinguishing between the self and others.
• 2.1: Why Science? Psychologists believe that scientific methods can be used in the behavioral domain to understand and improve the world. This module outlines the characteristics of the science, and the promises it holds for understanding behavior. The ethics that guide psychological research are briefly described. It concludes with the reasons you should learn about scientific psychology.
• 2.2: Thinking like a Psychological Scientist We are bombarded every day with claims about how the world works, claims that have a direct impact on how we think about and solve problems in society and our personal lives. This module explores important considerations for evaluating the trustworthiness of such claims by contrasting scientific thinking with everyday observations (also known as “anecdotal evidence”).
• 2.3: Statistical Thinking As our society increasingly calls for evidence-based decision making, it is important to consider how and when we can draw valid inferences from data. This module will use four recent research studies to highlight key elements of a statistical investigation.
• 2.4: Research Designs Psychologists test research questions using a variety of methods. Most research relies on either correlations or experiments. With correlations, researchers measure variables as they naturally occur in people and compute the degree to which two variables go together.
• 2.5: Conducting Psychology Research in the Real World This module highlights the importance of also conducting research outside the psychology laboratory, within participants’ natural, everyday environments, and reviews existing methodologies for studying daily life.
• 2.6: History of Psychology This module provides an introduction and overview of the historical development of the science and practice of psychology in America. Ever-increasing specialization within the field often makes it difficult to discern the common roots from which the field of psychology has evolved. By exploring this shared past, students will be better able to understand how psychology has developed into the discipline we know today.
• 2.7: Psychophysiological Methods in Neuroscience As a generally noninvasive subset of neuroscience methods, psychophysiological methods are used across a variety of disciplines in order to answer diverse questions about psychology, both mental events and behavior. Many different techniques are classified as psychophysiological.
• 2.8: The Replication Crisis in Psychology In science, replication is the process of repeating research to determine the extent to which findings generalize across time and across situations. Recently, the science of psychology has come under criticism because a number of research findings do not replicate. In this module we discuss reasons for non-replication, the impact this phenomenon has on the field, and suggest solutions to the problem.

Chapter 2: Psychology as Science

By Edward Diener University of Utah, University of Virginia

Scientific research has been one of the great drivers of progress in human history, and the dramatic changes we have seen during the past century are due primarily to scientific findings—modern medicine, electronics, automobiles and jets, birth control, and a host of other helpful inventions. Psychologists believe that scientific methods can be used in the behavioral domain to understand and improve the world.
Although psychology trails the biological and physical sciences in terms of progress, we are optimistic based on discoveries to date that scientific psychology will make many important discoveries that can benefit humanity. This module outlines the characteristics of the science, and the promises it holds for understanding behavior. The ethics that guide psychological research are briefly described. It concludes with the reasons you should learn about scientific psychology.

learning objectives

• Describe how scientific research has changed the world.
• Describe the key characteristics of the scientific approach.
• Discuss a few of the benefits, as well as problems that have been created by science.
• Describe several ways that psychological science has improved the world.
• Describe a number of the ethical guidelines that psychologists follow.

Scientific Advances and World Progress

There are many people who have made positive contributions to humanity in modern times. Take a careful look at the names on the following list. Which of these individuals do you think has helped humanity the most?

1. Mother Teresa
2. Albert Schweitzer
3. Edward Jenner
4. Norman Borlaug
5. Fritz Haber

The usual response to this question is “Who on earth are Jenner, Borlaug, and Haber?” Many people know that Mother Teresa helped thousands of people living in the slums of Kolkata (Calcutta). Others recall that Albert Schweitzer opened his famous hospital in Africa and went on to earn the Nobel Peace Prize. The other three historical figures, on the other hand, are far less well known. Jenner, Borlaug, and Haber were scientists whose research discoveries saved millions, and even billions, of lives. Dr. Edward Jenner is often considered the “father of immunology” because he was among the first to conceive of and test vaccinations. His pioneering work led directly to the eradication of smallpox. Many other diseases have been greatly reduced because of vaccines discovered using science—measles, pertussis, diphtheria, tetanus, typhoid, cholera, polio, hepatitis—and all are the legacy of Jenner. Fritz Haber and Norman Borlaug saved more than a billion human lives. They created the “Green Revolution” by producing hybrid agricultural crops and synthetic fertilizer. Humanity can now produce food for the seven billion people on the planet, and the starvation that does occur is related to political and economic factors rather than our collective ability to produce food. If you examine major social and technological changes over the past century, most of them can be directly attributed to science. The world in 1914 was very different than the one we see today (Easterbrook, 2003). There were few cars and most people traveled by foot, horseback, or carriage. There were no radios, televisions, birth control pills, artificial hearts or antibiotics. Only a small portion of the world had telephones, refrigeration or electricity. These days we find that 80% of all households have television and 84% have electricity. It is estimated that three quarters of the world’s population has access to a mobile phone! Life expectancy was 47 years in 1900 and 79 years in 2010. The percentage of hungry and malnourished people in the world has dropped substantially across the globe. Even average levels of I.Q. have risen dramatically over the past century due to better nutrition and schooling. All of these medical advances and technological innovations are the direct result of scientific research and understanding.
In the modern age it is easy to grow complacent about the advances of science but make no mistake about it—science has made fantastic discoveries, and continues to do so. These discoveries have completely changed our world.

What Is Science?

What is this process we call “science,” which has so dramatically changed the world? Ancient people were more likely to believe in magical and supernatural explanations for natural phenomena such as solar eclipses or thunderstorms. By contrast, scientifically minded people try to figure out the natural world through testing and observation. Specifically, science is the use of systematic observation in order to acquire knowledge. For example, children in a science class might combine vinegar and baking soda to observe the bubbly chemical reaction. These empirical methods are wonderful ways to learn about the physical and biological world. Science is not magic—it will not solve all human problems, and might not answer all our questions about behavior. Nevertheless, it appears to be the most powerful method we have for acquiring knowledge about the observable world. The essential elements of science are as follows:

1. Systematic observation is the core of science. Scientists observe the world, in a very organized way. We often measure the phenomenon we are observing. We record our observations so that memory biases are less likely to enter into our conclusions. We are systematic in that we try to observe under controlled conditions, and also systematically vary the conditions of our observations so that we can see variations in the phenomena and understand when they occur and do not occur.
2. Observation leads to hypotheses we can test. When we develop hypotheses and theories, we state them in a way that can be tested. For example, you might make the claim that candles made of paraffin wax burn more slowly than do candles of the exact same size and shape made from beeswax. This claim can be readily tested by timing the burning speed of candles made from these materials.
3. Science is democratic. People in ancient times may have been willing to accept the views of their kings or pharaohs as absolute truth. These days, however, people are more likely to want to be able to form their own opinions and debate conclusions. Scientists are skeptical and have open discussions about their observations and theories. These debates often occur as scientists publish competing findings with the idea that the best data will win the argument.
4. Science is cumulative. We can learn the important truths discovered by earlier scientists and build on them. Any physics student today knows more about physics than Sir Isaac Newton did even though Newton was possibly the most brilliant physicist of all time. A crucial aspect of scientific progress is that after we learn of earlier advances, we can build upon them and move farther along the path of knowledge.

Psychology as a Science

Even in modern times many people are skeptical that psychology is really a science. To some degree this doubt stems from the fact that many psychological phenomena such as depression, intelligence, and prejudice do not seem to be directly observable in the same way that we can observe the changes in ocean tides or the speed of light. Because thoughts and feelings are invisible, many early psychological researchers chose to focus on behavior. You might have noticed that some people act in a friendly and outgoing way while others appear to be shy and withdrawn.
If you have made these types of observations then you are acting just like early psychologists who used behavior to draw inferences about various types of personality. By using behavioral measures and rating scales it is possible to measure thoughts and feelings. This is similar to how other researchers explore “invisible” phenomena, such as the way that educators measure academic performance or economists measure quality of life. One important pioneering researcher was Francis Galton, a cousin of Charles Darwin who lived in England during the late 1800s. Galton used patches of color to test people’s ability to distinguish between them. He also invented the self-report questionnaire, in which people offered their own expressed judgments or opinions on various matters. Galton was able to use self-reports to examine—among other things—people’s differing ability to accurately judge distances. Although he lacked a modern understanding of genetics, Galton also had the idea that scientists could look at the behaviors of identical and fraternal twins to estimate the degree to which genetic and social factors contribute to personality; a puzzling issue we currently refer to as the “nature-nurture question.” In modern times psychology has become more sophisticated. Researchers now use better measures, more sophisticated study designs, and better statistical analyses to explore human nature. Simply take the example of studying the emotion of happiness. How would you go about studying happiness? One straightforward method is to simply ask people about their happiness and to have them use a numbered scale to indicate their feelings. There are, of course, several problems with this. People might lie about their happiness, might not be able to accurately report on their own happiness, or might not use the numerical scale in the same way. With these limitations in mind, modern psychologists employ a wide range of methods to assess happiness. They use, for instance, “peer report measures” in which they ask close friends and family members about the happiness of a target individual. Researchers can then compare these ratings to the self-report ratings and check for discrepancies. Researchers also use memory measures, with the idea that dispositionally positive people have an easier time recalling pleasant events and negative people have an easier time recalling unpleasant events. Modern psychologists even use biological measures such as saliva cortisol samples (cortisol is a stress-related hormone) or fMRI images of brain activation (the left pre-frontal cortex is one area of brain activity associated with good moods). Despite our various methodological advances, it is true that psychology is still a very young science. While physics and chemistry are hundreds of years old, psychology is barely a hundred and fifty years old and most of our major findings have occurred only in the last 60 years. There are legitimate limits to psychological science, but it is a science nonetheless.

Psychological Science is Useful

Psychological science is useful for creating interventions that help people live better lives. A growing body of research is concerned with determining which therapies are the most and least effective for the treatment of psychological disorders. For example, many studies have shown that cognitive behavioral therapy can help many people suffering from depression and anxiety disorders (Butler, Chapman, Forman, & Beck, 2006; Hoffman & Smits, 2008).
In contrast, research reveals that some types of therapies actually might be harmful on average (Lilienfeld, 2007). In organizational psychology, a number of psychological interventions have been found by researchers to produce greater productivity and satisfaction in the workplace (e.g., Guzzo, Jette, & Katzell, 1985). Human factors engineers have greatly increased the safety and utility of the products we use. For example, the human factors psychologist Alphonse Chapanis and other researchers redesigned the cockpit controls of aircraft to make them less confusing and easier to respond to, and this led to a decrease in pilot errors and crashes. Forensic sciences have made courtroom decisions more valid. We all know of the famous cases of imprisoned persons who have been exonerated because of DNA evidence. Equally dramatic cases hinge on psychological findings. For instance, psychologist Elizabeth Loftus has conducted research demonstrating the limits and unreliability of eyewitness testimony and memory. Thus, psychological findings are having practical importance in the world outside the laboratory. Psychological science has experienced enough success to demonstrate that it works, but there remains a huge amount yet to be learned.

Ethics of Scientific Psychology

Psychology differs somewhat from the natural sciences such as chemistry in that researchers conduct studies with human research participants. Because of this there is a natural tendency to want to guard research participants against potential psychological harm. For example, it might be interesting to see how people handle ridicule, but it might not be advisable to ridicule research participants. Scientific psychologists follow a specific set of guidelines for research known as a code of ethics. There are extensive ethical guidelines for how human participants should be treated in psychological research (Diener & Crandall, 1978; Sales & Folkman, 2000). Following are a few highlights:

1. Informed consent. In general, people should know when they are involved in research, and understand what will happen to them during the study. They should then be given a free choice as to whether to participate.
2. Confidentiality. Information that researchers learn about individual participants should not be made public without the consent of the individual.
3. Privacy. Researchers should not make observations of people in private places such as their bedrooms without their knowledge and consent. Researchers should not seek confidential information from others, such as school authorities, without consent of the participant or his or her guardian.
4. Benefits. Researchers should consider the benefits of their proposed research and weigh these against potential risks to the participants. People who participate in psychological studies should be exposed to risk only if they fully understand these risks and only if the likely benefits clearly outweigh the risks.
5. Deception. Some researchers need to deceive participants in order to hide the true nature of the study. This is typically done to prevent participants from modifying their behavior in unnatural ways. Researchers are required to “debrief” their participants after they have completed the study. Debriefing is an opportunity to educate participants about the true nature of the study.

Why Learn About Scientific Psychology?

I once had a psychology professor who asked my class why we were taking a psychology course.
Our responses give the range of reasons that people want to learn about psychology:

1. To understand ourselves
2. To understand other people and groups
3. To be better able to influence others, for example, in socializing children or motivating employees
4. To learn how to better help others and improve the world, for example, by doing effective psychotherapy
5. To learn a skill that will lead to a profession such as being a social worker or a professor
6. To learn how to evaluate the research claims you hear or read about
7. Because it is interesting, challenging, and fun!

People want to learn about psychology because it is exciting in itself, regardless of other positive outcomes it might have. Why do we see movies? Because they are fun and exciting, and we need no other reason. Thus, one good reason to study psychology is that it can be rewarding in itself.

Conclusions

The science of psychology is an exciting adventure. Whether you will become a scientific psychologist, an applied psychologist, or an educated person who knows about psychological research, this field can influence your life and provide fun, rewards, and understanding. My hope is that you learn a lot from the modules in this e-text, and also that you enjoy the experience! I love learning about psychology and neuroscience, and hope you will too!

Outside Resources

Web: Science Heroes - A celebration of people who have made lifesaving discoveries. http://www.scienceheroes.com/index.p...=258&Itemid=27

Discussion Questions

1. Some claim that science has done more harm than good. What do you think?
2. Humanity is faced with many challenges and problems. Which of these are due to human behavior, and which are external to human actions?
3. If you were a research psychologist, what phenomena or behaviors would most interest you?
4. Will psychological scientists be able to help with the current challenges humanity faces, such as global warming, war, inequality, and mental illness?
5. What can science study and what is outside the realm of science? What questions are impossible for scientists to study?
6. Some claim that science will replace religion by providing sound knowledge instead of myths to explain the world. They claim that science is a much more reliable source of solutions to problems such as disease than is religion. What do you think? Will science replace religion, and should it?
7. Are there human behaviors that should not be studied? Are some things so sacred or dangerous that we should not study them?

Vocabulary

Empirical methods: Approaches to inquiry that are tied to actual measurement and observation.
Ethics: Professional guidelines that offer researchers a template for making decisions that protect research participants from potential harm and that help steer scientists away from conflicts of interest or other situations that might compromise the integrity of their research.
Hypotheses: Logical ideas that can be tested.
Systematic observation: The careful observation of the natural world with the aim of better understanding it. Observations provide the basic data that allow scientists to track, tally, or otherwise organize information about the natural world.
Theories: Groups of closely related phenomena or observations.
By Erin I. Smith California Baptist University

We are bombarded every day with claims about how the world works, claims that have a direct impact on how we think about and solve problems in society and our personal lives. This module explores important considerations for evaluating the trustworthiness of such claims by contrasting scientific thinking with everyday observations (also known as “anecdotal evidence”).

learning objectives

• Compare and contrast conclusions based on scientific and everyday inductive reasoning.
• Understand why scientific conclusions and theories are trustworthy, even if they are not able to be proven.
• Articulate what it means to think like a psychological scientist, considering qualities of good scientific explanations and theories.
• Discuss science as a social activity, comparing and contrasting facts and values.

Introduction

Why are some people so much happier than others? Is it harmful for children to have imaginary companions? How might students study more effectively? Even if you’ve never considered these questions before, you probably have some guesses about their answers. Maybe you think getting rich or falling in love leads to happiness. Perhaps you view imaginary friends as expressions of a dangerous lack of realism. What’s more, if you were to ask your friends, they would probably also have opinions about these questions—opinions that may even differ from your own. A quick internet search would yield even more answers. We live in the “Information Age,” with people having access to more explanations and answers than at any other time in history. But, although the quantity of information is continually increasing, it’s always good practice to consider the quality of what you read or watch: Not all information is equally trustworthy. The trustworthiness of information is especially important in an era when “fake news,” urban myths, misleading “click-bait,” and conspiracy theories compete for our attention alongside well-informed conclusions grounded in evidence. Determining what information is well-informed is a crucial concern and a central task of science. Science is a way of using observable data to help explain and understand the world around us in a trustworthy way. In this module, you will learn about scientific thinking. You will come to understand how scientific research informs our knowledge and helps us create theories. You will also come to appreciate how scientific reasoning is different from the types of reasoning people often use to form personal opinions.

Scientific Versus Everyday Reasoning

Each day, people offer statements as if they are facts, such as, “It looks like rain today,” or, “Dogs are very loyal.” These conclusions represent hypotheses about the world: best guesses as to how the world works. Scientists also draw conclusions, claiming things like, “There is an 80% chance of rain today,” or, “Dogs tend to protect their human companions.” You’ll notice that the two examples of scientific claims use less certain language and are more likely to be associated with probabilities. Understanding the similarities and differences between scientific and everyday (non-scientific) statements is essential to our ability to accurately evaluate the trustworthiness of various claims. Scientific and everyday reasoning both employ induction: drawing general conclusions from specific observations.
For example, a person’s opinion that cramming for a test increases performance may be based on her memory of passing an exam after pulling an all-night study session. Similarly, a researcher’s conclusion against cramming might be based on studies comparing the test performances of people who studied the material in different ways (e.g., cramming versus study sessions spaced out over time). In these scenarios, both scientific and everyday conclusions are drawn from a limited sample of potential observations. The process of induction alone, then, does not seem sufficient to provide trustworthy information, given that it can yield contradictory conclusions. What should a student who wants to perform well on exams do? One source of information encourages her to cram, while another suggests that spacing out her studying time is the best strategy. To make the best decision with the information at hand, we need to appreciate the differences between personal opinions and scientific statements, which requires an understanding of science and the nature of scientific reasoning. There are generally agreed-upon features that distinguish scientific thinking—and the theories and data generated by it—from everyday thinking. A short list of some of the commonly cited features of scientific theories and data is shown in Table 1. One additional feature of modern science not included in this list but prevalent in scientists’ thinking and theorizing is falsifiability, a feature that has so permeated scientific practice that it warrants additional clarification. In the early 20th century, Karl Popper (1902-1994) suggested that science can be distinguished from pseudoscience (or just everyday reasoning) because scientific claims are capable of being falsified. That is, a claim can be conceivably demonstrated to be untrue. For example, a person might claim that “all people are right-handed.” This claim can be tested and—ultimately—thrown out because it can be shown to be false: There are people who are left-handed. An easy rule of thumb is to not get confused by the term “falsifiable” but to understand that—more or less—it means testable. On the other hand, some claims cannot be tested and falsified. Imagine, for instance, that a magician claims that he can teach people to move objects with their minds. The trick, he explains, is to truly believe in one’s ability for it to work. When his students fail to budge chairs with their minds, the magician scolds, “Obviously, you don’t truly believe.” The magician’s claim does not qualify as falsifiable because there is no way to disprove it. It is unscientific. Popper was particularly irritated by nonscientific claims because he believed they were a threat to the science of psychology. Specifically, he was dissatisfied with Freud’s explanations for mental illness. Freud believed that when a person suffers a mental illness it is often due to problems stemming from childhood. For instance, imagine a person who grows up to be an obsessive perfectionist. If she were raised by messy, relaxed parents, Freud might argue that her adult perfectionism is a reaction to her early family experiences—an effort to maintain order and routine instead of chaos. Alternatively, imagine the same person being raised by harsh, orderly parents. In this case, Freud might argue that her adult tidiness is simply her internalizing her parents’ way of being.
As you can see, according to Freud’s rationale, both opposing scenarios are possible; no matter what the disorder, Freud’s theory could explain its childhood origin—thus failing to meet the principle of falsifiability. Popper argued against statements that could not be falsified. He claimed that they blocked scientific progress: There was no way to advance, refine, or refute knowledge based on such claims. Popper’s solution was a powerful one: If science showed all the possibilities that were not true, we would be left only with what is true. That is, we need to be able to articulate—beforehand—the kinds of evidence that will disprove our hypothesis and cause us to abandon it. This may seem counterintuitive. For example, if a scientist wanted to establish a comprehensive understanding of why car accidents happen, she would systematically test all potential causes: alcohol consumption, speeding, using a cell phone, fiddling with the radio, wearing sandals, eating, chatting with a passenger, etc. A complete understanding could only be achieved once all possible explanations were explored and either falsified or not. After all the testing was concluded, the evidence would be evaluated against the criteria for falsification, and only the real causes of accidents would remain. The scientist could dismiss certain claims (e.g., sandals lead to car accidents) and keep only those supported by research (e.g., using a mobile phone while driving increases risk). It might seem absurd that a scientist would need to investigate so many alternative explanations, but it is exactly how we rule out bad claims. Of course, many explanations are complicated and involve multiple causes—as with car accidents, as well as psychological phenomena. Test Yourself 1: Can It Be Falsified? Which of the following hypotheses can be falsified? For each, be sure to consider what kind of data could be collected to demonstrate that a statement is not true. A. Chocolate tastes better than pasta. B. We live in the most violent time in history. C. Time can run backward as well as forward. D. There are planets other than Earth that have water on them. [See answer at end of this module] Although the idea of falsification remains central to scientific data and theory development, these days it’s not used strictly the way Popper originally envisioned it. To begin with, scientists aren’t solely interested in demonstrating what isn’t. Scientists are also interested in providing descriptions and explanations for the way things are. We want to describe different causes and the various conditions under which they occur. We want to discover when young children start speaking in complete sentences, for example, or whether people are happier on the weekend, or how exercise impacts depression. These explorations require us to draw conclusions from limited samples of data. In some cases, these data seem to fit with our hypotheses and in others they do not. This is where interpretation and probability come in. The Interpretation of Research Results Imagine a researcher wanting to examine the hypothesis—a specific prediction based on previous research or scientific theory—that caffeine enhances memory. She knows there are several published studies that suggest this might be the case, and she wants to further explore the possibility. She designs an experiment to test this hypothesis. She randomly assigns some participants a cup of fully caffeinated tea and some a cup of herbal tea. 
All the participants are instructed to drink up, study a list of words, then complete a memory test. There are three possible outcomes of this proposed study: 1. The caffeine group performs better (support for the hypothesis). 2. The no-caffeine group performs better (evidence against the hypothesis). 3. There is no difference in the performance between the two groups (also evidence against the hypothesis). Let’s look, from a scientific point of view, at how the researcher should interpret each of these three possibilities. First, if the results of the memory test reveal that the caffeine group performs better, this is a piece of evidence in favor of the hypothesis: It appears, at least in this case, that caffeine is associated with better memory. It does not, however, prove that caffeine is associated with better memory. There are still many questions left unanswered. How long does the memory boost last? Does caffeine work the same way with people of all ages? Is there a difference in memory performance between people who drink caffeine regularly and those who never drink it? Could the results be a freak occurrence? Because of these uncertainties, we do not say that a study—especially a single study—proves a hypothesis. Instead, we say the results of the study offer evidence in support of the hypothesis. Even if we tested this across 10,000 or 100,000 people, we still could not use the word “proven” to describe this phenomenon. This is because inductive reasoning is based on probabilities. Probabilities are always a matter of degree; they may be extremely likely or unlikely. Science is better at shedding light on the likelihood—or probability—of something than at proving it. In this way, data are still highly useful even if they don’t fit Popper’s absolute standards. The science of meteorology helps illustrate this point. You might look at your local weather forecast and see a high likelihood of rain. This is because the meteorologist has used inductive reasoning to create her forecast. She has taken current observations—lots of dense clouds coming toward your city—and compared them to historical weather patterns associated with rain, making a reasonable prediction of a high probability of rain. The meteorologist has not proven it will rain, however, by pointing out the oncoming clouds. Proof is more associated with deductive reasoning. Deductive reasoning starts with general principles that are applied to specific instances (the reverse of inductive reasoning). When the general principles, or premises, are true, and the structure of the argument is valid, the conclusion is, by definition, proven; it must be so. A deductive truth must apply in all relevant circumstances. For example, all living cells contain DNA. From this, you can reason—deductively—that any specific living cell (of an elephant, or a person, or a snake) will therefore contain DNA. Given the complexity of psychological phenomena, which involve many contributing factors, it is nearly impossible to make these types of broad statements with certainty. Test Yourself 2: Inductive or Deductive? A. The stove was on and the water in the pot was boiling over. The front door was standing open. These clues suggest the homeowner left unexpectedly and in a hurry. B. Gravity is associated with mass. Because the moon has a smaller mass than the Earth, it should have weaker gravity. C. Students don’t like to pay for high-priced textbooks. It is likely that many students in the class will opt not to purchase a book. D. 
To earn a college degree, students need 100 credits. Janine has 85 credits, so she cannot graduate. [See answer at end of this module] The second possible result from the caffeine-memory study is that the group that had no caffeine demonstrates better memory. This result is the opposite of what the researcher expects to find (her hypothesis). Here, the researcher must admit the evidence does not support her hypothesis. She must be careful, however, not to extend that interpretation to other claims. For example, finding increased memory in the no-caffeine group would not be evidence that caffeine harms memory. Again, there are too many unknowns. Is this finding a freak occurrence, perhaps based on an unusual sample? Is there a problem with the design of the study? The researcher doesn’t know. She simply knows that she was not able to observe support for her hypothesis. There is at least one additional consideration: The researcher originally developed her caffeine-benefits-memory hypothesis based on conclusions drawn from previous research. That is, previous studies found results that suggested caffeine boosts memory. The researcher’s single study should not outweigh the conclusions of many studies. Perhaps the earlier research employed participants of different ages or who had different baseline levels of caffeine intake. This new study simply becomes a piece of fabric in the overall quilt of studies of the caffeine-memory relationship. It does not, on its own, definitively falsify the hypothesis. Finally, it’s possible that the results show no difference in memory between the two groups. How should the researcher interpret this? How would you? In this case, the researcher once again has to admit that she has not found support for her hypothesis. Interpreting the results of a study—regardless of outcome—rests on the quality of the observations from which those results are drawn. If you learn, say, that each group in a study included only four participants, or that they were all over 90 years old, you might have concerns. Specifically, you should be concerned that the observations, even if accurate, aren’t representative of the general population. This is one of the defining differences between conclusions drawn from personal anecdotes and those drawn from scientific observations. Anecdotal evidence—derived from personal experience and unsystematic observations (e.g., “common sense”)—is limited by the quality and representativeness of observations, and by memory shortcomings. Well-designed research, on the other hand, relies on observations that are systematically recorded, of high quality, and representative of the population it claims to describe. Why Should I Trust Science If It Can’t Prove Anything? It’s worth delving a bit deeper into why we ought to trust the scientific inductive process, even when it relies on limited samples that don’t offer absolute “proof.” To do this, let’s examine a widespread practice in psychological science: null-hypothesis significance testing. To understand this concept, let’s begin with another research example. Imagine, for instance, that a researcher is curious about the way maturity affects academic performance. She might have a hypothesis that mature students are more likely to be responsible about studying and completing homework and, therefore, will do better in their courses. To test this hypothesis, the researcher needs a measure of maturity and a measure of course performance. 
She might calculate the correlation—or relationship—between student age (her measure of maturity) and points earned in a course (her measure of academic performance). Ultimately, the researcher is interested in the likelihood—or probability—that these two variables closely relate to one another. Null-hypothesis significance testing (NHST) assesses the probability of obtaining the collected data (the observations) if there were no relationship between the variables in the study. Using our example, the NHST would test the probability that the researcher would find a link between age and class performance if there were, in reality, no such link. Now, here’s where it gets a little complicated. NHST involves a null hypothesis, a statement that two variables are not related (in this case, that student maturity and academic performance are not related in any meaningful way). NHST also involves an alternative hypothesis, a statement that two variables are related (in this case, that student maturity and academic performance go together). To evaluate these two hypotheses, the researcher collects data. The researcher then compares what chance alone would lead her to expect (probability) with what she actually finds (the collected data) to determine whether she can falsify, or reject, the null hypothesis in favor of the alternative hypothesis. How does she do this? By looking at the distribution of the data. The distribution is the spread of values—in our example, the numeric values of students’ scores in the course. The researcher will test her hypothesis by comparing the observed distribution of grades earned by older students to those earned by younger students, recognizing that some distributions are more or less likely. Your intuition tells you, for example, that the chances of every single person in the course getting a perfect score are lower than their scores being distributed across all levels of performance. The researcher can use a probability table to assess the likelihood of any distribution she finds in her class. These tables reflect the work, over the past 200 years, of mathematicians and scientists from a variety of fields. You can see, in Table 2a, an example of an expected distribution if the grades were normally distributed (most are average, and relatively few are amazing or terrible). In Table 2b, you can see possible results of this imaginary study, and can clearly see how they differ from the expected distribution. In the process of testing these hypotheses, there are four possible outcomes. These are determined by two factors: 1) reality, and 2) what the researcher finds (see Table 3). The best possible outcome is accurate detection. This means that the researcher’s conclusion mirrors reality. In our example, let’s pretend the more mature students do perform slightly better. If this is what the researcher finds in her data, her analysis qualifies as an accurate detection of reality. Another form of accurate detection is when a researcher finds no evidence for a phenomenon, and that phenomenon doesn’t actually exist anyway! Using this same example, let’s now pretend that maturity has nothing to do with academic performance. Perhaps academic performance is instead related to intelligence or study habits. If the researcher finds no evidence for a link between maturity and grades and none actually exists, she will have also achieved accurate detection. There are a couple of ways that research conclusions might be wrong. 
One is referred to as a type I error—when the researcher concludes there is a relationship between two variables but, in reality, there is not. Back to our example: Let’s now pretend there’s no relationship between maturity and grades, but the researcher still finds one. Why does this happen? It may be that her sample, by chance, includes older students who also have better study habits and perform better: The researcher has “found” a relationship (the data appearing to show age as significantly correlated with academic performance), but the truth is that the apparent relationship is purely coincidental—the result of these specific older students in this particular sample having better-than-average study habits (the real cause of the relationship). They may have always had superior study habits, even when they were young. Another possible outcome of NHST is a type II error, when the data fail to show a relationship between variables that actually exists. In our example, this time pretend that maturity is—in reality—associated with academic performance, but the researcher doesn’t find it in her sample. Perhaps it was just her bad luck that her older students were having an off day, suffering from test anxiety, or being uncharacteristically careless with their homework: The peculiarities of her particular sample, by chance, prevent the researcher from identifying the real relationship between maturity and academic performance. These types of errors might make you worry that there is just no way to tell whether data are any good or not. Researchers share your concerns, and address them by using probability values (p-values) to set a threshold for type I or type II errors. When researchers write that a particular finding is “significant at a p < .05 level,” they’re saying that if the null hypothesis were true and the same study were repeated 100 times, we should expect a result this extreme to occur—by chance—fewer than five times. That is, in this case, a Type I error is unlikely. Scholars sometimes argue over the exact threshold that should be used for probability. The most common in psychological science are .05 (5% chance), .01 (1% chance), and .001 (1/10th of 1% chance). Remember, psychological science doesn’t rely on definitive proof; it’s about the probability of seeing a specific result (a short simulation sketch at the end of this module makes this logic concrete). This is also why it’s so important that scientific findings be replicated in additional studies. It’s because of such methodologies that science is generally trustworthy. Not all claims and explanations are equal; some conclusions are better bets, so to speak. Scientific claims are more likely to be correct and predict real outcomes than “common sense” opinions and personal anecdotes. This is because researchers consider how to best prepare and measure their subjects, systematically collect data from large and—ideally—representative samples, and test their findings against probability. Scientific Theories The knowledge generated from research is organized according to scientific theories. A scientific theory is a comprehensive framework for making sense of evidence regarding a particular phenomenon. When scientists talk about a theory, they mean something different from how the term is used in everyday conversation. In common usage, a theory is an educated guess—as in, “I have a theory about which team will make the playoffs,” or, “I have a theory about why my sister is always running late for appointments.” Both of these beliefs are liable to be heavily influenced by many untrustworthy factors, such as personal opinions and memory biases. 
A scientific theory, however, enjoys support from many research studies, collectively providing evidence, including, but not limited to, that which has falsified competing explanations. A key component of good theories is that they describe, explain, and predict in a way that can be empirically tested and potentially falsified. Theories are open to revision if new evidence comes to light that compels reexamination of the accumulated, relevant data. In ancient times, for instance, people thought the Sun traveled around the Earth. This seemed to make sense and fit with many observations. In the 16th century, however, astronomers began systematically charting visible objects in the sky, and, over a 50-year period, with repeated testing, critique, and refinement, they provided evidence for a revised theory: The Earth and other cosmic objects revolve around the Sun. In science, we believe what the best available data tell us. If better data come along, we must be willing to change our views in accordance with the new evidence. Is Science Objective? Thomas Kuhn (2012), a historian of science, argued that science, as an activity conducted by humans, is a social activity. As such, it is—according to Kuhn—subject to the same psychological influences as all human activities. Specifically, Kuhn suggested that there is no such thing as objective theory or data; all of science is informed by values. Scientists cannot help but let personal/cultural values, experiences, and opinions influence the types of questions they ask and how they make sense of what they find in their research. Kuhn’s argument highlights a distinction between facts (information about the world) and values (beliefs about the way the world is or ought to be). This distinction is an important one, even if it is not always clear. To illustrate the relationship between facts and values, consider the problem of global warming. A vast accumulation of evidence (facts) substantiates the adverse impact that human activity has on the levels of greenhouse gases in Earth’s atmosphere, leading to changing weather patterns. There is also a set of beliefs (values), shared by many people, that influences their choices and behaviors in an attempt to address that impact (e.g., purchasing electric vehicles, recycling, bicycle commuting). Our values—in this case, that Earth as we know it is in danger and should be protected—influence how we engage with facts. People (including scientists) who strongly endorse this value, for example, might be more attentive to research on renewable energy. The primary point of this illustration is that (contrary to the image of scientists as outside observers to the facts, gathering them neutrally and without bias from the natural world) all science—especially social sciences like psychology—involves values and interpretation. As a result, science functions best when people with diverse values and backgrounds work collectively to understand complex natural phenomena. Indeed, science can benefit from multiple perspectives. One approach to achieving this is through levels of analysis. Levels of analysis is the idea that a single phenomenon may be explained at different levels simultaneously. Remember the question concerning cramming for a test versus studying over time? It can be answered at a number of different levels of analysis. At a low level, we might use brain scanning technologies to investigate whether biochemical processes differ between the two study strategies. 
At a higher level—the level of thinking—we might investigate processes of decision making (what to study) and ability to focus, as they relate to cramming versus spaced practice. At even higher levels, we might be interested in real world behaviors, such as how long people study using each of the strategies. Similarly, we might be interested in how the presence of others influences learning across these two strategies. The levels-of-analysis perspective suggests that one level is not more correct—or truer—than another; the appropriateness of each depends on the specifics of the question asked. Ultimately, levels of analysis would suggest that we cannot understand the world around us, including human psychology, by reducing the phenomenon to only the biochemistry of genes and dynamics of neural networks. But, neither can we understand humanity without considering the functions of the human nervous system. Science in Context There are many ways to interpret the world around us. People rely on common sense, personal experience, and faith, in combination and to varying degrees. All of these offer legitimate benefits to navigating one’s culture, and each offers a unique perspective, with specific uses and limitations. Science provides another important way of understanding the world and, while it has many crucial advantages, as with all methods of interpretation, it also has limitations. Understanding the limits of science—including its subjectivity and uncertainty—does not render it useless. Because it is systematic, using testable, reliable data, it can allow us to determine causality and can help us generalize our conclusions. By understanding how scientific conclusions are reached, we are better equipped to use science as a tool of knowledge. Answer - Test Yourself 1: Can It Be Falsified? Answer explained: There are 4 hypotheses presented. Basically, the question asks “Which of these could be tested and demonstrated to be false?” We can eliminate answers A, B, and C. A is a matter of personal opinion. C is a concept for which there are currently no existing measures. B is a little trickier. A person could look at data on wars, assaults, and other forms of violence to draw a conclusion about which period is the most violent. The problem here is that we do not have data for all time periods, and there is no clear guide to which data should be used to address this hypothesis. The best answer is D, because we have the means to view other planets and to determine whether there is water on them (for example, Mars has ice). Answer - Test Yourself 2: Inductive or Deductive Answer explained: This question asks you to consider whether each of the four examples represents inductive or deductive reasoning. A) Inductive—it is possible to draw the conclusion—the homeowner left in a hurry—from specific observations such as the stove being on and the door being open. B) Deductive—starting with a general principle (gravity is associated with mass), we draw a conclusion about the moon having weaker gravity than does the Earth because it has smaller mass. C) Deductive—starting with a general principle (students do not like to pay for textbooks) it is possible to make a prediction about likely student behavior (they will not purchase textbooks). Note that this is a case of prediction rather than using observations. D) Deductive—starting with a general principle (students need 100 credits to graduate) it is possible to draw a conclusion about Janine (she cannot graduate because she has fewer than the 100 credits required). 
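A brief illustration may help tie the earlier ideas about p-values and Type I errors together. The following Python sketch is an editorial addition, not part of the original module, and its data are entirely made up: it simulates 1,000 studies in a world where student age and course points are truly unrelated, then reports how strong a correlation the luckiest 5% of those studies produce anyway. A real study whose correlation merely cleared that chance-level bar by accident would be a Type I error; a result well beyond the bar is what researchers call statistically significant at the .05 level.

import random
import statistics

def correlation(xs, ys):
    # Pearson correlation coefficient, computed from scratch
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(1)
n_students, n_studies = 25, 1000
null_rs = []
for _ in range(n_studies):
    # A world where the null hypothesis is true: ages and course points
    # are generated independently, so they are genuinely unrelated.
    ages = [random.uniform(18, 40) for _ in range(n_students)]
    points = [random.uniform(0, 100) for _ in range(n_students)]
    null_rs.append(abs(correlation(ages, points)))

null_rs.sort()
cutoff = null_rs[int(0.95 * n_studies)]
print(f"With no real relationship, 5% of studies still show |r| above {cutoff:.2f}")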
Outside Resources Article: A meta-analysis of research on combating misinformation http://journals.sagepub.com/doi/full...56797617714579 Article: Fixing the Problem of Liberal Bias in Social Psychology https://www.scientificamerican.com/a...al-psychology/ Article: Flat out science rejection is rare, but motivated rejection of key scientific claims is relatively common. https://blogs.scientificamerican.com...-anti-science/ Article: How Anecdotal Evidence Can Undermine Scientific Results https://www.scientificamerican.com/a...tific-results/ Article: How fake news is affecting your memory http://www.nature.com/news/how-faceb...memory-1.21596 Article: New Study Indicates Existence of Eight Conservative Social Psychologists heterodoxacademy.org/2016/01...psychologists/ Article: The Objectivity Thing (or, Why Science Is a Team Sport). https://blogs.scientificamerican.com...-a-team-sport/ Article: Thomas Kuhn: the man who changed the way the world looked at science https://www.theguardian.com/science/...ic-revolutions Video: Karl Popper's Falsification - Karl Popper believed that human knowledge progresses through 'falsification'. A theory or idea shouldn't be described as scientific unless it could, in principle, be proven false. Video: Karl Popper, Science, and Pseudoscience: Crash Course Philosophy #8 Video: Simple visualization of Type I and Type II errors Web: An overview and history of the concept of fake news. en.wikipedia.org/wiki/Fake_news Web: Heterodox Academy - an organization focused on improving "the quality of research and education in universities by increasing viewpoint diversity, mutual understanding, and constructive disagreement". https://heterodoxacademy.org/ Web: The People's Science - An organization dedicated to removing barriers between scientists and society. See examples of how researchers, including psychologists, are sharing their research with students, colleagues and the general public. thepeoplesscience.org/science...uman-sciences/ Discussion Questions 1. When you think of a “scientist,” what image comes to mind? How is this similar to or different from the image of a scientist described in this module? 2. What makes the inductive reasoning used in the scientific process different than the inductive reasoning we employ in our daily lives? How do these differences influence our trust in the conclusions? 3. Why aren’t horoscopes considered scientific? 4. If science cannot “prove” something, why do you think so many media reports of scientific research use this word? As an educated consumer of research, what kinds of questions should you ask when reading these secondary reports? 5. In thinking about the application of research in our lives, which is more meaningful: individual research studies and their conclusions or scientific theories? Why? 6. Although many people believe the conclusions offered by science generally, there is often a resistance to specific scientific conclusions or findings. Why might this be? Vocabulary Anecdotal evidence A piece of biased evidence, usually drawn from personal experience, used to support a conclusion that may or may not be correct. Causality In research, the determination that one variable causes—is responsible for—an effect. Correlation In statistics, the measure of relatedness of two or more variables. Data (also called observations) In research, information systematically collected for analysis and interpretation. 
Deductive reasoning A form of reasoning in which a given premise determines the interpretation of specific observations (e.g., All birds have feathers; since a duck is a bird, it has feathers). Distribution In statistics, the relative frequency that a particular value occurs for each possible value of a given variable. Empirical Concerned with observation and/or the ability to verify a claim. Fact Objective information about the world. Falsify In science, the ability of a claim to be tested and—possibly—refuted; a defining feature of science. Generalize In research, the degree to which one can extend conclusions drawn from the findings of a study to other groups or situations not included in the study. Hypothesis A tentative explanation that is subject to testing. Induction To draw general conclusions from specific observations. Inductive reasoning A form of reasoning in which a general conclusion is inferred from a set of observations (e.g., noting that “the driver in that car was texting; he just cut me off then ran a red light!” (a specific observation), which leads to the general conclusion that texting while driving is dangerous). Levels of analysis The idea that a single phenomenon can be described and explained at multiple, complementary levels. Null-hypothesis significance testing (NHST) In statistics, a test created to determine the chances of obtaining a result as extreme as the one observed if the null hypothesis were actually true. Objective Being free of personal bias. Population In research, all the people belonging to a particular group (e.g., the population of left-handed people). Probability A measure of the degree of certainty of the occurrence of an event. Probability values In statistics, the probability that a result at least as extreme as the one observed would arise by chance; compared against an established threshold (e.g., .05) to determine statistical significance. Pseudoscience Beliefs or practices that are presented as being scientific, or which are mistaken for being scientific, but which are not scientific (e.g., astrology, the use of celestial bodies to make predictions about human behaviors, and which presents itself as founded in astronomy, the actual scientific study of celestial objects. Astrology is a pseudoscience unable to be falsified, whereas astronomy is a legitimate scientific discipline). Representative In research, the degree to which a sample is a typical example of the population from which it is drawn. Sample In research, a number of people selected from a population to serve as an example of that population. Scientific theory An explanation for observed phenomena that is empirically well-supported, consistent, and fruitful (predictive). Type I error In statistics, the error of rejecting the null hypothesis when it is true. Type II error In statistics, the error of failing to reject the null hypothesis when it is false. Value Belief about the way things should be.
By Beth Chance and Allan Rossman California Polytechnic State University, San Luis Obispo As our society increasingly calls for evidence-based decision making, it is important to consider how and when we can draw valid inferences from data. This module will use four recent research studies to highlight key elements of a statistical investigation. learning objectives • Define basic elements of a statistical investigation. • Describe the role of p-values and confidence intervals in statistical inference. • Describe the role of random sampling in generalizing conclusions from a sample to a population. • Describe the role of random assignment in drawing cause-and-effect conclusions. • Critique statistical studies. Introduction Does drinking coffee actually increase your life expectancy? A recent study (Freedman, Park, Abnet, Hollenbeck, & Sinha, 2012) found that men who drank at least six cups of coffee a day had a 10% lower chance of dying (women 15% lower) than those who drank none. Does this mean you should pick up or increase your own coffee habit? Modern society has become awash in studies such as this; you can read about several such studies in the news every day. Moreover, data abound everywhere in modern life. Conducting such a study well, and interpreting the results of such studies well for making informed decisions or setting policies, requires understanding basic ideas of statistics, the science of gaining insight from data. Rather than relying on anecdote and intuition, statistics allows us to systematically study phenomena of interest. Key components of a statistical investigation are: • Planning the study: Start by asking a testable research question and deciding how to collect data. For example, how long was the study period of the coffee study? How many people were recruited for the study, how were they recruited, and from where? How old were they? What other variables were recorded about the individuals, such as smoking habits, on the comprehensive lifestyle questionnaires? Were changes made to the participants’ coffee habits during the course of the study? • Examining the data: What are appropriate ways to examine the data? What graphs are relevant, and what do they reveal? What descriptive statistics can be calculated to summarize relevant aspects of the data, and what do they reveal? What patterns do you see in the data? Are there any individual observations that deviate from the overall pattern, and what do they reveal? For example, in the coffee study, did the proportions differ when we compared the smokers to the non-smokers? • Inferring from the data: What are valid statistical methods for drawing inferences “beyond” the data you collected? In the coffee study, is the 10%–15% reduction in risk of death something that could have happened just by chance? • Drawing conclusions: Based on what you learned from your data, what conclusions can you draw? Who do you think these conclusions apply to? (Were the people in the coffee study older? Healthy? Living in cities?) Can you draw a cause-and-effect conclusion about your treatments? (Are scientists now saying that the coffee drinking is the cause of the decreased risk of death?) Notice that the numerical analysis (“crunching numbers” on the computer) comprises only a small part of the overall statistical investigation. In this module, you will see how we can answer some of these questions and what questions you should be asking about any statistical investigation you read about. 
Distributional Thinking When data are collected to address a particular question, an important first step is to think of meaningful ways to organize and examine the data. The most fundamental principle of statistics is that data vary. The pattern of that variation is crucial to capture and to understand. Often, careful presentation of the data will address many of the research questions without requiring more sophisticated analyses. It may, however, point to additional questions that need to be examined in more detail. Example 1: Researchers investigated whether cancer pamphlets are written at an appropriate level to be read and understood by cancer patients (Short, Moriarty, & Cooley, 1995). Tests of reading ability were given to 63 patients. In addition, readability level was determined for a sample of 30 pamphlets, based on characteristics such as the lengths of words and sentences in the pamphlet. The results, reported in terms of grade levels, are displayed in Table 1. These two variables reveal two fundamental aspects of statistical thinking: • Data vary. More specifically, values of a variable (such as reading level of a cancer patient or readability level of a cancer pamphlet) vary. • Analyzing the pattern of variation, called the distribution of the variable, often reveals insights. Addressing the research question of whether the cancer pamphlets are written at appropriate levels for the cancer patients requires comparing the two distributions. A naïve comparison might focus only on the centers of the distributions. Both medians turn out to be ninth grade, but considering only medians ignores the variability and the overall distributions of these data. A more illuminating approach is to compare the entire distributions, for example with a graph, as in Figure 2.3.1. The figure makes clear that the two distributions are not well aligned at all. The most glaring discrepancy is that many patients (17/63, or 27%, to be precise) have a reading level below that of the most readable pamphlet. These patients will need help to understand the information provided in the cancer pamphlets. Notice that this conclusion follows from considering the distributions as a whole, not simply measures of center or variability, and that the graph contrasts those distributions more immediately than the frequency tables. Statistical Significance Even when we find patterns in data, often there is still uncertainty in various aspects of the data. For example, there may be potential for measurement errors (even your own body temperature can fluctuate by almost 1 °F over the course of the day). Or we may only have a “snapshot” of observations from a more long-term process or only a small subset of individuals from the population of interest. In such cases, how can we determine whether patterns we see in our small set of data are convincing evidence of a systematic phenomenon in the larger process or population? Example 2: In a study reported in the November 2007 issue of Nature, researchers investigated whether pre-verbal infants take into account an individual’s actions toward others in evaluating that individual as appealing or aversive (Hamlin, Wynn, & Bloom, 2007). In one component of the study, 10-month-old infants were shown a “climber” character (a piece of wood with “googly” eyes glued onto it) that could not make it up a hill in two tries. 
Then the infants were shown two scenarios for the climber’s next try, one where the climber was pushed to the top of the hill by another character (“helper”), and one where the climber was pushed back down the hill by another character (“hinderer”). The infant was alternately shown these two scenarios several times. Then the infant was presented with two pieces of wood (representing the helper and the hinderer characters) and asked to pick one to play with. The researchers found that of the 16 infants who made a clear choice, 14 chose to play with the helper toy. One possible explanation for this clear majority result is that the helping behavior of the one toy increases the infants’ likelihood of choosing that toy. But are there other possible explanations? What about the color of the toy? Well, prior to collecting the data, the researchers arranged it so that each color and shape (red square and blue circle) would be seen by the same number of infants. Or maybe the infants had right-handed tendencies and so picked whichever toy was closer to their right hand? Well, prior to collecting the data, the researchers arranged it so half the infants saw the helper toy on the right and half on the left. Or, maybe the shapes of these wooden characters (square, triangle, circle) had an effect? Perhaps, but again, the researchers controlled for this by rotating which shape was the helper toy, the hinderer toy, and the climber. When designing experiments, it is important to control for as many variables that might affect the responses as possible. It is beginning to appear that the researchers accounted for all the other plausible explanations. But there is one more important consideration that cannot be controlled—if we did the study again with these 16 infants, they might not make the same choices. In other words, there is some randomness inherent in their selection process. Maybe each infant had no genuine preference at all, and it was simply “random luck” that led to 14 infants picking the helper toy. Although this random component cannot be controlled, we can apply a probability model to investigate the pattern of results that would occur in the long run if random chance were the only factor. If the infants were equally likely to pick either of the two toys, then each infant had a 50% chance of picking the helper toy. It’s like each infant tossed a coin, and if it landed heads, the infant picked the helper toy. So if we tossed a coin 16 times, could it land heads 14 times? Sure, it’s possible, but it turns out to be very unlikely. Getting 14 (or more) heads in 16 tosses is about as likely as tossing a coin and getting 9 heads in a row. This probability is referred to as a p-value. The p-value tells you how often a random process would give a result at least as extreme as what was found in the actual study, assuming there was nothing other than random chance at play. So, if we assume that each infant was choosing equally, then the probability that 14 or more out of 16 infants would choose the helper toy is found to be 0.0021. We have only two logical possibilities: either the infants have a genuine preference for the helper toy, or the infants have no preference (50/50) and an outcome that would occur only 2 times in 1,000 iterations happened in this study. Because this p-value of 0.0021 is quite small, we conclude that the study provides very strong evidence that these infants have a genuine preference for the helper toy. 
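The 0.0021 figure can be reproduced directly from the coin-tossing model just described. The short Python sketch below is an editorial illustration (not code from the study itself); it simply adds up the binomial probabilities of getting 14, 15, or 16 heads in 16 fair coin tosses.

from math import comb

n, k = 16, 14  # 16 infants; 14 chose the helper toy
# If each infant chooses at random, the number picking the helper toy
# behaves like the number of heads in 16 fair coin tosses.
p_value = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
print(f"P(14 or more of 16 by chance alone) = {p_value:.4f}")  # prints 0.0021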
We often compare the p-value to some cut-off value (called the level of significance, typically around 0.05). If the p-value is smaller than that cut-off value, then we reject the hypothesis that only random chance was at play here. In this case, these researchers would conclude that significantly more than half of the infants in the study chose the helper toy, giving strong evidence of a genuine preference for the toy with the helping behavior. Generalizability One limitation of the previous study is that the conclusion only applies to the 16 infants in the study. We don’t know much about how those 16 infants were selected. Suppose we want to select a subset of individuals (a sample) from a much larger group of individuals (the population) in such a way that conclusions from the sample can be generalized to the larger population. This is the question faced by pollsters every day. Example 3: The General Social Survey (GSS) is a survey on societal trends conducted every other year in the United States. Based on a sample of about 2,000 adult Americans, researchers make claims about what percentage of the U.S. population consider themselves to be “liberal,” what percentage consider themselves “happy,” what percentage feel “rushed” in their daily lives, and many other issues. The key to making these claims about the larger population of all American adults lies in how the sample is selected. The goal is to select a sample that is representative of the population, and a common way to achieve this goal is to select a random sample that gives every member of the population an equal chance of being selected for the sample. In its simplest form, random sampling involves numbering every member of the population and then using a computer to randomly select the subset to be surveyed. Most polls don’t operate exactly like this, but they do use probability-based sampling methods to select individuals from nationally representative panels. In 2004, the GSS reported that 817 of 977 respondents (or 83.6%) indicated that they always or sometimes feel rushed. This is a clear majority, but we again need to consider variation due to random sampling. Fortunately, we can use the same probability model we did in the previous example to investigate the probable size of this error. (Note, we can use the coin-tossing model when the actual population size is much, much larger than the sample size, as then we can still consider the probability to be the same for every individual in the sample.) This probability model predicts that the sample result will be within 3 percentage points of the population value (roughly 1 over the square root of the sample size, the margin of error). A statistician would conclude, with 95% confidence, that between 80.6% and 86.6% of all adult Americans in 2004 would have responded that they sometimes or always feel rushed. The key to the margin of error is that when we use a probability sampling method, we can make claims about how often (in the long run, with repeated random sampling) the sample result would fall within a certain distance from the unknown population value by chance (meaning by random sampling variation) alone. Conversely, non-random samples are often susceptible to bias, meaning the sampling method systematically over-represents some segments of the population and under-represents others. We also still need to consider other sources of bias, such as individuals not responding honestly. These sources of error are not measured by the margin of error. 
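For readers who want to see the arithmetic, here is a minimal Python sketch of the rule of thumb described above. It is an illustration only; real polling organizations use more refined formulas that account for survey design and weighting.

from math import sqrt

n = 977               # GSS respondents in the example
p_hat = 817 / n       # sample proportion who feel rushed, about 0.836
margin = 1 / sqrt(n)  # rule of thumb: about 0.032, which the module rounds to 3 points
print(f"sample proportion = {p_hat:.1%}")
print(f"approximate 95% confidence interval: {p_hat - margin:.1%} to {p_hat + margin:.1%}")

With the margin rounded to exactly 3 percentage points, this reproduces the 80.6% to 86.6% interval reported above.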
Cause and Effect Conclusions In many research studies, the primary question of interest concerns differences between groups. Then the question becomes how were the groups formed (e.g., selecting people who already drink coffee vs. those who don’t). In some studies, the researchers actively form the groups themselves. But then we have a similar question—could any differences we observe in the groups be an artifact of that group-formation process? Or maybe the difference we observe in the groups is so large that we can discount a “fluke” in the group-formation process as a reasonable explanation for what we find? Example 4: A psychology study investigated whether people tend to display more creativity when they are thinking about intrinsic or extrinsic motivations (Ramsey & Schafer, 2002, based on a study by Amabile, 1985). The subjects were 47 people with extensive experience with creative writing. Subjects began by answering survey questions about either intrinsic motivations for writing (such as the pleasure of self-expression) or extrinsic motivations (such as public recognition). Then all subjects were instructed to write a haiku, and those poems were evaluated for creativity by a panel of judges. The researchers conjectured beforehand that subjects who were thinking about intrinsic motivations would display more creativity than subjects who were thinking about extrinsic motivations. The creativity scores from the 47 subjects in this study are displayed in Figure 2.3.2, where higher scores indicate more creativity. In this example, the key question is whether the type of motivation affects creativity scores. In particular, do subjects who were asked about intrinsic motivations tend to have higher creativity scores than subjects who were asked about extrinsic motivations? The figure reveals that both motivation groups saw considerable variability in creativity scores, and these scores have considerable overlap between the groups. In other words, it’s certainly not always the case that those with intrinsic motivations have higher creativity scores than those with extrinsic motivations, but there may still be a statistical tendency in this direction. (Psychologist Keith Stanovich (2013) refers to people’s difficulties with thinking about such probabilistic tendencies as “the Achilles heel of human cognition.”) The mean creativity score is 19.88 for the intrinsic group, compared to 15.74 for the extrinsic group, which supports the researchers’ conjecture. Yet comparing only the means of the two groups fails to consider the variability of creativity scores in the groups. We can measure variability with statistics using, for instance, the standard deviation: 5.25 for the extrinsic group and 4.40 for the intrinsic group. The standard deviations tell us that most of the creativity scores are within about 5 points of the mean score in each group. We see that the mean score for the intrinsic group lies within one standard deviation of the mean score for the extrinsic group. So, although there is a tendency for the creativity scores to be higher in the intrinsic group, on average, the difference is not extremely large. We again want to consider possible explanations for this difference. The study only involved individuals with extensive creative writing experience. Although this limits the population to which we can generalize, it does not explain why the mean creativity score was a bit larger for the intrinsic group than for the extrinsic group. Maybe women tend to receive higher creativity scores? 
Here is where we need to focus on how the individuals were assigned to the motivation groups. If only women were in the intrinsic motivation group and only men in the extrinsic group, then this would present a problem because we wouldn’t know if the intrinsic group did better because of the different type of motivation or because they were women. However, the researchers guarded against such a problem by randomly assigning the individuals to the motivation groups. Like flipping a coin, each individual was just as likely to be assigned to either type of motivation. Why is this helpful? Because this random assignment tends to balance out all the variables related to creativity we can think of, and even those we don’t think of in advance, between the two groups. So we should have a similar male/female split between the two groups; we should have a similar age distribution between the two groups; we should have a similar distribution of educational background between the two groups; and so on. Random assignment should produce groups that are as similar as possible except for the type of motivation, which presumably eliminates all those other variables as possible explanations for the observed tendency for higher scores in the intrinsic group. But does this always work? No—just by “luck of the draw,” the groups may be a little different prior to answering the motivation survey. So then the question is, is it possible that an unlucky random assignment is responsible for the observed difference in creativity scores between the groups? In other words, suppose each individual’s poem was going to get the same creativity score no matter which group they were assigned to, that the type of motivation in no way impacted their score. Then how often would the random-assignment process alone lead to a difference in mean creativity scores as large as (or larger than) 19.88 – 15.74 = 4.14 points? We again want to apply a probability model to approximate a p-value, but this time the model will be a bit different. Think of writing everyone’s creativity scores on an index card, shuffling up the index cards, and then dealing out 23 to the extrinsic motivation group and 24 to the intrinsic motivation group, and finding the difference in the group means. We (better yet, the computer) can repeat this process over and over to see how often, when the scores don’t change, random assignment leads to a difference in means at least as large as 4.14. Figure 2.3.3 shows the results from 1,000 such hypothetical random assignments for these scores. Only 2 of the 1,000 simulated random assignments produced a difference in group means of 4.14 or larger. In other words, the approximate p-value is 2/1000 = 0.002. This small p-value indicates that it would be very surprising for the random assignment process alone to produce such a large difference in group means. Therefore, as with Example 2, we have strong evidence that focusing on intrinsic motivations tends to increase creativity scores, as compared to thinking about extrinsic motivations. Notice that the previous statement implies a cause-and-effect relationship between motivation and creativity score; is such a strong conclusion justified? Yes, because of the random assignment used in the study. That should have balanced out any other variables between the two groups, so now that the small p-value convinces us that the higher mean in the intrinsic group wasn’t just a coincidence, the only reasonable explanation left is the difference in the type of motivation. 
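The index-card simulation is straightforward to sketch in code. In the Python illustration below, the creativity scores are simulated stand-ins generated to roughly match the reported group sizes, means, and standard deviations (the real data appear in Ramsey & Schafer, 2002), so the printed p-value will only approximate the 0.002 reported above.

import random
import statistics

random.seed(2)
# Hypothetical stand-ins for the 47 creativity scores, matching the study's
# group sizes (24 intrinsic, 23 extrinsic) and approximate means and SDs.
intrinsic = [round(random.gauss(19.9, 4.4), 1) for _ in range(24)]
extrinsic = [round(random.gauss(15.7, 5.3), 1) for _ in range(23)]

observed_diff = statistics.mean(intrinsic) - statistics.mean(extrinsic)

# "Shuffle the index cards": pool all 47 scores and repeatedly re-deal them
# into groups of 24 and 23, as if motivation had no effect on any score.
pooled = intrinsic + extrinsic
count = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:24]) - statistics.mean(pooled[24:])
    if diff >= observed_diff:
        count += 1

print(f"observed difference in means = {observed_diff:.2f}")
print(f"approximate p-value = {count / trials:.4f}")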
Can we generalize this conclusion to everyone? Not necessarily—we could cautiously generalize this conclusion to individuals with extensive experience in creative writing similar to the individuals in this study, but we would still want to know more about how these individuals were selected to participate. Conclusion Statistical thinking involves the careful design of a study to collect meaningful data to answer a focused research question, detailed analysis of patterns in the data, and drawing conclusions that go beyond the observed data. Random sampling is paramount to generalizing results from our sample to a larger population, and random assignment is key to drawing cause-and-effect conclusions. With both kinds of randomness, probability models help us assess how much random variation we can expect in our results, in order to determine whether our results could happen by chance alone and to estimate a margin of error. So where does this leave us with regard to the coffee study mentioned at the beginning of this module? We can answer many of the questions: • This was a 14-year study conducted by researchers at the National Cancer Institute. • The results were published in the June issue of the New England Journal of Medicine, a respected, peer-reviewed journal. • The study reviewed coffee habits of more than 402,000 people ages 50 to 71 from six states and two metropolitan areas. Those with cancer, heart disease, and stroke were excluded at the start of the study. Coffee consumption was assessed once at the start of the study. • About 52,000 people died during the course of the study. • People who drank between two and five cups of coffee daily showed a lower risk as well, but the amount of reduction increased for those drinking six or more cups. • The sample sizes were fairly large and so the p-values are quite small, even though percent reduction in risk was not extremely large (dropping from a 12% chance to about 10%–11%). • Whether coffee was caffeinated or decaffeinated did not appear to affect the results. • This was an observational study, so no cause-and-effect conclusions can be drawn between coffee drinking and increased longevity, contrary to the impression conveyed by many news headlines about this study. In particular, it’s possible that those with chronic diseases don’t tend to drink coffee. This study needs to be reviewed in the larger context of similar studies and consistency of results across studies, with the constant caution that this was not a randomized experiment. Although a statistical analysis can still “adjust” for other potential confounding variables, we are not yet convinced that researchers have identified them all or completely isolated why this decrease in death risk is evident. Researchers can now take the findings of this study and develop more focused studies that address new questions. Outside Resources Apps: Interactive web applets for teaching and learning statistics include the collection at http://www.rossmanchance.com/applets/ Video: P-Value extravaganza Web: Inter-university Consortium for Political and Social Research http://www.icpsr.umich.edu/index.html Web: The Consortium for the Advancement of Undergraduate Statistics https://www.causeweb.org/ Discussion Questions 1. Find a recent research article in your field and answer the following: What was the primary research question? How were individuals selected to participate in the study? Were summary results provided? How strong is the evidence presented in favor of or against the research question? 
Was random assignment used? Summarize the main conclusions from the study, addressing the issues of statistical significance, statistical confidence, generalizability, and cause and effect. Do you agree with the conclusions drawn from this study, based on the study design and the results presented? 2. Is it reasonable to use a random sample of 1,000 individuals to draw conclusions about all U.S. adults? Explain why or why not. Vocabulary Cause-and-effect Related to whether changes in one variable actually cause changes in another variable, as opposed to other variables being responsible for the association. Confidence interval An interval of plausible values for a population parameter; the interval of values within the margin of error of a statistic. Distribution The pattern of variation in data. Generalizability Related to whether the results from the sample can be generalized to a larger population. Margin of error The expected amount of random variation in a statistic; often defined for a 95% confidence level. Parameter A numerical result summarizing a population (e.g., mean, proportion). Population A larger collection of individuals that we would like to generalize our results to. P-value The probability of observing a particular outcome in a sample, or one more extreme, under a conjecture about the larger population or process. Random assignment Using a probability-based method to divide a sample into treatment groups. Random sampling Using a probability-based method to select a subset of individuals for the sample from the population. Sample The collection of individuals on which we collect data. Statistic A numerical result computed from a sample (e.g., mean, proportion). Statistical significance A result is statistically significant if it is unlikely to arise by chance alone.
Psychologists test research questions using a variety of methods. Most research relies on either correlations or experiments. With correlations, researchers measure variables as they naturally occur in people and compute the degree to which two variables go together. With experiments, researchers actively make changes in one variable and watch for changes in another variable. Experiments allow researchers to make causal inferences. Other types of methods include longitudinal and quasi-experimental designs. Many factors, including practical constraints, determine the type of methods researchers use. Often researchers survey people even though it would be better, but more expensive and time-consuming, to track them longitudinally. learning objectives • Articulate the difference between correlational and experimental designs. • Understand how to interpret correlations. • Understand how experiments help us to infer causality. • Understand how surveys relate to correlational and experimental research. • Explain what a longitudinal study is. • List a strength and weakness of different research designs. Research Designs In the early 1970s, a man named Uri Geller tricked the world: he convinced hundreds of thousands of people that he could bend spoons and slow down watches using only the power of his mind. In fact, if you were in the audience, you would have likely believed he had psychic powers. Everything looked authentic—this man had to have paranormal abilities! So, why have you probably never heard of him before? Because when Uri was asked to perform his miracles under controlled scientific conditions, he was no longer able to do them. That is, even though it seemed like he was doing the impossible, when he was tested by science, he proved to be nothing more than a clever magician. When we look at dinosaur bones to make educated guesses about extinct life, or systematically chart the heavens to learn about the relationships between stars and planets, or study magicians to figure out how they perform their tricks, we are forming observations—the foundation of science. Although we are all familiar with the saying “seeing is believing,” conducting science is more than just what your eyes perceive. Science is the result of systematic and intentional study of the natural world. And psychology is no different. In the movie Jerry Maguire, Cuba Gooding, Jr. became famous for using the phrase, “Show me the money!” In psychology, as in all sciences, we might say, “Show me the data!” One of the important steps in scientific inquiry is to test our research questions, otherwise known as hypotheses. However, there are many ways to test hypotheses in psychological research. Which method you choose will depend on the type of questions you are asking, as well as what resources are available to you. All methods have limitations, which is why the best research uses a variety of methods. Most psychological research can be divided into two types: experimental and correlational research. Experimental Research If somebody gave you \$20 that absolutely had to be spent today, how would you choose to spend it? Would you spend it on an item you’ve been eyeing for weeks, or would you donate the money to charity? Which option do you think would bring you the most happiness? If you’re like most people, you’d choose to spend the money on yourself (duh, right?). Our intuition is that we’d be happier if we spent the money on ourselves. 
Knowing that our intuition can sometimes be wrong, Professor Elizabeth Dunn (2008) at the University of British Columbia set out to conduct an experiment on spending and happiness. She gave each of the participants in her experiment \$20 and then told them they had to spend the money by the end of the day. Some of the participants were told they must spend the money on themselves, and some were told they must spend the money on others (either charity or a gift for someone). At the end of the day she measured participants’ levels of happiness using a self-report questionnaire. (But wait, how do you measure something like happiness when you can’t really see it? Psychologists measure many abstract concepts, such as happiness and intelligence, by beginning with operational definitions of the concepts. See the Noba modules on Intelligence [noba.to/ncb2h79v] and Happiness [noba.to/qnw7g32t], respectively, for more information on specific measurement strategies.) In an experiment, researchers manipulate, or cause changes, in the independent variable, and observe or measure any impact of those changes in the dependent variable. The independent variable is the one under the experimenter’s control, or the variable that is intentionally altered between groups. In the case of Dunn’s experiment, the independent variable was whether participants spent the money on themselves or on others. The dependent variable is the variable that is not manipulated at all, or the one where the effect happens. One way to help remember this is that the dependent variable “depends” on what happens to the independent variable. In our example, the participants’ happiness (the dependent variable in this experiment) depends on how the participants spend their money (the independent variable). Thus, any observed changes or group differences in happiness can be attributed to whom the money was spent on. What Dunn and her colleagues found was that, after all the spending had been done, the people who had spent the money on others were happier than those who had spent the money on themselves. In other words, spending on others causes us to be happier than spending on ourselves. Do you find this surprising? But wait! Doesn’t happiness depend on a lot of different factors—for instance, a person’s upbringing or life circumstances? What if some people had happy childhoods and that’s why they’re happier? Or what if some people dropped their toast that morning and it fell jam-side down and ruined their whole day? It is correct to recognize that these factors and many more can easily affect a person’s level of happiness. So how can we accurately conclude that spending money on others causes happiness, as in the case of Dunn’s experiment? The most important thing about experiments is random assignment. Participants don’t get to pick which condition they are in (e.g., participants didn’t choose whether they were supposed to spend the money on themselves versus others). The experimenter assigns them to a particular condition based on the flip of a coin or the roll of a die or any other random method. Why do researchers do this? With Dunn’s study, there is the obvious reason: you can imagine which condition most people would choose to be in, if given the choice. But another equally important reason is that random assignment makes it so the groups, on average, are similar on all characteristics except what the experimenter manipulates. 
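To see concretely why random assignment equates groups on average, here is a minimal simulation sketch in Python (the baseline happiness scores and group sizes are invented for illustration; they are not data from Dunn’s experiment):

    import random

    # 1,000 hypothetical participants with varying baseline happiness
    # (the kind of preexisting difference an experimenter cannot control).
    participants = [random.gauss(5.0, 1.0) for _ in range(1000)]

    # Random assignment: shuffle, then split into the two conditions.
    random.shuffle(participants)
    self_spending = participants[:500]
    other_spending = participants[500:]

    def mean(scores):
        return sum(scores) / len(scores)

    # The two groups' baseline means come out nearly identical, so any
    # later difference in happiness can be credited to the manipulation.
    print(round(mean(self_spending), 2), round(mean(other_spending), 2))

Run this a few times: the two baseline means stay within a small margin of each other, which is exactly the logic spelled out next.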
By randomly assigning people to conditions (self-spending versus other-spending), some people with happy childhoods should end up in each condition. Likewise, some people who had dropped their toast that morning (or experienced some other disappointment) should end up in each condition. As a result, the distribution of all these factors will generally be consistent across the two groups, and this means that on average the two groups will be relatively equivalent on all these factors. Random assignment is critical to experimentation because if the only difference between the two groups is the independent variable, we can infer that the independent variable is the cause of any observable difference (e.g., in the amount of happiness they feel at the end of the day). Here’s another example of the importance of random assignment: Let’s say your class is going to form two basketball teams, and you get to be the captain of one team. The class is to be divided evenly between the two teams. If you get to pick the players for your team first, whom will you pick? You’ll probably pick the tallest members of the class or the most athletic. You probably won’t pick the short, uncoordinated people, unless there are no other options. As a result, your team will be taller and more athletic than the other team. But what if we want the teams to be fair? How can we do this when we have people of varying height and ability? All we have to do is randomly assign players to the two teams. Most likely, some tall and some short people will end up on your team, and some tall and some short people will end up on the other team. The average height of the teams will be approximately the same. That is the power of random assignment! Other considerations In addition to using random assignment, you should avoid introducing confounds into your experiments. Confounds are things that could undermine your ability to draw causal inferences. For example, if you wanted to test if a new happy pill will make people happier, you could randomly assign participants to take the happy pill or not (the independent variable) and compare these two groups on their self-reported happiness (the dependent variable). However, if some participants know they are getting the happy pill, they might develop expectations that influence their self-reported happiness. This is sometimes known as a placebo effect. Sometimes a person just knowing that he or she is receiving special treatment or something new is enough to actually cause changes in behavior or perception: In other words, even if the participants in the happy pill condition were to report being happier, we wouldn’t know if the pill was actually making them happier or if it was the placebo effect—an example of a confound. A related idea is participant demand. This occurs when participants try to behave in a way they think the experimenter wants them to behave. Placebo effects and participant demand often occur unintentionally. Even experimenter expectations can influence the outcome of a study. For example, if the experimenter knows who took the happy pill and who did not, and the dependent variable is the experimenter’s observations of people’s happiness, then the experimenter might perceive improvements in the happy pill group that are not really there. One way to prevent these confounds from affecting the results of a study is to use a double-blind procedure. In a double-blind procedure, neither the participant nor the experimenter knows which condition the participant is in. 
For example, when participants are given the happy pill or the fake pill, they don’t know which one they are receiving. This way the participants shouldn’t experience the placebo effect, and will be unable to behave as the researcher expects (participant demand). Likewise, the researcher doesn’t know which pill each participant is taking (at least in the beginning—later, the researcher will get the results for data-analysis purposes), which means the researcher’s expectations can’t influence his or her observations. Therefore, because both parties are “blind” to the condition, neither will be able to behave in a way that introduces a confound. At the end of the day, the only difference between groups will be which pills the participants received, allowing the researcher to determine if the happy pill actually caused people to be happier. Correlational Designs When scientists passively observe and measure phenomena it is called correlational research. Here, we do not intervene and change behavior, as we do in experiments. In correlational research, we identify patterns of relationships, but we usually cannot infer what causes what. Importantly, a correlation describes the relationship between just two variables at a time, no more and no less. So, what if you wanted to test whether spending on others is related to happiness, but you don’t have \$20 to give to each participant? You could use a correlational design—which is exactly what Professor Dunn did, too. She asked people how much of their income they spent on others or donated to charity, and later she asked them how happy they were. Do you think these two variables were related? Yes, they were! The more money people reported spending on others, the happier they were. More details about the correlation To find out how well two variables correspond, we can plot the relation between the two scores on what is known as a scatterplot (Figure 2.4.1). In the scatterplot, each dot represents a data point. (In this case it’s individuals, but it could be some other unit.) Importantly, each dot provides us with two pieces of information—in this case, information about how good the person rated the past month (x-axis) and how happy the person felt in the past month (y-axis). Which variable is plotted on which axis does not matter. The association between two variables can be summarized statistically using the correlation coefficient (abbreviated as r). A correlation coefficient provides information about the direction and strength of the association between two variables. For the example above, the direction of the association is positive. This means that people who perceived the past month as being good reported feeling more happy, whereas people who perceived the month as being bad reported feeling less happy. With a positive correlation, the two variables go up or down together. In a scatterplot, the dots form a pattern that extends from the bottom left to the upper right (just as they do in Figure 2.4.1). The r value for a positive correlation is indicated by a positive number (although the positive sign is usually omitted). Here, the r value is .81. A negative correlation is one in which the two variables move in opposite directions. That is, as one variable goes up, the other goes down. Figure 2.4.2 shows the association between the average height of males in a country (y-axis) and the pathogen prevalence (or commonness of disease; x-axis) of that country. In this scatterplot, each dot represents a country. 
Notice how the dots extend from the top left to the bottom right. What does this mean in real-world terms? It means that people are shorter in parts of the world where there is more disease. The r value for a negative correlation is indicated by a negative number—that is, it has a minus (–) sign in front of it. Here, it is –.83. The strength of a correlation has to do with how well the two variables align. Recall that in Professor Dunn’s correlational study, spending on others positively correlated with happiness: The more money people reported spending on others, the happier they reported being. At this point you may be thinking to yourself, I know a very generous person who gave away lots of money to other people but is miserable! Or maybe you know of a very stingy person who is happy as can be. Yes, there might be exceptions. If an association has many exceptions, it is considered a weak correlation. If an association has few or no exceptions, it is considered a strong correlation. A strong correlation is one in which the two variables always, or almost always, go together. In the example of happiness and how good the month has been, the association is strong. The stronger a correlation is, the tighter the dots in the scatterplot will be arranged along a sloped line. The r value of a strong correlation will have a high absolute value. In other words, you disregard whether there is a negative sign in front of the r value, and just consider the size of the numerical value itself. If the absolute value is large, it is a strong correlation. (A short worked example of computing r appears at the end of this module.) A weak correlation is one in which the two variables correspond some of the time, but not most of the time. Figure 2.4.3 shows the relation between valuing happiness and grade point average (GPA). People who valued happiness more tended to earn slightly lower grades, but there were lots of exceptions to this. The r value for a weak correlation will have a low absolute value. If two variables are so weakly related as to be unrelated, we say they are uncorrelated, and the r value will be zero or very close to zero. In the previous example, is the correlation between height and pathogen prevalence strong? Compared to Figure 2.4.3, the dots in Figure 2.4.2 are tighter and less dispersed. The absolute value of –.83 is large. Therefore, it is a strong negative correlation. Can you guess the strength and direction of the correlation between age and year of birth? If you said this is a strong negative correlation, you are correct! Older people always have lower years of birth than younger people (e.g., 1950 vs. 1995), but at the same time, the older people will have a higher age (e.g., 65 vs. 20). In fact, this is a perfect correlation because there are no exceptions to this pattern. I challenge you to find a 10-year-old born before 2003! You can’t. Problems with the correlation If generosity and happiness are positively correlated, should we conclude that being generous causes happiness? Similarly, if height and pathogen prevalence are negatively correlated, should we conclude that disease causes shortness? From a correlation alone, we can’t be certain. For example, in the first case it may be that happiness causes generosity, or that generosity causes happiness. Or, a third variable might cause both happiness and generosity, creating the illusion of a direct link between the two. For example, wealth could be the third variable that causes both greater happiness and greater generosity. 
This is why correlation does not mean causation—an often-repeated phrase among psychologists. Qualitative Designs Just as correlational research allows us to study topics we can’t experimentally manipulate (e.g., whether you have a large or small income), there are other types of research designs that allow us to investigate these harder-to-study topics. Qualitative designs, including participant observation, case studies, and narrative analysis, are examples of such methodologies. Although something as simple as “observation” may seem like it would be a part of all research methods, participant observation is a distinct methodology that involves the researcher embedding him- or herself into a group in order to study its dynamics. For example, Festinger, Riecken, and Schachter (1956) were very interested in the psychology of a particular cult. However, this cult was very secretive and wouldn’t grant interviews to outsiders. So, in order to study these people, Festinger and his colleagues pretended to be cult members, allowing them access to the behavior and psychology of the cult. Despite this example, it should be noted that the people being observed in a participant observation study usually know that the researcher is there to study them. Another qualitative method for research is the case study, which involves an intensive examination of specific individuals or specific contexts. Sigmund Freud, the father of psychoanalysis, was famous for using this type of methodology; however, more current examples of case studies usually involve brain injuries. For instance, imagine that researchers want to know how a very specific brain injury affects people’s experience of happiness. Obviously, the researchers can’t conduct experimental research that involves inflicting this type of injury on people. At the same time, there are too few people who have this type of injury to conduct correlational research. In such an instance, the researcher may examine only one person with this brain injury, but in doing so, the researcher will put the participant through a very extensive round of tests. Hopefully what is learned from this one person can be applied to others; however, even with thorough tests, there is the chance that something unique about this individual (other than the brain injury) will affect his or her happiness. But with such a limited number of possible participants, a case study is really the only type of methodology suitable for researching this brain injury. The final qualitative method to be discussed in this section is narrative analysis. Narrative analysis centers on the study of stories and personal accounts of people, groups, or cultures. In this methodology, rather than engaging with participants directly, or quantifying their responses or behaviors, researchers will analyze the themes, structure, and dialogue of each person’s narrative. That is, a researcher will examine people’s personal testimonies in order to learn more about the psychology of those individuals or groups. These stories may be written, audio-recorded, or video-recorded, and allow the researcher not only to study what the participant says but how he or she says it. Every person has a unique perspective on the world, and studying the way he or she conveys a story can provide insight into that perspective. Quasi-Experimental Designs What if you want to study the effects of marriage on a variable? For example, does marriage make people happier? 
Can you randomly assign some people to get married and others to remain single? Of course not. So how can you study these important variables? You can use a quasi-experimental design. A quasi-experimental design is similar to experimental research, except that random assignment to conditions is not used. Instead, we rely on existing group memberships (e.g., married vs. single). We treat these as the independent variables, even though we don’t assign people to the conditions and don’t manipulate the variables. As a result, with quasi-experimental designs causal inference is more difficult. For example, married people might differ on a variety of characteristics from unmarried people. If we find that married participants are happier than single participants, it will be hard to say that marriage causes happiness, because the people who got married might have already been happier than the people who remained single. Because experimental and quasi-experimental designs can seem pretty similar, let’s take another example to distinguish them. Imagine you want to know who is a better professor: Dr. Smith or Dr. Khan. To judge their ability, you’re going to look at their students’ final grades. Here, the independent variable is the professor (Dr. Smith vs. Dr. Khan) and the dependent variable is the students’ grades. In an experimental design, you would randomly assign students to one of the two professors and then compare the students’ final grades. However, in real life, researchers can’t randomly force students to take one professor over the other; instead, the researchers would just have to use the preexisting classes and study them as-is (quasi-experimental design). Again, the key difference is random assignment to the conditions of the independent variable. Although the quasi-experimental design (where the students choose which professor they want) may seem random, it’s most likely not. For example, maybe students heard Dr. Smith sets low expectations, so slackers prefer this class, whereas Dr. Khan sets higher expectations, so smarter students prefer that one. This now introduces a confounding variable (student intelligence) that will almost certainly have an effect on students’ final grades, regardless of how skilled the professor is. So, even though a quasi-experimental design is similar to an experimental design (i.e., it compares groups defined by an independent variable), because there’s no random assignment, you can’t reasonably draw the same conclusions that you would with an experimental design. Longitudinal Studies Another powerful research design is the longitudinal study. Longitudinal studies track the same people over time. Some longitudinal studies last a few weeks, some a few months, some a year or more. Some studies that have contributed a lot to psychology followed the same people over decades. For example, one study followed more than 20,000 Germans for two decades. From these longitudinal data, psychologist Rich Lucas (2003) was able to determine that people who end up getting married indeed start off a bit happier than their peers who never marry. Longitudinal studies like this provide valuable evidence for testing many theories in psychology, but they can be quite costly to conduct, especially if they follow many people for many years. Surveys A survey is a way of gathering information, using old-fashioned questionnaires or the Internet. Compared to a study conducted in a psychology laboratory, surveys can reach a larger number of participants at a much lower cost. 
Although surveys are typically used for correlational research, this is not always the case. An experiment can be carried out using surveys as well. For example, King and Napa (1998) presented participants with different types of stimuli on paper: either a survey completed by a happy person or a survey completed by an unhappy person. They wanted to see whether happy people were judged as more likely to get into heaven compared to unhappy people. Can you figure out the independent and dependent variables in this study? Can you guess what the results were? Happy people (vs. unhappy people; the independent variable) were indeed judged as more likely to go to heaven (the dependent variable)! Likewise, correlational research can be conducted without the use of surveys. For instance, psychologists LeeAnn Harker and Dacher Keltner (2001) examined the smile intensity of women’s college yearbook photos. Smiling in the photos was correlated with being married 10 years later! Tradeoffs in Research Even though there are serious limitations to correlational and quasi-experimental research, they are not poor cousins to experiments and longitudinal designs. In addition to selecting a method that is appropriate to the question, many practical concerns may influence the decision to use one method over another. One of these factors is simply resource availability—how much time and money do you have to invest in the research? (Tip: If you’re doing a senior honors thesis, do not embark on a lengthy longitudinal study unless you are prepared to delay graduation!) Often, we survey people even though it would be more precise—but much more difficult—to track them longitudinally. Especially in the case of exploratory research, it may make sense to opt for a cheaper and faster method first. Then, if results from the initial study are promising, the researcher can follow up with a more intensive method. Beyond these practical concerns, another consideration in selecting a research design is the ethics of the study. For example, in cases of brain injury or other neurological abnormalities, it would be unethical for researchers to inflict these impairments on healthy participants. Nonetheless, studying people with these injuries can provide great insight into human psychology (e.g., if we learn that damage to a particular region of the brain interferes with emotions, we may be able to develop treatments for emotional irregularities). In addition to brain injuries, there are numerous other areas of research that could be useful in understanding the human mind but which pose challenges to a true experimental design—such as the experiences of war, long-term isolation, abusive parenting, or prolonged drug use. However, none of these are conditions we could ethically manipulate experimentally and randomly assign people to. Therefore, ethical considerations are another crucial factor in determining an appropriate research design. Research Methods: Why You Need Them Just look at any major news outlet and you’ll find research routinely being reported. Sometimes the journalist understands the research methodology, sometimes not (e.g., correlational evidence is often incorrectly represented as causal evidence). Often, the media are quick to draw a conclusion for you. After reading this module, you should recognize that the strength of a scientific finding lies in the strength of its methodology. 
Therefore, in order to be a savvy consumer of research, you need to understand the pros and cons of different methods and the distinctions among them. Plus, understanding how psychologists systematically go about answering research questions will help you to solve problems in other domains, both personal and professional, not just in psychology. Outside Resources Article: Harker and Keltner study of yearbook photographs and marriage http://psycnet.apa.org/journals/psp/80/1/112/ Article: Rich Lucas’s longitudinal study on the effects of marriage on happiness http://psycnet.apa.org/journals/psp/84/3/527/ Article: Spending money on others promotes happiness. Elizabeth Dunn’s research https://www.sciencemag.org/content/3.../1687.abstract Article: What makes a life good? http://psycnet.apa.org/journals/psp/75/1/156/ Discussion Questions 1. What are some key differences between experimental and correlational research? 2. Why might researchers sometimes use methods other than experiments? 3. How do surveys relate to correlational and experimental designs? Vocabulary Confounds Factors that undermine the ability to draw causal inferences from an experiment. Correlation Measures the association between two variables, or how they go together. Dependent variable The variable the researcher measures but does not manipulate in an experiment. Experimenter expectations When the experimenter’s expectations influence the outcome of a study. Independent variable The variable the researcher manipulates and controls in an experiment. Longitudinal study A study that follows the same group of individuals over time. Operational definitions How researchers specifically measure a concept. Participant demand When participants behave in a way that they think the experimenter wants them to behave. Placebo effect When receiving special treatment or something new affects human behavior. Quasi-experimental design An experiment that does not use random assignment to conditions. Random assignment Assigning participants to receive different conditions of an experiment by chance.
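As promised in the correlation discussion above, here is a short worked example of computing the correlation coefficient r in Python (the paired ratings are invented for illustration, not data from any study in this module):

    # Pearson correlation coefficient r from paired scores.
    def pearson_r(xs, ys):
        n = len(xs)
        mean_x = sum(xs) / n
        mean_y = sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
        sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
        return cov / (sd_x * sd_y)

    # Hypothetical ratings: how good the month was (x) and happiness (y).
    month_rating = [2, 4, 5, 6, 7, 8, 9]
    happiness = [3, 4, 4, 6, 6, 8, 9]
    print(round(pearson_r(month_rating, happiness), 2))  # about .96, strong and positive

A tight, upward-sloping scatterplot like the one described for Figure 2.4.1 corresponds to exactly this kind of high positive r; flipping the sign of one variable would produce an equally strong negative correlation.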
By Matthias R. Mehl University of Arizona Because of its ability to determine cause-and-effect relationships, the laboratory experiment is traditionally considered the method of choice for psychological science. One downside, however, is that as it carefully controls conditions and their effects, it can yield findings that are out of touch with reality and have limited use when trying to understand real-world behavior. This module highlights the importance of also conducting research outside the psychology laboratory, within participants’ natural, everyday environments, and reviews existing methodologies for studying daily life. learning objectives • Identify limitations of the traditional laboratory experiment. • Explain ways in which daily life research can further psychological science. • Know what methods exist for conducting psychological research in the real world. Introduction The laboratory experiment is traditionally considered the “gold standard” in psychology research. This is because only laboratory experiments can clearly separate cause from effect and therefore establish causality. Despite this unique strength, it is also clear that a scientific field that is mainly based on controlled laboratory studies ends up lopsided. Specifically, it accumulates a lot of knowledge on what can happen—under carefully isolated and controlled circumstances—but it has little to say about what actually does happen under the circumstances that people actually encounter in their daily lives. For example, imagine you are a participant in an experiment that looks at the effect of being in a good mood on generosity, a topic that may have a good deal of practical application. Researchers create an internally valid, carefully controlled experiment where they randomly assign you to watch either a happy movie or a neutral movie, and then you are given the opportunity to help the researcher out by staying longer and participating in another study. If people in a good mood are more willing to stay and help out, the researchers can feel confident that—since everything else was held constant—your positive mood led you to be more helpful. However, what does this tell us about helping behaviors in the real world? Does it generalize to other kinds of helping, such as donating money to a charitable cause? Would all kinds of happy movies produce this behavior, or only this one? What about other positive experiences that might boost mood, like receiving a compliment or a good grade? And what if you were watching the movie with friends, in a crowded theatre, rather than in a sterile research lab? Taking research out into the real world can help answer some of these sorts of important questions. As one of the founding fathers of social psychology remarked, “Experimentation in the laboratory occurs, socially speaking, on an island quite isolated from the life of society” (Lewin, 1944, p. 286). This module highlights the importance of going beyond experimentation and also conducting research outside the laboratory (Reis & Gosling, 2010), directly within participants’ natural environments, and reviews existing methodologies for studying daily life. 
Rationale for Conducting Psychology Research in the Real World One important challenge researchers face when designing a study is to find the right balance between ensuring internal validity, or the degree to which a study allows unambiguous causal inferences, and external validity, or the degree to which a study ensures that potential findings apply to settings and samples other than the ones being studied (Brewer, 2000). Unfortunately, these two kinds of validity tend to be difficult to achieve at the same time, in one study. This is because creating a controlled setting, in which all potentially influential factors (other than the experimentally-manipulated variable) are controlled, is bound to create an environment that is quite different from what people naturally encounter (e.g., using a happy movie clip to promote helpful behavior). However, it is the degree to which an experimental situation is comparable to the corresponding real-world situation of interest that determines how generalizable potential findings will be. In other words, if an experiment is very far-off from what a person might normally experience in everyday life, you might reasonably question just how useful its findings are. Because of the incompatibility of the two types of validity, one is often—by design—prioritized over the other. Due to the importance of identifying true causal relationships, psychology has traditionally emphasized internal over external validity. However, in order to make claims about human behavior that apply across populations and environments, researchers complement traditional laboratory research, where participants are brought into the lab, with field research where, in essence, the psychological laboratory is brought to participants. Field studies allow for the important test of how psychological variables and processes of interest “behave” under real-world circumstances (i.e., what actually does happen rather than what can happen). They can also facilitate “downstream” operationalizations of constructs that measure life outcomes of interest directly rather than indirectly. Take, for example, the fascinating field of psychoneuroimmunology, where the goal is to understand the interplay of psychological factors—such as personality traits or one’s stress level—and the immune system. Highly sophisticated and carefully controlled experiments offer ways to isolate the variety of neural, hormonal, and cellular mechanisms that link psychological variables such as chronic stress to biological outcomes such as immunosuppression (a state of impaired immune functioning; Sapolsky, 2004). Although these studies demonstrate impressively how psychological factors can affect health-relevant biological processes, they—because of their research design—remain mute about the degree to which these factors actually do undermine people’s everyday health in real life. It is certainly important to show that laboratory stress can alter the number of natural killer cells in the blood. But it is equally important to test to what extent the levels of stress that people experience on a day-to-day basis result in them catching a cold more often or taking longer to recover from one. The goal for researchers, therefore, must be to complement traditional laboratory experiments with less controlled studies under real-world circumstances. The term ecological validity is used to refer to the degree to which an effect has been obtained under conditions that are typical for what happens in everyday life (Brewer, 2000). 
In this example, then, people might keep a careful daily log of how much stress they are under as well as noting physical symptoms such as headaches or nausea. Although many factors beyond stress level may be responsible for these symptoms, this more correlational approach can shed light on how the relationship between stress and health plays out outside of the laboratory. An Overview of Research Methods for Studying Daily Life Capturing “life as it is lived” has been a strong goal for some researchers for a long time. Wilhelm and his colleagues recently published a comprehensive review of early attempts to systematically document daily life (Wilhelm, Perrez, & Pawlik, 2012). Building onto these original methods, researchers have, over the past decades, developed a broad toolbox for measuring experiences, behavior, and physiology directly in participants’ daily lives (Mehl & Conner, 2012). Figure 1 provides a schematic overview of the methodologies described below. Studying Daily Experiences Starting in the mid-1970s, motivated by a growing skepticism toward highly-controlled laboratory studies, a few groups of researchers developed a set of new methods that are now commonly known as the experience-sampling method (Hektner, Schmidt, & Csikszentmihalyi, 2007), ecological momentary assessment (Stone & Shiffman, 1994), or the diary method (Bolger & Rafaeli, 2003). Although variations within this set of methods exist, the basic idea behind all of them is to collect in-the-moment (or, close-to-the-moment) self-report data directly from people as they go about their daily lives. This is typically accomplished by asking participants repeatedly (e.g., five times per day) over a period of time (e.g., a week) to report on their current thoughts and feelings. The momentary questionnaires often ask about their location (e.g., “Where are you now?”), social environment (e.g., “With whom are you now?”), activity (e.g., “What are you currently doing?”), and experiences (e.g., “How are you feeling?”). That way, researchers get a snapshot of what was going on in participants’ lives at the time at which they were asked to report. Technology has made this sort of research possible, and recent technological advances have altered the different tools researchers are able to easily use. Initially, participants wore electronic wristwatches that beeped at preprogrammed but seemingly random times, at which they completed one of a stack of provided paper questionnaires. With the mobile computing revolution, both the prompting and the questionnaire completion were gradually replaced by handheld devices such as smartphones. Being able to collect the momentary questionnaires digitally and time-stamped (i.e., having a record of exactly when participants responded) had major methodological and practical advantages and contributed to experience sampling going mainstream (Conner, Tennen, Fleeson, & Barrett, 2009). Over time, experience sampling and related momentary self-report methods have become very popular, and, by now, they are effectively the gold standard for studying daily life. They have helped make progress in almost all areas of psychology (Mehl & Conner, 2012). These methods yield many measurements from many participants and have further inspired the development of novel statistical methods (Bolger & Laurenceau, 2013). 
Finally, and maybe most importantly, they accomplished what they set out to accomplish: to bring attention to what psychology ultimately wants and needs to know about, namely “what people actually do, think, and feel in the various contexts of their lives” (Funder, 2001, p. 213). In short, these approaches have allowed researchers to do research that is more externally valid, or more generalizable to real life, than the traditional laboratory experiment. To illustrate these techniques, consider a classic study by Stone, Reed, and Neale (1987), who tracked positive and negative experiences surrounding a respiratory infection using daily experience sampling. They found that undesirable experiences peaked and desirable ones dipped about four to five days prior to participants coming down with the cold. More recently, Killingsworth and Gilbert (2010) collected momentary self-reports from more than 2,000 participants via a smartphone app. They found that participants were less happy when their mind was in an idling, mind-wandering state, such as surfing the Internet or multitasking at work, than when it was in an engaged, task-focused one, such as working diligently on a paper. These are just two examples that illustrate how experience-sampling studies have yielded findings that could not be obtained with traditional laboratory methods. Recently, the day reconstruction method (DRM) (Kahneman, Krueger, Schkade, Schwarz, & Stone, 2004) has been developed to obtain information about a person’s daily experiences without going through the burden of collecting momentary experience-sampling data. In the DRM, participants report their experiences of a given day retrospectively after engaging in a systematic, experiential reconstruction of the day on the following day. As a participant in this type of study, you might look back on yesterday, divide it up into a series of episodes such as “made breakfast,” “drove to work,” “had a meeting,” etc. You might then report who you were with in each episode and how you felt in each. This approach has shed light on what situations lead to moments of positive and negative mood throughout the course of a normal day. Studying Daily Behavior Experience sampling is often used to study everyday behavior (i.e., daily social interactions and activities). In the laboratory, behavior is best studied using direct behavioral observation (e.g., video recordings). In the real world, this is, of course, much more difficult. As Funder put it, it seems it would require a “detective’s report [that] would specify in exact detail everything the participant said and did, and with whom, in all of the contexts of the participant’s life” (Funder, 2007, p. 41). As difficult as this may seem, Mehl and colleagues have developed a naturalistic observation methodology that is similar in spirit. Rather than following participants—like a detective—with a video camera (see Craik, 2000), they equip participants with a portable audio recorder that is programmed to periodically record brief snippets of ambient sounds (e.g., 30 seconds every 12 minutes). Participants carry the recorder (originally a microcassette recorder, now a smartphone app) on them as they go about their days and return it at the end of the study. The recorder provides researchers with a series of sound bites that, together, amount to an acoustic diary of participants’ days as they naturally unfold—and that constitute a representative sample of their daily activities and social encounters. 
Because it is somewhat similar to having the researcher’s ear at the participant’s lapel, they called their method the electronically activated recorder, or EAR (Mehl, Pennebaker, Crow, Dabbs, & Price, 2001). The ambient sound recordings can be coded for many things, including participants’ locations (e.g., at school, in a coffee shop), activities (e.g., watching TV, eating), interactions (e.g., in a group, on the phone), and emotional expressions (e.g., laughing, sighing). As unnatural or intrusive as it might seem, participants report that they quickly grow accustomed to the EAR and say they soon find themselves behaving as they normally would. In a cross-cultural study, Ramírez-Esparza and her colleagues used the EAR method to study sociability in the United States and Mexico. Interestingly, they found that although American participants rated themselves significantly higher than Mexicans on the question, “I see myself as a person who is talkative,” they actually spent almost 10 percent less time talking than Mexicans did (Ramírez-Esparza, Mehl, Álvarez Bermúdez, & Pennebaker, 2009). In a similar way, Mehl and his colleagues used the EAR method to debunk the long-standing myth that women are considerably more talkative than men. Using data from six different studies, they showed that both sexes use on average about 16,000 words per day. The estimated sex difference of 546 words was trivial compared to the immense range of more than 46,000 words between the least and most talkative individual (695 versus 47,016 words; Mehl, Vazire, Ramírez-Esparza, Slatcher, & Pennebaker, 2007). Together, these studies demonstrate how naturalistic observation can be used to study objective aspects of daily behavior and how it can yield findings quite different from what other methods yield (Mehl, Robbins, & Deters, 2012). A series of other methods and creative ways for assessing behavior directly and unobtrusively in the real world are described in a seminal book on real-world, subtle measures (Webb, Campbell, Schwartz, Sechrest, & Grove, 1981). For example, researchers have used time-lapse photography to study the flow of people and the use of space in urban public places (Whyte, 1980). More recently, they have observed people’s personal (e.g., dorm rooms) and professional (e.g., offices) spaces to understand how personality is expressed and detected in everyday environments (Gosling, Ko, Mannarelli, & Morris, 2002). They have even systematically collected and analyzed people’s garbage to measure what people actually consume (e.g., empty alcohol bottles or cigarette boxes) rather than what they say they consume (Rathje & Murphy, 2001). Because people often cannot and sometimes may not want to accurately report what they do, the direct—and ideally nonreactive—assessment of real-world behavior is of high importance for psychological research (Baumeister, Vohs, & Funder, 2007). Studying Daily Physiology In addition to studying how people think, feel, and behave in the real world, researchers are also interested in how our bodies respond to the fluctuating demands of our lives. What are the daily experiences that make our “blood boil”? How do our neurotransmitters and hormones respond to the stressors we encounter in our lives? What physiological reactions do we show to being loved—or getting ostracized? 
You can see how studying these powerful experiences in real life, as they actually happen, may provide richer and more informative data than one might obtain in an artificial laboratory setting that merely mimics these experiences. Also, in pursuing these questions, it is important to keep in mind that what is stressful, engaging, or boring for one person might not be so for another. It is, in part, for this reason that researchers have found only limited correspondence between how people respond physiologically to a standardized laboratory stressor (e.g., giving a speech) and how they respond to stressful experiences in their lives. To give an example, Wilhelm and Grossman (2010) describe a participant who showed rather minimal heart rate increases in response to a laboratory stressor (about five to 10 beats per minute) but quite dramatic increases (almost 50 beats per minute) later in the afternoon while watching a soccer game. Of course, the reverse pattern can happen as well, such as when patients have high blood pressure in the doctor’s office but not in their home environment—the so-called white coat hypertension (White, Schulman, McCabe, & Dey, 1989). Ambulatory physiological monitoring—that is, monitoring physiological reactions as people go about their daily lives—has a long history in biomedical research, and an array of monitoring devices exists (Fahrenberg & Myrtek, 1996). Among the biological signals that can now be measured in daily life with portable signal recording devices are the electrocardiogram (ECG), blood pressure, electrodermal activity (or “sweat response”), body temperature, and even the electroencephalogram (EEG) (Wilhelm & Grossman, 2010). Most recently, researchers have added ambulatory assessment of hormones (e.g., cortisol) and other biomarkers (e.g., immune markers) to the list (Schlotz, 2012). The development of ever more sophisticated ways to track what goes on underneath our skins as we go about our lives is a fascinating and rapidly advancing field. In a recent study, Lane, Zareba, Reis, Peterson, and Moss (2011) used experience sampling combined with ambulatory electrocardiography (a so-called Holter monitor) to study how emotional experiences can alter cardiac function in patients with a congenital heart abnormality (e.g., long QT syndrome). Consistent with the idea that emotions may, in some cases, be able to trigger a cardiac event, they found that typical—in most cases even relatively low-intensity—daily emotions had a measurable effect on ventricular repolarization, an important cardiac indicator that, in these patients, is linked to risk of a cardiac event. In another study, Smyth and colleagues (1998) combined experience sampling with momentary assessment of cortisol, a stress hormone. They found that momentary reports of current or even anticipated stress predicted increased cortisol secretion 20 minutes later. Further, and independent of that, the experience of other kinds of negative affect (e.g., anger, frustration) also predicted higher levels of cortisol and the experience of positive affect (e.g., happy, joyful) predicted lower levels of this important stress hormone. Taken together, these studies illustrate how researchers can use ambulatory physiological monitoring to study how the little—and seemingly trivial or inconsequential—experiences in our lives leave objective, measurable traces in our bodily systems. 
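The sampling schedules described in this module (for example, five prompts per day, or 30-second recordings every 12 minutes) can be generated with a few lines of code. Here is a minimal sketch in Python (the number of prompts and the waking-hours window are assumptions for illustration, not parameters from any particular study):

    import random

    # Pick seemingly random beep times for one day of experience sampling.
    def daily_prompt_times(n_prompts=5, start_hour=9, end_hour=21, seed=None):
        rng = random.Random(seed)
        minutes = rng.sample(range(start_hour * 60, end_hour * 60), n_prompts)
        # Zero-padded "HH:MM" strings sort chronologically.
        return sorted(f"{m // 60:02d}:{m % 60:02d}" for m in minutes)

    # Each participant gets a different, unpredictable schedule.
    print(daily_prompt_times(seed=1))

Because participants cannot anticipate the prompts, the sampled moments amount to a roughly representative slice of the day, the same logic that makes the EAR’s intermittent recordings a representative acoustic diary.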
Studying Online Behavior Another domain of daily life that has only recently emerged is virtual daily behavior, or how people act and interact with others on the Internet. Irrespective of whether social media will turn out to be humanity’s blessing or curse (both scientists and laypeople are currently divided over this question), the fact is that people are spending an ever-increasing amount of time online. In light of that, researchers are beginning to take virtual behavior as seriously as “actual” behavior and seek to make it a legitimate target of their investigations (Gosling & Johnson, 2010). One way to study virtual behavior is to make use of the fact that most of what people do on the Web—emailing, chatting, tweeting, blogging, posting—leaves direct (and permanent) verbal traces. For example, differences in the ways in which people use words (e.g., subtle preferences in word choice) have been found to carry a lot of psychological information (Pennebaker, Mehl, & Niederhoffer, 2003). Therefore, a good way to study virtual social behavior is to study virtual language behavior. Researchers can download people’s—often public—verbal expressions and communications and analyze them using modern text analysis programs (e.g., Pennebaker, Booth, & Francis, 2007); a toy word-counting sketch in this spirit appears at the end of this module. For example, Cohn, Mehl, and Pennebaker (2004) downloaded blogs of more than a thousand users of livejournal.com, one of the first Internet blogging sites, to study how people responded socially and emotionally to the attacks of September 11, 2001. In going “the online route,” they could bypass a critical limitation of coping research, the inability to obtain baseline information; that is, how people were doing before the traumatic event occurred. Through access to the database of public blogs, they downloaded entries from two months prior to two months after the attacks. Their linguistic analyses revealed that in the first days after the attacks, participants, as expected, expressed more negative emotions and were more cognitively and socially engaged, asking questions and sending messages of support. Within two weeks, though, their moods and social engagement returned to baseline, and, interestingly, their use of cognitive-analytic words (e.g., “think,” “question”) even dropped below their normal level. Over the next six weeks, their mood hovered around their pre-9/11 baseline, but both their social engagement and cognitive-analytic processing stayed remarkably low. This suggests a social and cognitive weariness in the aftermath of the attacks. In using virtual verbal behavior as a marker of psychological functioning, this study was able to draw a fine timeline of how humans cope with disasters. Reflecting their rapidly growing real-world importance, researchers are now beginning to investigate behavior on social networking sites such as Facebook (Wilson, Gosling, & Graham, 2012). Most research looks at psychological correlates of online behavior such as personality traits and the quality of one’s social life but, importantly, there are also first attempts to export traditional experimental research designs into an online setting. In a pioneering study of online social influence, Bond and colleagues (2012) experimentally tested the effects that peer feedback has on voting behavior. Remarkably, their sample consisted of 61 million (!) Facebook users. They found that online political-mobilization messages (e.g., “I voted” accompanied by selected pictures of their Facebook friends) influenced real-world voting behavior. 
This was true not just for users who saw the messages but also for their friends and friends of their friends. Although the intervention effect on a single user was very small, through the enormous number of users and indirect social contagion effects, it resulted cumulatively in an estimated 340,000 additional votes—enough to tilt a close election. In short, although still in its infancy, research on virtual daily behavior is bound to change social science, and it has already helped us better understand both virtual and “actual” behavior. “Smartphone Psychology”? A review of research methods for studying daily life would not be complete without a vision of “what’s next.” Given how common they have become, it is safe to predict that smartphones will not just remain devices for everyday online communication but will also become devices for scientific data collection and intervention (Kaplan & Stone, 2013; Yarkoni, 2012). These devices automatically store vast amounts of real-world user interaction data, and, in addition, they are equipped with sensors to track the physical (e.g., location, position) and social (e.g., wireless connections around the phone) context of these interactions. Miller (2012, p. 234) states, “The question is not whether smartphones will revolutionize psychology but how, when, and where the revolution will happen.” Obviously, their immense potential for data collection also brings with it big new challenges for researchers (e.g., privacy protection, data analysis, and synthesis). Yet it is clear that many of the methods described in this module—and many still-to-be-developed ways of collecting real-world data—will, in the future, become integrated into the devices that people naturally and happily carry with them from the moment they get up in the morning to the moment they go to bed. Conclusion This module sought to make a case for psychology research conducted outside the lab. If the ultimate goal of the social and behavioral sciences is to explain human behavior, then researchers must also—in addition to conducting carefully controlled lab studies—deal with the “messy” real world and find ways to capture life as it naturally happens. Mortensen and Cialdini (2010) refer to the dynamic give-and-take between laboratory and field research as “full-cycle psychology”. Going full cycle, they suggest, means that “researchers use naturalistic observation to determine an effect’s presence in the real world, theory to determine what processes underlie the effect, experimentation to verify the effect and its underlying processes, and a return to the natural environment to corroborate the experimental findings” (Mortensen & Cialdini, 2010, p. 53). To accomplish this, researchers have access to a toolbox of research methods for studying daily life that is now more diverse and more versatile than it has ever been before. So, all it takes is to go ahead and—literally—bring science to life. Outside Resources Website: Society for Ambulatory Assessment http://www.ambulatory-assessment.org Discussion Questions 1. What do you think about the tradeoff between unambiguously establishing cause and effect (internal validity) and ensuring that research findings apply to people’s everyday lives (external validity)? Which one of these would you prioritize as a researcher? Why? 2. What challenges do you see that daily-life researchers may face in their studies? How can they be overcome? 3. What ethical issues can come up in daily-life studies? How can (or should) they be addressed? 4. 
4. How do you think smartphones and other mobile electronic devices will change psychological research? What are their promises for the field? And what are their pitfalls?

Vocabulary

Ambulatory assessment
An overarching term to describe methodologies that assess the behavior, physiology, experience, and environments of humans in naturalistic settings.

Daily diary method
A methodology in which participants complete a questionnaire about their thoughts, feelings, and behavior of the day at the end of the day.

Day reconstruction method (DRM)
A methodology in which participants describe their experiences and behavior of a given day retrospectively, through a systematic reconstruction carried out on the following day.

Ecological momentary assessment
An overarching term to describe methodologies that repeatedly sample participants' real-world experiences, behavior, and physiology in real time.

Ecological validity
The degree to which a study finding has been obtained under conditions that are typical for what happens in everyday life.

Electronically activated recorder, or EAR
A methodology in which participants wear a small, portable audio recorder that intermittently records snippets of ambient sounds around them.

Experience-sampling method
A methodology in which participants report on their momentary thoughts, feelings, and behaviors at different points in time over the course of a day.

External validity
The degree to which a finding generalizes from the specific sample and context of a study to some larger population and broader settings.

Full-cycle psychology
A scientific approach whereby researchers start with an observational field study to identify an effect in the real world, follow up with laboratory experimentation to verify the effect and isolate the causal mechanisms, and return to field research to corroborate their experimental findings.

Generalize
Generalizing, in science, refers to the ability to arrive at broad conclusions based on a smaller sample of observations. For these conclusions to be true, the sample should accurately represent the larger population from which it is drawn.

Internal validity
The degree to which a cause-effect relationship between two variables has been unambiguously established.

Linguistic inquiry and word count
A quantitative text analysis methodology that automatically extracts grammatical and psychological information from a text by counting word frequencies.

Lived day analysis
A methodology in which a research team follows an individual around with a video camera to objectively document a person's daily life as it is lived.

White coat hypertension
A phenomenon in which patients exhibit elevated blood pressure in the hospital or doctor's office but not in their everyday lives.
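The linguistic inquiry and word count approach defined above can be made concrete with a few lines of code. Below is a minimal sketch of word-count-based text analysis; the word categories are tiny, invented examples for illustration, not the actual dictionaries used by programs such as LIWC.

```python
# Minimal sketch of LIWC-style text analysis: count how often words from
# psychologically meaningful categories occur in a text. The categories
# below are toy examples invented for illustration.
import re
from collections import Counter

CATEGORIES = {
    "positive_emotion": {"happy", "glad", "hope", "support"},
    "negative_emotion": {"sad", "afraid", "angry", "hurt"},
    "cognitive": {"think", "question", "because", "know"},
}

def category_rates(text):
    """Return each category's share of all words, as a percentage."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = sum(counts.values()) or 1
    return {
        name: 100.0 * sum(counts[w] for w in vocab) / total
        for name, vocab in CATEGORIES.items()
    }

entry = "I think we will be okay. I am sad, but I know people support us."
print(category_rates(entry))
```

Applied to blog entries, rates like these can be tracked over days or weeks, which is essentially how changes in emotional and cognitive word use were charted in the 9/11 study described earlier in this module.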
By David B. Baker and Heather Sperry University of Akron, The University of Akron

This module provides an introduction and overview of the historical development of the science and practice of psychology in America. Ever-increasing specialization within the field often makes it difficult to discern the common roots from which the field of psychology has evolved. By exploring this shared past, students will be better able to understand how psychology has developed into the discipline we know today.

learning objectives

• Describe the precursors to the establishment of the science of psychology.
• Identify key individuals and events in the history of American psychology.
• Describe the rise of professional psychology in America.
• Develop a basic understanding of the processes of scientific development and change.
• Recognize the role of women and people of color in the history of American psychology.

Introduction

It is always difficult to decide where to begin telling the story of the history of psychology. Some would start with ancient Greece; others would look to a demarcation in the late 19th century, when the science of psychology was formally proposed and instituted. These two perspectives, and all that is in between, are appropriate for describing a history of psychology. The interested student will have no trouble finding an abundance of resources on all of these time frames and perspectives (Goodwin, 2011; Leahey, 2012; Schultz & Schultz, 2007). For the purposes of this module, we will examine the development of psychology in America and use the mid-19th century as our starting point. For the sake of convenience, we refer to this as a history of modern psychology.

Psychology is an exciting field, and the history of psychology offers the opportunity to make sense of how it has grown and developed. The history of psychology also provides perspective. Rather than a dry collection of names and dates, the history of psychology tells us about the important intersection of time and place that defines who we are. Consider what happens when you meet someone for the first time. The conversation usually begins with a series of questions such as, "Where did you grow up?" "How long have you lived here?" "Where did you go to school?" The importance of history in defining who we are cannot be overstated. Whether you are seeing a physician, talking with a counselor, or applying for a job, everything begins with a history. The same is true for studying the history of psychology; getting a history of the field helps to make sense of where we are and how we got here.

A Prehistory of Psychology

Precursors to American psychology can be found in philosophy and physiology. Philosophers such as John Locke (1632–1704) and Thomas Reid (1710–1796) promoted empiricism, the idea that all knowledge comes from experience. The work of Locke, Reid, and others emphasized the role of the human observer and the primacy of the senses in defining how the mind comes to acquire knowledge. In American colleges and universities in the early 1800s, these principles were taught as courses on mental and moral philosophy. Most often these courses taught about the mind based on the faculties of intellect, will, and the senses (Fuchs, 2000).

Physiology and Psychophysics

Philosophical questions about the nature of mind and knowledge were matched in the 19th century by physiological investigations of the sensory systems of the human observer.
German physiologist Hermann von Helmholtz (1821–1894) measured the speed of the neural impulse and explored the physiology of hearing and vision. His work indicated that our senses can deceive us and are not a mirror of the external world. Such work showed that even though the human senses were fallible, the mind could be measured using the methods of science. In all, it suggested that a science of psychology was feasible.

An important implication of Helmholtz's work was that there is a psychological reality and a physical reality and that the two are not identical. This was not a new idea; philosophers like John Locke had written extensively on the topic, and in the 19th century, philosophical speculation about the nature of mind became subject to the rigors of science. The question of the relationship between the mental (experiences of the senses) and the material (external reality) was investigated by a number of German researchers including Ernst Weber and Gustav Fechner. Their work was called psychophysics, and it introduced methods for measuring the relationship between physical stimuli and human perception that would serve as the basis for the new science of psychology (Fancher & Rutherford, 2011).

The formal development of modern psychology is usually credited to the work of German physician, physiologist, and philosopher Wilhelm Wundt (1832–1920). Wundt helped to establish the field of experimental psychology by serving as a strong promoter of the idea that psychology could be an experimental field and by providing classes, textbooks, and a laboratory for training students. In 1875, he joined the faculty at the University of Leipzig and quickly began to make plans for the creation of a program of experimental psychology. In 1879, he complemented his lectures on experimental psychology with a laboratory experience: an event that has served as the popular date for the establishment of the science of psychology.

The response to the new science was immediate and global. Wundt attracted students from around the world to study the new experimental psychology and work in his lab. Students were trained to offer detailed self-reports of their reactions to various stimuli, a procedure known as introspection. The goal was to identify the elements of consciousness. In addition to the study of sensation and perception, research was done on mental chronometry, more commonly known as reaction time. The work of Wundt and his students demonstrated that the mind could be measured and the nature of consciousness could be revealed through scientific means. It was an exciting proposition, and one that found great interest in America. After the opening of Wundt's lab in 1879, it took just four years for the first psychology laboratory to open in the United States (Benjamin, 2007).

Scientific Psychology Comes to the United States

Wundt's version of psychology arrived in America most visibly through the work of Edward Bradford Titchener (1867–1927). A student of Wundt's, Titchener brought to America a brand of experimental psychology referred to as "structuralism." Structuralists were interested in the contents of the mind—what the mind is. For Titchener, the general adult mind was the proper focus for the new psychology, and he excluded from study those with mental deficiencies, children, and animals (Evans, 1972; Titchener, 1909).

Experimental psychology spread rather rapidly throughout North America. By 1900, there were more than 40 laboratories in the United States and Canada (Benjamin, 2000).
Psychology in America also organized early, with the establishment of the American Psychological Association (APA) in 1892. Titchener felt that this new organization did not adequately represent the interests of experimental psychology, so, in 1904, he organized a group of colleagues to create what is now known as the Society of Experimental Psychologists (Goodwin, 1985). The group met annually to discuss research in experimental psychology. Reflecting the times, women researchers were not invited (or welcome). It is interesting to note that Titchener's first doctoral student was a woman, Margaret Floy Washburn (1871–1939). Despite many barriers, in 1894, Washburn became the first woman in America to earn a Ph.D. in psychology and, in 1921, only the second woman to be elected president of the American Psychological Association (Scarborough & Furumoto, 1987).

Striking a balance between the science and practice of psychology continues to be a challenge to this day. In 1988, the American Psychological Society (now known as the Association for Psychological Science) was founded with the central mission of advancing psychological science.

Toward a Functional Psychology

While Titchener and his followers adhered to a structural psychology, others in America were pursuing different approaches. William James, G. Stanley Hall, and James McKeen Cattell were among a group that became identified with "functionalism." Influenced by Darwin's evolutionary theory, functionalists were interested in the activities of the mind—what the mind does. An interest in functionalism opened the way for the study of a wide range of approaches, including animal and comparative psychology (Benjamin, 2007).

William James (1842–1910) wrote what is regarded as perhaps the most influential and important book in the field of psychology, Principles of Psychology, published in 1890. Opposed to the reductionist ideas of Titchener, James proposed that consciousness is ongoing and continuous; it cannot be isolated and reduced to elements. For James, consciousness helped us adapt to our environment in such ways as allowing us to make choices and have personal responsibility over those choices.

At Harvard, James occupied a position of authority and respect in psychology and philosophy. Through his teaching and writing, he influenced psychology for generations. One of his students, Mary Whiton Calkins (1863–1930), faced many of the challenges that confronted Margaret Floy Washburn and other women interested in pursuing graduate education in psychology. With much persistence, Calkins was able to study with James at Harvard. She eventually completed all the requirements for the doctoral degree, but Harvard refused to grant her a diploma because she was a woman. Despite these challenges, Calkins went on to become an accomplished researcher and, in 1905, the first woman elected president of the American Psychological Association (Scarborough & Furumoto, 1987).

G. Stanley Hall (1844–1924) made substantial and lasting contributions to the establishment of psychology in the United States. At Johns Hopkins University, he founded the first psychological laboratory in America in 1883. In 1887, he created the first journal of psychology in America, American Journal of Psychology. In 1892, he founded the American Psychological Association (APA); in 1909, he invited and hosted Freud at Clark University (the only time Freud visited America). Influenced by evolutionary theory, Hall was interested in the process of adaptation and human development.
Using surveys and questionnaires to study children, Hall wrote extensively on child development and education. While graduate education in psychology was restricted for women in Hall's time, it was all but non-existent for African Americans. In another first, Hall mentored Francis Cecil Sumner (1895–1954) who, in 1920, became the first African American to earn a Ph.D. in psychology in America (Guthrie, 2003).

James McKeen Cattell (1860–1944) received his Ph.D. with Wundt but quickly turned his interests to the assessment of individual differences. Influenced by the work of Darwin's cousin, Francis Galton, Cattell believed that mental abilities such as intelligence were inherited and could be measured using mental tests. Like Galton, he believed society was better served by identifying those with superior intelligence and supported efforts to encourage them to reproduce. Such beliefs were associated with eugenics (the promotion of selective breeding) and fueled early debates about the contributions of heredity and environment in defining who we are. At Columbia University, Cattell developed a department of psychology that became world famous; he also promoted psychological science through advocacy and as a publisher of scientific journals and reference works (Fancher, 1987; Sokal, 1980).

The Growth of Psychology

Throughout the first half of the 20th century, psychology continued to grow and flourish in America. It was large enough to accommodate varying points of view on the nature of mind and behavior. Gestalt psychology is a good example. The Gestalt movement began in Germany with the work of Max Wertheimer (1880–1943). Opposed to the reductionist approach of Wundt's laboratory psychology, Wertheimer and his colleagues Kurt Koffka (1886–1941), Wolfgang Köhler (1887–1967), and Kurt Lewin (1890–1947) believed that studying the whole of any experience was richer than studying individual aspects of that experience. The saying "the whole is greater than the sum of its parts" is a Gestalt perspective. Consider that a melody is an additional element beyond the collection of notes that comprise it. The Gestalt psychologists proposed that the mind often processes information simultaneously rather than sequentially. For instance, when you look at a photograph, you see a whole image, not just a collection of pixels of color. Using Gestalt principles, Wertheimer and his colleagues also explored the nature of learning and thinking. Most of the German Gestalt psychologists were Jewish and were forced to flee the Nazi regime due to the threats posed to both academic and personal freedoms. In America, they were able to introduce a new audience to the Gestalt perspective, demonstrating how it could be applied to perception and learning (Wertheimer, 1938). In many ways, the work of the Gestalt psychologists served as a precursor to the rise of cognitive psychology in America (Benjamin, 2007).

Behaviorism emerged early in the 20th century and became a major force in American psychology. Championed by psychologists such as John B. Watson (1878–1958) and B. F. Skinner (1904–1990), behaviorism rejected any reference to mind and viewed overt and observable behavior as the proper subject matter of psychology. Through the scientific study of behavior, it was hoped that laws of learning could be derived that would promote the prediction and control of behavior. Russian physiologist Ivan Pavlov (1849–1936) influenced early behaviorism in America.
His work on conditioned learning, popularly referred to as classical conditioning, provided support for the notion that learning and behavior were controlled by events in the environment and could be explained with no reference to mind or consciousness (Fancher, 1987).

For decades, behaviorism dominated American psychology. By the 1960s, psychologists began to recognize that behaviorism was unable to fully explain human behavior because it neglected mental processes. The turn toward a cognitive psychology was not new. In the 1930s, British psychologist Frederic C. Bartlett (1886–1969) explored the idea of the constructive mind, recognizing that people use their past experiences to construct frameworks in which to understand new experiences. Some of the major pioneers in American cognitive psychology include Jerome Bruner (1915–), Roger Brown (1925–1997), and George Miller (1920–2012). In the 1950s, Bruner conducted pioneering studies on cognitive aspects of sensation and perception. Brown conducted original research on language and memory, coined the term "flashbulb memory," and figured out how to study the tip-of-the-tongue phenomenon (Benjamin, 2007). Miller's research on working memory is legendary. His 1956 paper "The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information" is one of the most highly cited papers in psychology. A popular interpretation of Miller's research was that the number of bits of information an average human can hold in working memory is 7 ± 2. Around the same time, the study of computer science was growing and was used as an analogy to explore and understand how the mind works. The work of Miller and others in the 1950s and 1960s has inspired tremendous interest in cognition and neuroscience, both of which dominate much of contemporary American psychology.

Applied Psychology in America

In America, there has always been an interest in the application of psychology to everyday life. Mental testing is an important example. Modern intelligence tests were developed by the French psychologist Alfred Binet (1857–1911). His goal was to develop a test that would identify schoolchildren in need of educational support. His test, which included tasks of reasoning and problem solving, was introduced in the United States by Henry Goddard (1866–1957) and later standardized by Lewis Terman (1877–1956) at Stanford University. The assessment and meaning of intelligence has fueled debates in American psychology and society for nearly 100 years. Much of this is captured in the nature-nurture debate that raises questions about the relative contributions of heredity and environment in determining intelligence (Fancher, 1987).

Applied psychology was not limited to mental testing. What psychologists were learning in their laboratories was applied in many settings including the military, business, industry, and education. The early 20th century was witness to rapid advances in applied psychology. Hugo Munsterberg (1863–1916) of Harvard University made contributions to such areas as employee selection, eyewitness testimony, and psychotherapy. Walter D. Scott (1869–1955) and Harry Hollingworth (1880–1956) produced original work on the psychology of advertising and marketing. Lillian Gilbreth (1878–1972) was a pioneer in industrial psychology and engineering psychology. Together with her husband, Frank, she promoted the use of time-and-motion studies to improve efficiency in industry.
Lillian also brought the efficiency movement to the home, designing kitchens and appliances, including the pop-up trashcan and refrigerator door shelving. Their psychology of efficiency also found plenty of applications at home with their 12 children. The experience served as the inspiration for the movie Cheaper by the Dozen (Benjamin, 2007).

Clinical psychology was also an early application of experimental psychology in America. Lightner Witmer (1867–1956) received his Ph.D. in experimental psychology with Wilhelm Wundt and returned to the University of Pennsylvania, where he opened a psychological clinic in 1896. Witmer believed that because psychology dealt with the study of sensation and perception, it should be of value in treating children with learning and behavioral problems. He is credited as the founder of both clinical and school psychology (Benjamin & Baker, 2004).

Psychology as a Profession

As the roles of psychologists and the needs of the public continued to change, it was necessary for psychology to begin to define itself as a profession. Without standards for training and practice, anyone could use the title psychologist and offer services to the public. As early as 1917, applied psychologists organized to create standards for education, training, and licensure. By the 1930s, these efforts led to the creation of the American Association for Applied Psychology (AAAP). While the American Psychological Association (APA) represented the interests of academic psychologists, the AAAP served those in education, industry, consulting, and clinical work.

The advent of WWII changed everything. The psychiatric casualties of war were staggering, and there were simply not enough mental health professionals to meet the need. Recognizing the shortage, the federal government urged the AAAP and APA to work together to meet the mental health needs of the nation. The result was the merging of the AAAP and the APA and a focus on the training of professional psychologists. Through the provisions of the National Mental Health Act of 1946, funding was made available that allowed the APA, the Veterans Administration, and the Public Health Service to work together to develop training programs that would produce clinical psychologists. These efforts led to the convening of the Boulder Conference on Graduate Education in Clinical Psychology in 1949 in Boulder, Colorado. The meeting launched a framework for doctoral training in clinical psychology and gave us the scientist-practitioner model of training. Similar meetings also helped launch doctoral training programs in counseling and school psychology. Throughout the second half of the 20th century, alternatives to Boulder have been debated. In 1973, the Vail Conference on Professional Training in Psychology proposed the practitioner-scholar model and the Psy.D. degree (Doctor of Psychology). It is a training model that emphasizes clinical training and practice, and it has become more common (Cautin & Baker, in press).

Psychology and Society

Given that psychology deals with the human condition, it is not surprising that psychologists would involve themselves in social issues. For more than a century, psychology and psychologists have been agents of social action and change. Using the methods and tools of science, psychologists have challenged assumptions, stereotypes, and stigma. Founded in 1936, the Society for the Psychological Study of Social Issues (SPSSI) has supported research and action on a wide range of social issues.
Individually, there have been many psychologists whose efforts have promoted social change. Helen Thompson Woolley (1874–1947) and Leta S. Hollingworth (1886–1939) were pioneers in research on the psychology of sex differences. Working in the early 20th century, when women's rights were marginalized, Thompson examined the assumption that women were overemotional compared to men and found that emotion did not influence women's decisions any more than it did men's. Hollingworth found that menstruation did not negatively impact women's cognitive or motor abilities. Such work combatted harmful stereotypes and showed that psychological research could contribute to social change (Scarborough & Furumoto, 1987).

Among the first generation of African American psychologists, Mamie Phipps Clark (1917–1983) and her husband Kenneth Clark (1914–2005) studied the psychology of race and demonstrated the ways in which school segregation negatively impacted the self-esteem of African American children. Their research was influential in the 1954 Supreme Court ruling in the case of Brown v. Board of Education, which ended school segregation (Guthrie, 2003). Within psychology, advocacy for issues impacting the African American community was advanced by the creation of the Association of Black Psychologists (ABPsi) in 1968.

In 1957, psychologist Evelyn Hooker (1907–1996) published the paper "The Adjustment of the Male Overt Homosexual," reporting on her research that showed no significant differences in psychological adjustment between homosexual and heterosexual men. Her research helped to de-pathologize homosexuality and contributed to the decision by the American Psychiatric Association to remove homosexuality from the Diagnostic and Statistical Manual of Mental Disorders in 1973 (Garnets & Kimmel, 2003).

Conclusion

Growth and expansion have been a constant in American psychology. In the latter part of the 20th century, areas such as social, developmental, and personality psychology made major contributions to our understanding of what it means to be human. Today neuroscience is enjoying tremendous interest and growth. As mentioned at the beginning of the module, it is a challenge to cover all the history of psychology in such a short space. Errors of omission and commission are likely in such a selective review. The history of psychology helps to set a stage upon which the story of psychology can be told. This brief summary provides some glimpse into the depth and rich content offered by the history of psychology. The learning modules in the Noba psychology collection are all elaborations on the foundation created by our shared past. It is hoped that you will be able to see these connections and have a greater understanding and appreciation for both the unity and diversity of the field of psychology.

Timeline

1600s – Rise of empiricism, emphasizing the centrality of the human observer in acquiring knowledge
1850s – Helmholtz measures the neural impulse / Psychophysics studied by Weber & Fechner
1859 – Publication of Darwin's Origin of Species
1879 – Wundt opens lab for experimental psychology
1883 – First psychology lab opens in the United States
1887 – First American psychology journal is published: American Journal of Psychology
1890 – James publishes Principles of Psychology
1892 – APA established
1894 – Margaret Floy Washburn is the first U.S. woman to earn a Ph.D. in psychology
1904 – Founding of Titchener's experimentalists
1905 – Mary Whiton Calkins is first woman president of APA
1909 – Freud's only visit to the United States
1913 – John Watson calls for a psychology of behavior
1920 – Francis Cecil Sumner is first African American to earn a Ph.D. in psychology
1921 – Margaret Floy Washburn is second woman president of APA
1930s – Creation and growth of the American Association for Applied Psychology (AAAP) / Gestalt psychology comes to America
1936 – Founding of the Society for the Psychological Study of Social Issues
1940s – Behaviorism dominates American psychology
1946 – National Mental Health Act
1949 – Boulder Conference on Graduate Education in Clinical Psychology
1950s – Cognitive psychology gains popularity
1954 – Brown v. Board of Education
1957 – Evelyn Hooker publishes The Adjustment of the Male Overt Homosexual
1968 – Founding of the Association of Black Psychologists
1973 – Psy.D. proposed at the Vail Conference on Professional Training in Psychology
1988 – Founding of the American Psychological Society (now known as the Association for Psychological Science)

Outside Resources

Podcast: History of Psychology Podcast Series http://www.yorku.ca/christo/podcasts/
Web: Advances in the History of Psychology http://ahp.apps01.yorku.ca/
Web: Center for the History of Psychology http://www.uakron.edu/chp
Web: Classics in the History of Psychology http://psychclassics.yorku.ca/
Web: Psychology's Feminist Voices http://www.feministvoices.com/
Web: This Week in the History of Psychology http://www.yorku.ca/christo/podcasts/

Discussion Questions

1. Why was psychophysics important to the development of psychology as a science?
2. How have psychologists participated in the advancement of social issues?
3. Name some ways in which psychology began to be applied to the general public and everyday problems.
4. Describe functionalism and structuralism and their influences on behaviorism and cognitive psychology.

Vocabulary

Behaviorism
The study of behavior.

Cognitive psychology
The study of mental processes.

Consciousness
Awareness of ourselves and our environment.

Empiricism
The belief that knowledge comes from experience.

Eugenics
The practice of selective breeding to promote desired traits.

Flashbulb memory
A highly detailed and vivid memory of an emotionally significant event.

Functionalism
A school of American psychology that focused on the utility of consciousness.

Gestalt psychology
An attempt to study the unity of experience.

Individual differences
Ways in which people differ in terms of their behavior, emotion, cognition, and development.

Introspection
A method of focusing on internal processes.

Neural impulse
An electro-chemical signal that enables neurons to communicate.

Practitioner-scholar model
A model of training of professional psychologists that emphasizes clinical practice.

Psychophysics
Study of the relationships between physical stimuli and the perception of those stimuli.

Realism
A point of view that emphasizes the importance of the senses in providing knowledge of the external world.

Scientist-practitioner model
A model of training of professional psychologists that emphasizes the development of both research and clinical skills.

Structuralism
A school of American psychology that sought to describe the elements of conscious experience.

Tip-of-the-tongue phenomenon
The inability to pull a word from memory even though there is the sensation that that word is available.
By Zachary Infantolino and Gregory A. Miller University of Delaware, University of California, Los Angeles

As a generally noninvasive subset of neuroscience methods, psychophysiological methods are used across a variety of disciplines in order to answer diverse questions about psychology, both mental events and behavior. Many different techniques are classified as psychophysiological. Each technique has its strengths and weaknesses, and knowing them allows researchers to decide what each offers for a particular question. Additionally, this knowledge allows research consumers to evaluate the meaning of the results in a particular experiment.

learning objectives

• Learn what qualifies as psychophysiology within the broader field of neuroscience.
• Review and compare several examples of psychophysiological methods.
• Understand advantages and disadvantages of different psychophysiological methods.

History

In the mid-19th century, a railroad worker named Phineas Gage was in charge of setting explosive charges for blasting through rock in order to prepare a path for railroad tracks. He would lay the charge in a hole drilled into the rock, place a fuse and sand on top of the charge, and pack it all down using a tamping iron (a solid iron rod approximately one yard long and a little over an inch in diameter). On a September afternoon when Gage was performing this task, his tamping iron caused a spark that set off the explosive prematurely, sending the tamping iron flying through the air. Unfortunately for Gage, his head was above the hole and the tamping iron entered the side of his face, passed behind his left eye, and exited out of the top of his head, eventually landing 80 feet away. Gage lost a portion of his left frontal lobe in the accident, but survived and lived for another 12 years. What is most interesting from a psychological perspective is that Gage's personality changed as a result of this accident. He became more impulsive, he had trouble carrying out plans, and, at times, he engaged in vulgar profanity, which was out of character. This case study leads one to believe that there are specific areas of the brain that are associated with certain psychological phenomena. When studying psychology, the brain is indeed an interesting source of information. Although it would be impossible to replicate the type of damage done to Gage in the name of research, methods have developed over the years that are able to safely measure different aspects of nervous system activity in order to help researchers better understand psychology as well as the relationship between psychology and biology.

Introduction

Psychophysiology is defined as any research in which the dependent variable (what the researcher measures) is a physiological measure and the independent variable (what the researcher manipulates) is behavioral or mental. In most cases the work is done noninvasively with awake human participants. Physiological measures take many forms and range from blood flow or neural activity in the brain to heart rate variability and eye movements. These measures can provide information about processes including emotion, cognition, and the interactions between them. In these ways, physiological measures offer a very flexible set of tools for researchers to answer questions about behavior, cognition, and health.

Psychophysiological methods are a subset of the very large domain of neuroscience methods.
Many neuroscience methods are invasive, involving, for example, lesions of neural tissue, injection of neurally active chemicals, or manipulation of neural activity via electrical stimulation. The present survey emphasizes noninvasive methods widely used with human subjects. Crucially, in examining the relationship between physiology and overt behavior or mental events, psychophysiology does not attempt to replace the latter with the former. As an example, happiness is a state of pleasurable contentment and is associated with various physiological measures, but one would not say that those physiological measures are happiness. We can make inferences about someone's cognitive or emotional state based on his or her self-report, physiology, or overt behavior. Sometimes our interest is primarily in inferences about internal events and sometimes primarily in the physiology itself. Psychophysiology addresses both kinds of goals.

Central Nervous System (CNS)

This module provides an overview of several popular psychophysiological methods, though it is far from exhaustive. Each method can draw from a broad range of data-analysis strategies to provide an even more expansive set of tools. The psychophysiological methods discussed below focus on the central nervous system.

Structural magnetic resonance imaging (sMRI) is a noninvasive technique that allows researchers and clinicians to view anatomical structures within a human. The participant is placed in a magnetic field that may be 66,000 times greater than the Earth's magnetic field, which causes a small portion of the atoms in his or her body to line up in the same direction. The body is then pulsed with low-energy radio frequencies that are absorbed by the atoms in the body, causing them to tip over. As these atoms return to their aligned state, they give off energy in the form of harmless electromagnetic radiation, which is measured by the machine. The machine then transforms the measured energy into a three-dimensional picture of the tissue within the body. In psychophysiology research, this image may be used to compare the size of structures in different groups of people (e.g., are areas associated with pleasure smaller in individuals with depression?) or to increase the accuracy of spatial locations as measured with functional magnetic resonance imaging (fMRI).

Functional magnetic resonance imaging (fMRI) is a method that is used to assess changes in the activity of tissue, such as measuring changes in neural activity in different areas of the brain during thought. This technique builds on the principles of sMRI and also exploits the fact that, when neurons fire, they use energy, which must be replenished. Glucose and oxygen, two key components for energy production, are supplied to the brain from the blood stream as needed. Oxygen is transported through the blood using hemoglobin, which contains binding sites for oxygen. When these sites are saturated with oxygen, it is referred to as oxygenated hemoglobin. When the oxygen molecules have all been released from a hemoglobin molecule, it is known as deoxygenated hemoglobin. As a set of neurons begin firing, oxygen in the blood surrounding those neurons is consumed, leading to a reduction in oxygenated hemoglobin. The body then compensates and provides an abundance of oxygenated hemoglobin in the blood surrounding that activated neural tissue. When activity in that neural tissue declines, the level of oxygenated hemoglobin slowly returns to its original level, which typically takes several seconds.
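This several-second hemodynamic lag is what limits the temporal resolution of fMRI, as discussed next. The small simulation below illustrates the idea; it is a minimal sketch, and the double-gamma response shape and all of its parameters are common textbook approximations chosen purely for illustration.

```python
# Minimal simulation of the sluggish hemodynamic response: brief neural
# events produce a blood-oxygenation change that peaks seconds later.
# The double-gamma shape and its parameters are illustrative assumptions.
import numpy as np
from scipy.stats import gamma

dt = 0.1                      # time step in seconds
t = np.arange(0, 30, dt)      # 30 s of simulated time

# Hemodynamic response: an early peak minus a small later undershoot.
hrf = gamma.pdf(t, a=6) - 0.35 * gamma.pdf(t, a=16)
hrf /= hrf.max()

# Two brief neural events, at t = 2 s and t = 12 s.
neural = np.zeros_like(t)
neural[20] = 1.0   # t = 2 s
neural[120] = 1.0  # t = 12 s

# The predicted signal is the event train convolved with the response.
bold = np.convolve(neural, hrf)[: len(t)]
print(f"Neural event at 2.0 s; simulated signal peaks at {t[bold.argmax()]:.1f} s")
```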
fMRI measures the change in the concentration of oxygenated hemoglobin, which is known as the blood-oxygen-level-dependent (BOLD) signal. This leads to two important facts about fMRI. First, fMRI measures blood volume and blood flow, and from this we infer neural activity; fMRI does not measure neural activity directly. Second, fMRI data typically have poor temporal resolution (the precision of measurement with respect to time); however, when combined with sMRI, fMRI provides excellent spatial resolution (the ability to distinguish one object from another in space). Temporal resolution for fMRI is typically on the order of seconds, whereas its spatial resolution is on the order of millimeters. Under most conditions there is an inverse relationship between temporal and spatial resolution—one can increase temporal resolution at the expense of spatial resolution and vice versa.

This method is valuable for identifying specific areas of the brain that are associated with different physical or psychological tasks. Clinically, fMRI may be used prior to neurosurgery in order to identify areas that are associated with language so that the surgeon can avoid those areas during the operation. fMRI allows researchers to identify differential or convergent patterns of activation associated with tasks. For example, if participants are shown words on a screen and are expected to indicate the color of the letters, are the same brain areas recruited for this task if the words have emotional content or not? Does this relationship change in psychological disorders such as anxiety or depression? Is there a different pattern of activation even in the absence of overt performance differences? fMRI is an excellent tool for comparing brain activation in different tasks and/or populations. Figure 2.7.1 provides an example of results from fMRI analyses overlaid on an sMRI image. The blue and orange shapes represent areas with significant changes in the BOLD signal, and thus changes in neural activation.

Electroencephalography (EEG) is another technique for studying brain activation. This technique uses at least two and sometimes up to 256 electrodes to measure the difference in electrical charge (the voltage) between pairs of points on the head. These electrodes are typically fastened to a flexible cap (similar to a swimming cap) that is placed on the participant's head. From the scalp, the electrodes measure the electrical activity that is naturally occurring within the brain. They do not introduce any new electrical activity. In contrast to fMRI, EEG measures neural activity directly, rather than a correlate of that activity.

Electrodes used in EEG can also be placed within the skull, resting directly on the brain itself. This application, called electrocorticography (ECoG), is typically used prior to medical procedures for localizing activity, such as the origin of epileptic seizures. This invasive procedure allows for more precise localization of neural activity, which is essential in medical applications. However, it is generally not justifiable to open a person's skull solely for research purposes, and instead electrodes are placed on the participant's scalp, resulting in a noninvasive technique for measuring neural activity. Given that this electrical activity must travel through the skull and scalp before reaching the electrodes, localization of activity is less precise when measuring from the scalp, but it can still be within several millimeters when localizing activity that is near the scalp.
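Scalp EEG is commonly analyzed by averaging many short segments of the recording that are time-locked to repeated events, so that activity unrelated to the event tends to cancel out, leaving an event-related potential (ERP). The following is a minimal sketch with simulated data; the response shape, noise level, and trial count are invented for illustration.

```python
# Minimal sketch of averaging event-locked EEG epochs into an ERP.
# The simulated brain response and noise magnitudes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
fs = 1000                          # sampling rate in Hz
t = np.arange(-0.1, 0.5, 1 / fs)   # 100 ms before to 500 ms after the event

# A small positive deflection peaking about 300 ms after the event.
true_response = 5e-6 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))

# 60 trials of the same response buried in much larger random noise.
trials = true_response + 20e-6 * rng.standard_normal((60, t.size))

erp = trials.mean(axis=0)          # noise averages toward zero
print(f"Averaged ERP peaks near {t[erp.argmax()] * 1000:.0f} ms "
      f"at about {erp.max() * 1e6:.1f} microvolts")
```

With single-trial noise several times larger than the brain response itself, the response only becomes visible after averaging, which is why ERP studies typically collect dozens of trials per condition.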
One major advantage of EEG is its temporal resolution. Data can be recorded thousands of times per second, allowing researchers to document events that happen in less than a millisecond. EEG analyses typically investigate the change in amplitude or frequency components of the recorded EEG on an ongoing basis or averaged over dozens of trials (see Figure 2.7.2).

Magnetoencephalography (MEG) is another technique for noninvasively measuring neural activity. The flow of electrical charge (the current) associated with neural activity produces very weak magnetic fields that can be detected by sensors placed near the participant's scalp. The number of sensors used varies from a few to several hundred. Because the magnetic fields of interest are so small, special rooms that are shielded from the magnetic fields in the environment are needed in order to avoid contamination of the signal being measured. MEG has the same excellent temporal resolution as EEG. Additionally, MEG is not as susceptible to distortions from the skull and scalp. Magnetic fields are able to pass through the hard and soft tissue relatively unchanged, thus providing better spatial resolution than EEG. MEG analytic strategies are nearly identical to those used in EEG. However, the MEG recording apparatus is much more expensive than EEG, so MEG is much less widely available.

EEG and MEG are both excellent for elucidating the temporal dynamics of neural processes. For example, if someone is reading a sentence that ends with an unexpected word (e.g., Michelle is going outside to water the book), how long after he or she reads the unexpected word does he or she recognize it as unexpected? In addition to these types of questions, EEG and MEG methods allow researchers to investigate the degree to which different parts of the brain "talk" to each other. This allows for a better understanding of brain networks, such as their role in different tasks and how they may function abnormally in psychopathology.

Positron emission tomography (PET) is a medical imaging technique that is used to measure processes in the body, including the brain. This method relies on a positron-emitting tracer atom that is introduced into the blood stream in a biologically active molecule, such as glucose, water, or ammonia. A positron is a particle much like an electron but with a positive charge. One example of a biologically active molecule is fludeoxyglucose, which acts similarly to glucose in the body. Fludeoxyglucose will concentrate in areas where glucose is needed—commonly areas with higher metabolic needs. Over time, this tracer molecule emits positrons, which are detected by a sensor. The spatial location of the tracer molecule in the brain can be determined based on the emitted positrons. This allows researchers to construct a three-dimensional image of the areas of the brain that have the highest metabolic needs, typically those that are most active. Images resulting from PET usually represent neural activity that has occurred over tens of minutes, which is very poor temporal resolution for some purposes. PET images are often combined with computed tomography (CT) images to improve spatial resolution, to as fine as several millimeters. Tracers can also be incorporated into molecules that bind to neurotransmitter receptors, which allows researchers to answer some unique questions about the action of neurotransmitters.
Unfortunately, very few research centers have the equipment required to obtain the images or the special equipment needed to create the positron-emitting tracer molecules, which typically need to be produced on site.

Transcranial magnetic stimulation (TMS) is a noninvasive method that causes depolarization or hyperpolarization in neurons near the scalp. Strictly speaking, TMS is not a psychophysiological method, because here the physiological variable is the independent variable rather than the dependent one. However, it does qualify as a neuroscience method because it deals with the function of the nervous system, and it can readily be combined with conventional psychophysiological methods. In TMS, a coil of wire is placed just above the participant's scalp. When electricity flows through the coil, it produces a magnetic field. This magnetic field travels through the skull and scalp and affects neurons near the surface of the brain. When the magnetic field is rapidly turned on and off, a current is induced in the neurons, leading to depolarization or hyperpolarization, depending on the number of magnetic field pulses.

Single- or paired-pulse TMS depolarizes site-specific neurons in the cortex, causing them to fire. If this method is used over the primary motor cortex, it can produce or block muscle activity, such as inducing a finger twitch or preventing someone from pressing a button. If used over the primary visual cortex, it can produce sensations of flashes of light or impair visual processes. This has proved to be a valuable tool in studying the function and timing of specific processes such as the recognition of visual stimuli. Repetitive TMS produces effects that last longer than the initial stimulation. Depending on the intensity, coil orientation, and frequency, neural activity in the stimulated area may be either attenuated or amplified. Used in this manner, TMS is able to explore neural plasticity, which is the ability of connections between neurons to change. This has implications for treating psychological disorders as well as understanding long-term changes in neuronal excitability.

Peripheral Nervous System

The psychophysiological methods discussed above focus on the central nervous system. Considerable research has also focused on the peripheral nervous system. These methods include skin conductance, cardiovascular responses, muscle activity, pupil diameter, eye blinks, and eye movements. Skin conductance, for example, measures the electrical conductance (the inverse of resistance) between two points on the skin, which varies with the level of moisture. Sweat glands are responsible for this moisture and are controlled by the sympathetic nervous system (SNS). Increases in skin conductance can be associated with changes in psychological activity. For example, studying skin conductance allows a researcher to investigate whether psychopaths react to fearful pictures in a normal way. Skin conductance provides relatively poor temporal resolution, with the entire response typically taking several seconds to emerge and resolve. However, it is an easy way to measure SNS responses to a variety of stimuli.

Cardiovascular measures include heart rate, heart rate variability, and blood pressure. The heart is innervated by the parasympathetic nervous system (PNS) and the SNS. Input from the PNS decreases heart rate and contractile strength, whereas input from the SNS increases heart rate and contractile strength.
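Once the times of successive heartbeats are known (in practice, from R-peaks detected in an electrocardiogram, as described next), heart rate and common time-domain heart rate variability indices are straightforward to compute. A minimal sketch follows; the beat times are invented for illustration.

```python
# Minimal sketch: heart rate and heart rate variability from beat times.
# The beat times below are invented; real ones would come from R-peak
# detection on an electrocardiogram.
import numpy as np

beat_times = np.array([0.00, 0.81, 1.60, 2.42, 3.21, 4.05, 4.83, 5.66])  # s

ibi = np.diff(beat_times)              # inter-beat intervals in seconds
heart_rate = 60.0 / ibi.mean()         # beats per minute

# Two common time-domain variability indices, expressed in milliseconds:
sdnn = ibi.std(ddof=1) * 1000                        # SD of intervals
rmssd = np.sqrt(np.mean(np.diff(ibi) ** 2)) * 1000   # successive differences

print(f"Heart rate: {heart_rate:.0f} bpm, SDNN: {sdnn:.1f} ms, "
      f"RMSSD: {rmssd:.1f} ms")
```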
Heart rate can easily be monitored using a minimum of two electrodes and is measured by counting the number of heartbeats in a given time period, such as one minute, or by assessing the time between successive heartbeats. Psychological activity can prompt increases and decreases in heart rate, often in less than a second, making heart rate a sensitive measure of cognition. Measures of heart rate variability are concerned with the consistency of the time interval between heartbeats. Changes in heart rate variability are associated with stress as well as psychiatric conditions. Figure 2.7.3 is an example of an electrocardiogram, which is used to measure heart rate and heart rate variability. These cardiovascular measures allow researchers to monitor SNS and PNS reactivity to various stimuli or situations. For example, when an arachnophobe views pictures of spiders, does his or her heart rate increase more than that of a person not afraid of spiders?

Electromyography (EMG) measures electrical activity produced by skeletal muscles. Similar to EEG, EMG measures the voltage between two points. This technique can be used to determine when a participant first initiates muscle activity to engage in a motor response to a stimulus, or the degree to which a participant begins to engage in an incorrect response (such as pressing the wrong button), even if it is never visibly executed. It has also been used in emotion research to identify activity in muscles that are used to produce smiles and frowns. Using EMG, it is possible to detect very small facial movements that are not observable from looking at the face. The temporal resolution of EMG is similar to that of EEG and MEG.

Valuable information can also be gleaned from eye blinks, eye movements, and pupil diameter. Eye blinks are most often assessed using EMG electrodes placed just below the eyelid, but electrical activity associated directly with eye blinks or eye movements can be measured with electrodes placed on the face near the eyes, because there is a standing voltage across the entire eyeball. Another option for the measurement of eye movement is a camera used to record video of an eye. This video method is particularly valuable when determination of the absolute direction of gaze (not just a change in the direction of gaze) is of interest, such as when the eyes scan a picture. With the help of a calibration period in which a participant looks at multiple, known targets, eye position is then extracted from each video frame during the main task and compared with data from the calibration phase, allowing researchers to identify the sequence, direction, and duration of gaze fixations. For example, when viewing pleasant or unpleasant images, people spend different amounts of time looking at the most arousing parts. This, in turn, can vary as a function of psychopathology. Additionally, the diameter of a participant's pupil can be measured and recorded over time from the video record. As with heart rate, pupil diameter is controlled by competing inputs from the SNS and PNS. Pupil diameter is commonly used as an index of mental effort when performing a task.

When to Use What

You may be wondering: how do I know which tool is right for a given question? Generally, there are no definitive answers. If you wanted to know the temperature in the morning, would you check your phone? Look outside to see how warm it looks? Ask your roommate what he or she is wearing today? Look to see what other people are wearing? There is not a single way to answer the question.
The same is true for research questions. However, there are some guidelines that one can consider. For example, if you are interested in what brain structures are associated with cognitive control, you wouldn't use peripheral nervous system measures. A technique such as fMRI or PET might be more appropriate. If you are interested in how cognitive control unfolds over time, EEG or MEG would be a good choice. If you are interested in studying the bodily response to fear in different groups of people, peripheral nervous system measures might be most appropriate. The key to deciding which method is most appropriate is properly defining the question that you are trying to answer. What aspects are most interesting? Do you care about identifying the most relevant brain structures? Temporal dynamics? Bodily responses? Then, it is important to think about the strengths and weaknesses of the different psychophysiological measures and pick one, or several, whose attributes work best for the question at hand. In fact, it is common to record several at once.

Conclusion

The outline of psychophysiological methods above provides a glimpse into the exciting techniques that are available to researchers studying a broad range of topics from clinical to social to cognitive psychology. Some of the most interesting psychophysiological studies use several methods, such as in sleep assessments or multimodal neuroimaging. Psychophysiological methods have applications outside of mainstream psychology in areas where psychological phenomena are central, such as economics, health-related decision making, and brain–computer interfaces. Examples of applications for each method are provided above, but this list is by no means exhaustive. Furthermore, the field is continually evolving, with new methods and new applications being developed. The wide variety of methods and applications provides virtually limitless possibilities for researchers.

Outside Resources

Book: Luck, S. J. (2005). An introduction to the event-related potential technique. Cambridge, MA: MIT Press.
Book: Poldrack, R. A., Mumford, J. A., & Nichols, T. E. (2011). Handbook of functional MRI data analysis. New York: Cambridge University Press.
Web: For a list of additional psychophysiology teaching materials: www.sprweb.org/teaching/index.cfm
Web: For visualizations on MRI physics (requires a free registration): http://www.imaios.com/en/e-Courses/e-MRI/NMR/

Discussion Questions

1. Pick a psychological phenomenon that you would like to know more about. What specific hypothesis would you like to test? What psychophysiological methods might be appropriate for testing this hypothesis and why?
2. What types of questions would require high spatial resolution in measuring brain activity? What types of questions would require high temporal resolution?
3. Take the hypothesis you picked in the first question, and choose what you think would be the best psychophysiological method. What additional information could you obtain using a complementary method? For example, if you want to learn about memory, what two methods could you use that would each provide you with distinct information?
4. The popular press has shown an increasing interest in findings that contain images of brains and neuroscience language. Studies have shown that people often find presentations of results that contain these features more convincing than presentations of results that do not, even if the actual results are the same.
Why would images of the brain and neuroscience language be more convincing to people? Given that results with these features are more convincing, what do you think is the researcher's responsibility in reporting results with brain images and neuroscience language?
5. Many claims in the popular press attempt to reduce complex psychological phenomena to biological events. For example, you may have heard it said that schizophrenia is a brain disorder or that depression is simply a chemical imbalance. However, this type of "reductionism" so far does not appear to be tenable. There has been surprisingly little discussion of possible causal relationships, in either direction, between biological and psychological phenomena. We are aware of no such documented causal mechanisms. Do you think that it will ever be possible to explain how a change in biology can result in a change of a psychological phenomenon, or vice versa?

Vocabulary

Blood-oxygen-level-dependent (BOLD)
The signal typically measured in fMRI that results from changes in the ratio of oxygenated hemoglobin to deoxygenated hemoglobin in the blood.

Central nervous system
The part of the nervous system that consists of the brain and spinal cord.

Deoxygenated hemoglobin
Hemoglobin not carrying oxygen.

Depolarization
A change in a cell's membrane potential, making the inside of the cell more positive and increasing the chance of an action potential.

Hemoglobin
The oxygen-carrying portion of a red blood cell.

Hyperpolarization
A change in a cell's membrane potential, making the inside of the cell more negative and decreasing the chance of an action potential.

Invasive procedure
A procedure that involves the skin being broken or an instrument or chemical being introduced into a body cavity.

Lesions
Abnormalities in the tissue of an organism usually caused by disease or trauma.

Neural plasticity
The ability of synapses and neural pathways to change over time and adapt to changes in neural process, behavior, or environment.

Neuroscience methods
A research method that deals with the structure or function of the nervous system and brain.

Noninvasive procedure
A procedure that does not require the insertion of an instrument or chemical through the skin or into a body cavity.

Oxygenated hemoglobin
Hemoglobin carrying oxygen.

Parasympathetic nervous system (PNS)
One of the two major divisions of the autonomic nervous system, responsible for stimulation of "rest and digest" activities.

Peripheral nervous system
The part of the nervous system that is outside the brain and spinal cord.

Positron
A particle having the same mass and numerically equal but positive charge as an electron.

Psychophysiological methods
Any research method in which the dependent variable is a physiological measure and the independent variable is behavioral or mental (such as memory).

Spatial resolution
The degree to which one can separate a single object in space from another.

Sympathetic nervous system (SNS)
One of the two major divisions of the autonomic nervous system, responsible for stimulation of "fight or flight" activities.

Temporal resolution
The degree to which one can separate a single point in time from another.

Voltage
The difference in electric charge between two points.
By Edward Diener and Robert Biswas-Diener University of Utah, University of Virginia, Portland State University In science, replication is the process of repeating research to determine the extent to which findings generalize across time and across situations. Recently, the science of psychology has come under criticism because a number of research findings do not replicate. In this module we discuss reasons for non-replication, consider the impact this phenomenon has on the field, and suggest solutions to the problem. learning objectives • Define “replication” • Explain the difference between exact and conceptual replication • List 4 explanations for non-replication • Name 3 potential solutions to the replication crisis The Disturbing Problem If you were driving down the road and you saw a pirate standing at an intersection you might not believe your eyes. But if you continued driving and saw a second, and then a third, you might become more confident in your observations. The more pirates you saw, the less likely the first sighting would be a false positive (you were driving fast and the person was just wearing an unusual hat and billowy shirt) and the more likely it would be the result of a logical reason (there is a pirate-themed conference in town). This somewhat absurd example is a real-life illustration of replication: the repeated finding of the same results. The replication of findings is one of the defining hallmarks of science. Scientists must be able to replicate the results of studies or their findings do not become part of scientific knowledge. Replication protects against false positives (seeing a result that is not really there) and also increases confidence that the result actually exists. If you collect satisfaction data among homeless people living in Kolkata, India, for example, it might seem strange that they would report fairly high satisfaction with their food (which is exactly what we found in Biswas-Diener & Diener, 2001). If you find the exact same result, but at a different time, and with a different sample of homeless people living in Kolkata, however, you can feel more confident that this result is true (as we did in Biswas-Diener & Diener, 2006). In modern times, the science of psychology is facing a crisis. It turns out that many studies in psychology—including many highly cited studies—do not replicate. In an era where news is instantaneous, the failure to replicate research raises important questions about the scientific process in general and psychology specifically. People have the right to know if they can trust research evidence. For our part, psychologists also have a vested interest in ensuring that our methods and findings are as trustworthy as possible. Psychology is not alone in coming up short on replication. There have been notable failures to replicate findings in other scientific fields as well. For instance, in 1989 scientists reported that they had produced “cold fusion,” achieving nuclear fusion at room temperatures. This could have been an enormous breakthrough in the advancement of clean energy. However, other scientists were unable to replicate the findings. Thus, the potentially important results did not become part of the scientific canon, and a new energy source did not materialize. In medical science as well, a number of findings have been found not to replicate—which is of vital concern to all of society. The non-reproducibility of medical findings suggests that some treatments for illness could be ineffective.
One example of non-replication has emerged in the study of genetics and diseases: when replications were attempted to determine whether certain gene-disease findings held up, only about 4% of the findings consistently did so. The non-reproducibility of findings is disturbing because it suggests the possibility that the original research was done sloppily. Even worse is the suspicion that the research may have been falsified. In science, faking results is the biggest of sins, the unforgivable sin, and for this reason the field of psychology has been thrown into an uproar. However, as we will discuss, there are a number of explanations for non-replication, and not all are bad. What is Replication? There are different types of replication. First, there is a type called “exact replication” (also called “direct replication”). In this form, a scientist attempts to exactly recreate the scientific methods used in an earlier study to determine whether the results come out the same. If, for instance, you wanted to exactly replicate Asch’s (1956) classic findings on conformity, you would follow the original methodology: you would use only male participants, you would use groups of 8, and you would present the same stimuli (lines of differing lengths) in the same order. The second type of replication is called “conceptual replication.” This occurs when—instead of an exact replication, which reproduces the methods of the earlier study as closely as possible—a scientist tries to confirm the previous findings using a different set of specific methods that test the same idea. The same hypothesis is tested, but using a different set of methods and measures. A conceptual replication of Asch’s research might involve both male and female confederates purposefully misidentifying types of fruit to investigate conformity—rather than only males misidentifying line lengths. Both exact and conceptual replications are important because they each tell us something new. Exact replications tell us whether the original findings are true, at least under the exact conditions tested. Conceptual replications help confirm whether the theoretical idea behind the findings is true, and under what conditions these findings will occur. In other words, conceptual replication offers insights into how generalizable the findings are. Enormity of the Current Crisis Recently, there has been growing concern as psychological research fails to replicate. To give you an idea of the extent of non-replicability of psychology findings, consider the data reported in 2015 by the Open Science Collaboration project, led by University of Virginia psychologist Brian Nosek (Open Science Collaboration, 2015). Because these findings were reported in the prestigious journal Science, they received widespread attention from the media. The project attempted to replicate 100 studies selected from several highly prestigious journals, and only a minority of the replication attempts succeeded. Clearly, there is a very large problem when only about 1/3 of the psychological studies in premier journals replicate! It appears that this problem is particularly pronounced for social psychology, but even the 53% replication level of cognitive psychology is cause for concern. The situation in psychology has grown so worrisome that the Nobel Prize-winning psychologist Daniel Kahneman called on social psychologists to clean up their act (Kahneman, 2012).
The Nobel laureate spoke bluntly of doubts about the integrity of psychology research, calling the current situation in the field a “mess.” His missive was pointed primarily at researchers who study social “priming,” but in light of the non-replication results that have since come out, it might be more aptly directed at the behavioral sciences in general. Examples of Non-replications in Psychology A large number of scientists have attempted to replicate studies on what might be called “metaphorical priming,” and more often than not these replications have failed. Priming is the process by which a recent reference (often a subtle, subconscious cue) can increase the accessibility of a trait. For example, if your instructor says, “Please put aside your books, take out a clean sheet of paper, and write your name at the top,” you might find your pulse quickening. Over time, you have learned that this cue means you are about to be given a pop quiz. This phrase primes all the features associated with pop quizzes: they are anxiety-provoking, they are tricky, your performance matters. One example of a priming study that, at least in some cases, does not replicate is the priming of the idea of intelligence. In theory, it might be possible to prime people to actually become more intelligent (or perform better on tests, at least). For instance, in one study, priming students with the idea of a stereotypical professor versus soccer hooligans led participants in the “professor” condition to earn higher scores on a trivia game (Dijksterhuis & van Knippenberg, 1998). Unfortunately, in several follow-up instances this finding has not replicated (Shanks et al., 2013). This is unfortunate for all of us because it would be a very easy way to raise our test scores and general intelligence. If only it were true. Another example of a finding that seems not to replicate consistently is the use of spatial distance cues to prime people’s feelings of emotional closeness to their families (Williams & Bargh, 2008). In this type of study, participants are asked to plot points on graph paper, either close together or far apart. The participants are then asked to rate how close they are to their family members. Although the original researchers found that people who plotted close-together points on graph paper reported being closer to their relatives, studies reported on PsychFileDrawer—an internet repository of replication attempts—suggest that the findings frequently do not replicate. Again, this is unfortunate because it would be a handy way to help people feel closer to their families. As one can see from the examples, some of the studies that fail to replicate report extremely interesting findings—even counterintuitive findings that appear to offer new insights into the human mind. Critics claim that psychologists have become too enamored with such newsworthy, surprising “discoveries” that receive a lot of media attention. This raises the question of timing: might the current crisis of non-replication be related to the modern, media-hungry context in which psychological research (indeed, all research) is conducted? Put another way: is the non-replication crisis new? Nobody has tried to systematically replicate studies from the past, so we do not know if published studies are becoming less replicable over time. In 1990, however, Amir and Sharon were able to successfully replicate most of the main effects of six studies from another culture, though they did fail to replicate many of the interactions.
Their partial success suggests that some findings depend on cultural context or other moderators; it does not, by itself, tell us whether published studies are becoming less replicable over time. What we can be sure of is that there is a significant problem with replication in psychology, and it is a trend the field needs to correct. Without replicable findings, people cannot be expected to trust the conclusions of scientific psychology. Reasons for Non-replication When findings do not replicate, the original scientists sometimes become indignant and defensive, offering reasons or excuses for non-replication of their findings—including, at times, attacking those attempting the replication. They sometimes claim that the scientists attempting the replication are unskilled or unsophisticated, or do not have sufficient experience to replicate the findings. This, of course, might be true, and it is one possible reason for non-replication. One reason for defensive responses is the unspoken implication that the original results might have been falsified. Faked results are only one reason studies may not replicate, but it is the most disturbing reason. We hope faking is rare, but in the past decade a number of shocking cases have turned up. Perhaps the most well-known come from social psychology. Diederik Stapel, a renowned social psychologist in the Netherlands, admitted to faking the results of a number of studies. Marc Hauser, a popular professor at Harvard, apparently faked results on morality and cognition. Karen Ruggiero at the University of Texas was also found to have falsified a number of her results (proving that bad behavior doesn’t have a gender bias). Each of these psychologists—and there are quite a few more examples—was believed to have faked data. Subsequently, they all were disgraced and lost their jobs. Another reason for non-replication is that, in studies with small sample sizes, statistically significant results may often be the result of chance. For example, if you ask five people if they believe that aliens from other planets visit Earth and regularly abduct humans, you may get three people who agree with this notion—simply by chance. Their answers may, in fact, not be at all representative of the larger population. On the other hand, if you survey one thousand people, there is a higher probability that their belief in alien abductions reflects the actual attitudes of society. Now consider this scenario in the context of replication: if you try to replicate the first study—the one in which you interviewed only five people—there is only a small chance that you will randomly draw five new people with exactly the same (or similar) attitudes. It’s far more likely that you will be able to replicate the findings using another large sample, because it is simply more likely that the findings are accurate. The short simulation below illustrates how strongly sample size affects this kind of chance variation.
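To see how much small-sample estimates bounce around purely by chance, here is a minimal simulation sketch in Python (a language this module does not otherwise use); the 60% population agreement rate and the two survey sizes are illustrative assumptions, not data from any actual survey.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

TRUE_RATE = 0.60  # assumed (hypothetical) rate of agreement in the population

def survey(n):
    """Simulate asking n randomly chosen people a yes/no question."""
    return sum(random.random() < TRUE_RATE for _ in range(n)) / n

# Rerun each survey many times and see how widely the estimates vary.
for n in (5, 1000):
    estimates = [survey(n) for _ in range(10_000)]
    print(f"n = {n:4d}: estimates ranged from {min(estimates):.2f} "
          f"to {max(estimates):.2f}")
```

With five respondents, the estimated rate swings between roughly 0% and 100% across repeated samples, so an original study and its attempted replication can easily disagree by chance alone; with a thousand respondents, the estimates cluster tightly around the true value.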
Another reason for non-replication is that, while the findings in an original study may be true, they may only be true for some people in some circumstances and not necessarily universal or enduring. Imagine that a survey in the 1950s found a strong majority of respondents to have trust in government officials. Now imagine the same survey administered today, with vastly different results. This example of non-replication does not invalidate the original results. Rather, it suggests that attitudes have shifted over time. A final reason for non-replication relates to the quality of the replication rather than the quality of the original study. Non-replication might be the product of scientist error, with the newer investigation not following the original procedures closely enough. Similarly, the attempted replication study might, itself, have too small a sample size or insufficient statistical power to find significant results. In Defense of Replication Attempts Failures in replication are not all bad and, in fact, some non-replication should be expected in science. Original studies are conducted when an answer to a question is uncertain. That is to say, scientists are venturing into new territory. In such cases we should expect some answers to be uncovered that will not pan out in the long run. Furthermore, we hope that scientists take on challenging new topics that come with some amount of risk. After all, if scientists were only to publish safe results that were easy to replicate, we might have very boring studies that do not advance our knowledge very quickly. But, with such risks, some non-replication of results is to be expected. A recent example of risk-taking can be seen in the research of social psychologist Daryl Bem. In 2011, Bem published an article claiming he had found in a number of studies that future events could influence the past. His proposition turns the nature of time, which is assumed by virtually everyone except science fiction writers to run in one direction, on its head. Needless to say, attacks on Bem’s article came fast and furious, including attacks on his statistics and methodology (Ritchie, Wiseman & French, 2012). There were attempts at replication and most of them failed, but not all. A year after Bem’s article came out, the prestigious journal where it was published, the Journal of Personality and Social Psychology, printed another paper in which a scientist failed to replicate Bem’s findings in a number of studies very similar to the originals (Galak, LeBoeuf, Nelson & Simmons, 2012). Some people viewed the publication of Bem’s (2011) original study as a failure in the system of science. They argued that the paper should not have been published. But the editor and reviewers of the article had moved forward with publication because, although they might have thought the findings provocative and unlikely, they did not see obvious flaws in the methodology. We see the publication of the Bem paper, and the ensuing debate, as a strength of science. We are willing to consider unusual ideas if there is evidence to support them: we are open-minded. At the same time, we are critical and believe in replication. Scientists should be willing to consider unusual or risky hypotheses but ultimately allow good evidence to have the final say, not people’s opinions. Solutions to the Problem Dissemination of Replication Attempts • Psychfiledrawer.org: Archives attempted replications of specific studies and whether replication was achieved. • Center for Open Science: Psychologist Brian Nosek, a champion of replication in psychology, has created the Open Science Framework, where replications can be reported. • Association for Psychological Science: Has registered replications of studies, with the overall results published in Perspectives on Psychological Science. • Plos One: Public Library of Science—publishes a broad range of articles, including failed replications, and there are occasional summaries of replication attempts in specific areas.
• The Replication Index: Created in 2014 by Ulrich Schimmack, the so-called “R Index” is a statistical tool for estimating the replicability of studies, of journals, and even of specific researchers. Schimmack describes it as a “doping test.” The fact that replications, including failed replication attempts, now have outlets where they can be communicated to other researchers is a very encouraging development, and should strengthen the science considerably. One problem for many decades has been the near-impossibility of publishing replication attempts, regardless of whether they’ve been positive or negative. More Systematic Programs of Scientific Research The reward structure in academia has served to discourage replication. Many psychologists—especially those who work full time at universities—are rewarded at work (with promotions, pay raises, tenure, and prestige) primarily through their research. Replications of one’s own earlier work, or the work of others, are typically discouraged because they do not represent original thinking. Instead, academics are rewarded for high numbers of publications, and flashy studies are often given prominence in media reports of published studies. Psychological scientists need to carefully pursue programmatic research. Findings from a single study are rarely adequate, and should be followed up by additional studies using varying methodologies. Thinking about research this way—as if it were a program rather than a single study—can help. We would recommend that laboratories conduct careful sets of interlocking studies, where important findings are followed up using various methods. It is not sufficient to find some surprising outcome, report it, and then move on. When findings are important enough to be published, they are often important enough to prompt further, more conclusive research. In this way scientists will discover whether their findings are replicable, and how broadly generalizable they are. If the findings do not always replicate, but do sometimes, we will learn the conditions in which the pattern does or doesn’t hold. This is an important part of science—to discover how generalizable the findings are. When researchers criticize others for being unable to replicate the original findings, saying that the conditions in the follow-up study were changed, this is important to pay attention to as well. Not all criticism is knee-jerk defensiveness or resentment. The replication crisis has stirred heated emotions among research psychologists and the public, but it is time for us to calm down and return to a more scientific attitude and system of programmatic research. Textbooks and Journals Some psychologists blame the trend toward non-replication on specific journal policies, such as the policy of Psychological Science to publish short single studies. When single studies are published, we do not know whether even the authors themselves can replicate their findings. The journal Psychological Science has come under perhaps the harshest criticism. Others blame the rash of nonreplicable studies on a tendency in some fields to favor surprising and counterintuitive findings that grab the public interest. The irony here is that such counterintuitive findings are in fact less likely to be true precisely because they are so strange—so they should perhaps warrant more scrutiny and further analysis. The criticism of journals extends to textbooks as well.
In our opinion, psychology textbooks should stress true science, based on findings that have been demonstrated to be replicable. There are a number of inaccuracies that persist across common psychology textbooks, including small mistakes in common coverage of the most famous studies, such as the Stanford Prison Experiment (Griggs & Whitehead, 2014) and the Milgram studies (Griggs & Whitehead, 2015). To some extent, the inclusion of non-replicated studies in textbooks is the product of market forces. Textbook publishers are under pressure to release new editions of their books, often far more frequently than advances in psychological science truly justify. As a result, there is pressure to include “sexier” topics such as controversial studies. Ultimately, people also need to learn to be intelligent consumers of science. Instead of getting overly-excited by findings from a single study, it’s wise to wait for replications. When a corpus of studies is built on a phenomenon, we can begin to trust the findings. Journalists must be educated about this too, and learn not to readily broadcast and promote findings from single flashy studies. If the results of a study seem too good to be true, maybe they are. Everyone needs to take a more skeptical view of scientific findings, until they have been replicated. Outside Resources Article: New Yorker article on the "replication crisis" http://www.newyorker.com/tech/elemen...logy-that-isnt Web: Collaborative Replications and Education Project - This is a replication project where students are encouraged to conduct replications as part of their courses. https://osf.io/wfc6u/ Web: Commentary on what makes for a convincing replication. http://papers.ssrn.com/sol3/papers.c...act_id=2283856 Web: Open Science Framework - The Open Science Framework is an open source software project that facilitates open collaboration in science research. https://osf.io/ Web: Psych File Drawer - A website created to address “the file drawer problem”. PsychFileDrawer.org allows users to upload results of serious replication attempts in all research areas of psychology. http://psychfiledrawer.org/ Discussion Questions 1. Why do scientists see replication by other laboratories as being so crucial to advances in science? 2. Do the failures of replication shake your faith in what you have learned about psychology? Why or why not? 3. Can you think of any psychological findings that you think might not replicate? 4. What findings are so important that you think they should be replicated? 5. Why do you think quite a few studies do not replicate? 6. How frequently do you think faking results occurs? Why? How might we prevent that? Vocabulary Conceptual Replication A scientific attempt to copy the scientific hypothesis used in an earlier study in an effort to determine whether the results will generalize to different samples, times, or situations. The same—or similar—results are an indication that the findings are generalizable. Confederate An actor working with the researcher. Most often, this individual is used to deceive unsuspecting research participants. Also known as a “stooge.” Exact Replication (also called Direct Replication) A scientific attempt to exactly copy the scientific methods used in an earlier study in an effort to determine whether the results are consistent. The same—or similar—results are an indication that the findings are accurate. 
Falsified data (faked data) Data that are fabricated, or made up, by researchers intentionally trying to pass off research results that are inaccurate. This is a serious ethical breach and can even be a criminal offense. Priming The process by which exposing people to one stimulus makes certain thoughts, feelings or behaviors more salient. Sample Size The number of participants in a study. Sample size is important because it can influence the confidence scientists have in the accuracy and generalizability of their results.
• 3.1: Personality Assessment This module provides a basic overview to the assessment of personality. It discusses objective personality tests (based on both self-report and informant ratings), projective and implicit tests, and behavioral/performance measures. It describes the basic features of each method, as well as reviewing the strengths, weaknesses, and overall validity of each approach. • 3.2: Personality Traits Personality traits reflect people’s characteristic patterns of thoughts, feelings, and behaviors. Personality traits imply consistency and stability—someone who scores high on a specific trait like Extraversion is expected to be sociable in different situations and over time. Thus, trait psychology rests on the idea that people differ from one another in terms of where they stand on a set of basic trait dimensions that persist over time and across situations. • 3.3: Creativity Psychologists who investigate creativity most often adopt one of three perspectives. First, they can ask how creators think, and thus focus on the cognitive processes behind creativity. Second, they can ask who is creative, and hence investigate the personal characteristics of highly creative people. Third, they can ask about the social context, and, thereby, examine the environments that influence creativity. • 3.5: Self and Identity Psychologists have approached the study of self in many different ways, but three central metaphors for the self repeatedly emerge. First, the self may be seen as a social actor, who enacts roles and displays traits by performing behaviors in the presence of others. Second, the self is a motivated agent, who acts upon inner desires and formulates goals, values, and plans to guide behavior in the future. Third, the self eventually becomes an autobiographical author. • 3.6: Self-Regulation and Conscientiousness Self-regulation means changing oneself based on standards, that is, ideas of how one should or should not be. It is a centrally important capacity that contributes to socially desirable behavior, including moral behavior. Effective self-regulation requires knowledge of standards for proper behavior, careful monitoring of one’s actions and feelings, and the ability to make desired changes. • 3.7: Intellectual Abilities, Interests, and Mastery Psychologists interested in the study of human individuality have found that accomplishments in education, the world of work, and creativity are a joint function of talent, passion, and commitment — or how much effort and time one is willing to invest in personal development when the opportunity is provided. This module reviews models and measures that psychologists have designed to assess intellect, interests, and energy for personal development. • 3.8: Self-Efficacy The term “self-efficacy” refers to your beliefs about your ability to effectively perform the tasks needed to attain a valued goal. Self-efficacy does not refer to your abilities but to how strongly you believe you can use your abilities to work toward goals. Self-efficacy is not a unitary construct or trait; rather, people have self-efficacy beliefs in different domains, such as academic self-efficacy, problem-solving self-efficacy, and self-regulatory self-efficacy. • 3.9: The Psychodynamic Perspective Originating in the work of Sigmund Freud, the psychodynamic perspective emphasizes unconscious psychological processes (for example, wishes and fears of which we’re not fully aware), and contends that childhood experiences are crucial in shaping adult personality. 
The psychodynamic perspective has evolved considerably since Freud’s time, and now includes innovative new approaches such as object relations theory and neuropsychoanalysis. • 3.10: Personality Stability and Change This module describes different ways to address questions about personality stability across the lifespan. Definitions of the major types of personality stability are provided, and evidence concerning the different kinds of stability and change is reviewed. The mechanisms thought to produce personality stability and personality change are identified and explained. Chapter 3: Personality By David Watson University of Notre Dame This module provides a basic overview to the assessment of personality. It discusses objective personality tests (based on both self-report and informant ratings), projective and implicit tests, and behavioral/performance measures. It describes the basic features of each method, as well as reviewing the strengths, weaknesses, and overall validity of each approach. learning objectives • Appreciate the diversity of methods that are used to measure personality characteristics. • Understand the logic, strengths and weaknesses of each approach. • Gain a better sense of the overall validity and range of applications of personality tests. Introduction Personality is the field within psychology that studies the thoughts, feelings, behaviors, goals, and interests of normal individuals. It therefore covers a very wide range of important psychological characteristics. Moreover, different theoretical models have generated very different strategies for measuring these characteristics. For example, humanistically oriented models argue that people have clear, well-defined goals and are actively striving to achieve them (McGregor, McAdams, & Little, 2006). It, therefore, makes sense to ask them directly about themselves and their goals. In contrast, psychodynamically oriented theories propose that people lack insight into their feelings and motives, such that their behavior is influenced by processes that operate outside of their awareness (e.g., McClelland, Koestner, & Weinberger, 1989; Meyer & Kurtz, 2006). Given that people are unaware of these processes, it does not make sense to ask directly about them. One, therefore, needs to adopt an entirely different approach to identify these nonconscious factors. Not surprisingly, researchers have adopted a wide range of approaches to measure important personality characteristics. The most widely used strategies will be summarized in the following sections. Objective Tests Definition Objective tests (Loevinger, 1957; Meyer & Kurtz, 2006) represent the most familiar and widely used approach to assessing personality. Objective tests involve administering a standard set of items, each of which is answered using a limited set of response options (e.g., true or false; strongly disagree, slightly disagree, slightly agree, strongly agree). Responses to these items then are scored in a standardized, predetermined way. For example, self-ratings on items assessing talkativeness, assertiveness, sociability, adventurousness, and energy can be summed up to create an overall score on the personality trait of extraversion, as the brief sketch below illustrates.
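Here is a minimal sketch of this kind of predetermined scoring, written in Python; the items, the 1–5 response format, and the reverse-keyed final item are hypothetical illustrations rather than an actual published inventory.

```python
# Hypothetical five-item extraversion scale, answered on a 1-5 scale
# (1 = strongly disagree ... 5 = strongly agree).
ITEMS = [
    ("I am talkative.",                False),
    ("I speak up for my opinions.",    False),
    ("I enjoy being around people.",   False),
    ("I seek out new adventures.",     False),
    ("I prefer quiet evenings alone.", True),  # reverse-keyed item
]
SCALE_MAX = 5

def extraversion_score(responses):
    """Sum the responses with a fixed key; reverse-keyed items are flipped
    so that higher totals always indicate higher extraversion."""
    total = 0
    for (_, reverse_keyed), response in zip(ITEMS, responses):
        total += (SCALE_MAX + 1 - response) if reverse_keyed else response
    return total

print(extraversion_score([4, 5, 4, 3, 2]))  # -> 20 out of a possible 25
```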
It must be emphasized that the term “objective” refers to the method that is used to score a person’s responses, rather than to the responses themselves. As noted by Meyer and Kurtz (2006, p. 233), “What is objective about such a procedure is that the psychologist administering the test does not need to rely on judgment to classify or interpret the test-taker’s response; the intended response is clearly indicated and scored according to a pre-existing key.” In fact, as we will see, a person’s test responses may be highly subjective and can be influenced by a number of different rating biases. Basic Types of Objective Tests Self-report measures Objective personality tests can be further subdivided into two basic types. The first type—which easily is the most widely used in modern personality research—asks people to describe themselves. This approach offers two key advantages. First, self-raters have access to an unparalleled wealth of information: After all, who knows more about you than you yourself? In particular, self-raters have direct access to their own thoughts, feelings, and motives, which may not be readily available to others (Oh, Wang, & Mount, 2011; Watson, Hubbard, & Wiese, 2000). Second, asking people to describe themselves is the simplest, easiest, and most cost-effective approach to assessing personality. Countless studies, for instance, have involved administering self-report measures to college students, who are provided some relatively simple incentive (e.g., extra course credit) to participate. The items included in self-report measures may consist of single words (e.g., assertive), short phrases (e.g., am full of energy), or complete sentences (e.g., I like to spend time with others). Table 1 presents a sample self-report measure assessing the general traits comprising the influential five-factor model (FFM) of personality: neuroticism, extraversion, openness, agreeableness, and conscientiousness (John & Srivastava, 1999; McCrae, Costa, & Martin, 2005). The sentences shown in Table 1 are modified versions of items included in the International Personality Item Pool (IPIP) (Goldberg et al., 2006), which is a rich source of personality-related content in the public domain (for more information about IPIP, go to: http://ipip.ori.org/). Self-report personality tests show impressive validity in relation to a wide range of important outcomes. For example, self-ratings of conscientiousness are significant predictors of both overall academic performance (e.g., cumulative grade point average; Poropat, 2009) and job performance (Oh, Wang, & Mount, 2011). Roberts, Kuncel, Shiner, Caspi, and Goldberg (2007) reported that self-rated personality predicted occupational attainment, divorce, and mortality. Similarly, Friedman, Kern, and Reynolds (2010) showed that personality ratings collected early in life were related to happiness/well-being, physical health, and mortality risk assessed several decades later. Finally, self-reported personality has important and pervasive links to psychopathology. Most notably, self-ratings of neuroticism are associated with a wide array of clinical syndromes, including anxiety disorders, depressive disorders, substance use disorders, somatoform disorders, eating disorders, personality and conduct disorders, and schizophrenia/schizotypy (Kotov, Gamez, Schmidt, & Watson, 2010; Mineka, Watson, & Clark, 1998). At the same time, however, it is clear that this method is limited in a number of ways. First, raters may be motivated to present themselves in an overly favorable, socially desirable way (Paunonen & LeBel, 2012).
This is a particular concern in “high-stakes testing,” that is, situations in which test scores are used to make important decisions about individuals (e.g., when applying for a job). Second, personality ratings reflect a self-enhancement bias (Vazire & Carlson, 2011); in other words, people are motivated to ignore (or at least downplay) some of their less desirable characteristics and to focus instead on their more positive attributes. Third, self-ratings are subject to the reference group effect (Heine, Buchtel, & Norenzayan, 2008); that is, we base our self-perceptions, in part, on how we compare to others in our sociocultural reference group. For instance, if you tend to work harder than most of your friends, you will see yourself as someone who is relatively conscientious, even if you are not particularly conscientious in any absolute sense. Informant ratings Another approach is to ask someone who knows a person well to describe his or her personality characteristics. In the case of children or adolescents, the informant is most likely to be a parent or teacher. In studies of older participants, informants may be friends, roommates, dating partners, spouses, children, or bosses (Oh et al., 2011; Vazire & Carlson, 2011; Watson et al., 2000). Generally speaking, informant ratings are similar in format to self-ratings. As was the case with self-report, items may consist of single words, short phrases, or complete sentences. Indeed, many popular instruments include parallel self- and informant-rating versions, and it often is relatively easy to convert a self-report measure so that it can be used to obtain informant ratings. Table 2 illustrates how the self-report instrument shown in Table 1 can be converted to obtain spouse-ratings (in this case, having a husband describe the personality characteristics of his wife). Informant ratings are particularly valuable when self-ratings are impossible to collect (e.g., when studying young children or cognitively impaired adults) or when their validity is suspect (e.g., as noted earlier, people may not be entirely honest in high-stakes testing situations). They also may be combined with self-ratings of the same characteristics to produce more reliable and valid measures of these attributes (McCrae, 1994). Informant ratings offer several advantages in comparison to other approaches to assessing personality. A well-acquainted informant presumably has had the opportunity to observe large samples of behavior in the person he or she is rating. Moreover, these judgments presumably are not subject to the types of defensiveness that potentially can distort self-ratings (Vazire & Carlson, 2011). Indeed, informants typically have strong incentives for being accurate in their judgments. As Funder and Dobroth (1987, p. 409), put it, “Evaluations of the people in our social environment are central to our decisions about who to befriend and avoid, trust and distrust, hire and fire, and so on.” Informant personality ratings have demonstrated a level of validity in relation to important life outcomes that is comparable to that discussed earlier for self-ratings. Indeed, they outperform self-ratings in certain circumstances, particularly when the assessed traits are highly evaluative in nature (e.g., intelligence, charm, creativity; see Vazire & Carlson, 2011). For example, Oh et al. (2011) found that informant ratings were more strongly related to job performance than were self-ratings. 
Similarly, Oltmanns and Turkheimer (2009) summarized evidence indicating that informant ratings of Air Force cadets predicted early, involuntary discharge from the military better than self-ratings. Nevertheless, informant ratings also are subject to certain problems and limitations. One general issue is the level of relevant information that is available to the rater (Funder, 2012). For instance, even under the best of circumstances, informants lack full access to the thoughts, feelings, and motives of the person they are rating. This problem is magnified when the informant does not know the person particularly well and/or only sees him or her in a limited range of situations (Funder, 2012; Beer & Watson, 2010). Informant ratings also are subject to some of the same response biases noted earlier for self-ratings. For instance, they are not immune to the reference group effect. Indeed, it is well-established that parent ratings often are subject to a sibling contrast effect, such that parents exaggerate the true magnitude of differences between their children (Pinto, Rijsdijk, Frazier-Wood, Asherson, & Kuntsi, 2012). Furthermore, in many studies, individuals are allowed to nominate (or even recruit) the informants who will rate them. Because of this, it most often is the case that informants (who, as noted earlier, may be friends, relatives, or romantic partners) like the people they are rating. This, in turn, means that informants may produce overly favorable personality ratings. Indeed, their ratings actually can be more favorable than the corresponding self-ratings (Watson & Humrichouse, 2006). This tendency for informants to produce unrealistically positive ratings has been termed the letter of recommendation effect (Leising, Erbs, & Fritz, 2010) and the honeymoon effect when applied to newlyweds (Watson & Humrichouse, 2006). Other Ways of Classifying Objective Tests Comprehensiveness In addition to the source of the scores, there are at least two other important dimensions on which personality tests differ. The first such dimension concerns the extent to which an instrument seeks to assess personality in a reasonably comprehensive manner. At one extreme, many widely used measures are designed to assess a single core attribute. Examples of these types of measures include the Toronto Alexithymia Scale (Bagby, Parker, & Taylor, 1994), the Rosenberg Self-Esteem Scale (Rosenberg, 1965), and the Multidimensional Experiential Avoidance Questionnaire (Gamez, Chmielewski, Kotov, Ruggero, & Watson, 2011). At the other extreme, a number of omnibus inventories contain a large number of specific scales and purport to measure personality in a reasonably comprehensive manner. These instruments include the California Psychological Inventory (Gough, 1987), the Revised HEXACO Personality Inventory (HEXACO-PI-R) (Lee & Ashton, 2006), the Multidimensional Personality Questionnaire (Patrick, Curtin, & Tellegen, 2002), the NEO Personality Inventory-3 (NEO-PI-3) (McCrae et al., 2005), the Personality Research Form (Jackson, 1984), and the Sixteen Personality Factor Questionnaire (Cattell, Eber, & Tatsuoka, 1980). Breadth of the target characteristics Second, personality characteristics can be classified at different levels of breadth or generality. For example, many models emphasize broad, “big” traits such as neuroticism and extraversion. These general dimensions can be divided up into several distinct yet empirically correlated component traits. 
For example, the broad dimension of extraversion contains such specific component traits as dominance (extraverts are assertive, persuasive, and exhibitionistic), sociability (extraverts seek out and enjoy the company of others), positive emotionality (extraverts are active, energetic, cheerful, and enthusiastic), and adventurousness (extraverts enjoy intense, exciting experiences). Some popular personality instruments are designed to assess only the broad, general traits. For example, similar to the sample instrument displayed in Table 1, the Big Five Inventory (John & Srivastava, 1999) contains brief scales assessing the broad traits of neuroticism, extraversion, openness, agreeableness, and conscientiousness. In contrast, many instruments—including several of the omnibus inventories mentioned earlier—were designed primarily to assess a large number of more specific characteristics. Finally, some inventories—including the HEXACO-PI-R and the NEO-PI-3—were explicitly designed to provide coverage of both general and specific trait characteristics. For instance, the NEO-PI-3 contains six specific facet scales (e.g., Gregariousness, Assertiveness, Positive Emotions, Excitement Seeking) that then can be combined to assess the broad trait of extraversion. Projective and Implicit Tests Projective Tests As noted earlier, some approaches to personality assessment are based on the belief that important thoughts, feelings, and motives operate outside of conscious awareness. Projective tests represent influential early examples of this approach. Projective tests originally were based on the projective hypothesis (Frank, 1939; Lilienfeld, Wood, & Garb, 2000): If a person is asked to describe or interpret ambiguous stimuli—that is, things that can be understood in a number of different ways—their responses will be influenced by nonconscious needs, feelings, and experiences (note, however, that the theoretical rationale underlying these measures has evolved over time) (see, for example, Spangler, 1992). Two prominent examples of projective tests are the Rorschach Inkblot Test (Rorschach, 1921) and the Thematic Apperception Test (TAT) (Morgan & Murray, 1935). The former asks respondents to interpret symmetrical blots of ink, whereas the latter asks them to generate stories about a series of pictures. For instance, one TAT picture depicts an elderly woman with her back turned to a young man; the latter looks downward with a somewhat perplexed expression. Another picture displays a man clutched from behind by three mysterious hands. What stories could you generate in response to these pictures? In comparison to objective tests, projective tests tend to be somewhat cumbersome and labor intensive to administer. The biggest challenge, however, has been to develop a reliable and valid scheme to score the extensive set of responses generated by each respondent. The most widely used Rorschach scoring scheme is the Comprehensive System developed by Exner (2003). The most influential TAT scoring system was developed by McClelland, Atkinson and colleagues between 1947 and 1953 (McClelland et al., 1989; see also Winter, 1998), which can be used to assess motives such as the need for achievement. The validity of the Rorschach has been a matter of considerable controversy (Lilienfeld et al., 2000; Mihura, Meyer, Dumitrascu, & Bombel, 2012; Society for Personality Assessment, 2005). Most reviews acknowledge that Rorschach scores do show some ability to predict important outcomes. 
Its critics, however, argue that it fails to provide important incremental information beyond other, more easily acquired information, such as that obtained from standard self-report measures (Lilienfeld et al., 2000). Validity evidence is more impressive for the TAT. In particular, reviews have concluded that TAT-based measures of the need for achievement (a) show significant validity to predict important criteria and (b) provide important information beyond that obtained from objective measures of this motive (McClelland et al., 1989; Spangler, 1992). Furthermore, given the relatively weak associations between objective and projective measures of motives, McClelland et al. (1989) argue that they tap somewhat different processes, with the latter assessing implicit motives (Schultheiss, 2008). Implicit Tests In recent years, researchers have begun to use implicit measures of personality (Back, Schmukle, & Egloff, 2009; Vazire & Carlson, 2011). These tests are based on the assumption that people form automatic or implicit associations between certain concepts based on their previous experience and behavior. If two concepts (e.g., me and assertive) are strongly associated with each other, then they should be sorted together more quickly and easily than two concepts (e.g., me and shy) that are less strongly associated. Although validity evidence for these measures still is relatively sparse, the results to date are encouraging: Back et al. (2009), for example, showed that implicit measures of the FFM personality traits predicted behavior even after controlling for scores on objective measures of these same characteristics.
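The sorting-speed logic behind such tests can be sketched as follows (in Python; the reaction times are invented for illustration, and operational implicit tests use more elaborate scoring algorithms than this simple standardized difference).

```python
import statistics

# Invented reaction times (in milliseconds) from two sorting blocks of a
# hypothetical implicit association task.
rt_me_assertive = [520, 515, 540, 498, 530, 510]  # "me" paired with "assertive"
rt_me_shy       = [640, 610, 655, 600, 625, 615]  # "me" paired with "shy"

# Faster sorting when "me" is paired with "assertive" suggests a stronger
# implicit association with assertiveness. One rough way to express the
# effect is to standardize the mean difference by the overall variability.
diff = statistics.mean(rt_me_shy) - statistics.mean(rt_me_assertive)
overall_sd = statistics.stdev(rt_me_assertive + rt_me_shy)
print(f"Mean difference: {diff:.0f} ms; "
      f"standardized score: {diff / overall_sd:.2f}")
```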
Behavioral and Performance Measures A final approach is to infer important personality characteristics from direct samples of behavior. For example, Funder and Colvin (1988) brought opposite-sex pairs of participants into the laboratory and had them engage in a five-minute “getting acquainted” conversation; raters watched videotapes of these interactions and then scored the participants on various personality characteristics. Mehl, Gosling, and Pennebaker (2006) used the electronically activated recorder (EAR) to obtain samples of ambient sounds in participants’ natural environments over a period of two days; EAR-based scores then were related to self- and observer-rated measures of personality. For instance, more frequent talking over this two-day period was significantly related to both self- and observer-ratings of extraversion. As a final example, Gosling, Ko, Mannarelli, and Morris (2002) sent observers into college students’ bedrooms and then had them rate the students’ personality characteristics on the Big Five traits. The averaged observer ratings correlated significantly with participants’ self-ratings on all five traits. Follow-up analyses indicated that conscientious students had neater rooms, whereas those who were high in openness to experience had a wider variety of books and magazines. Behavioral measures offer several advantages over other approaches to assessing personality. First, because behavior is sampled directly, this approach is not subject to the types of response biases (e.g., self-enhancement bias, reference group effect) that can distort scores on objective tests. Second, as is illustrated by the Mehl et al. (2006) and Gosling et al. (2002) studies, this approach allows people to be studied in their daily lives and in their natural environments, thereby avoiding the artificiality of other methods (Mehl et al., 2006). Finally, this is the only approach that actually assesses what people do, as opposed to what they think or feel (see Baumeister, Vohs, & Funder, 2007). At the same time, however, this approach also has some disadvantages. This assessment strategy clearly is much more cumbersome and labor intensive than using objective tests, particularly self-report. Moreover, similar to projective tests, behavioral measures generate a rich set of data that then need to be scored in a reliable and valid way. Finally, even the most ambitious study only obtains relatively small samples of behavior that may provide a somewhat distorted view of a person’s true characteristics. For example, your behavior during a “getting acquainted” conversation on a single given day inevitably will reflect a number of transient influences (e.g., level of stress, quality of sleep the previous night) that are idiosyncratic to that day. Conclusion No single method of assessing personality is perfect or infallible; each of the major methods has both strengths and limitations. By using a diversity of approaches, researchers can overcome the limitations of any single method and develop a more complete and integrative view of personality. Discussion Questions 1. Under what conditions would you expect self-ratings to be most similar to informant ratings? When would you expect these two sets of ratings to be most different from each other? 2. The findings of Gosling et al. (2002) demonstrate that we can obtain important clues about students’ personalities from their dorm rooms. What other aspects of people’s lives might give us important information about their personalities? 3. Suppose that you were planning to conduct a study examining the personality trait of honesty. What method or methods might you use to measure it? Vocabulary Big Five Five broad, general traits that are included in many prominent models of personality. The five traits are neuroticism (those high on this trait are prone to feeling sad, worried, anxious, and dissatisfied with themselves), extraversion (high scorers are friendly, assertive, outgoing, cheerful, and energetic), openness to experience (those high on this trait are tolerant, intellectually curious, imaginative, and artistic), agreeableness (high scorers are polite, considerate, cooperative, honest, and trusting), and conscientiousness (those high on this trait are responsible, cautious, organized, disciplined, and achievement-oriented). High-stakes testing Settings in which test scores are used to make important decisions about individuals. For example, test scores may be used to determine which individuals are admitted into a college or graduate school, or who should be hired for a job. Tests also are used in forensic settings to help determine whether a person is competent to stand trial or fits the legal definition of sanity. Honeymoon effect The tendency for newly married individuals to rate their spouses in an unrealistically positive manner. This represents a specific manifestation of the letter of recommendation effect when applied to ratings made by current romantic partners. Moreover, it illustrates the very important role played by relationship satisfaction in ratings made by romantic partners: As marital satisfaction declines (i.e., when the “honeymoon is over”), this effect disappears. Implicit motives These are goals that are important to a person, but that he/she cannot consciously express.
Because the individual cannot verbalize these goals directly, they cannot be easily assessed via self-report. However, they can be measured using projective devices such as the Thematic Apperception Test (TAT). Letter of recommendation effect The general tendency for informants in personality studies to rate others in an unrealistically positive manner. This tendency is due to a pervasive bias in personality assessment: In the large majority of published studies, informants are individuals who like the person they are rating (e.g., they often are friends or family members) and, therefore, are motivated to depict them in a socially desirable way. The term reflects a similar tendency for academic letters of recommendation to be overly positive and to present the referent in an unrealistically desirable manner. Projective hypothesis The theory that when people are confronted with ambiguous stimuli (that is, stimuli that can be interpreted in more than one way), their responses will be influenced by their unconscious thoughts, needs, wishes, and impulses. This, in turn, is based on the Freudian notion of projection, which is the idea that people attribute their own undesirable/unacceptable characteristics to other people or objects. Reference group effect The tendency of people to base their self-concept on comparisons with others. For example, if your friends tend to be very smart and successful, you may come to see yourself as less intelligent and successful than you actually are. Informants also are prone to these types of effects. For instance, the sibling contrast effect refers to the tendency of parents to exaggerate the true extent of differences between their children. Reliability The consistency of test scores across repeated assessments. For example, test-retest reliability examines the extent to which scores change over time. Self-enhancement bias The tendency for people to see and/or present themselves in an overly favorable way. This tendency can take two basic forms: defensiveness (when individuals actually believe they are better than they really are) and impression management (when people intentionally distort their responses to try to convince others that they are better than they really are). Informants also can show enhancement biases. The general form of this bias has been called the letter-of-recommendation effect, which is the tendency of informants who like the person they are rating (e.g., friends, relatives, romantic partners) to describe them in an overly favorable way. In the case of newlyweds, this tendency has been termed the honeymoon effect. Sibling contrast effect The tendency of parents to use their perceptions of all of their children as a frame of reference for rating the characteristics of each of them. For example, suppose that a mother has three children; two of these children are very sociable and outgoing, whereas the third is relatively average in sociability. Because of the operation of this effect, the mother will rate this third child as less sociable and outgoing than he/she actually is. More generally, this effect causes parents to exaggerate the true extent of differences between their children. This effect represents a specific manifestation of the more general reference group effect when applied to ratings made by parents. Validity Evidence related to the interpretation and use of test scores. A particularly important type of evidence is criterion validity, which involves the ability of a test to predict theoretically relevant outcomes.
For example, a presumed measure of conscientiousness should be related to academic achievement (such as overall grade point average).
By Edward Diener and Richard E. Lucas University of Utah, University of Virginia, Michigan State University Personality traits reflect people’s characteristic patterns of thoughts, feelings, and behaviors. Personality traits imply consistency and stability—someone who scores high on a specific trait like Extraversion is expected to be sociable in different situations and over time. Thus, trait psychology rests on the idea that people differ from one another in terms of where they stand on a set of basic trait dimensions that persist over time and across situations. The most widely used system of traits is called the Five-Factor Model. This system includes five broad traits that can be remembered with the acronym OCEAN: Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. Each of the major traits from the Big Five can be divided into facets to give a more fine-grained analysis of someone's personality. In addition, some trait theorists argue that there are other traits that cannot be completely captured by the Five-Factor Model. Critics of the trait concept argue that people do not act consistently from one situation to the next and that people are very influenced by situational forces. Thus, one major debate in the field concerns the relative power of people’s traits versus the situations in which they find themselves as predictors of their behavior. learning objectives • List and describe the “Big Five” (“OCEAN”) personality traits that comprise the Five-Factor Model of personality. • Describe how the facet approach extends broad personality traits. • Explain a critique of the personality-trait concept. • Describe in what ways personality traits may be manifested in everyday behavior. • Describe each of the Big Five personality traits, and the low and high end of the dimension. • Give examples of each of the Big Five personality traits, including both a low and high example. • Describe how traits and social learning combine to predict your social activities. • Describe your theory of how personality traits get refined by social learning. Introduction When we observe people around us, one of the first things that strikes us is how different people are from one another. Some people are very talkative while others are very quiet. Some are active whereas others are couch potatoes. Some worry a lot, others almost never seem anxious. Each time we use one of these words, words like “talkative,” “quiet,” “active,” or “anxious,” to describe those around us, we are talking about a person’s personality—the characteristic ways that people differ from one another. Personality psychologists try to describe and understand these differences. Although there are many ways to think about the personalities that people have, Gordon Allport and other “personologists” claimed that we can best understand the differences between individuals by understanding their personality traits. Personality traits reflect basic dimensions on which people differ (Matthews, Deary, & Whiteman, 2003). According to trait psychologists, there are a limited number of these dimensions (dimensions like Extraversion, Conscientiousness, or Agreeableness), and each individual falls somewhere on each dimension, meaning that they could be low, medium, or high on any specific trait. An important feature of personality traits is that they reflect continuous distributions rather than distinct personality types.
This means that when personality psychologists talk about Introverts and Extraverts, they are not really talking about two distinct types of people who are completely and qualitatively different from one another. Instead, they are talking about people who score relatively low or relatively high along a continuous distribution. In fact, when personality psychologists measure traits like Extraversion, they typically find that most people score somewhere in the middle, with smaller numbers showing more extreme levels. The figure below shows the distribution of Extraversion scores from a survey of thousands of people. As you can see, most people report being moderately, but not extremely, extraverted, with fewer people reporting very high or very low scores. There are three criteria that characterize personality traits: (1) consistency, (2) stability, and (3) individual differences. 1. To have a personality trait, individuals must be somewhat consistent across situations in their behaviors related to the trait. For example, if they are talkative at home, they tend also to be talkative at work. 2. Individuals with a trait are also somewhat stable over time in behaviors related to the trait. If they are talkative, for example, at age 30, they will also tend to be talkative at age 40. 3. People differ from one another on behaviors related to the trait. Using speech is not a personality trait and neither is walking on two feet—virtually all individuals do these activities, and there are almost no individual differences. But people differ on how frequently they talk and how active they are, and thus personality traits such as Talkativeness and Activity Level do exist. A challenge of the trait approach was to discover the major traits on which all people differ. Scientists for many decades generated hundreds of new traits, so that it was soon difficult to keep track of them and make sense of them. For instance, one psychologist might focus on individual differences in “friendliness,” whereas another might focus on the highly related concept of “sociability.” Scientists began seeking ways to reduce the number of traits in some systematic way and to discover the basic traits that describe most of the differences between people. The way that Gordon Allport and his colleague Henry Odbert approached this was to search the dictionary for all descriptors of personality (Allport & Odbert, 1936). Their approach was guided by the lexical hypothesis, which states that all important personality characteristics should be reflected in the language that we use to describe other people. Therefore, if we want to understand the fundamental ways in which people differ from one another, we can turn to the words that people use to describe one another. So if we want to know what words people use to describe one another, where should we look? Allport and Odbert looked in the most obvious place—the dictionary. Specifically, they took all the personality descriptors that they could find in the dictionary (they started with almost 18,000 words but quickly reduced that list to a more manageable number) and then used statistical techniques to determine which words “went together.” In other words, if everyone who said that they were “friendly” also said that they were “sociable,” then this might mean that personality psychologists would only need a single trait to capture individual differences in these characteristics. Statistical techniques were used to determine whether a small number of dimensions might underlie all of the thousands of words we use to describe people.
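To make the idea of words “going together” concrete, here is a minimal sketch of this kind of statistical reasoning, written in Python. It is not Allport and Odbert’s actual data or procedure; the adjective names, factor loadings, and sample size are invented purely for illustration. We simulate ratings on four adjectives driven by two hidden traits and then check whether factor analysis recovers those two dimensions.

```python
# A toy illustration (not Allport & Odbert's actual data or procedure):
# simulate ratings on four adjectives driven by two hidden traits, then
# check whether factor analysis recovers just two underlying dimensions.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 1000
extraversion = rng.normal(size=n)  # hidden trait 1
anxiety = rng.normal(size=n)       # hidden trait 2

# "friendly" and "sociable" track the first trait; "nervous" and "tense"
# track the second; each observed rating also gets its own noise.
ratings = np.column_stack([
    0.9 * extraversion + 0.3 * rng.normal(size=n),  # friendly
    0.8 * extraversion + 0.3 * rng.normal(size=n),  # sociable
    0.9 * anxiety + 0.3 * rng.normal(size=n),       # nervous
    0.8 * anxiety + 0.3 * rng.normal(size=n),       # tense
])

fa = FactorAnalysis(n_components=2).fit(ratings)
print(np.round(fa.components_, 2))
# Expected pattern (sign and row order may vary): one factor loads on
# the first two adjectives, the other on the last two -- four words,
# but only two dimensions needed to summarize them.
```

Applied to thousands of adjectives rather than four, the same logic is what allows researchers to compress the dictionary's vocabulary of personality into a handful of broad dimensions.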
The Five-Factor Model of Personality Research that used the lexical approach showed that many of the personality descriptors found in the dictionary do indeed overlap. In other words, many of the words that we use to describe people are synonyms. Thus, if we want to know what a person is like, we do not necessarily need to ask how sociable they are, how friendly they are, and how gregarious they are. Instead, because sociable people tend to be friendly and gregarious, we can summarize this personality dimension with a single term. Someone who is sociable, friendly, and gregarious would typically be described as an “Extravert.” Once we know she is an extravert, we can assume that she is sociable, friendly, and gregarious. Statistical methods (specifically, a technique called factor analysis) helped to determine whether a small number of dimensions underlie the diversity of words that people like Allport and Odbert identified. The most widely accepted system to emerge from this approach was “The Big Five” or “Five-Factor Model” (Goldberg, 1990; McCrae & John, 1992; McCrae & Costa, 1987). The Big Five comprises five major traits shown in Figure 3.2.2 below. A way to remember these five is with the acronym OCEAN (O is for Openness; C is for Conscientiousness; E is for Extraversion; A is for Agreeableness; N is for Neuroticism). Figure 3.2.3 provides descriptions of people who would score high and low on each of these traits. Scores on the Big Five traits are mostly independent. That means that a person’s standing on one trait tells very little about their standing on the other traits of the Big Five. For example, a person can be extremely high in Extraversion and be either high or low on Neuroticism. Similarly, a person can be low in Agreeableness and be either high or low in Conscientiousness. Thus, in the Five-Factor Model, you need five scores to describe most of an individual’s personality. In the Appendix to this module, we present a short scale to assess the Five-Factor Model of personality (Donnellan, Oswald, Baird, & Lucas, 2006). You can take this test to see where you stand in terms of your Big Five scores. John Johnson has also created a helpful website with personality scales that can be taken by the general public: http://www.personal.psu.edu/j5j/IPIP/ipipneo120.htm After seeing your scores, you can judge for yourself whether you think such tests are valid. Traits are important and interesting because they describe stable patterns of behavior that persist for long periods of time (Caspi, Roberts, & Shiner, 2005). Importantly, these stable patterns can have broad-ranging consequences for many areas of our life (Roberts, Kuncel, Shiner, Caspi, & Goldberg, 2007). For instance, think about the factors that determine success in college. If you were asked to guess what factors predict good grades in college, you might guess something like intelligence. This guess would be correct, but we know much more about who is likely to do well. Specifically, personality researchers have also found that personality traits like Conscientiousness play an important role in college and beyond, probably because highly conscientious individuals study hard, get their work done on time, and are less distracted by nonessential activities that take time away from school work.
In addition, highly conscientious people are often healthier than people low in conscientiousness because they are more likely to maintain healthy diets, to exercise, and to follow basic safety procedures like wearing seat belts or bicycle helmets. Over the long term, this consistent pattern of behaviors can add up to meaningful differences in health and longevity. Thus, personality traits are not just a useful way to describe people you know; they actually help psychologists predict how good a worker someone will be, how long he or she will live, and the types of jobs and activities the person will enjoy. Thus, there is growing interest in personality psychology among psychologists who work in applied settings, such as health psychology or organizational psychology. Facets of Traits (Subtraits) So how does it feel to be told that your entire personality can be summarized with scores on just five personality traits? Do you think these five scores capture the complexity of your own and others’ characteristic patterns of thoughts, feelings, and behaviors? Most people would probably say no, pointing to some exception in their behavior that goes against the general pattern that others might see. For instance, you may know people who are warm and friendly and find it easy to talk with strangers at a party yet are terrified if they have to perform in front of others or speak to large groups of people. The fact that there are different ways of being extraverted or conscientious shows that there is value in considering lower-level units of personality that are more specific than the Big Five traits. These more specific, lower-level units of personality are often called facets. To give you a sense of what these narrow units are like, Figure 3.2.4 shows facets for each of the Big Five traits. It is important to note that although personality researchers generally agree about the value of the Big Five traits as a way to summarize one’s personality, there is no widely accepted list of facets that should be studied. The list seen here, based on work by researchers Paul Costa and Jeff McCrae, thus reflects just one possible list among many. It should, however, give you an idea of some of the facets making up each factor of the Five-Factor Model. Facets can be useful because they provide more specific descriptions of what a person is like. For instance, if we take our friend who loves parties but hates public speaking, we might say that this person scores high on the “gregariousness” and “warmth” facets of extraversion, while scoring lower on facets such as “assertiveness” or “excitement-seeking.” This precise profile of facet scores not only provides a better description, it might also allow us to better predict how this friend will do in a variety of different jobs (for example, jobs that require public speaking versus jobs that involve one-on-one interactions with customers; Paunonen & Ashton, 2001). Because different facets within a broad, global trait like extraversion tend to go together (those who are gregarious are often but not always assertive), the broad trait often provides a useful summary of what a person is like. But when we really want to know a person, facet scores add to our knowledge in important ways. Other Traits Beyond the Five-Factor Model Despite the popularity of the Five-Factor Model, it is certainly not the only model that exists. Some suggest that there are more than five major traits, or perhaps even fewer.
For example, in one of the first comprehensive models to be proposed, Hans Eysenck suggested that Extraversion and Neuroticism are most important. Eysenck believed that by combining people’s standing on these two major traits, we could account for many of the differences in personality that we see in people (Eysenck, 1981). So for instance, a neurotic introvert would be shy and nervous, while a stable introvert might avoid social situations and prefer solitary activities, but he may do so with a calm, steady attitude and little anxiety or emotion. Interestingly, Eysenck attempted to link these two major dimensions to underlying differences in people’s biology. For instance, he suggested that introverts experienced too much sensory stimulation and arousal, which made them want to seek out quiet settings and less stimulating environments. More recently, Jeffrey Gray suggested that these two broad traits are related to fundamental reward and avoidance systems in the brain—extraverts might be motivated to seek reward and thus exhibit assertive, reward-seeking behavior, whereas people high in neuroticism might be motivated to avoid punishment and thus may experience anxiety as a result of their heightened awareness of the threats in the world around them (Gray, 1981; this model has since been updated; see Gray & McNaughton, 2000). These early theories have led to a burgeoning interest in identifying the physiological underpinnings of the individual differences that we observe. Another revision of the Big Five is the HEXACO model of traits (Ashton & Lee, 2007). This model is similar to the Big Five, but it posits slightly different versions of some of the traits, and its proponents argue that one important class of individual differences was omitted from the Five-Factor Model. The HEXACO adds Honesty-Humility as a sixth dimension of personality. People high in this trait are sincere, fair, and modest, whereas those low in the trait are manipulative, narcissistic, and self-centered. Thus, trait theorists agree that personality traits are important in understanding behavior, but there are still debates about the exact number and composition of the traits that are most important. There are other important traits that are not included in comprehensive models like the Big Five. Although the five factors capture much that is important about personality, researchers have suggested other traits that capture interesting aspects of our behavior. In Figure 3.2.5 below we present just a few, out of hundreds, of the other traits that have been studied by personologists. Not all of the above traits are currently popular with scientists, yet each of them has experienced popularity in the past. Although the Five-Factor Model has been the target of more rigorous research than some of the traits above, these additional personality characteristics give a good idea of the wide range of behaviors and attitudes that traits can cover.
Each of these descriptors reflects a personality trait, and most of us generally think that the descriptions that we use for individuals accurately reflect their “characteristic pattern of thoughts, feelings, and behaviors,” or in other words, their personality. But what if this idea were wrong? What if our belief in personality traits were an illusion and people are not consistent from one situation to the next? This was a possibility that shook the foundation of personality psychology in the late 1960s when Walter Mischel published a book called Personality and Assessment (1968). In this book, Mischel suggested that if one looks closely at people’s behavior across many different situations, the consistency is really not that impressive. For example, children who cheat on tests at school may steadfastly follow all rules when playing games and may never tell a lie to their parents. In other words, he suggested, there may not be any general trait of honesty that links these seemingly related behaviors. Furthermore, Mischel suggested that observers may believe that broad personality traits like honesty exist, when in fact, this belief is an illusion. The debate that followed the publication of Mischel’s book was called the person-situation debate because it pitted the power of personality against the power of situational factors as determinants of the behavior that people exhibit. Because of the findings that Mischel emphasized, many psychologists focused on an alternative to the trait perspective. Instead of studying broad, context-free descriptions, like the trait terms we’ve described so far, Mischel thought that psychologists should focus on people’s distinctive reactions to specific situations. For instance, although there may not be a broad and general trait of honesty, some children may be especially likely to cheat on a test when the risk of being caught is low and the rewards for cheating are high. Others might be motivated by the sense of risk involved in cheating and may do so even when the rewards are not very high. Thus, the behavior itself results from the child’s unique evaluation of the risks and rewards present at that moment, along with her evaluation of her abilities and values. Because of this, the same child might act very differently in different situations. Thus, Mischel thought that specific behaviors were driven by the interaction between very specific, psychologically meaningful features of the situation in which people found themselves, the person’s unique way of perceiving that situation, and his or her abilities for dealing with it. Mischel and others argued that it was these social-cognitive processes that underlie people’s reactions to specific situations that provide some consistency when situational features are the same. If so, then studying these social-cognitive processes might be more fruitful than cataloging and measuring broad, context-free traits like Extraversion or Neuroticism. In the years after the publication of Mischel’s (1968) book, debates raged about whether personality truly exists, and if so, how it should be studied. And, as is often the case, it turns out that a more moderate middle ground than what the situationists proposed could be reached. It is certainly true, as Mischel pointed out, that a person’s behavior in one specific situation is not a good guide to how that person will behave in a very different specific situation.
Someone who is extremely talkative at one specific party may sometimes be reticent to speak up during class and may even act like a wallflower at a different party. But this does not mean that personality does not exist, nor does it mean that people’s behavior is completely determined by situational factors. Indeed, research conducted after the person-situation debate shows that on average, the effect of the “situation” is about as large as that of personality traits. However, it is also true that if psychologists assess a broad range of behaviors across many different situations, there are general tendencies that emerge. Personality traits give an indication about how people will act on average, but frequently they are not so good at predicting how a person will act in a specific situation at a certain moment in time. Thus, to best capture broad traits, one must assess aggregate behaviors, averaged over time and across many different types of situations. Most modern personality researchers agree that there is a place for broad personality traits and for the narrower units such as those studied by Walter Mischel. Appendix The Mini-IPIP Scale (Donnellan, Oswald, Baird, & Lucas, 2006) Instructions: Below are phrases describing people’s behaviors. Please use the rating scale below to describe how accurately each statement describes you. Describe yourself as you generally are now, not as you wish to be in the future. Describe yourself as you honestly see yourself, in relation to other people you know of the same sex as you are, and roughly your same age. Please read each statement carefully, and put a number from 1 to 5 next to it to describe how accurately the statement describes you. 1 = Very inaccurate 2 = Moderately inaccurate 3 = Neither inaccurate nor accurate 4 = Moderately accurate 5 = Very accurate
1. _______ Am the life of the party (E)
2. _______ Sympathize with others’ feelings (A)
3. _______ Get chores done right away (C)
4. _______ Have frequent mood swings (N)
5. _______ Have a vivid imagination (O)
6. _______ Don’t talk a lot (E)
7. _______ Am not interested in other people’s problems (A)
8. _______ Often forget to put things back in their proper place (C)
9. _______ Am relaxed most of the time (N)
10. ______ Am not interested in abstract ideas (O)
11. ______ Talk to a lot of different people at parties (E)
12. ______ Feel others’ emotions (A)
13. ______ Like order (C)
14. ______ Get upset easily (N)
15. ______ Have difficulty understanding abstract ideas (O)
16. ______ Keep in the background (E)
17. ______ Am not really interested in others (A)
18. ______ Make a mess of things (C)
19. ______ Seldom feel blue (N)
20. ______ Do not have a good imagination (O)
Scoring: The first thing you must do is to reverse the items that are worded in the opposite direction. In order to do this, subtract the number you put for that item from 6. So if you put a 4, for instance, it will become a 2. Cross out the score you put when you took the scale, and put in the new number representing your score subtracted from the number 6. Items to be reversed in this way: 6, 7, 8, 9, 10, 15, 16, 17, 18, 19, 20. Next, you need to add up the scores for each of the five OCEAN scales (including the reversed numbers where relevant). Each OCEAN score will be the sum of four items. Place the sum next to each scale below.
__________ Openness: Add items 5, 10, 15, 20
__________ Conscientiousness: Add items 3, 8, 13, 18
__________ Extraversion: Add items 1, 6, 11, 16
__________ Agreeableness: Add items 2, 7, 12, 17
__________ Neuroticism: Add items 4, 9, 14, 19
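If you would rather let a computer do the arithmetic, the scoring rules above translate directly into a few lines of code. The sketch below (in Python) applies the reverse-scoring rule and sums the four items per scale; the function name and the example responses are ours for illustration, not part of the published scale.

```python
# A minimal sketch of the Mini-IPIP scoring rules described above.
REVERSED = {6, 7, 8, 9, 10, 15, 16, 17, 18, 19, 20}
SCALES = {
    "Openness":          [5, 10, 15, 20],
    "Conscientiousness": [3, 8, 13, 18],
    "Extraversion":      [1, 6, 11, 16],
    "Agreeableness":     [2, 7, 12, 17],
    "Neuroticism":       [4, 9, 14, 19],
}

def score_mini_ipip(responses):
    """responses maps item number (1-20) to a rating from 1 to 5."""
    # Reverse-keyed items become 6 minus the rating, as instructed above.
    adjusted = {item: (6 - r if item in REVERSED else r)
                for item, r in responses.items()}
    # Each OCEAN score is the sum of its four (possibly reversed) items.
    return {scale: sum(adjusted[i] for i in items)
            for scale, items in SCALES.items()}

# Made-up example: someone who answers "4" to every item.
print(score_mini_ipip({i: 4 for i in range(1, 21)}))
# -> {'Openness': 10, 'Conscientiousness': 12, 'Extraversion': 12,
#     'Agreeableness': 12, 'Neuroticism': 12}
```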
Compare your scores to the norms below to see where you stand on each scale. If you are low on a trait, it means you are the opposite of the trait label. For example, low on Extraversion is Introversion, low on Openness is Conventional, and low on Agreeableness is Assertive. 19–20 Extremely High, 17–18 Very High, 14–16 High, 11–13 Neither high nor low; in the middle, 8–10 Low, 6–7 Very low, 4–5 Extremely low Outside Resources Video 1: Gabriela Cintron’s – 5 Factors of Personality (OCEAN Song). This is a student-made video which cleverly describes, through song, common behavioral characteristics of the Big 5 personality traits. It was one of the winning entries in the 2016-17 Noba + Psi Chi Student Video Award. Video 2: Michael Harris’ – Personality Traits: The Big 5 and More. This is a student-made video that looks at characteristics of the OCEAN traits through a series of funny vignettes. It also covers the person vs. situation debate. It was one of the winning entries in the 2016-17 Noba + Psi Chi Student Video Award. Video 3: David M. Cole’s – Grouchy with a Chance of Stomping. This is a student-made video that makes a very important point about the relationship between personality traits and behavior using a handy weather analogy. It was one of the winning entries in the 2016-17 Noba + Psi Chi Student Video Award. Web: International Personality Item Pool http://ipip.ori.org/ Web: John Johnson personality scales http://www.personal.psu.edu/j5j/IPIP/ipipneo120.htm Web: Personality trait systems compared http://www.personalityresearch.org/bigfive/goldberg.html Web: Sam Gosling website homepage.psy.utexas.edu/homep...samgosling.htm Discussion Questions 1. Consider different combinations of the Big Five, such as O (Low), C (High), E (Low), A (High), and N (Low). What would this person be like? Do you know anyone who is like this? Can you select politicians, movie stars, and other famous people and rate them on the Big Five? 2. How do you think learning and inherited personality traits get combined in adult personality? 3. Can you think of instances where people do not act consistently—where their personality traits are not good predictors of their behavior? 4. Has your personality changed over time, and in what ways? 5. Can you think of a personality trait not mentioned in this module that describes how people differ from one another? 6. When do extremes in personality traits become harmful, and when are they unusual but productive of good outcomes? Vocabulary Agreeableness A personality trait that reflects a person’s tendency to be compassionate, cooperative, warm, and caring to others. People low in agreeableness tend to be rude, hostile, and to pursue their own interests over those of others. Conscientiousness A personality trait that reflects a person’s tendency to be careful, organized, hardworking, and to follow rules. Continuous distributions Characteristics can go from low to high, with all different intermediate values possible. One does not simply have the trait or not have it, but can possess varying amounts of it. Extraversion A personality trait that reflects a person’s tendency to be sociable, outgoing, active, and assertive. Facets Broad personality traits can be broken down into narrower facets or aspects of the trait.
For example, extraversion has several facets, such as sociability, dominance, risk-taking and so forth. Factor analysis A statistical technique for grouping similar things together according to how highly they are associated. Five-Factor Model (also called the Big Five) The Five-Factor Model is a widely accepted model of personality traits. Advocates of the model believe that much of the variability in people’s thoughts, feelings, and behaviors can be summarized with five broad traits. These five traits are Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. HEXACO model The HEXACO model is an alternative to the Five-Factor Model. The HEXACO model includes six traits, five of which are variants of the traits included in the Big Five (Emotionality [E], Extraversion [X], Agreeableness [A], Conscientiousness [C], and Openness [O]). The sixth factor, Honesty-Humility [H], is unique to this model. Independent Two characteristics or traits are separate from one another; a person can be high on one and low on the other, or vice versa. Some correlated traits are relatively independent in that although there is a tendency for a person high on one to also be high on the other, this is not always the case. Lexical hypothesis The lexical hypothesis is the idea that the most important differences between people will be encoded in the language that we use to describe people. Therefore, if we want to know which personality traits are most important, we can look to the language that people use to describe themselves and others. Neuroticism A personality trait that reflects the tendency to be interpersonally sensitive and the tendency to experience negative emotions like anxiety, fear, sadness, and anger. Openness to Experience A personality trait that reflects a person’s tendency to seek out and to appreciate new things, including thoughts, feelings, values, and experiences. Personality Enduring predispositions that characterize a person, such as styles of thought, feelings and behavior. Personality traits Enduring dispositions in behavior that show differences across individuals, and which tend to characterize the person across varying types of situations. Person-situation debate The person-situation debate is a historical debate about the relative power of personality traits as compared to situational influences on behavior. The situationist critique, which started the person-situation debate, suggested that people overestimate the extent to which personality traits are consistent across situations.
By Dean Keith Simonton University of California, Davis An idea or solution is considered creative if it is original, useful, and surprising. However, depending on who actually judges these three criteria, we must distinguish personal “little-c creativity” from consensual “Big-C Creativity.” In any case, psychologists who investigate creativity most often adopt one of three perspectives. First, they can ask how creators think, and thus focus on the cognitive processes behind creativity. Second, they can ask who is creative, and hence investigate the personal characteristics of highly creative people. Third, they can ask about the social context, and, thereby, examine the environments that influence creativity. Although psychologists have made major advances in the study of creativity, many exciting and important questions remain to be answered. learning objectives • Comprehend the three criteria that have to be satisfied to conclude that an idea is creative. • Appreciate some of the cognitive processes that provide the basis for creativity. • Know some of the personal characteristics of highly creative people. • Understand how certain social environments influence creativity. What do the following have in common: the drug penicillin, the Eiffel Tower, the film Lord of the Rings, the General Theory of Relativity, the hymn Amazing Grace, the iPhone, the novel Don Quixote, the painting The Mona Lisa, a recipe for chocolate fudge, the soft drink Coca-Cola, the video game Wii Sports, the West Coast offense in football, and the zipper? You guessed right! All of the named items were products of the creative mind. Not one of them existed until somebody came up with the idea. Creativity is not something that you just pick like apples from a tree. Because creative ideas are so special, creators who come up with the best ideas are often highly rewarded with fame, fortune, or both. Nobel Prizes, Oscars, Pulitzers, and other honors bring fame, and big sales and box office bring fortune. Yet what is creativity in the first place? Creativity: What Is It? Creativity happens when someone comes up with a creative idea. An example would be a creative solution to a difficult problem. But what makes an idea or solution creative? Although psychologists have offered several definitions (Plucker, Beghetto, & Dow, 2004; Runco & Jaeger, 2012), probably the best definition is the one recently adapted from the three criteria that the U.S. Patent Office uses to decide whether an invention can receive patent protection (Simonton, 2012). The first criterion is originality. The idea must have a low probability. Indeed, it often should be unique. Albert Einstein’s special theory of relativity certainly satisfied this criterion. No other scientist came up with the idea. The second criterion is usefulness. The idea should be valuable or work. For example, a solution must, in fact, solve the problem. An original recipe that produces a dish that tastes too terrible to eat cannot be creative. In the case of Einstein’s theory, his relativity principle provided explanations for what otherwise would be inexplicable empirical results. The third and last criterion is surprise. The idea should be surprising, or at least nonobvious (to use the term used by the Patent Office). For instance, a solution that is a straightforward derivation from acquired expertise cannot be considered surprising even if it were original. 
Einstein’s relativity theory was not a step-by-step deduction from classical physics but rather was built upon a new foundation that challenged the very basis of traditional physics. When applying these three criteria, it is critical to recognize that originality, usefulness, and surprise are all quantitative rather than qualitative attributes of an idea. Specifically, we really have to speak of the degree to which an idea satisfies each of the three criteria. In addition, the three attributes should have a zero point, that is, it should be possible to speak of an idea lacking any originality, usefulness, or surprise whatsoever. Finally, we have to assume that if an idea scores zero on any one criterion then it must have zero creativity as well. For example, someone who reinvents the wheel is definitely producing a useful idea, but the idea has zero originality and hence no creativity whatsoever. Similarly, someone who invented a parachute made entirely out of steel reinforced concrete would get lots of credit for originality—and surprise!—but none for usefulness. Yet, certainly, we have to ask: Who makes these judgments? The person who generated the idea or other people who the person expects to appreciate the idea? If the former, we can speak of subjective or personal “little-c creativity,” and if the latter, we have objective or consensual “Big-C Creativity” (Simonton, in press). This distinction is important because personal and consensual assessments do not always agree. Such disagreements are especially conspicuous in “neglected geniuses,” such as the poet Emily Dickinson, the painter Vincent Van Gogh, and the scientist Gregor Mendel—all producing ideas that received only posthumous recognition for their creativity. Creativity is a very complex phenomenon (Hennessey & Amabile, 2010; Runco, 2004). As a result, psychologists who study creativity can do so from many different perspectives. Nevertheless, the three most common perspectives are cognitive processes, personal characteristics, and social contexts. Cognitive Processes: How Do Creators Think? Cognitive scientists have long been interested in the thinking processes that lead to creative ideas (Simonton & Damian, 2013). Indeed, many so-called “creativity tests” are actually measures of the thought processes believed to underlie the creative act (Simonton, 2003b). The following two measures are among the best known. The first is the Remote Associates Test, or RAT, that was introduced by Mednick (1962). Mednick believed that the creative process requires the ability to associate ideas that are considered very far apart conceptually. The RAT consists of items that require the respondent to identify a word that can be associated to three rather distinct stimulus words. For example, what word can be associated with the words “widow, bite, monkey”? The answer is spider (black widow spider, spider bite, spider monkey). This particular question is relatively easy; others are much more difficult, but it gives you the basic idea. The second measure is the Unusual Uses Task (Guilford, 1967; Torrance, 1974). Here, the participant is asked to generate alternative uses for a common object, such as a brick.
The responses can be scored on four dimensions: (a) fluency, the total number of appropriate uses generated; (b) originality, the statistical rarity of the uses given; (c) flexibility, the number of distinct conceptual categories implied by the various uses; and (d) elaboration, the amount of detail given for the generated uses. For example, using a brick as a paperweight represents a different conceptual category than using its volume to conserve water in a toilet tank. The capacity to produce unusual uses is but one example of the general cognitive ability to engage in divergent thinking (Guilford, 1967). Unlike convergent thinking, which converges on the single best answer or solution, divergent thinking comes up with multiple possibilities that might vary greatly in usefulness. Unfortunately, many different cognitive processes have been linked to creativity (Simonton & Damian, 2013). That is why we cannot use the singular; there is no such thing as the “creative process.” Nonetheless, the various processes do share one feature: All enable the person to “think outside the box” imposed by routine thinking—to venture into territory that would otherwise be ignored (Simonton, 2011). Creativity requires that you go where you don’t know where you’re going. Personal Characteristics: Who Is Creative? Can anybody be creative? Or is creativity subject to individual differences, such as intelligence? Might creativity even be normally distributed just like scores on IQ tests? The answer is complex. Unlike general intelligence, which represents a more or less cohesive cognitive ability, creativity is just as much a personal attribute as an intellectual capacity. This feature is evident in the fact that some “creativity tests” are actually measures of personality, interests, and values (Simonton, 2003b). An example is the Creative Personality Scale of the Gough Adjective Check List (Gough, 1979; see also Carson, Peterson, & Higgins, 2005). In this measure, a person is asked to check off whatever adjectives are viewed as especially self-descriptive. The relevant adjectives are shown in Table 1. How would you describe yourself? Would you use more adjectives in the left column or the right column? Another reason to speak of the “creative personality” is that creativity correlates with scores on standard personality measures (Feist, 1998). Most notably, the creative person is more likely to score high on the openness-to-experience factor of the Big Five Factor Model (Carson, Peterson, & Higgins, 2005; Harris, 2004; McCrae, 1987). This factor concerns whether a person has a strong intellectual curiosity, preference for variety, and an active imagination and is aesthetically sensitive, attentive to inner feelings, as well as receptive to new ideas and values. It would seem obvious that persons high on this factor would behave differently than those scoring low. For instance, we would expect such persons to be less conventional, to have a wider range of leisure activities, and to be more versatile. Yet, it is equally important to note that people high in openness also think differently. Besides scoring higher in divergent thinking (Carson, Peterson, & Higgins, 2005), openness is also associated with the diminished capacity to filter out extraneous information, a tendency often called cognitive disinhibition or reduced latent inhibition (Peterson & Carson, 2000). This “defocused attention” enables the creative person to make observations that others would overlook—such as what happens in serendipitous discovery.
A classic example was when Alexander Fleming noticed that a bacterial culture was being killed by a certain mold, a discovery that directly led to penicillin. Now you may wonder, isn’t cognitive disinhibition a bad way of thinking? Isn’t it a good thing to be able to ignore irrelevant stimuli? The answer is yes. In fact, reduced latent inhibition is also connected with mental illness (Carson, 2011; Eysenck, 1995). Thus arises a link between creativity and psychopathology (Simonton, 2010). Even so, creative individuals are seldom outright mentally ill. Instead, creators possess other personal traits and capacities that convert a potential cognitive disability into an exceptional ability (Carson, 2011). Among the most important of these characteristics is high general intelligence (Carson, Peterson, & Higgins, 2005; Kéri, 2011). The creator then has the capacity not just to generate original and surprising ideas but also to test and develop them for usefulness. Mental illness arises when the person always skips the last step—the reality check. At this point, we must add an important qualification: For Big-C Creativity, you have to have more going for you than a creative personality that can engage in creative thought. You also must acquire appropriate expertise in the domain in which you hope to make creative contributions. Einstein had to learn physics and mathematics; Leonardo da Vinci had to learn how to draw and paint. In fact, it typically requires about a decade of extensive training and practice before a person can become a Big-C Creator (Ericsson, 1996). Even so, just because you become an expert in your field it does not mean that you’ll be creative, too. Social Contexts: What Environments Affect Creativity? Although creativity is often viewed as an entirely psychological phenomenon, research by social psychologists shows that certain social environments have a part to play as well. These contextual influences can assume many forms. Sometimes these effects are relatively short term or transient. Other times the effects can be more long lasting. To illustrate the former possibility, creativity is often enhanced when persons are exposed to incongruous or novel stimuli. For example, one recent experiment used virtual reality to create three conditions (Ritter et al., 2012). In one condition, participants walked around in a room in which the normal laws of physics were violated. Objects fell up rather than fell down, and the objects got smaller as you approached them rather than getting bigger. In a second condition, the participants were in the same virtual reality situation, but everything behaved as it would in normal reality. In the third and last condition, the participants merely saw a film clip of what the participants in the first condition experienced—a passive rather than active exposure to an otherworldly environment. Only those who directly experienced the strange environment showed an increase in cognitive flexibility, an important component of creativity, as noted earlier. In a second experiment, the participants were again subjected to three conditions, but this time the manipulation concerned cultural scripts—in this case, the customary way to make a popular breakfast meal. Only those participants who directly experienced the violation of the norms showed an increase in cognitive flexibility. Those who made breakfast the normal way or who vicariously watched somebody else make breakfast an unusual way showed no effect. The above effect is most likely transient.
It is doubtful that those participants exposed to such incongruous experiences would exhibit any long-term change in their creativity. But what would happen if the exposure was much longer, years rather than minutes? Then the benefit might endure a lifetime. An example is the long-term benefits that accrue to persons who have acquired multicultural experiences, such as living in a foreign country for a significant amount of time (Leung, Maddux, Galinsky, & Chiu, 2008). Daily life abroad exposes a person to different ways of doing everyday activities. Moreover, because the visitor quickly learns that “when in Rome do as the Romans do,” the exposure becomes direct rather than vicarious (Maddux, Adam, & Galinsky, 2010). To be sure, not everybody’s creativity benefits from multicultural environments. The person also has to score high on openness to experience (Leung & Chiu, 2008). Otherwise, they will close themselves off from the potential stimulation, and then just gripe about the “peculiar customs of the natives” rather than actively practice those customs—such as making a totally different breakfast! Finally, both little-c and Big-C creativity—but especially the latter—are more likely to appear in specific sociocultural systems (Simonton, 2003a). Some political, social, cultural, and economic environments are supportive of exceptional creativity, whereas others tend to suppress if not destroy creativity. For this reason, the history of any civilization or nation tends to have “Dark Ages” as well as “Golden Ages.” Early medieval Europe illustrates the former, while Renaissance Italy exemplifies the latter. It would take us too far beyond introductory psychology to discuss all of the relevant factors. Yet, one factor fits nicely with what was discussed in the previous paragraph. Highly creative societies are far more likely to be multicultural, with abundant influences from other civilizations. For instance, Japanese civilization tended to undergo a revival of creativity after the infusion of new ideas from other civilizations, including Korean, Chinese, Indian, and European (Simonton, 1997). This influx involved not just Japanese living abroad but also non-Japanese immigrating to Japan. Conclusion Creativity certainly must be considered a crucial human behavior. Indeed, like language, creativity sets Homo sapiens well apart from even our closest evolutionary relatives. It is virtually impossible to imagine a world in which all of the products of the creative mind were removed. I couldn’t even type this very sentence at this instant. Even the alphabet was invented. Creativity permeates every aspect of modern life: technology, science, literature, the visual arts, music, cooking, sports, politics, war, business, advertising ... well, I could go on and on. Fortunately, psychologists have made major strides in understanding the phenomenon. In fact, some of the best studies of creativity are also excellent examples of scientific creativity. At the same time, it remains clear that we still have a long way to go before we know everything we need to know about the psychology of creativity. Hence, creativity research has a bright future. Outside Resources Video: Amy Tan: Where does creativity hide?
http://www.ted.com/talks/amy_tan_on_creativity.html Video: Creativity science Video: How to be creative Web: American Creativity Association www.aca.cloverpad.org/ Web: Be More Creative http://www.bemorecreative.com/ Web: Creating Minds http://creatingminds.org/ Web: Creative Quotations http://www.creativequotations.com/ Web: Creativity at Work http://www.creativityatwork.com/ Discussion Questions 1. To be creative an idea must be useful. Although it is easy to see how a new invention can be useful, what does it mean for a scientific discovery or artistic composition to be useful? When, in 1865, Mendel discovered that the traits of peas were inherited according to genetic laws, what possible use could that finding have at the time? What conceivable utility could there be for a painting by Van Gogh or a poem by Dickinson? Should some other word be used, such as valuable or appropriate? Or, should we acknowledge that a theory, painting, or poem is useful in a different way than an invention? Can a new idea be creative just because it satisfies our intellectual curiosity or aesthetic appreciation? 2. Computers can do some amazing things—such as beat humans at chess and Jeopardy! But, do you think that they can ever display genuine creativity? Will a computer one day make a scientific discovery or write a poem that earns it a Nobel Prize? If not, why not? If so, who should get the award money, the computer or the computer’s programmer? 3. All of the personal characteristics of very creative people are also highly heritable. For instance, intelligence, openness to experience, and cognitive disinhibition all have a partial genetic basis. Does that mean that creators are born and not made? 4. Highly creative people believe that they possess certain personality traits. If you make yourself have the same traits, will that make you more creative? For example, will you become more creative if you become more egotistical, individualistic, informal, reflective, self-confident, sexy, and unconventional? Or, how about widening your interests and becoming more open to experience? Which comes first, the personality or the capacity? Vocabulary Big-C Creativity Creative ideas that have an impact well beyond the everyday life of home or work. At the highest level, this kind of creativity is that of the creative genius. Convergent thinking The opposite of divergent thinking, the capacity to narrow in on the single “correct” answer or solution to a given question or problem (e.g., giving the right response on an intelligence test). Divergent thinking The opposite of convergent thinking, the capacity for exploring multiple potential answers or solutions to a given question or problem (e.g., coming up with many different uses for a common object). Latent inhibition The ability to filter out extraneous stimuli, concentrating only on the information that is deemed relevant. Reduced latent inhibition is associated with higher creativity. Little-c creativity Creative ideas that appear at the personal level, whether the home or the workplace. Such creativity need not have a larger impact to be considered creative. Multicultural experiences Individual exposure to two or more cultures, such as obtained by living abroad, emigrating to another country, or working or going to school in a culturally diverse setting. Openness to experience One of the factors of the Big Five Model of personality, the factor assesses the degree that a person is open to different or new values, interests, and activities.
Originality When an idea or solution has a low probability of occurrence. Remote associations Associations between words or concepts that are semantically distant and thus relatively unusual or original. Unusual uses A test of divergent thinking that asks participants to find many uses for commonplace objects, such as a brick or paperclip.
This module discusses gender and its related concepts, including sex, gender roles, gender identity, sexual orientation, and sexism. In addition, this module includes a discussion of differences that exist between males and females and how these real gender differences compare to the stereotypes society holds about gender differences. In fact, there are significantly fewer real gender differences than one would expect relative to the large number of stereotypes about gender differences. This module then discusses theories of how gender roles develop and how they contribute to strong expectations for gender differences. Finally, the module concludes with a discussion of some of the consequences of relying on and expecting gender differences, such as gender discrimination, sexual harassment, and ambivalent sexism. learning objectives • Distinguish gender and sex, as well as gender identity and sexual orientation. • Discuss gender differences that exist, as well as those that do not actually exist. • Understand and explain different theories of how gender roles are formed. • Discuss sexism and its impact on both genders. Introduction Before we discuss gender in detail, it is important to understand what gender actually is. The terms sex and gender are frequently used interchangeably, though they have different meanings. In this context, sex refers to the biological category of male or female, as defined by physical differences in genetic composition and in reproductive anatomy and function. On the other hand, gender refers to the cultural, social, and psychological meanings that are associated with masculinity and femininity (Wood & Eagly, 2002). You can think of “male” and “female” as distinct categories of sex (a person is typically born a male or a female), but “masculine” and “feminine” as continuums associated with gender (everyone has a certain degree of masculine and feminine traits and qualities). Beyond sex and gender, there are a number of related terms that are also often misunderstood. Gender roles are the behaviors, attitudes, and personality traits that are designated as either masculine or feminine in a given culture. It is common to think of gender roles in terms of gender stereotypes, or the beliefs and expectations people hold about the typical characteristics, preferences, and behaviors of men and women. A person’s gender identity refers to their psychological sense of being male or female. In contrast, a person’s sexual orientation is the direction of their emotional and erotic attraction toward members of the opposite sex, the same sex, or both sexes. These are important distinctions, and though we will not discuss each of these terms in detail, it is important to recognize that sex, gender, gender identity, and sexual orientation do not always correspond with one another. A person can be biologically male but have a female gender identity while being attracted to women, or any other combination of identities and orientations. Gender Differences Differences between males and females can be based on (a) actual gender differences (i.e., men and women are actually different in some abilities), (b) gender roles (i.e., differences in how men and women are supposed to act), or (c) gender stereotypes (i.e., differences in how we think men and women are). Sometimes gender stereotypes and gender roles reflect actual gender differences, but sometimes they do not. What are actual gender differences? In terms of language and language skills, girls develop language skills earlier and know more words than boys; this does not, however, translate into long-term differences. Girls are also more likely than boys to offer praise, to agree with the person they’re talking to, and to elaborate on the other person’s comments; boys, in contrast, are more likely than girls to assert their opinion and offer criticisms (Leaper & Smith, 2004). In terms of temperament, boys are slightly less able to suppress inappropriate responses and slightly more likely to blurt things out than girls (Else-Quest, Hyde, Goldsmith, & Van Hulle, 2006). With respect to aggression, boys exhibit higher rates of unprovoked physical aggression than girls, but no difference in provoked aggression (Hyde, 2005). Some of the biggest differences involve the play styles of children. Boys frequently play organized rough-and-tumble games in large groups, while girls often play less physical activities in much smaller groups (Maccoby, 1998). There are also differences in the rates of depression, with girls much more likely than boys to be depressed after puberty. After puberty, girls are also more likely to be unhappy with their bodies than boys. However, there is considerable variability between individual males and individual females. Also, even when there are mean level differences, the actual size of most of these differences is quite small. This means that knowing someone’s gender does not help much in predicting his or her actual traits. For example, in terms of activity level, boys are considered more active than girls. However, 42% of girls are more active than the average boy (but so are 50% of boys; see Figure 3.4.1 for a depiction of this phenomenon in a comparison of male and female self-esteem).
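To see why small mean differences leave so much overlap, it helps to do the arithmetic. The sketch below (in Python, using SciPy) assumes two normal distributions with equal spread whose means differ by a standardized effect size, Cohen's d, and computes what share of the lower-scoring group still exceeds the higher-scoring group's average. The particular d values are illustrative assumptions, not estimates taken from the studies cited here.

```python
# A back-of-the-envelope check, assuming two normal distributions with
# equal spread whose means differ by Cohen's d: the share of the
# lower-mean group scoring above the other group's mean is 1 - Phi(d).
from scipy.stats import norm

for d in (0.2, 0.5, 0.8):  # conventional small, medium, large effects
    share = 1 - norm.cdf(d)
    print(f"d = {d}: {share:.0%} of the lower group beats the higher group's mean")
# d = 0.2 gives about 42% -- matching the activity-level example above --
# and even a "large" d = 0.8 still leaves about 21%.
```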
Furthermore, many gender differences do not reflect innate differences, but instead reflect differences in specific experiences and socialization. For example, one presumed gender difference is that boys show better spatial abilities than girls. However, Tzuriel and Egozi (2010) gave girls the chance to practice their spatial skills (by imagining a line drawing was different shapes) and discovered that, with practice, this gender difference completely disappeared. Many domains we assume differ across genders are really based on gender stereotypes and not actual differences. Based on large meta-analyses (analyses of thousands of studies across more than one million people), research has shown: Girls are not more fearful, shy, or scared of new things than boys; boys are not more angry than girls and girls are not more emotional than boys; boys do not perform better at math than girls; and girls are not more talkative than boys (Hyde, 2005). In the following sections, we’ll investigate gender roles, the part they play in creating these stereotypes, and how they can affect the development of real gender differences. Gender Roles As mentioned earlier, gender roles are well-established social constructions that may change from culture to culture and over time. In American culture, we commonly think of gender roles in terms of gender stereotypes, or the beliefs and expectations people hold about the typical characteristics, preferences, and behaviors of men and women. By the time we are adults, our gender roles are a stable part of our personalities, and we usually hold many gender stereotypes. When do children start to learn about gender? Very early. By their first birthday, children can distinguish faces by gender.
By their second birthday, they can label others’ gender and even sort objects into gender-typed categories. By their third birthday, children can consistently identify their own gender (see Martin, Ruble, & Szkrybalo, 2002, for a review). At this age, children believe sex is determined by external attributes, not biological attributes. Between 3 and 6 years of age, children learn that gender is constant and can’t change simply by changing external attributes, having developed gender constancy. During this period, children also develop strong and rigid gender stereotypes. Stereotypes can refer to play (e.g., boys play with trucks, and girls play with dolls), traits (e.g., boys are strong, and girls like to cry), and occupations (e.g., men are doctors and women are nurses). These stereotypes stay rigid until children reach about age 8 or 9. Then they develop cognitive abilities that allow them to be more flexible in their thinking about others. How do our gender roles and gender stereotypes develop and become so strong? Many of our gender stereotypes are so strong because we emphasize gender so much in culture (Bigler & Liben, 2007). For example, males and females are treated differently before they are even born. When someone learns of a new pregnancy, the first question asked is “Is it a boy or a girl?” Immediately upon hearing the answer, judgments are made about the child: Boys will be rough and like blue, while girls will be delicate and like pink. Developmental intergroup theory postulates that adults’ heavy focus on gender leads children to pay attention to gender as a key source of information about themselves and others, to seek out any possible gender differences, and to form rigid stereotypes based on gender that are subsequently difficult to change. There are also psychological theories that partially explain how children form their own gender roles after they learn to differentiate based on gender. The first of these theories is gender schema theory. Gender schema theory argues that children are active learners who essentially socialize themselves. In this case, children actively organize others’ behavior, activities, and attributes into gender categories, which are known as schemas. These schemas then affect what children notice and remember later. People of all ages are more likely to remember schema-consistent behaviors and attributes than schema-inconsistent behaviors and attributes. So, people are more likely to remember men, and forget women, who are firefighters. They also misremember schema-inconsistent information. If research participants are shown pictures of someone standing at the stove, they are more likely to remember the person as cooking if depicted as a woman, and as repairing the stove if depicted as a man. By only remembering schema-consistent information, gender schemas strengthen more and more over time. A second theory that attempts to explain the formation of gender roles in children is social learning theory. Social learning theory argues that gender roles are learned through reinforcement, punishment, and modeling. Children are rewarded and reinforced for behaving in concordance with gender roles and punished for breaking gender roles. In addition, social learning theory argues that children learn many of their gender roles by modeling the behavior of adults and older children and, in doing so, develop ideas about what behaviors are appropriate for each gender.
Social learning theory has less support than gender schema theory: research shows that parents do reinforce gender-appropriate play, but for the most part treat their male and female children similarly (Lytton & Romney, 1991). Gender Discrimination, Sexism, and Socialization Treating boys and girls, and men and women, differently is both a consequence of gender differences and a cause of gender differences. Differential treatment on the basis of gender is also referred to as gender discrimination and is an inevitable consequence of gender stereotypes. When it is based on unwanted treatment related to sexual behaviors or appearance, it is called sexual harassment. By the time boys and girls reach the end of high school, most have experienced some form of sexual harassment, most commonly in the form of unwanted touching or comments, being the target of jokes, having their body parts rated, or being called names related to sexual orientation. Differential treatment by gender begins with parents. A meta-analysis of research from the United States and Canada found that parents most frequently treated sons and daughters differently by encouraging gender-stereotypical activities (Lytton & Romney, 1991). Fathers, more than mothers, are particularly likely to encourage gender-stereotypical play, especially in sons. Parents also talk to their children differently based on stereotypes. For example, parents talk about numbers and counting twice as often with sons as with daughters (Chang, Sandhofer, & Brown, 2011) and talk to sons in more detail about science than they do with daughters. Parents are also much more likely to discuss emotions with their daughters than with their sons. Children also do much of the gender socializing themselves. By age 3, children play in gender-segregated play groups and expect a high degree of conformity from their peers. Children who are perceived as gender atypical (i.e., who do not conform to gender stereotypes) are more likely to be bullied and rejected than their more gender-conforming peers. Gender stereotypes typically maintain gender inequalities in society. The concept of ambivalent sexism recognizes the complex nature of gender attitudes, in which women are often associated with both positive and negative qualities (Glick & Fiske, 2001). It has two components. First, hostile sexism refers to negative attitudes toward women as inferior and incompetent relative to men. Second, benevolent sexism refers to the perception that women need to be protected, supported, and adored by men. Benevolent sexism is endorsed more widely than hostile sexism, possibly because it is seen as more socially acceptable. Gender stereotypes are found not just in American culture. Across cultures, males tend to be associated with stronger and more active characteristics than females (Best, 2001). In recent years, gender and related concepts have become a common focus of social change and social debate. Many societies, including American society, have seen rapid change in perceptions of gender roles, media portrayals of gender, and legal trends relating to gender. For example, there has been an increase in children’s toys attempting to cater to both genders (such as Legos marketed to girls), rather than catering to traditional stereotypes. Nationwide, the rapid rise in acceptance of homosexuality and gender questioning has resulted in a push for legal change to keep up with social change.
Laws such as “Don’t Ask, Don’t Tell” and the Defense of Marriage Act (DOMA), both enacted in the 1990s, met strong resistance on the grounds that they discriminated against sexual minority groups, and both were challenged as unconstitutional less than 20 years after their implementation. Change in perceptions of gender is also evident in social issues such as sexual harassment, a term that entered the mainstream mindset only with the 1991 Clarence Thomas/Anita Hill hearings. As society’s gender roles and gender restrictions continue to fluctuate, the legal system and the structure of American society will continue to change and adjust. Important Gender-related Events in the United States
1920 -- 19th Amendment (women’s suffrage) ratified
1941-1945 -- World War II brings millions of women into the workforce
1948 -- Universal Declaration of Human Rights
1963 -- Congress passes the Equal Pay Act
1964 -- Congress passes the Civil Rights Act, which outlaws sex discrimination
1969 -- Stonewall riots in NYC force gay rights into the American spotlight
1972 -- Congress passes the Equal Rights Amendment; Title IX prohibits sex discrimination in schools and sports
1973 -- American Psychiatric Association removes homosexuality from the DSM
1981 -- First woman appointed to the US Supreme Court
1987 -- The average woman earned \$0.68 for every \$1.00 earned by a man
1992 -- World Health Organization no longer considers homosexuality an illness
1993 -- Supreme Court rules that sexual harassment in the workplace is illegal
2011 -- Don’t Ask, Don’t Tell is repealed, allowing people who identify as gay to serve openly in the US military
2012 -- President Barack Obama becomes the first sitting American president to openly support LGBT rights and marriage equality
Outside Resources Video: Human Sexuality is Complicated Web: Big Think with Professor of Neuroscience Lise Eliot bigthink.com/users/liseeliot Web: Understanding Prejudice: Sexism http://www.understandingprejudice.or...nks/sexism.htm Discussion Questions 1. What are the differences and associations among gender, sex, gender identity, and sexual orientation? 2. Are the gender differences that exist innate (biological) differences, or are they caused by other variables? 3. Discuss the theories relating to the development of gender roles and gender stereotypes. Which theory do you support? Why? 4. Using what you’ve read in this module: a. Why do you think gender stereotypes are so inflated compared with actual gender differences? b. Why do you think people continue to believe in such strong gender differences despite evidence to the contrary? 5. Brainstorm additional forms of gender discrimination aside from sexual harassment. Have you seen or experienced gender discrimination personally? 6. How is benevolent sexism detrimental to women, despite appearing positive? Vocabulary Ambivalent sexism A concept of gender attitudes that encompasses both positive and negative qualities. Benevolent sexism The “positive” element of ambivalent sexism, which recognizes that women are perceived as needing to be protected, supported, and adored by men. Developmental intergroup theory A theory that postulates that adults’ focus on gender leads children to pay attention to gender as a key source of information about themselves and others, to seek out possible gender differences, and to form rigid stereotypes based on gender. Gender The cultural, social, and psychological meanings that are associated with masculinity and femininity.
Gender constancy The awareness that gender is constant and does not change simply by changing external attributes; develops between 3 and 6 years of age. Gender discrimination Differential treatment on the basis of gender. Gender identity A person’s psychological sense of being male or female. Gender roles The behaviors, attitudes, and personality traits that are designated as either masculine or feminine in a given culture. Gender schema theory This theory of how children form their own gender roles argues that children actively organize others’ behavior, activities, and attributes into gender categories or schemas. Gender stereotypes The beliefs and expectations people hold about the typical characteristics, preferences, and behaviors of men and women. Hostile sexism The negative element of ambivalent sexism, which includes the attitudes that women are inferior and incompetent relative to men. Schemas The gender categories into which, according to gender schema theory, children actively organize others’ behavior, activities, and attributes. Sex Biological category of male or female as defined by physical differences in genetic composition and in reproductive anatomy and function. Sexual harassment A form of gender discrimination based on unwanted treatment related to sexual behaviors or appearance. Sexual orientation Refers to the direction of emotional and erotic attraction toward members of the opposite sex, the same sex, or both sexes. Social learning theory This theory of how children form their own gender roles argues that gender roles are learned through reinforcement, punishment, and modeling.
By Dan P. McAdams Northwestern University For human beings, the self is what happens when “I” encounters “Me.” The central psychological question of selfhood, then, is this: How does a person apprehend and understand who he or she is? Over the past 100 years, psychologists have approached the study of self (and the related concept of identity) in many different ways, but three central metaphors for the self repeatedly emerge. First, the self may be seen as a social actor, who enacts roles and displays traits by performing behaviors in the presence of others. Second, the self is a motivated agent, who acts upon inner desires and formulates goals, values, and plans to guide behavior in the future. Third, the self eventually becomes an autobiographical author, too, who takes stock of life — past, present, and future — to create a story about who I am, how I came to be, and where my life may be going. This module briefly reviews central ideas and research findings on the self as an actor, an agent, and an author, with an emphasis on how these features of selfhood develop over the human life course. learning objectives • Explain the basic idea of reflexivity in human selfhood—how the “I” encounters and makes sense of itself (the “Me”). • Describe fundamental distinctions between three different perspectives on the self: the self as actor, agent, and author. • Describe how a sense of self as a social actor emerges around the age of 2 years and how it develops going forward. • Describe the development of the self’s sense of motivated agency from the emergence of the child’s theory of mind to the articulation of life goals and values in adolescence and beyond. • Define the term narrative identity, and explain what psychological and cultural functions narrative identity serves. Introduction In the Temple of Apollo at Delphi, the ancient Greeks inscribed the words: “Know thyself.” For at least 2,500 years, and probably longer, human beings have pondered the meaning of the ancient aphorism. Over the past century, psychological scientists have joined the effort. They have formulated many theories and tested countless hypotheses that speak to the central question of human selfhood: How does a person know who he or she is? The ancient Greeks seemed to realize that the self is inherently reflexive—it reflects back on itself. In the disarmingly simple idea made famous by the great psychologist William James (1892/1963), the self is what happens when “I” reflects back upon “Me.” The self is both the I and the Me—it is the knower, and it is what the knower knows when the knower reflects upon itself. When you look back at yourself, what do you see? When you look inside, what do you find? Moreover, when you try to change your self in some way, what is it that you are trying to change? The philosopher Charles Taylor (1989) describes the self as a reflexive project. In modern life, Taylor argues, we often try to manage, discipline, refine, improve, or develop the self. We work on our selves, as we might work on any other interesting project. But what exactly is it that we work on? Imagine for a moment that you have decided to improve yourself. You might, say, go on a diet to improve your appearance. Or you might decide to be nicer to your mother, in order to improve that important social role. Or maybe the problem is at work—you need to find a better job or go back to school to prepare for a different career. Perhaps you just need to work harder. Or get organized. Or recommit yourself to religion.
Or maybe the key is to begin thinking about your whole life story in a completely different way, in a way that you hope will bring you more happiness, fulfillment, peace, or excitement. Although there are many different ways you might reflect upon and try to improve the self, it turns out that many, if not most, of them fall roughly into three broad psychological categories (McAdams & Cox, 2010). The I may encounter the Me as (a) a social actor, (b) a motivated agent, or (c) an autobiographical author. The Social Actor Shakespeare tapped into a deep truth about human nature when he famously wrote, “All the world’s a stage, and all the men and women merely players.” He was wrong about the “merely,” however, for there is nothing more important for human adaptation than the manner in which we perform our roles as actors in the everyday theatre of social life. What Shakespeare may have sensed but could not have fully understood is that human beings evolved to live in social groups. Beginning with Darwin (1872/1965) and running through contemporary conceptions of human evolution, scientists have portrayed human nature as profoundly social (Wilson, 2012). For a few million years, Homo sapiens and their evolutionary forerunners have survived and flourished by virtue of their ability to live and work together in complex social groups, cooperating with each other to solve problems and overcome threats and competing with each other in the face of limited resources. As social animals, human beings strive to get along and get ahead in the presence of each other (Hogan, 1982). Evolution has prepared us to care deeply about social acceptance and social status, for those unfortunate individuals who do not get along well in social groups or who fail to attain a requisite status among their peers have typically been severely compromised when it comes to survival and reproduction. It makes consummate evolutionary sense, therefore, that the human "I" should apprehend the "Me" first and foremost as a social actor. For human beings, the sense of the self as a social actor begins to emerge around the age of 18 months. Numerous studies have shown that by the time they reach their second birthday most toddlers recognize themselves in mirrors and other reflecting devices (Lewis & Brooks-Gunn, 1979; Rochat, 2003). What they see is an embodied actor who moves through space and time. Many children begin to use words such as “me” and “mine” in the second year of life, suggesting that the I now has linguistic labels that can be applied reflexively to itself: I call myself “me.” Around the same time, children also begin to express social emotions such as embarrassment, shame, guilt, and pride (Tangney, Stuewig, & Mashek, 2007). These emotions tell the social actor how well he or she is performing in the group. When I do things that win the approval of others, I feel proud of myself. When I fail in the presence of others, I may feel embarrassment or shame. When I violate a social rule, I may experience guilt, which may motivate me to make amends. Many of the classic psychological theories of human selfhood point to the second year of life as a key developmental period. For example, Freud (1923/1961) and his followers in the psychoanalytic tradition traced the emergence of an autonomous ego back to the second year. Freud used the term “ego” (in German das Ich, which also translates into “the I”) to refer to an executive self in the personality. 
Erikson (1963) argued that experiences of trust and interpersonal attachment in the first year of life help to consolidate the autonomy of the ego in the second. Coming from a more sociological perspective, Mead (1934) suggested that the I comes to know the Me through reflection, which may begin quite literally with mirrors but later involves the reflected appraisals of others. I come to know who I am as a social actor, Mead argued, by noting how other people in my social world react to my performances. In the development of the self as a social actor, other people function like mirrors—they reflect who I am back to me. Research has shown that when young children begin to make attributions about themselves, they start simple (Harter, 2006). At age 4, Jessica knows that she has dark hair, knows that she lives in a white house, and describes herself to others in terms of simple behavioral traits. She may say that she is “nice,” or “helpful,” or that she is “a good girl most of the time.” By the time she hits fifth grade (age 10), Jessica sees herself in more complex ways, attributing traits to the self such as “honest,” “moody,” “outgoing,” “shy,” “hard-working,” “smart,” “good at math but not gym class,” or “nice except when I am around my annoying brother.” By late childhood and early adolescence, the personality traits that people attribute to themselves, as well as those attributed to them by others, tend to correlate with each other in ways that conform to a well-established taxonomy of five broad trait domains, repeatedly derived in studies of adult personality and often called the Big Five: (1) extraversion, (2) neuroticism, (3) agreeableness, (4) conscientiousness, and (5) openness to experience (Roberts, Wood, & Caspi, 2008). By late childhood, moreover, self-conceptions will likely also include important social roles: “I am a good student,” “I am the oldest daughter,” or “I am a good friend to Sarah.” Traits and roles, and variations on these notions, are the main currency of the self as social actor (McAdams & Cox, 2010). Trait terms capture perceived consistencies in social performance. They convey what I reflexively perceive to be my overall acting style, based in part on how I think others see me as an actor in many different social situations. Roles capture the quality, as I perceive it, of important structured relationships in my life. Taken together, traits and roles make up the main features of my social reputation, as I apprehend it in my own mind (Hogan, 1982). If you have ever tried hard to change yourself, you may have taken aim at your social reputation, targeting your central traits or your social roles. Maybe you woke up one day and decided that you must become a more optimistic and emotionally upbeat person. Taking into consideration the reflected appraisals of others, you realized that even your friends seem to avoid you because you bring them down. In addition, it feels bad to feel so bad all the time: Wouldn’t it be better to feel good, to have more energy and hope? In the language of traits, you have decided to “work on” your “neuroticism.” Or maybe instead, your problem is the trait of “conscientiousness”: You are undisciplined and don’t work hard enough, so you resolve to make changes in that area. Self-improvement efforts such as these, aimed at changing one’s traits to become a more effective social actor, are sometimes successful, but they are very hard, rather like dieting.
Research suggests that broad traits tend to be stubborn, resistant to change, even with the aid of psychotherapy. However, people often have more success working directly on their social roles. To become a more effective social actor, you may want to take aim at the important roles you play in life. What can I do to become a better son or daughter? How can I find new and meaningful roles to perform at work, or in my family, or among my friends, or in my church and community? By doing concrete things that enrich your performances in important social roles, you may begin to see yourself in a new light, and others will notice the change, too. Social actors hold the potential to transform their performances across the human life course. Each time you walk out on stage, you have a chance to start anew. The Motivated Agent Whether we are talking literally about the theatrical stage or more figuratively, as I do in this module, about the everyday social environment for human behavior, observers can never fully know what is in the actor’s head, no matter how closely they watch. We can see actors act, but we cannot know for sure what they want or what they value, unless they tell us straightaway. As a social actor, a person may come across as friendly and compassionate, or cynical and mean-spirited, but in neither case can we infer their motivations from their traits or their roles. What does the friendly person want? What is the cynical father trying to achieve? Many broad psychological theories of the self prioritize the motivational qualities of human behavior—the inner needs, wants, desires, goals, values, plans, programs, fears, and aversions that seem to give behavior its direction and purpose (Bandura, 1989; Deci & Ryan, 1991; Markus & Nurius, 1986). These kinds of theories explicitly conceive of the self as a motivated agent. To be an agent is to act with direction and purpose, to move forward into the future in pursuit of self-chosen and valued goals. In a sense, human beings are agents even as infants, for babies can surely act in goal-directed ways. By age 1 year, moreover, infants show a strong preference for observing and imitating the goal-directed, intentional behavior of others, rather than random behaviors (Woodward, 2009). Still, it is one thing to act in goal-directed ways; it is quite another for the I to know itself (the Me) as an intentional and purposeful force who moves forward in life in pursuit of self-chosen goals, values, and other desired end states. In order to do so, the person must first realize that people indeed have desires and goals in their minds and that these inner desires and goals motivate (initiate, energize, put into motion) their behavior. According to a strong line of research in developmental psychology, attaining this kind of understanding means acquiring a theory of mind (Wellman, 1993), which occurs for most children by the age of 4. Once a child understands that other people’s behavior is often motivated by inner desires and goals, it is a small step to apprehend the self in similar terms. Building on theory of mind and other cognitive and social developments, children begin to construct the self as a motivated agent in the elementary school years, layered over their still-developing sense of themselves as social actors. 
Theory and research on what developmental psychologists call the age 5-to-7 shift converge to suggest that children become more planful, intentional, and systematic in their pursuit of valued goals during this time (Sameroff & Haith, 1996). Schooling reinforces the shift in that teachers and curricula place increasing demands on students to work hard, adhere to schedules, focus on goals, and achieve success in particular, well-defined task domains. Their relative success in achieving their most cherished goals, furthermore, goes a long way in determining children’s self-esteem (Robins, Tracy, & Trzesniewski, 2008). Motivated agents feel good about themselves to the extent they believe that they are making good progress in achieving their goals and advancing their most important values. Goals and values become even more important for the self in adolescence, as teenagers begin to confront what Erikson (1963) famously termed the developmental challenge of identity. For adolescents and young adults, establishing a psychologically efficacious identity involves exploring different options with respect to life goals, values, vocations, and intimate relationships and eventually committing to a motivational and ideological agenda for adult life—an integrated and realistic sense of what I want and value in life and how I plan to achieve it (Kroger & Marcia, 2011). Committing oneself to an integrated suite of life goals and values is perhaps the greatest achievement for the self as motivated agent. Establishing an adult identity has implications, as well, for how a person moves through life as a social actor, entailing new role commitments and, perhaps, a changing understanding of one’s basic dispositional traits. According to Erikson, however, identity achievement is always provisional, for adults continue to work on their identities as they move into midlife and beyond, often relinquishing old goals in favor of new ones, investing themselves in new projects and making new plans, exploring new relationships, and shifting their priorities in response to changing life circumstances (Freund & Riediger, 2006; Josselson, 1996). There is a sense in which any time you try to change yourself, you are assuming the role of a motivated agent. After all, to strive to change something is inherently what an agent does. However, what particular feature of selfhood you try to change may correspond to your self as actor, agent, or author, or some combination. When you try to change your traits or roles, you take aim at the social actor. By contrast, when you try to change your values or life goals, you are focusing on yourself as a motivated agent. Adolescence and young adulthood are periods in the human life course when many of us focus attention on our values and life goals. Perhaps you grew up as a traditional Catholic, but now in college you believe that the values inculcated in your childhood no longer function so well for you. You no longer believe in the central tenets of the Catholic Church, say, and are now working to replace your old values with new ones. Or maybe you still want to be Catholic, but you feel that your new take on faith requires a different kind of personal ideology. In the realm of the motivated agent, moreover, changing values can influence life goals. If your new value system prioritizes alleviating the suffering of others, you may decide to pursue a degree in social work, or to become a public interest lawyer, or to live a simpler life that prioritizes people over material wealth.
A great deal of the identity work we do in adolescence and young adulthood is about values and goals, as we strive to articulate a personal vision or dream for what we hope to accomplish in the future. The Autobiographical Author Even as the “I” continues to develop a sense of the “Me” as both a social actor and a motivated agent, a third standpoint for selfhood gradually emerges in the adolescent and early-adult years. The third perspective is a response to Erikson’s (1963) challenge of identity. According to Erikson, developing an identity involves more than the exploration of and commitment to life goals and values (the self as motivated agent), and more than committing to new roles and re-evaluating old traits (the self as social actor). It also involves achieving a sense of temporal continuity in life—a reflexive understanding of how I have come to be the person I am becoming, or put differently, how my past self has developed into my present self, and how my present self will, in turn, develop into an envisioned future self. In his analysis of identity formation in the life of the 16th-century Protestant reformer Martin Luther, Erikson (1958) describes the culmination of a young adult’s search for identity in this way: "To be adult means among other things to see one’s own life in continuous perspective, both in retrospect and prospect. By accepting some definition of who he is, usually on the basis of a function in an economy, a place in the sequence of generations, and a status in the structure of society, the adult is able to selectively reconstruct his past in such a way that, step for step, it seems to have planned him, or better, he seems to have planned it. In this sense, psychologically we do choose our parents, our family history, and the history of our kings, heroes, and gods. By making them our own, we maneuver ourselves into the inner position of proprietors, of creators." -- (Erikson, 1958, pp. 111–112; emphasis added). In this rich passage, Erikson intimates that the development of a mature identity in young adulthood involves the I’s ability to construct a retrospective and prospective story about the Me (McAdams, 1985). In their efforts to find a meaningful identity for life, young men and women begin “to selectively reconstruct” their past, as Erikson wrote, and imagine their future to create an integrative life story, or what psychologists today often call a narrative identity. A narrative identity is an internalized and evolving story of the self that reconstructs the past and anticipates the future in such a way as to provide a person’s life with some degree of unity, meaning, and purpose over time (McAdams, 2008; McLean, Pasupathi, & Pals, 2007). The self typically becomes an autobiographical author in the early-adult years, a way of being that is layered over the motivated agent, which is layered over the social actor. In order to provide life with the sense of temporal continuity and deep meaning that Erikson believed identity should confer, we must author a personalized life story that integrates our understanding of who we once were, who we are today, and who we may become in the future. The story helps to explain, for the author and for the author’s world, why the social actor does what it does and why the motivated agent wants what it wants, and how the person as a whole has developed over time, from the past’s reconstructed beginning to the future’s imagined ending.
By the time they are 5 or 6 years of age, children can tell well-formed stories about personal events in their lives (Fivush, 2011). By the end of childhood, they usually have a good sense of what a typical biography contains and how it is sequenced, from birth to death (Thomsen & Berntsen, 2008). But it is not until adolescence, research shows, that human beings express advanced storytelling skills and what psychologists call autobiographical reasoning (Habermas & Bluck, 2000; McLean & Fournier, 2008). In autobiographical reasoning, a narrator is able to derive substantive conclusions about the self from analyzing his or her own personal experiences. Adolescents may develop the ability to string together events into causal chains and inductively derive general themes about life from a sequence of chapters and scenes (Habermas & de Silveira, 2008). For example, a 16-year-old may be able to explain to herself and to others how childhood experiences in her family have shaped her vocation in life. Her parents were divorced when she was 5 years old, the teenager recalls, and this caused a great deal of stress in her family. Her mother often seemed anxious and depressed, but she (the now-teenager when she was a little girl—the story’s protagonist) often tried to cheer her mother up, and her efforts seemed to work. In more recent years, the teenager notes that her friends often come to her with their boyfriend problems. She seems to be very adept at giving advice about love and relationships, which stems, the teenager now believes, from her early experiences with her mother. Carrying this causal narrative forward, the teenager now thinks that she would like to be a marriage counselor when she grows up. Unlike children, then, adolescents can tell a full and convincing story about an entire human life, or at least a prominent line of causation within a full life, explaining continuity and change in the story’s protagonist over time. Once the cognitive skills are in place, young people seek interpersonal opportunities to share and refine their developing sense of themselves as storytellers (the I) who tell stories about themselves (the Me). Adolescents and young adults author a narrative sense of the self by telling stories about their experiences to other people, monitoring the feedback they receive from the tellings, editing their stories in light of the feedback, gaining new experiences and telling stories about those, and on and on, as selves create stories that, in turn, create new selves (McLean et al., 2007). Gradually, in fits and starts, through conversation and introspection, the I develops a convincing and coherent narrative about the Me. Contemporary research on the self as autobiographical author emphasizes the strong effect of culture on narrative identity (Hammack, 2008). Culture provides a menu of favored plot lines, themes, and character types for the construction of self-defining life stories. Autobiographical authors sample selectively from the cultural menu, appropriating ideas that seem to resonate well with their own life experiences. As such, life stories reflect the culture wherein they are situated, as much as they reflect the authorial efforts of the autobiographical I. As one example of the tight link between culture and narrative identity, McAdams (2013) and others (e.g., Kleinfeld, 2012) have highlighted the prominence of redemptive narratives in American culture.
Epitomized in such iconic cultural ideals as the American dream, Horatio Alger stories, and narratives of Christian atonement, redemptive stories track the move from suffering to an enhanced status or state, while scripting the development of a chosen protagonist who journeys forth into a dangerous and unredeemed world (McAdams, 2013). Hollywood movies often celebrate redemptive quests. Americans are exposed to similar narrative messages in self-help books, 12-step programs, Sunday sermons, and in the rhetoric of political campaigns. Over the past two decades, the world’s most influential spokesperson for the power of redemption in human lives may be Oprah Winfrey, who tells her own story of overcoming childhood adversity while encouraging others, through her media outlets and philanthropy, to tell similar kinds of stories for their own lives (McAdams, 2013). Research has demonstrated that American adults who enjoy high levels of mental health and civic engagement tend to construct their lives as narratives of redemption, tracking the move from sin to salvation, rags to riches, oppression to liberation, or sickness/abuse to health/recovery (McAdams, Diamond, de St. Aubin, & Mansfield, 1997; McAdams, Reynolds, Lewis, Patten, & Bowman, 2001; Walker & Frimer, 2007). In American society, these kinds of stories are often seen to be inspirational. At the same time, McAdams (2011, 2013) has pointed to shortcomings and limitations in the redemptive stories that many Americans tell, which mirror cultural biases and stereotypes in American culture and heritage. McAdams has argued that redemptive stories support happiness and societal engagement for some Americans, but the same stories can encourage moral righteousness and a naïve expectation that suffering will always be redeemed. For better and sometimes for worse, Americans seem to love stories of personal redemption and often aim to assimilate their autobiographical memories and aspirations to a redemptive form. Nonetheless, these same stories may not work so well in cultures that espouse different values and narrative ideals (Hammack, 2008). It is important to remember that every culture offers its own storehouse of favored narrative forms. It is also essential to know that no single narrative form captures all that is good (or bad) about a culture. In American society, the redemptive narrative is but one of many different kinds of stories that people commonly employ to make sense of their lives. What is your story? What kind of a narrative are you working on? As you look to the past and imagine the future, what threads of continuity, change, and meaning do you discern? For many people, the most dramatic and fulfilling efforts to change the self happen when the I works hard, as an autobiographical author, to construct and, ultimately, to tell a new story about the Me. Storytelling may be the most powerful form of self-transformation that human beings have ever invented. Changing one’s life story is at the heart of many forms of psychotherapy and counseling, as well as religious conversions, vocational epiphanies, and other dramatic transformations of the self that people often celebrate as turning points in their lives (Adler, 2012). Storytelling is often at the heart of the little changes, too, minor edits in the self that we make as we move through daily life, as we live and experience life, and as we later tell it to ourselves and to others. 
Conclusion For human beings, selves begin as social actors, but they eventually become motivated agents and autobiographical authors, too. The I first sees itself as an embodied actor in social space; with development, however, it comes to appreciate itself also as a forward-looking source of self-determined goals and values, and later yet, as a storyteller of personal experience, oriented to the reconstructed past and the imagined future. To “know thyself” in mature adulthood, then, is to do three things: (a) to apprehend and to perform with social approval my self-ascribed traits and roles, (b) to pursue with vigor and (ideally) success my most valued goals and plans, and (c) to construct a story about life that conveys, with vividness and cultural resonance, how I became the person I am becoming, integrating my past as I remember it, my present as I am experiencing it, and my future as I hope it to be. Outside Resources Web: The website for the Foley Center for the Study of Lives, at Northwestern University. The site contains research materials, interview protocols, and coding manuals for conducting studies of narrative identity. http://www.sesp.northwestern.edu/foley/ Discussion Questions 1. Back in the 1950s, Erik Erikson argued that many adolescents and young adults experience a tumultuous identity crisis. Do you think this is true today? What might an identity crisis look and feel like? And, how might it be resolved? 2. Many people believe that they have a true self buried inside of them. From this perspective, the development of self is about discovering a psychological truth deep inside. Do you believe this to be true? How does thinking about the self as an actor, agent, and author bear on this question? 3. Psychological research shows that when people are placed in front of mirrors they often behave in a more moral and conscientious manner, even though they sometimes experience this procedure as unpleasant. From the standpoint of the self as a social actor, how might we explain this phenomenon? 4. By the time they reach adulthood, does everybody have a narrative identity? Do some people simply never develop a story for their life? 5. What happens when the three perspectives on self—the self as actor, agent, and author—conflict with each other? Is it necessary for people’s self-ascribed traits and roles to line up well with their goals and their stories? 6. William James wrote that the self includes all things that the person considers to be “mine.” If we take James literally, a person’s self might extend to include his or her material possessions, pets, and friends and family. Does this make sense? 7. To what extent can we control the self? Are some features of selfhood easier to control than others? 8. What cultural differences may be observed in the construction of the self? How might gender, ethnicity, and class impact the development of the self as actor, as agent, and as author? Vocabulary Autobiographical reasoning The ability, typically developed in adolescence, to derive substantive conclusions about the self from analyzing one’s own personal experiences. Big Five A broad taxonomy of personality trait domains repeatedly derived from studies of trait ratings in adulthood and encompassing the categories of (1) extraversion vs. introversion, (2) neuroticism vs. emotional stability, (3) agreeableness vs. disagreeableness, (4) conscientiousness vs. nonconscientiousness, and (5) openness to experience vs. conventionality.
By late childhood and early adolescence, people’s self-attributions of personality traits, as well as the trait attributions made about them by others, show patterns of intercorrelations that conform to the five-factor structure obtained in studies of adults. Ego Sigmund Freud’s conception of an executive self in the personality. Akin to this module’s notion of “the I,” Freud imagined the ego as observing outside reality, engaging in rational thought, and coping with the competing demands of inner desires and moral standards. Identity Sometimes used synonymously with the term “self,” identity means many different things in psychological science and in other fields (e.g., sociology). In this module, I adopt Erik Erikson’s conception of identity as a developmental task for late adolescence and young adulthood. Forming an identity in adolescence and young adulthood involves exploring alternative roles, values, goals, and relationships and eventually committing to a realistic agenda for life that productively situates a person in the adult world of work and love. In addition, identity formation entails commitments to new social roles and reevaluation of old traits, and importantly, it brings with it a sense of temporal continuity in life, achieved through the construction of an integrative life story. Narrative identity An internalized and evolving story of the self designed to provide life with some measure of temporal unity and purpose. Beginning in late adolescence, people craft self-defining stories that reconstruct the past and imagine the future to explain how the person came to be the person that he or she is becoming. Redemptive narratives Life stories that affirm the transformation from suffering to an enhanced status or state. In American culture, redemptive life stories are highly prized as models for the good self, as in classic narratives of atonement, upward mobility, liberation, and recovery. Reflexivity The idea that the self reflects back upon itself; that the I (the knower, the subject) encounters the Me (the known, the object). Reflexivity is a fundamental property of human selfhood. Self as autobiographical author The sense of the self as a storyteller who reconstructs the past and imagines the future in order to articulate an integrative narrative that provides life with some measure of temporal continuity and purpose. Self as motivated agent The sense of the self as an intentional force that strives to achieve goals, plans, values, projects, and the like. Self as social actor The sense of the self as an embodied actor whose social performances may be construed in terms of more or less consistent self-ascribed traits and social roles. Self-esteem The extent to which a person feels that he or she is worthy and good. The success or failure that the motivated agent experiences in pursuit of valued goals is a strong determinant of self-esteem. Social reputation The traits and social roles that others attribute to an actor. Actors also have their own conceptions of what they imagine their respective social reputations indeed are in the eyes of others. The Age 5-to-7 Shift Cognitive and social changes that occur in the early elementary school years that result in the child’s developing a more purposeful, planful, and goal-directed approach to life, setting the stage for the emergence of the self as a motivated agent. The “I” The self as knower, the sense of the self as a subject who encounters (knows, works on) itself (the Me).
The “Me” The self as known, the sense of the self as the object or target of the I’s knowledge and work. Theory of mind Emerging around the age of 4, the child’s understanding that other people have minds containing desires and beliefs, and that these desires and beliefs motivate behavior.
By Roy F. Baumeister Florida State University Self-regulation means changing oneself based on standards, that is, ideas of how one should or should not be. It is a centrally important capacity that contributes to socially desirable behavior, including moral behavior. Effective self-regulation requires knowledge of standards for proper behavior, careful monitoring of one’s actions and feelings, and the ability to make desired changes. learning objectives • Understand what self-regulation means and how it works. • Understand the requirements and benefits of effective self-regulation. • Understand differences in state (ego depletion) and trait (conscientiousness). Introduction Self-regulation is the capacity to alter one’s responses. It is broadly related to the term “self-control”. The term “regulate” means to change something—but not just any change, rather change to bring it into agreement with some idea, such as a rule, a goal, a plan, or a moral principle. To illustrate, when the government regulates how houses are built, that means the government inspects the buildings to check that everything is done “up to code” or according to the rules about good building. In a similar fashion, when you regulate yourself, you watch and change yourself to bring your responses into line with some ideas about how they should be. People regulate four broad categories of responses. They control their thinking, such as in trying to concentrate or to shut some annoying earworm tune out of their mind. They control their emotions, as in trying to cheer themselves up or to calm down when angry (or to stay angry, if that’s helpful). They control their impulses, as in trying not to eat fattening food, trying to hold one’s tongue, or trying to quit smoking. Last, they try to control their task performances, such as in pushing themselves to keep working when tired and discouraged, or deciding whether to speed up (to get more done) or slow down (to make sure to get it right). Early Work on Delay of Gratification Research on self-regulation was greatly stimulated by early experiments conducted by Walter Mischel and his colleagues (e.g., Mischel, 1974) on the capacity to delay gratification, which means being able to refuse current temptations and pleasures to work toward future benefits. In a typical study with what later came to be called the “marshmallow test,” a 4-year-old child would be seated in a room, and a favorite treat such as a cookie or marshmallow was placed on the table. The experimenter would tell the child, “I have to leave for a few minutes and then I’ll be back. You can have this treat any time, but if you can wait until I come back, you can have two of them.” Two treats are better than one, but to get the double treat, the child had to wait. Self-regulation was required to resist that urge to gobble down the marshmallow on the table so as to reap the larger reward. Many situations in life demand similar delays for best results. Going to college to get an education often means living in poverty and debt rather than getting a job to earn money right away. But in the long run, the college degree increases your lifetime income by hundreds of thousands of dollars. Very few nonhuman animals can bring themselves to resist immediate temptations so as to pursue future rewards, but this trait is an important key to success in human life. Benefits of Self-Control People who are good at self-regulation do better than others in life. 
Follow-up studies with Mischel’s samples found that the children who resisted temptation and delayed gratification effectively grew into adults who were better than others in school and work, more popular with other people, and who were rated as nicer, better people by teachers and others (Mischel, Shoda, & Peake, 1988; Shoda, Mischel, & Peake, 1990). College students with high self-control get better grades, have better close relationships, manage their emotions better, have fewer problems with drugs and alcohol, are less prone to eating disorders, are better adjusted, have higher self-esteem, and get along better with other people, as compared to people with low self-control (Tangney, Baumeister, & Boone, 2004). They are happier and have less stress and conflict (Hofmann, Vohs, Fisher, Luhmann, & Baumeister, 2013). Longitudinal studies have found that children with good self-control go through life with fewer problems, are more successful, are less likely to be arrested or have a child out of wedlock, and enjoy other benefits (Moffitt et al., 2011). Criminologists have concluded that low self-control is a—if not the—key trait for understanding the criminal personality (Gottfredson & Hirschi, 1990; Pratt & Cullen, 2000). Some researchers have searched for evidence that too much self-control can be bad (Tangney et al., 2004)—but without success. There is such a thing as being highly inhibited or clinically “over-controlled,” which can impair initiative and reduce happiness, but that does not appear to be an excess of self-regulation. Rather, it may stem from having been punished excessively as a child and, therefore, adopting a fearful, inhibited approach to life. In general, self-control resembles intelligence in that the more one has, the better off one is, and the benefits are found through a broad range of life activities. Three Ingredients of Effective Self-Regulation For self-regulation to be effective, three parts or ingredients are involved. The first is standards, which are ideas about how things should (or should not) be. The second is monitoring, which means keeping track of the target behavior that is to be regulated. The third is the capacity to change. Standards are an indispensable foundation for self-regulation. We already saw that self-regulation means change in relation to some idea; without such guiding ideas, change would largely be random and lacking direction. Standards include goals, laws, moral principles, personal rules, other people’s expectations, and social norms. Dieters, for example, typically have a goal in terms of how much weight they wish to lose. They help their self-regulation further by developing standards for how much or how little to eat and what kinds of foods they will eat. The second ingredient is monitoring. It is hard to regulate something without being aware of it. For example, dieters count their calories. That is, they keep track of how much they eat and how fattening it is. In fact, some evidence suggests that dieters stop keeping track of how much they eat when they break their diet or go on an eating binge, and the failure of monitoring contributes to eating more (Polivy, 1976). Alcohol has been found to impair all sorts of self-regulation, partly because intoxicated persons fail to keep track of their behavior and compare it to their standards. The combination of standards and monitoring was featured in an influential theory about self-regulation by Carver and Scheier (1981, 1982, 1998). 
Those researchers started their careers studying self-awareness, which is a key human trait. The study of self-awareness recognized early on that people do not simply notice themselves the way they might notice a tree or car. Rather, self-awareness always seemed to involve comparing oneself to a standard. For example, when a man looks in a mirror, he does not just think, “Oh, there I am,” but more likely thinks, “Is my hair a mess? Do my clothes look good?” Carver and Scheier proposed that the reason for this comparison to standards is that it enables people to regulate themselves, such as by changing things that do not measure up to their standards. In the mirror example, the man might comb his hair to bring it into line with his standards for personal appearance. Good students keep track of their grades, credits, and progress toward their degree and other goals. Athletes keep track of their times, scores, and achievements, as a way to monitor improvement. The process of monitoring oneself can be compared to how a thermostat operates. The thermostat checks the temperature in the room, compares it to a standard (the setting for desired temperature), and if those do not match, it turns on the heat or air conditioner to change the temperature. It checks again and again, and when the room temperature matches the desired setting, the thermostat turns off the climate control. In the same way, people compare themselves to their personal standards, make changes as needed, and stop working on change once they have met their goals.
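The thermostat comparison describes a feedback loop: test the current state against the standard, operate to reduce any discrepancy, test again, and exit once the standard is met. As a purely illustrative sketch of that loop (not code from Carver and Scheier; every name here is hypothetical), the logic can be written out in a few lines of Python:

# A minimal sketch of the monitor-compare-operate feedback loop that
# Carver and Scheier likened to a thermostat. Illustrative only; all
# names are hypothetical and not drawn from the original theory.
def regulate(current, standard, operate, tolerance=0.5, max_cycles=1000):
    for _ in range(max_cycles):
        discrepancy = standard - current          # monitor: compare state to standard
        if abs(discrepancy) <= tolerance:         # exit: close enough, stop regulating
            break
        current = operate(current, discrepancy)   # operate: change the response
    return current

# Example: a thermostat nudging room temperature toward a 20-degree setting.
final_temp = regulate(
    current=15.0,
    standard=20.0,
    operate=lambda temp, gap: temp + 0.1 * gap,   # small corrective step each cycle
)
print(round(final_temp, 1))  # ends within the 0.5-degree tolerance of the setting

The three ingredients map directly onto the loop: the standard is the target setting, monitoring is the repeated comparison, and the capacity to change is the operate step. When any one ingredient is missing, as when dieters stop counting calories, the loop cannot do its work.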
People feel good not just when they reach their goals but even when they deem they are making good progress (Carver & Scheier, 1990). They feel bad when they are not making sufficient progress. That brings up the third ingredient, which is the capacity to change oneself. In effective self-regulation, people operate on themselves to bring about these changes. The popular term for this is “willpower,” which suggests some kind of energy is expended in the process. Psychologists hesitate to adopt terms associated with folk wisdom, because there are many potential implications. Here, the term is used to refer specifically to some energy that is involved in the capacity to change oneself. Consistent with the popular notion of willpower, people do seem to expend some energy during self-regulation. Many studies have found that after people exert self-regulation to change some response, they perform worse on the next unrelated task if it too requires self-regulation (Hagger, Wood, Stiff, & Chatzisarantis, 2010). That pattern suggests that some energy such as willpower was used up during the first task, leaving less available for the second task. The term for this state of reduced energy available for self-regulation is ego depletion (Baumeister, Bratslavsky, Muraven, & Tice, 1998). As people go about their daily lives, they have to resist many desires and impulses and must control themselves in other ways, and so over the course of a typical day many people gradually become ego depleted. The result is that they become increasingly likely to give in to impulses and desires that they would have resisted successfully earlier in the day (Hofmann, Vohs, & Baumeister, 2012). During the state of ego depletion, people become less helpful and more aggressive, prone to overeat, misbehave sexually, express more prejudice, and in other ways do things that they may later regret. Thus, a person’s capacity for self-regulation is not constant, but rather it fluctuates. To be sure, some people are generally better than others at controlling themselves (Tangney et al., 2004). But even someone with excellent self-control may occasionally find that control breaks down under ego depletion. In general, self-regulation can be improved by getting enough sleep and healthy food, and by minimizing other demands on one’s willpower. There is some evidence that regular exercise of self-control can build up one’s willpower, like strengthening a muscle (Baumeister & Tierney, 2011; Oaten & Cheng, 2006). Even in early adulthood, one’s self-control can be strengthened. Furthermore, research has shown that disadvantaged, minority children who take part in preschool programs such as Head Start (often based on the Perry program) end up doing better in life even as adults. This was thought for a while to be due to increases in intelligence quotient (IQ), but changes in IQ from such programs are at best temporary. Instead, recent work indicates that improvement in self-control and related traits may be what produces the benefits (Heckman, Pinto, & Savelyev, in press). It’s not doing math problems or learning to spell at age 3 that increases subsequent adult success; rather, the benefit comes from having some early practice at planning, getting organized, and following rules. Conscientiousness Conscientiousness is a stable dimension of personality, which means that some people are typically higher on it than others. Being a personality trait does not mean that it is unchangeable. Most people do show some changes over time, particularly becoming higher on conscientiousness as they grow older. Some psychologists look specifically at the trait of self-control, which is understood (and measured) in personality psychology in a very specific, narrowly focused, well-defined sense. Conscientiousness, in contrast, is one of five super-traits that supposedly account for all the other traits, in various combinations. The trait of self-control is one big part of conscientiousness, but there are other parts. Two aspects of conscientiousness that have been well documented are being orderly and being industrious (Roberts, Lejuez, Krueger, Richards, & Hill, 2012). Orderliness includes being clean and neat, making and following plans, and being punctual (which is helpful with following plans!). Low conscientiousness means the opposite: being disorganized, messy, late, or erratic. Being industrious not only means working hard but also persevering in the face of failures and difficulties, as well as aspiring to excellence. Most of these reflect good self-control. Conscientious people are careful, disciplined, responsible, and thorough, and they tend to plan and think things through before acting. People who are low in conscientiousness tend to be more impulsive and spontaneous, even reckless. They are easygoing and may often be late or sloppy, partly because they are not strongly focused on future goals for success and not highly concerned with obeying all rules and staying on schedule. Psychologists prefer not to make a value judgment about whether it is better to be high or low in any personality trait. But when it comes specifically to self-control, it is difficult to resist the conclusion that high self-control is better, both for the person and for society at large. Some aspects of conscientiousness have less apparent connection to self-control, however. People high in conscientiousness tend to be decisive.
They are often formal, in the sense of following social norms and rules, such as dressing properly, waiting one’s turn, or holding doors for others. They tend to respect traditions and traditional values.

Conscientious people behave differently from people who score low on that trait. People scoring low on conscientiousness are more likely than others to report driving without wearing seatbelts, daydreaming, swearing, telling dirty jokes, and picking up hitchhikers (Hirsh, DeYoung, & Peterson, 2009). In terms of more substantial life outcomes, people low on conscientiousness are more likely than others to get divorced, presumably because they make bad choices and misbehave during the marriage, such as by saying hurtful things, getting into arguments and fights, and behaving irresponsibly (Roberts, Jackson, Fayard, Edmonds, & Meints, 2009). People low on conscientiousness are also more likely than others to lose their jobs, to become homeless, to do time in prison, to have money problems, and to have drug problems. Conscientious people make better spouses. They are less likely than others to get divorced, partly because they avoid many behaviors that undermine intimacy, such as abusing their partners, drinking excessively, or having extramarital affairs (Roberts et al., 2009).

Encompassing self-control, conscientiousness is the personality trait most strongly related to life and death: people high on that trait live longer than others (Deary, Weiss, & Batty, 2010). Why? Among other things, they avoid many behavior patterns associated with early death, including alcohol abuse, obesity and other eating problems, drug abuse, smoking, failure to exercise, risky sex, suicide, violence, and unsafe driving (Bogg & Roberts, in press). They also visit physicians more regularly and take their prescribed medicines more reliably than people low in conscientiousness. Their good habits help them avoid many life-threatening diseases.

Outside Resources

Book: For more advanced and in-depth coverage, consult The Handbook of Self-Regulation (2nd Edition), edited by Kathleen Vohs and Roy Baumeister. This book contains chapters by different experts in the field, covering large amounts of research findings.

Book: To read more, the easiest and most fun source would be The New York Times bestseller Willpower: Rediscovering the Greatest Human Strength, by Roy Baumeister and John Tierney, published by Penguin. This is intended not as a purely scientific work but as an entertaining summary for the general public.

Video: For an enjoyable and brief re-enactment of Mischel’s “marshmallow” studies on delay of gratification, try the following video. Watching those children struggle to resist temptation is sure to bring a smile.

Discussion Questions

1. Why do you think criminals are often poor at self-regulation?
2. On average, children growing up without both parents present do worse at many things, from math achievement in school to the likelihood of being arrested for crimes. Might self-control be part of the explanation? Why?
3. Many people make New Year’s resolutions to change themselves in various ways, but often they fail at these. Why?
4. Is good self-control something one is born with or something that is learned?
5. How would a parent teach his or her children to have good self-control?
6. Why are people with good self-control happier than other people?

Vocabulary

Conscientiousness
A personality trait consisting of self-control, orderliness, industriousness, and traditionalism.
Ego depletion
The state of diminished willpower or low energy associated with having exerted self-regulation.

Monitoring
Keeping track of a target behavior that is to be regulated.

Self-regulation
The process of altering one’s responses, including thoughts, feelings, impulses, actions, and task performance.

Standards
Ideas about how things should (or should not) be.
By David Lubinski
Vanderbilt University

Psychologists interested in the study of human individuality have found that accomplishments in education, the world of work, and creativity are a joint function of talent, passion, and commitment — or how much effort and time one is willing to invest in personal development when the opportunity is provided. This module reviews models and measures that psychologists have designed to assess intellect, interests, and energy for personal development. The module begins with a model for organizing these three psychological domains, which is useful for understanding talent development. This model is not only helpful for understanding the many different ways that positive development may unfold among people, but it is also useful for conceptualizing personal development and ways of selecting opportunities in learning and work settings that are more personally meaningful. Data supporting this model are reviewed.

learning objectives

• Compare and contrast satisfaction and satisfactoriness.
• Discuss why the model of talent development offered in this module places equal emphasis on assessing the person and assessing the environment.
• Articulate the relationship between ability and learning and performance.
• Understand the issue of an "ability threshold" beyond which more ability may or may not matter.
• List personal attributes other than interests and abilities that are important to individual accomplishment.

Intelligence, interests, and mastery are fitting topics for an essay on the cross-cutting themes running through these vast domains of psychological diversity. All three classes of determinants are needed for comprehensive treatments of the psychological phenomena supporting learning, occupational performance, and the advancement of knowledge through innovative solutions. Historically, these personal attributes go back to at least Plato’s triarchic view of the human psyche, described in the Phaedrus, wherein he depicts the intellect as a charioteer, and affect (interests) and will (to master) as horses that draw the chariot. Ever since that time, cognitive, affective, and conative factors have all been found in comprehensive models of human development, or “The Trilogy of Mind” (Hilgard, 1980). To predict the magnitude, nature, and sophistication of intellectual development toward learning, working, and creating, all three classes are indispensable, and deficits in any one can markedly hobble the effectiveness of the others in meeting standards for typical as well as extraordinary performance. These three aspects of human individuality operate in parallel confluences of behaviors, perceptions, and stimuli to engender stream-of-consciousness experiences as well as effective functioning. Hilgard (1980) was indeed justified in criticizing formulations in cognitive psychology that neglect affection and conation; technically, such truncated frameworks of human psychological phenomena are known as under-determined or misspecified causal models (Lubinski, 2000; Lubinski & Humphreys, 1997).

A Framework for Understanding Talent Development

Figure 3.7.1 is an adaptation of the Theory of Work Adjustment (TWA; Dawis & Lofquist, 1984; Lubinski & Benbow, 2000). It provides a useful organizational scheme for this treatment by outlining critical dimensions of human individuality for performance in learning and work settings (and in transitioning between such settings).
Here, the dominant models of intellectual abilities and educational–occupational interests are assembled. Because this review is restricted to measures of individual differences that harbor real-world significance, these two models are linked to corresponding features of learning and work environments (ability requirements and incentive or reward structures), which set standards for meeting expectations (performance) and rewarding valued performance (compensation). Correspondence between abilities and ability requirements constitutes satisfactoriness (“competence”), whereas correspondence between interests and reward structures constitutes satisfaction (“fulfillment”). To the extent that satisfactoriness and satisfaction co-occur, the individual is motivated to maintain contact with the environment and the environment is motivated to retain the individual; if either of these dimensions lacks correspondence, the individual is motivated to leave the environment or the environment is motivated to dismiss the individual.

This model of talent development places equal emphasis on assessing the individual (abilities and interests) and the environment (response requirements and reward structures). Comprehensive reviews of outcomes within education (Lubinski, 1996; Lubinski & Benbow, 2000), counseling (Dawis, 1992; Gottfredson, 2003; Rounds & Tracey, 1990), and industrial/organizational psychology all emphasize this person/environment tandem (Dawis, 1991; Katzell, 1994; Lubinski & Dawis, 1992; Strong, 1943): aligning competency/motivational proclivities with performance standards and reward structures for learning and work (Bouchard, 1997; Scarr, 1996; Scarr & McCartney, 1983). Indeed, educational, counseling, and industrial psychology can be contiguously sequenced by this framework. They all share a common feature: the scientific study of implementing interventions or opportunities, based on individual differences, for maximizing positive psychological growth across different stages of life span development (Lubinski, 1996). For making individual decisions about personal development, or institutional decisions about organizational development, it is frequently useful to go beyond a minimum requisite approach of “do you like it” (satisfaction) and “can you do it” (satisfactoriness), and instead consider what individuals like the most and can do the best (Lubinski & Benbow, 2000, 2001). This framework is useful for identifying “optimal promise” for personal as well as organizational development. For now, however, cognitive abilities and interests will be reviewed and, ultimately, linked to conative determinants that mobilize, and in part account for, individual differences in how capabilities and motives are expressed.

Cognitive Abilities

Over the past several decades—the past 20 years in particular—a remarkable consensus has emerged that cognitive abilities are organized hierarchically (Carroll, 1993). A general outline of this hierarchy is represented graphically by a radex (Guttman, 1954), depicted in the upper left region of Figure 3.7.1. This illustrates the reliable finding that cognitive ability assessments covary as a function of their content and complexity (Corno, Cronbach, et al., 2002; Lubinski & Dawis, 1992; Snow & Lohman, 1989). Cognitive ability tests can be scaled in this space based on how highly they covary with one another. The more that two tests share complexity and content, the more they covary and the closer they are to one another as points within the radex.
Test complexity is scaled from the center of the radex (“g”) outward: along lines emanating from the origin, complexity decreases but test content remains the same. Test content is scaled around the circular bands at equal distance from the center of the radex: progressing around these bands, the relative density of test content changes from spatial/mechanical to verbal/linguistic to quantitative/numerical, but test complexity remains constant. Therefore, test content varies within each band (but complexity remains constant), whereas test complexity varies between bands (but, on lines from the origin to the periphery, content remains constant). Because the extent to which tests covary is represented by how close together they are within this space (Lubinski & Dawis, 1992; Snow & Lohman, 1989; Wai, Lubinski, & Benbow, 2009), this model is helpful in organizing the many different kinds of specific ability tests. As Piaget astutely pointed out, “Intelligence is what you use when you don’t know what to do,” and this model affords an excellent overview of the content and sophistication of thought applied to familiar and novel problem-solving tasks.

Mathematical, spatial, and verbal reasoning constitute the chief specific abilities with implications for different choices, and for performance after those choices, in learning and work settings (Corno et al., 2002; Dawis, 1992; Gottfredson, 2003; Lubinski, 2004; Wai et al., 2009). The content of measures of these specific abilities indexes individual differences in different modalities of thought: reasoning with numbers, words, and figures or shapes. Yet, despite this disparate content and focus, contrasting specific ability tests are all positively correlated, because they all index an underlying general property of intellectual thought. This general (common) dimension, identified over 100 years ago (Spearman, 1904) and corroborated by a massive quantity of subsequent research (Carroll, 1993; Jensen, 1998), is general mental ability, the general factor, or simply g (Gottfredson, 1997). General mental ability represents the complexity/sophistication of a person’s intellectual repertoire (Jensen, 1998; Lubinski & Dawis, 1992). The more complex a test is, regardless of its content, the better a measure of g it is. Further, because g underlies all cognitive reasoning processes, any test that assesses a specific ability is also, to some extent, a measure of g (Lubinski, 2004).

In school, work, and a variety of everyday life circumstances, assessments of this general dimension covary more broadly and more deeply with important outcomes than does any other measure of human individuality (Hunt, 2011; Jensen, 1998; Lubinski, 2000, 2004). Measures of g manifest their importance beyond educational settings (where they covary with educational achievement assessments in the .70–.80 range) by playing a role in shaping phenomena within Freud’s two important life domains, arbeiten and lieben, working and loving (or resource acquisition and mating). Measures of g covary .20–.60 with work performance as a function of job complexity, .30–.40 with income, –.20 with criminal behavior, .40 with SES of origin, and .50–.70 with achieved SES; assortative mating correlations on g are around .50 (Jensen, 1998; Lubinski, 2004; Schmidt & Hunter, 1998). Furthermore, Malcolm Gladwell (2008) notwithstanding, there does not appear to be an ability threshold (the idea that, beyond a certain point, more ability does not matter). More ability does matter.
Although other determinants are certainly needed (interests, persistence, opportunity), more ability does make a difference in learning, working, and creating, even among the top 1% of ability, or IQ equivalents ranging from approximately 137 to over 200 (see Figure 3.7.2). When appropriate assessment and criterion measures are utilized to capture the breadth of ability and accomplishment differences among the profoundly talented, individual differences within the top 1% of ability are shown to matter a great deal. In the past this has been difficult to demonstrate, because intellectual assessments and criterion measures lacked sufficient scope in gifted or intellectually talented populations, which resulted in no variation in assessments among the able and exceptionally able (ceiling effects). Without variation there cannot be covariation, but modern methods have now corrected for this (Kell, Lubinski, & Benbow, 2013a; Lubinski, 2009; Park, Lubinski, & Benbow, 2007, 2008). Yet, even when g is measured in its full scope, and validated with large samples and appropriate low-base-rate criteria over protracted longitudinal intervals, there is much more to intellectual functioning than measures of g or general ability.

To reveal how general and specific abilities operate over the course of development, Figure 3.7.3 contains data from over 400,000 high school students assessed in grades 9 through 12 and tracked for 11 years. Specifically, Figure 3.7.3 graphs the general and specific ability profiles of students earning terminal degrees in nine disciplines (Wai et al., 2009). Given that highly congruent findings were observed for all four cohorts (grades 9 through 12), the cohorts were combined. High general intelligence and an intellectual orientation dominated by high mathematical and spatial abilities, relative to verbal ability, were salient characteristics of individuals who pursued advanced educational credentials in science, technology, engineering, and mathematics (STEM). These participants occupy a region in the intellectual space, defined by the dimensions of ability level and ability pattern, different from that of participants who earned undergraduate and graduate degrees in other domains.

Two major differences distinguish the STEM from the non-STEM educational groups. First, students who ultimately secure educational credentials in STEM domains are more capable than those earning degrees in other areas, especially in nonverbal intellectual abilities. Within all educational domains, more advanced degrees are associated with higher general and specific abilities. Second, for all three STEM educational groupings (and the advanced degrees within them), spatial ability > verbal ability, whereas for all others, ranging from education to biology, spatial ability < verbal ability (with business being an exception). Young adolescents who subsequently secured advanced educational credentials in STEM manifested a spatial–verbal ability pattern opposite that of those who ultimately earned educational credentials in other areas. These same patterns play out in occupational arenas in predictable ways (Kell, Lubinski, Benbow, & Steiger, 2013b). In the past decade, individual differences within the top 1% of ability have revealed that these patterns portend important outcomes for technical innovation and creativity, with respect to both ability level (Lubinski, 2009; Park et al., 2008) and pattern (Kell et al., 2013a; Kell et al., 2013b; Park et al., 2007).
Level of general ability has predictive validity for the magnitude of accomplishments (how extraordinary they are), whereas ability pattern has predictive validity for the nature of accomplishments (the domains in which they occur).

Interests

Just because people can do something well doesn’t mean they like doing it. Psychological information on motivational differences (personal passions) is needed to understand attractions and aversions, different ways to create a meaningful life, and how differential development unfolds. Even people with the same intellectual equipment vary widely in their motivational proclivities. Paraphrasing Plato, different horses drive intellectual development down different life paths.

The lower left region of Figure 3.7.1 provides the dominant model of vocational interests, one developed from decades of large-scale longitudinal and cross-cultural research. It shows a hexagonal structure consisting of six general themes: Realistic (R) = working with gadgets and things, the outdoors, need for structure; Investigative (I) = scientific pursuits, especially mathematics and the physical sciences, an interest in theory; Artistic (A) = creative expression in art and writing, little need for structure; Social (S) = people interests, the helping professions, teaching, nursing, counseling; Enterprising (E) = a liking for leadership roles directed toward economic objectives; and Conventional (C) = a liking for well-structured environments and clear chains of command, such as office practices. These six themes covary inversely with the distance between them; hence the hexagonal structure circling around R-I-A-S-E-C. John Holland (1959, 1996) justifiably receives most of the credit for this model (Day & Rounds, 1998), although Guilford et al. (1954) uncovered a similar framework based on military data and labeled its themes Mechanical, Scientific, Aesthetic Expression, Social Welfare, Business, and Clerical. Although each theme contains multiple subcomponents, Holland’s hexagon, like the radex of cognitive abilities, captures the general outlines of the educational/occupational interest domain; there are, however, molecular strands of intellective and interest dimensions that add nuance to these general outlines (for abilities, see Carroll, 1993; for interests, see Dawis, 1991; Savickas & Spokane, 1999). There are also superordinate themes, such as people versus things (Su, Rounds, & Armstrong, 2009), which manifests arguably the largest sex difference on any psychological dimension of human individuality. At the superordinate levels of people versus things or data versus ideas (Prediger, 1982), or at the RIASEC level of analysis, interest dimensions covary in different ways with mathematical, spatial, and verbal abilities (Ackerman, 1996; Ackerman & Heggestad, 1997; Schmidt, Lubinski, & Benbow, 1998); and intense selection, when exclusively restricted to a specific ability, will eventuate in distinctive interest profiles across the three abilities, with implications for differential development (Humphreys, Lubinski, & Yao, 1993; Webb, Lubinski, & Benbow, 2007). Although correlations between abilities and interests are “only” in the .20–.30 range, when selection is extreme, distinct profiles emerge and reflect different “types” (Lubinski & Benbow, 2000, 2006).
For basic science, this shows how ostensibly different kinds of intelligence at the extreme do not stem from different qualities, but rather from endpoint extremes within a multivariate space of systematic sources of individual differences, which “pull” with them constellations of nonintellectual personal attributes. For applied practice, skilled educational–vocational counselors routinely combine information on abilities and interests to identify learning and work environments in which individuals are likely to thrive, performing competently and experiencing fulfillment (Dawis, 1992; Rounds & Tracey, 1990). For further insights, however, a final class of important psychological determinants is needed.

Mastery

As all parents of more than one child know, there are huge individual differences in the extent to which individuals embrace opportunities for positive development. Seasoned faculty at top institutions for graduate training have observed the same phenomenon: among highly select graduate students, task commitment varies tremendously. Even among the intellectual elite, individual differences in accomplishments stem from more than abilities, interests, and opportunity; conative determinants are critical catalysts. Galton (1869) called it “zeal,” Hull (1928) called it “industriousness,” and Webb (1915) called it “will.” Such labels as “grit” or “strivers” are sometimes used to describe the resources that people call upon to mobilize their abilities and interests over protracted intervals. Conative factors are distinct from abilities and preferences, having more to do with individual differences in energy or psychological tempo than with the content of what people can do or how rapidly they learn. Indeed, characteristic across scientific studies of expertise and world-class accomplishment are attributes specifically indicative of indefatigable capacities for study and work. This is an underappreciated class of individual differences; although Ackerman (1996) has discussed typical intellectual engagement (TIE) and Dawis and Lofquist (1984) have discussed pace and endurance, this class of attributes simply has not received the attention it deserves. Nevertheless, in the field of talent development and identification, the greatest consensus appears to be found on the topic of conation, rather than cognition or affect. Exceptional performers are deeply committed to what they do, and they devote a great deal of time to doing it. Theorists as varied as Howard Gardner, Dean Simonton, Arthur Jensen, Anders Ericsson, and Harriet Zuckerman all agree that this is a uniform characteristic of world-class performers at the top of their game. In the words of Dean Simonton and E. O. Wilson, respectively:

[M]aking it big [becoming a star] is a career. People who wish to do so must organize their whole lives around a single enterprise. They must be monomaniacs, even megalomaniacs, about their pursuits. They must start early, labor continuously, and never give up the cause. Success is not for the lazy, procrastinating, or mercurial. (Simonton, 1994, p. 181)

I have been presumptuous enough to counsel new Ph.D.’s in biology as follows: If you choose an academic career you will need forty hours a week to perform teaching and administrative duties, another twenty hours on top of that to conduct respectable research, and still another twenty hours to accomplish really important research. This formula is not boot-camp rhetoric. (Wilson, 1998, pp. 55–56)
Figure 3.7.4 contains data from two extraordinary populations (Lubinski, Benbow, Webb, & Bleske-Rechek, 2006). One group consists of a sample of profoundly gifted adolescents identified at age 12 as being in the top 1 in 10,000 in mathematical or verbal reasoning ability; they were subsequently tracked for 20 years. Members of the second group were identified in their early twenties, as first- or second-year STEM graduate students enrolled in a top-15 U.S. university; they were subsequently tracked for 10 years. Now in their mid-thirties, participants in both groups were asked how much they would be willing to work in their “ideal job” and how much they actually do work. The data are clear: there are huge individual differences in how much time people are willing to invest in their career development and work. The STEM graduate students are particularly interesting inasmuch as, in their mid-twenties, they were assessed on abilities, interests, and personality, and both sexes were found to be highly similar on these psychological dimensions (Lubinski, Benbow, Shea, Eftekhari-Sanjani, & Halvorson, 2001). Subsequently, however, over the life span, they markedly diverged in time allocation and life priorities (Ceci & Williams, 2011; Ferriman, Lubinski, & Benbow, 2009). These figures reveal huge noncognitive individual differences among individuals with exceptional intellectual talent. One only needs to imagine the ticking of a tenure clock, and the differences likely to accrue over a 5-year interval between two faculty members working 45- versus 65-hour weeks (other things being equal). Making partner in a prestigious law firm is no different, nor is achieving genuine excellence in most intellectually demanding areas.

Conclusion

Since Spearman (1904) advanced the idea of general intelligence, a steady stream of systematic scientific knowledge has accrued in the psychological study of human individuality. We have learned that the intellect is organized hierarchically, that interests are multidimensional and covary only slightly with abilities, and that individual differences in willingness to invest in personal development are huge. When these aspects of human psychological diversity are combined with commensurate attention to opportunities for learning, work, and personal growth, a framework for understanding human development begins to take shape. Because frameworks may be found that emphasize only one set of these determinants, this essay closes with the recommendation—based on the empirical evidence—to stress all three.

Outside Resources

Book: Human Cognitive Abilities, by John Carroll, constitutes a definitive treatment of the nature and hierarchical organization of cognitive abilities, based on a conceptual and empirical analysis of the past century’s factor analytic research. www.amazon.com/Human-Cognitiv...tive+abilities

Book: Human Intelligence, by Earl Hunt, provides a superb overview of empirical research on cognitive abilities. Collectively, these three sources capture the psychological significance of what this important domain of human psychological diversity affords. www.amazon.com/Human-Intellig.../dp/0521707811

Book: The g Factor, by Arthur Jensen, explicates the depth and breadth of the central dimension running through all cognitive abilities, the summit of Carroll’s (1993) hierarchical organization: general intelligence (or “g”).
Revealed here is the practical and scientific significance of coming to terms with a rich array of critical human outcomes found in schools, work, and everyday life. www.amazon.com/The-Factor-Evo.../dp/0275961036

Book: For additional reading on the history of intellectual assessment, read Century of Ability Testing, by Robert Thorndike and David F. Lohman. www.amazon.com/Century-Abilit.../dp/0829251561

Discussion Questions

1. Why are abilities and interests insufficient for conceptualizing educational and occupational development?
2. Why does the model of talent development discussed in this module place equal emphasis on assessing the individual and assessing the environment?
3. What is the most widely agreed-upon empirical finding among investigators who study the development of truly outstanding careers?
4. Besides what you can do and what you like, what other factors are important to consider when making choices about your personal development in learning and work settings?

Vocabulary

g or general mental ability
The general factor common to all cognitive ability measures, “a very general mental capacity that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings—‘catching on,’ ‘making sense of things,’ or ‘figuring out’ what to do” (Gottfredson, 1997, p. 13).

Satisfaction
Correspondence between an individual’s needs or preferences and the rewards offered by the environment.

Satisfactoriness
Correspondence between an individual’s abilities and the ability requirements of the environment.

Specific abilities
Cognitive abilities that contain an appreciable component of g or general ability, but also contain a large component of a more content-focused talent such as mathematical, spatial, or verbal ability; patterns of specific abilities channel development down different paths as a function of an individual’s relative strengths and weaknesses.

Under-determined or misspecified causal models
Psychological frameworks that miss or neglect to include one or more of the critical determinants of the phenomenon under analysis.
By James E. Maddux and Evan Kleiman
George Mason University

The term “self-efficacy” refers to your beliefs about your ability to effectively perform the tasks needed to attain a valued goal. Self-efficacy does not refer to your abilities themselves but to how strongly you believe you can use your abilities to work toward goals. Self-efficacy is not a unitary construct or trait; rather, people have self-efficacy beliefs in different domains, such as academic self-efficacy, problem-solving self-efficacy, and self-regulatory self-efficacy. Stronger self-efficacy beliefs are associated with positive outcomes, such as better grades, greater athletic performance, happier romantic relationships, and a healthier lifestyle.

learning objectives

• Define self-efficacy.
• List the major factors that influence self-efficacy.
• Explain how self-efficacy develops.
• Understand the influence of self-efficacy on psychological and physical health and well-being as well as academic and vocational success.
• Define collective efficacy and explain why it is important.

Introduction: What Is Self-Efficacy?

Imagine two students, Sally and Lucy, who are about to take the same math test. Sally and Lucy have the exact same ability to do well in math, the same level of intelligence, and the same motivation to do well on the test. They also studied together. They even have the same brand of shoes on. The only difference between the two is that Sally is very confident in her mathematical and her test-taking abilities, while Lucy is not. So, who is likely to do better on the test? Sally, of course, because she has the confidence to use her mathematical and test-taking abilities to deal with challenging math problems and to accomplish goals that are important to her—in this case, doing well on the test. This difference between Sally and Lucy—the student who got the A and the student who got the B-, respectively—is self-efficacy. As you will read later, self-efficacy influences behavior and emotions in particular ways that help people better manage challenges and achieve valued goals.

First introduced by Albert Bandura in 1977, self-efficacy refers to a person’s beliefs that he or she is able to effectively perform the tasks needed to attain a valued goal (Bandura, 1977). Since then, self-efficacy has become one of the most thoroughly researched concepts in psychology. Just about every important domain of human behavior has been investigated using self-efficacy theory (Bandura, 1997; Maddux, 1995; Maddux & Gosselin, 2011, 2012). Self-efficacy does not refer to your abilities but rather to your beliefs about what you can do with your abilities. Also, self-efficacy is not a trait—there are not certain types of people with high self-efficacies and others with low self-efficacies (Stajkovic & Luthans, 1998). Rather, people have self-efficacy beliefs about specific goals and life domains. For example, if you believe that you have the skills necessary to do well in school and believe you can use those skills to excel, then you have high academic self-efficacy.

Self-efficacy may sound similar to a concept you may already be familiar with—self-esteem—but these are very different notions. Self-esteem refers to how much you like or “esteem” yourself—to what extent you believe you are a good and worthwhile person. Self-efficacy, however, refers to your self-confidence to perform well and to achieve in specific areas of life, such as school, work, and relationships.
Self-efficacy does influence self-esteem, because how you feel about yourself overall is greatly influenced by your confidence in your ability to perform well in areas that are important to you and to achieve valued goals. For example, if performing well in athletics is very important to you, then your self-efficacy for athletics will greatly influence your self-esteem; however, if performing well in athletics is not at all important to you, then your self-efficacy for athletics will probably have little impact on your self-esteem.

How Do We Measure Self-Efficacy?

Like many other concepts in psychology, self-efficacy cannot be measured in a straightforward manner; it requires careful thought to measure accurately. Self-efficacy is unlike weight, which is simple to measure objectively by using a scale, or height, which is simple to measure objectively by using a tape measure. Rather, self-efficacy is an abstract concept you can’t touch or see. To measure an abstract concept like self-efficacy, we use something called a self-report measure. A self-report measure is a type of questionnaire, like a survey, in which people answer questions whose answers correspond to numerical values that can be added to create an overall index of some construct. For example, a well-known self-report measure is the Perceived Stress Scale (Cohen, Kamarck, & Mermelstein, 1983). It asks questions like, “In the last month, how often have you been upset because of something that happened unexpectedly?” and “In the last month, how often have you been angered because of things that were outside of your control?” Participants answer the questions on a 1 through 5 scale, where 1 means “not often” and 5 means “very often.” Then all of the answers are summed together to create a total “stress” score, with higher scores equating to higher levels of stress. It is very important to develop measurement tools that take people’s subjective beliefs about their self-efficacy and turn them into the most objective possible measure, so that one person’s score of 6 out of 10 on a measure of self-efficacy means roughly the same thing as another person’s score of 6 out of 10 on the same measure.

We will discuss two broad types of self-report measures for self-efficacy. The first category includes measures of general self-efficacy (e.g., Schwarzer & Jerusalem, 1995; Sherer et al., 1982). These scales ask people to rate themselves on general items, such as “It is easy for me to stick to my aims and accomplish my goals” and “I can usually handle whatever comes my way.” As you may remember from earlier in this module, however, self-efficacy is not a global trait, so there are problems with lumping all types of self-efficacy together in one measure. Thus, the second category of self-efficacy measures includes task-specific measures of self-efficacy. Rather than gauging self-efficacy in general, these measures ask about a person’s self-efficacy beliefs about a particular task; there can be an unlimited number of such measures. Task-specific measures of self-efficacy describe several situations relating to a behavior and then ask the participant to rate how confident he or she feels about performing that behavior in each situation. For example, a measure of dieting self-efficacy would list a variety of situations where it can be hard to stick to a diet—such as during vacations, when bored, or when going out to eat with others who are not on a diet.
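To make the scoring logic described above concrete, here is a minimal Python sketch of how answers on a self-report measure are converted to numbers and summed into an overall index. The items and values are hypothetical, not taken from the Perceived Stress Scale or any published instrument, and real scales often add complications (such as reverse-scored items) that are omitted here.

```python
# Illustrative scoring of a hypothetical three-item self-report measure.
# Each answer is coded on a 1 ("not often") to 5 ("very often") scale.

responses = {
    "upset_by_unexpected_events": 4,
    "angered_by_uncontrollable_events": 3,
    "overwhelmed_by_daily_demands": 2,
}

total_score = sum(responses.values())   # overall index of the construct
max_score = 5 * len(responses)          # highest possible total

print(f"Total: {total_score} out of {max_score}")  # higher totals = more of the construct
```

Task-specific self-efficacy measures work the same way, except that each item describes one situation (a vacation, a stressful day) and the participant rates his or her confidence in performing the target behavior in that situation.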
A measure of exercise self-efficacy would list a variety of situations where it can be hard to exercise—such as when feeling depressed, when feeling tired, and when you are with other people who do not want to exercise. Finally, a measure of children’s or teens’ self-regulatory self-efficacy would include a variety of situations where it can be hard to resist impulses—such as controlling temper, resisting peer pressure to smoke cigarettes, and defying pressure to have unprotected sex. Most studies agree that the task-specific measures of self-efficacy are better predictors of behavior than the general measures of self-efficacy (Bandura, 2006).

What Are the Major Influences on Self-Efficacy?

Self-efficacy beliefs are influenced in five different ways (Bandura, 1997), which are summarized in Table 1. These five types of self-efficacy influence can take many real-world forms that almost everyone has experienced. You may have had previous performance experiences affect your academic self-efficacy when you did well on a test and believed that you would do well on the next test. A vicarious performance may have affected your athletic self-efficacy when you saw your best friend skateboard for the first time and thought that you could skateboard well, too. Verbal persuasion could have affected your academic self-efficacy when a teacher that you respect told you that you could get into the college of your choice if you studied hard for the SATs. It’s important to know that not all people are equally likely to influence your self-efficacy through verbal persuasion. People who appear trustworthy or attractive, or who seem to be experts, are more likely to influence your self-efficacy than are people who do not possess these qualities (Petty & Brinol, 2010). That’s why a teacher you respect is more likely to influence your self-efficacy than a teacher you do not respect. Imaginal performances are an effective way to increase your self-efficacy; for example, imagining yourself doing well on a job interview actually leads to more effective interviewing (Knudstrup, Segrest, & Hurley, 2003). Finally, affective states and physical sensations matter, too. Think about the times you have given presentations in class: you may have felt your heart racing while giving a presentation. If you believed your heart was racing because you had just had a lot of caffeine, it likely would not affect your performance. If you believed your heart was racing because you were doing a poor job, you might believe that you cannot give the presentation well. This is because you associate the feeling of anxiety with failure and expect to fail when you are feeling anxious.

When and How Does Self-Efficacy Develop?

Self-efficacy begins to develop in very young children. Once self-efficacy is developed, it does not remain constant—it can change and grow as an individual has different experiences throughout his or her lifetime. When children are very young, their parents’ self-efficacies are important (Jones & Prinz, 2005). Children of parents who have high parental self-efficacies perceive their parents as more responsive to their needs (Gondoli & Silverberg, 1997). Around the ages of 12 through 16, adolescents’ friends also become an important source of self-efficacy beliefs. Adolescents who associate with peer groups that are not academically motivated tend to experience a decline in academic self-efficacy (Wentzel, Barry, & Caldwell, 2004).
Adolescents who watch their peers succeed, however, experience a rise in academic self-efficacy (Schunk & Miller, 2002). This is an example of gaining self-efficacy through vicarious performances, as discussed above. The effects of self-efficacy that develop in adolescence are long lasting. One study found that greater social and academic self-efficacy measured in people ages 14 to 18 predicted greater life satisfaction five years later (Vecchio, Gerbino, Pastorelli, Del Bove, & Caprara, 2007).

What Are the Benefits of High Self-Efficacy?

Academic Achievement

Consider academic self-efficacy in your own life, and recall the earlier example of Sally and Lucy. Are you more like Sally, who has high academic self-efficacy and believes that she can use her abilities to do well in school, or are you more like Lucy, who does not believe that she can effectively use her academic abilities to excel in school? Do you think your own self-efficacy has ever affected your academic performance? Do you think you have ever studied more or less intensely because you did or did not believe in your abilities to do well?

Many researchers have considered how self-efficacy works in academic settings, and the short answer is that academic self-efficacy affects every possible area of academic achievement (Pajares, 1996). Students who believe in their ability to do well academically tend to be more motivated in school (Schunk, 1991). When self-efficacious students attain their goals, they continue to set even more challenging goals (Schunk, 1990). This can all lead to better performance in school, in terms of higher grades and taking more challenging classes (Multon, Brown, & Lent, 1991). For example, students with high academic self-efficacies might study harder because they believe that they are able to use their abilities to study effectively; because they study hard, they are then more likely to receive an A on the next test. Teachers’ self-efficacies also can affect how well a student performs in school. Self-efficacious teachers encourage parents to take a more active role in their children’s learning, leading to better academic performance (Hoover-Dempsey, Bassler, & Brissie, 1987).

Although there is a lot of research about how self-efficacy benefits school-aged children, college students can also benefit from self-efficacy. Freshmen with higher self-efficacies about their ability to do well in college tend to adapt to their first year in college better than those with lower self-efficacies (Chemers, Hu, & Garcia, 2001). The benefits of self-efficacy continue beyond the school years: people with strong self-efficacy beliefs toward performing well in school tend to perceive a wider range of career options (Lent, Brown, & Larkin, 1986). In addition, people who have stronger beliefs of self-efficacy toward their professional work tend to have more successful careers (Stajkovic & Luthans, 1998).

One question you might have about self-efficacy and academic performance is how a student’s actual academic ability interacts with self-efficacy to influence academic performance. The answer is that a student’s actual ability does play a role, but its effect is also shaped by self-efficacy. Students with greater ability perform better than those with lesser ability. But, among a group of students with the exact same level of academic ability, those with stronger academic self-efficacies outperform those with weaker self-efficacies.
One study (Collins, 1984) compared performance on difficult math problems among groups of students with different levels of math ability and different levels of math self-efficacy. Among students with average levels of math ability, those with weak math self-efficacies got about 25% of the math problems correct, whereas those with strong math self-efficacies got about 45% of the questions correct. This means that, just by having stronger math self-efficacy, a student of average math ability will answer about 20 percentage points more of the problems correctly than a student with similar math ability but weaker math self-efficacy. You might also wonder whether self-efficacy makes a difference only for people with average or below-average abilities. It does not: self-efficacy is important even for above-average students. In this study, those with above-average math abilities and low math self-efficacies answered only about 65% of the questions correctly; those with above-average math abilities and high math self-efficacies answered about 75% of the questions correctly.

Healthy Behaviors

Think about a time when you tried to improve your health, whether through dieting, exercising, sleeping more, or any other way. Would you be more likely to follow through on these plans if you believed that you could effectively use your skills to accomplish your health goals? Many researchers agree that people with stronger self-efficacies for doing healthy things (e.g., exercise self-efficacy, dieting self-efficacy) engage in more behaviors that prevent health problems and improve overall health (Strecher, DeVellis, Becker, & Rosenstock, 1986). People who have strong self-efficacy beliefs about quitting smoking are able to quit smoking more easily (DiClemente, Prochaska, & Gibertini, 1985). People who have strong self-efficacy beliefs about being able to reduce their alcohol consumption are more successful when treated for drinking problems (Maisto, Connors, & Zywiak, 2000). People who have stronger self-efficacy beliefs about their ability to recover from heart attacks do so more quickly than those who do not have such beliefs (Ewart, Taylor, Reese, & DeBusk, 1983).

One group of researchers (Roach, Yadrick, Johnson, Boudreaux, Forsythe, & Billon, 2003) conducted an experiment with people trying to lose weight. All people in the study participated in a weight loss program that was designed for the U.S. Air Force. This program had already been found to be very effective, but the researchers wanted to know if increasing people’s self-efficacies could make it even more effective. So, they divided the participants into two groups: one group received an intervention designed to increase weight loss self-efficacy along with the diet program, and the other group received only the diet program. The researchers tried several different ways to increase self-efficacy, such as having participants read a copy of Oh, The Places You’ll Go! by Dr. Seuss (1990), and having them talk to someone who had successfully lost weight. The people who received the diet program and the self-efficacy intervention lost an average of 8.2 pounds over the 12 weeks of the study; those who had only the diet program lost only 5.8 pounds. Thus, just by increasing weight loss self-efficacy, participants were able to lose about 40% more weight. Studies have also found that increasing a person’s nutritional self-efficacy can lead him or her to eat more fruits and vegetables (Luszczynska, Tryburcy, & Schwarzer, 2006).
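Because comparisons like the weight-loss result above hinge on the difference between absolute and relative change, a quick check of the arithmetic may help; this short sketch simply recomputes the figures reported in the study.

```python
# Checking the weight-loss comparison above: absolute vs. relative difference.
with_intervention = 8.2   # pounds lost with diet program plus self-efficacy intervention
diet_only = 5.8           # pounds lost with the diet program alone

absolute_gain = with_intervention - diet_only        # 2.4 pounds
relative_gain = absolute_gain / diet_only * 100      # roughly 41 percent

print(f"{absolute_gain:.1f} more pounds lost, i.e., about {relative_gain:.0f}% more weight")
```

The same distinction applies to the Collins (1984) results: a gap of 20 percentage points (45% vs. 25% correct) amounts to an 80% relative improvement for students of average ability.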
Self-efficacy plays a large role in successful physical exercise (Maddux & Dawson, 2014). People with stronger self-efficacies for exercising are more likely to plan to begin an exercise program, to actually begin that program (DuCharme & Brawley, 1995), and to continue it (Marcus, Selby, Niaura, & Rossi, 1992). Self-efficacy is especially important when it comes to safe sex. People with greater self-efficacies about condom usage are more likely to engage in safe sex (Kaneko, 2007), making them more likely to avoid sexually transmitted diseases, such as HIV (Forsyth & Carey, 1998).

Athletic Performance

If you are an athlete, self-efficacy is especially important in your life. Professional and amateur athletes with stronger self-efficacy beliefs about their athletic abilities perform better than athletes with weaker levels of self-efficacy (Wurtele, 1986). This holds true for athletes in all types of sports, including track and field (Gernigon & Delloye, 2003), tennis (Sheldon & Eccles, 2005), and golf (Bruton, Mellalieu, Shearer, Roderique-Davies, & Hall, 2013). One group of researchers found that basketball players with strong athletic self-efficacy beliefs hit more foul shots than did basketball players with weak self-efficacy beliefs (Haney & Long, 1995). These researchers also found that the players who hit more foul shots showed greater increases in self-efficacy afterward, whereas those who hit fewer foul shots did not experience increases in self-efficacy. This is an example of how we gain self-efficacy through performance experiences.

Self-Regulation

One of the major reasons that higher self-efficacy usually leads to better performance and greater success is that self-efficacy is an important component of self-regulation. Self-regulation is the complex process through which you control your thoughts, emotions, and actions (Gross, 1998). It is crucial to success and well-being in almost every area of your life. Every day, you are exposed to situations where you might want to act or feel a certain way that would be socially inappropriate or that might be unhealthy for you in the long run. For example, when sitting in a boring class, you might want to take out your phone and text your friends, take off your shoes and take a nap, or perhaps scream because you are so bored. Self-regulation is the process that you use to avoid such behaviors and instead sit quietly through class. Self-regulation takes a lot of effort, and it is often compared to a muscle that can be exhausted (Baumeister, Bratslavsky, Muraven, & Tice, 1998). For example, a child might be able to resist eating a pile of delicious cookies if he or she is in the room with the cookies for only a few minutes, but if that child were forced to spend hours with the cookies, his or her ability to regulate the desire to eat them would wear down. Eventually, his or her self-regulatory abilities would be exhausted, and the child would eat the cookies. A person with strong self-efficacy beliefs might become less distressed in the face of failure than might someone with weak self-efficacy. Because self-efficacious people are less likely to become distressed, they draw less on their self-regulation reserves; thus, self-efficacious people persist longer in the face of a challenge. Self-efficacy influences self-regulation in many ways to produce better performance and greater success (Maddux & Volkmann, 2010).
First, people with stronger self-efficacies have greater motivation to perform in the areas for which they have stronger self-efficacies (Bandura & Locke, 2003). This means that people are motivated to work harder in those areas where they believe they can perform effectively. Second, people with stronger self-efficacies are more likely to persevere through challenges in attaining goals (Vancouver, More, & Yoder, 2008). For example, people with high academic self-efficacies are better able to motivate themselves to persevere through such challenges as taking a difficult class and completing their degrees, because they believe that their efforts will pay off. Third, self-efficacious people believe that they have more control over a situation; having more control means that self-efficacious people might be more likely to engage in the behaviors that will allow them to achieve their desired goal. Finally, self-efficacious people have more confidence in their problem-solving abilities and, thus, are able to make better use of their cognitive resources and make better decisions, especially in the face of challenges and setbacks (Cervone, Jiwani, & Wood, 1991).

Collective Efficacy

Collective efficacy is a concept related to self-efficacy. Collective efficacy refers to the shared beliefs among members of a group about the group’s ability to effectively perform the tasks needed to attain a valued goal (Bandura, 1997). Groups and teams that have higher collective efficacies perform better than groups and teams with lower collective efficacies (Marks, 1999). Collective efficacy is especially important during tasks that require a lot of teamwork (Katz-Navon & Erez, 2005). For example, when you have to do a group project that involves each group member contributing a portion of the final project, your group’s performance will be much better if all members share the belief that your group can perform the necessary tasks together. Collective efficacy also plays a role in romantic relationships. Married couples who strongly believe in their ability to accomplish shared goals are happier than couples with weaker efficacy beliefs (Kaplan & Maddux, 2002). Although collective efficacy is an important part of how well a team or group performs, self-efficacy also plays a role in team situations. For example, better decision-making self-efficacy predicts better performance in team sports, such as baseball (Hepler & Feltz, 2012).

Conclusion

Self-efficacy refers to your beliefs about your ability to effectively perform the tasks needed to attain a valued goal, and it affects your daily life in many ways. Self-efficacious adolescents perform better at school, and self-efficacious adults perform better at work. These individuals have happier romantic relationships and work better in teams. People with strong self-efficacies have better health than those with weak self-efficacies; they are more likely to engage in behaviors that prevent health problems and actually improve their health. They are more likely to begin and continue exercise, have safer sex, and eat better foods. Higher self-efficacy is also useful for getting out of bad habits: people with strong self-efficacies are able to lose weight, quit smoking, and cut down on alcohol consumption more successfully than people with low self-efficacies. As illustrated by the well-known children’s book The Little Engine That Could (Piper, 1930), telling yourself “I think I can” can be a powerful motivator and can increase your chances for success.
Our own final words on self-efficacy also draw from children’s literature. Many people receive a copy of Oh, The Places You’ll Go! when they reach a major milestone, such as graduating high school to go on to college or graduating college to enter the workforce. Whether or not you or whoever gave you the book knew it, Oh, The Places You’ll Go! is all about self-efficacy. This book speaks directly to readers by talking about all of the challenges they might face on their journeys. Throughout the book, the narrator continues to assure readers that they will be able to use their abilities to effectively handle these challenges. So, we leave you with Dr. Seuss’ wise words: “You’re on your own. And you know what you know. And you are the guy who’ll decide where to go…. And will you succeed? Yes! You will, indeed! 98 and 3/4 percent guaranteed.”

Outside Resources

Video: Association for Psychological Science presents an interview with Albert Bandura

Video: Self-efficacy’s role and sources

Web: Professor Frank Pajares’ self-efficacy site. http://www.uky.edu/~eushe2/Pajares/self-efficacy.html

Discussion Questions

1. Can you think of ways your own self-efficacy beliefs play a role in your daily life? In which areas do you have strong self-efficacy? In which areas would you like your self-efficacy to be a bit stronger? How could you increase your self-efficacy in those areas?
2. Can you think of a time when a teacher, coach, or parent did something to encourage your self-efficacy? What did he or she do and say? How did it enhance your self-efficacy?
3. What are some ways that you can help strengthen the self-efficacies of the people in your life?
4. Can you think of a time when collective efficacy played a role in your team or group activities? What did you notice about being on a team or in a group that had high collective efficacy? What about a team or group with low collective efficacy?

Vocabulary

Collective efficacy
The shared beliefs among members of a group about the group’s ability to effectively perform the tasks needed to attain a valued goal.

Imaginal performances
When imagining yourself doing well increases self-efficacy.

Performance experiences
When past successes or failures lead to changes in self-efficacy.

Self-efficacy
The belief that you are able to effectively perform the tasks needed to attain a valued goal.

Self-regulation
The complex process through which people control their thoughts, emotions, and actions.

Self-report measure
A type of questionnaire in which participants answer questions whose answers correspond to numerical values that can be added to create an overall index of some construct.

Task-specific measures of self-efficacy
Measures that ask about self-efficacy beliefs for a particular task (e.g., athletic self-efficacy, academic self-efficacy).

Verbal persuasion
When trusted people (friends, family, experts) influence your self-efficacy for better or worse by either encouraging or discouraging you about your ability to succeed.

Vicarious performances
When seeing other people succeed or fail leads to changes in self-efficacy.
By Robert Bornstein Adelphi University
Originating in the work of Sigmund Freud, the psychodynamic perspective emphasizes unconscious psychological processes (for example, wishes and fears of which we're not fully aware), and contends that childhood experiences are crucial in shaping adult personality. The psychodynamic perspective has evolved considerably since Freud's time, and now includes innovative new approaches such as object relations theory and neuropsychoanalysis. Some psychodynamic concepts have held up well to empirical scrutiny while others have not, and aspects of the theory remain controversial, but the psychodynamic perspective continues to influence many different areas of contemporary psychology.
learning objectives
• Describe the major models of personality within the psychodynamic perspective.
• Define the concept of ego defense, and give examples of commonly used ego defenses.
• Identify psychodynamic concepts that have been supported by empirical research.
• Discuss current trends in psychodynamic theory.
Introduction
Have you ever done something that didn't make sense? Perhaps you waited until the last minute to begin studying for an exam, even though you knew that delaying so long would ensure that you got a poor grade. Or maybe you spotted a person you liked across the room—someone about whom you had romantic feelings—but instead of approaching that person you headed the other way (and felt ashamed about it afterward). If you've ever done something that didn't seem to make sense—and who among us hasn't—the psychodynamic perspective on personality might be useful for you. It can help you understand why you chose not to study for that test, or why you ran the other way when the person of your dreams entered the room. Psychodynamic theory (sometimes called psychoanalytic theory) explains personality in terms of unconscious psychological processes (for example, wishes and fears of which we're not fully aware), and contends that childhood experiences are crucial in shaping adult personality. Psychodynamic theory is most closely associated with the work of Sigmund Freud, and with psychoanalysis, a type of psychotherapy that attempts to explore the patient's unconscious thoughts and emotions so that the person is better able to understand him- or herself. Freud's work has been extremely influential, its impact extending far beyond psychology (several years ago Time magazine selected Freud as one of the most important thinkers of the 20th century). Freud's work has been not only influential, but quite controversial as well. As you might imagine, when Freud suggested in 1900 that much of our behavior is determined by psychological forces of which we're largely unaware—that we literally don't know what's going on in our own minds—people were (to put it mildly) displeased (Freud, 1900/1953a). When he suggested in 1905 that we humans have strong sexual feelings from a very early age, and that some of these sexual feelings are directed toward our parents, people were more than displeased—they were outraged (Freud, 1905/1953b). Few theories in psychology have evoked such strong reactions from other professionals and members of the public. Controversy notwithstanding, no competent psychologist, or student of psychology, can ignore psychodynamic theory.
It is simply too important for psychological science and practice, and continues to play an important role in a wide variety of disciplines within and outside psychology (for example, developmental psychology, social psychology, sociology, and neuroscience; see Bornstein, 2005, 2006; Solms & Turnbull, 2011). This module reviews the psychodynamic perspective on personality. We begin with a brief discussion of the core assumptions of psychodynamic theory, followed by an overview of the evolution of the theory from Freud's time to today. We then discuss the place of psychodynamic theory within contemporary psychology, and look toward the future as well.
Core Assumptions of the Psychodynamic Perspective
The core assumptions of psychodynamic theory are surprisingly simple. Moreover, these assumptions are unique to the psychodynamic framework: No other theories of personality accept these three ideas in their purest form.
Assumption 1: Primacy of the Unconscious
Psychodynamic theorists contend that the majority of psychological processes take place outside conscious awareness. In psychoanalytic terms, the activities of the mind (or psyche) are presumed to be largely unconscious. Research confirms this basic premise of psychoanalysis: Many of our mental activities—memories, motives, feelings, and the like—are largely inaccessible to consciousness (Bargh & Morsella, 2008; Bornstein, 2010; Wilson, 2009).
Assumption 2: Critical Importance of Early Experiences
Psychodynamic theory is not alone in positing that early childhood events play a role in shaping personality, but the theory is unique in the degree to which it emphasizes these events as determinants of personality development and dynamics. According to the psychodynamic model, early experiences—including those occurring during the first weeks or months of life—set in motion personality processes that affect us years, even decades, later (Blatt & Levy, 2003; McWilliams, 2009). This is especially true of experiences that are outside the normal range (for example, losing a parent or sibling at a very early age).
Assumption 3: Psychic Causality
The third core assumption of psychodynamic theory is that nothing in mental life happens by chance—that there is no such thing as a random thought, feeling, motive, or behavior. This has come to be known as the principle of psychic causality, and though few psychologists accept the principle of psychic causality precisely as psychoanalysts conceive it, most theorists and researchers agree that thoughts, motives, emotional responses, and expressed behaviors do not arise randomly, but always stem from some combination of identifiable biological and psychological processes (Elliott, 2002; Robinson & Gordon, 2011).
The Evolution of Psychodynamic Theory
Given Freud's background in neurology, it is not surprising that the first incarnation of psychoanalytic theory was primarily biological: Freud set out to explain psychological phenomena in terms that could be linked to neurological functioning as it was understood in his day. Because Freud's work in this area evolved over more than 50 years (he began in 1885, and continued until he died in 1939), there were numerous revisions along the way. Thus, it is most accurate to think of psychodynamic theory as a set of interrelated models that complement and build upon each other. Three are particularly important: the topographic model, the psychosexual stage model, and the structural model.
The Topographic Model
In his 1900 book, The Interpretation of Dreams, Freud introduced his topographic model of the mind, which contended that the mind could be divided into three regions: conscious, preconscious, and unconscious. The conscious part of the mind holds information that you're focusing on at this moment—what you're thinking and feeling right now. The preconscious contains material that is capable of becoming conscious but is not conscious at the moment because your attention is not being directed toward it. You can move material from the preconscious into consciousness simply by focusing your attention on it. Consider, for example, what you had for dinner last night. A moment ago that information was preconscious; now it's conscious, because you "pulled it up" into consciousness. (Not to worry, in a few moments it will be preconscious again, and you can move on to more important things.) The unconscious—the most controversial part of the topographic model—contains anxiety-producing material (for example, sexual impulses, aggressive urges) that is deliberately repressed (held outside of conscious awareness as a form of self-protection because it makes you uncomfortable). The terms conscious, preconscious, and unconscious continue to be used today in psychology, and research has provided considerable support for Freud's thinking regarding conscious and preconscious processing (Erdelyi, 1985, 2004). The existence of the unconscious remains controversial, with some researchers arguing that evidence for it is compelling and others contending that "unconscious" processing can be accounted for without positing the existence of a Freudian repository of repressed wishes and troubling urges and impulses (Eagle, 2011; Luborsky & Barrett, 2006).
The Psychosexual Stage Model
Freud remained devoted to the topographic model, but by 1905 he had outlined the key elements of his psychosexual stage model, which argued that early in life we progress through a sequence of developmental stages, each with its own unique challenge and its own mode of sexual gratification. Freud's psychosexual stages—oral, anal, Oedipal, latency, and genital—are well known even to non-analytic psychologists. Frustration or overgratification during a particular stage was hypothesized to result in "fixation" at that stage, and in the development of an oral, anal, or Oedipal personality style (Bornstein, 2005, 2006). Table 1 illustrates the basic organization of Freud's (1905/1953b) psychosexual stage model, and the three personality styles that result. Note that—consistent with the developmental challenges that the child confronts during each stage—oral fixation is hypothesized to result in a dependent personality, whereas anal fixation results in a lifelong preoccupation with control. Oedipal fixation leads to an aggressive, competitive personality orientation.
The Structural Model
Ultimately, Freud recognized that the topographic model was helpful in understanding how people process and store information, but not all that useful in explaining other important psychological phenomena (for example, why certain people develop psychological disorders and others do not). To extend his theory, Freud developed a complementary framework to account for normal and abnormal personality development—the structural model—which posits the existence of three interacting mental structures called the id, ego, and superego.
The id is the seat of drives and instincts, whereas the ego represents the logical, reality-oriented part of the mind, and the superego is basically your conscience—the moral guidelines, rules, and prohibitions that guide your behavior. (You acquire these through your family and through the culture in which you were raised.) According to the structural model, our personality reflects the interplay of these three psychic structures, which differ across individuals in relative power and influence. When the id predominates and instincts rule, the result is an impulsive personality style. When the superego is strongest, moral prohibitions reign supreme, and a restrained, overcontrolled personality ensues. When the ego is dominant, a more balanced set of personality traits develops (Eagle, 2011; McWilliams, 2009).
The Ego and Its Defenses
In addition to being the logical, rational, reality-oriented part of the mind, the ego serves another important function: It helps us manage anxiety through the use of ego defenses. Ego defenses are basically mental strategies that we use automatically and unconsciously when we feel threatened (Cramer, 2000, 2006). They help us navigate upsetting events, but there's a cost as well: All ego defenses involve some distortion of reality. For example, repression (the most basic ego defense, according to Freud) involves removing upsetting thoughts and feelings from consciousness and moving them to the unconscious. When you read about a person who "blocked out" upsetting memories of child abuse, that's an example of repression. Another ego defense is denial. In denial (unlike repression), we are aware that a particular event occurred, but we don't allow ourselves to see the implications of that event. When you hear a person with a substance abuse problem say "I'm fine—even though people complain about my drinking, I never miss a day of work," that person is using denial. Table 2 lists some common ego defenses in psychodynamic theory, along with a definition and example of each.
Psychodynamic Theories: Where Are We Now?
The topographic model, psychosexual stage model, and structural model continue to influence contemporary psychology, but it is important to keep in mind that psychodynamic theory is never static; it is ever changing and evolving in response to new ideas and findings. In the following sections we discuss four current trends in the psychodynamic perspective: object relations theory, the empirical testing of psychodynamic concepts, psychoanalysis and culture, and the opportunities and challenges of neuroscience.
Object Relations Theory and the Growth of the Psychodynamic Perspective
In recent years a number of new psychodynamic frameworks have emerged to explain personality development and dynamics. The most important of these is object relations theory. (In psychoanalytic language, the term "object" refers to a person, so object relations theory is really something more like "interpersonal relations theory.") Object relations theory contends that personality can be understood as reflecting the mental images of significant figures (especially the parents) that we form early in life in response to interactions taking place within the family (Kernberg, 2004; Wachtel, 1997).
These mental images (sometimes called introjects) serve as templates for later interpersonal relationships—almost like relationship blueprints or "scripts." So if you internalized positive introjects early in life (for example, a mental image of mom or dad as warm and accepting), that's what you expect to occur in later relationships as well. If you internalized a mental image of mom or dad as harsh and judgmental, you might instead become a self-critical person, and feel that you can never live up to other people's standards … or your own (Luyten & Blatt, 2013). Object relations theory has increased many psychologists' interest in studying psychodynamic ideas and concepts, in part because it represents a natural bridge between the psychodynamic perspective and research in other areas of psychology. For example, developmental and social psychologists also believe that mental representations of significant people play an important role in shaping our behavior. In developmental psychology you might read about this in the context of attachment theory (which argues that attachments—or bonds—to significant people are key to understanding human behavior; Fraley, 2002). In social psychology, mental representations of significant figures play an important role in social cognition (thoughts and feelings regarding other people; Bargh & Morsella, 2008; Robinson & Gordon, 2011).
Empirical Research on Psychodynamic Theories
Empirical research assessing psychodynamic concepts has produced mixed results, with some concepts receiving good empirical support, and others not faring as well. For example, the notion that we express strong sexual feelings from a very early age, as the psychosexual stage model suggests, has not held up to empirical scrutiny. On the other hand, the idea that there are dependent, control-oriented, and competitive personality types—an idea also derived from the psychosexual stage model—does seem useful. Many ideas from the psychodynamic perspective have been studied empirically. Luborsky and Barrett (2006) reviewed much of this research; other useful reviews are provided by Bornstein (2005), Gerber (2007), and Huprich (2009). For now, let's look at three psychodynamic hypotheses that have received strong empirical support.
• Unconscious processes influence our behavior as the psychodynamic perspective predicts. We perceive and process much more information than we realize, and much of our behavior is shaped by feelings and motives of which we are, at best, only partially aware (Bornstein, 2009, 2010). Evidence for the importance of unconscious influences is so compelling that it has become a central element of contemporary cognitive and social psychology (Robinson & Gordon, 2011).
• We all use ego defenses, and they help determine our psychological adjustment and physical health. People really do differ in the degree to which they rely on different ego defenses—so much so that researchers now study each person's "defense style" (the unique constellation of defenses that we use). It turns out that certain defenses are more adaptive than others: Rationalization and sublimation are healthier (psychologically speaking) than repression and reaction formation (Cramer, 2006). Denial is, quite literally, bad for your health, because people who use denial tend to ignore symptoms of illness until it's too late (Bond, 2004).
• Mental representations of self and others do indeed serve as blueprints for later relationships.
Dozens of studies have shown that mental images of our parents, and other significant figures, really do shape our expectations for later friendships and romantic relationships. The idea that you choose a romantic partner who resembles mom or dad is a myth, but it's true that you expect to be treated by others as you were treated by your parents early in life (Silverstein, 2007; Wachtel, 1997).
Psychoanalysis and Culture
One of Freud's lifelong goals was to use psychoanalytic principles to understand culture and improve intergroup relations (he actually exchanged several letters with Albert Einstein prior to World War II, in which they discussed this issue). During the past several decades, as society has become increasingly multicultural, this effort has taken on new importance; psychoanalysts have been active in incorporating ideas and findings regarding cultural influences into their research and clinical work. For example, studies have shown that individuals raised in individualistic, independence-focused cultures (for example, the United States, Great Britain) tend to define themselves primarily in terms of personal attributes (like attitudes and interests), whereas individuals raised in more sociocentric, interdependent cultures (for example, Japan, India) are more likely to describe themselves in terms of interpersonal relations and connections with others (Oyserman, Coon, & Kemmelmeier, 2002). Our self-representations are, quite literally, a product of our cultural milieu (Markus & Kitayama, 2010).
The Opportunities and Challenges of Neuroscience
Fifteen years ago, Nobel Laureate Eric Kandel (1998) articulated a vision for an empirically oriented psychodynamic perspective firmly embedded within the principles and findings of neuroscience. Kandel's vision ultimately led to the development of neuropsychoanalysis, an integration of psychodynamic and neuropsychological concepts that has enhanced researchers' understanding of numerous aspects of human behavior and mental functioning (Solms & Turnbull, 2011). Some of the first efforts to integrate psychodynamic principles with findings from neuroscience involved sleep and dreams, and contemporary models of dream formation now incorporate principles from both domains (Levin & Nielsen, 2007). Neuroimaging techniques such as functional magnetic resonance imaging (fMRI) have begun to play an increasingly central role in this ongoing psychoanalysis–neuroscience integration as well (Gerber, 2007; Slipp, 2000).
Looking Ahead: Psychodynamic Theory in the 21st Century (and Beyond)
Despite being surrounded by controversy, the psychodynamic perspective on personality has survived for more than a century, reinventing itself in response to new empirical findings, theoretical shifts, and changing social forces. The psychodynamic perspective evolved considerably during the 20th century and will continue to evolve throughout the 21st century as well. Psychodynamic theory may be the closest thing we have to an overarching, all-encompassing theory in psychology. It deals with a broad range of issues—normal and pathological functioning, motivation and emotion, childhood and adulthood, individual and culture—and the psychodynamic perspective continues to have tremendous potential for integrating ideas and findings across the many domains of contemporary psychology.
Outside Resources
Institution: Institute for Psychoanalytic Training and Research (IPTAR) - A branch of the International Psychoanalytic Association, IPTAR plays an active role in supporting empirical research on psychoanalytic theory and therapy. http://www.iptar.org/
Institution: The American Psychoanalytic Association - The American Psychoanalytic Association supports psychodynamic training and research, and sponsors a number of workshops (as well as two annual meetings) each year. http://www.apsa.org/
Institution: The American Psychological Association Division of Psychoanalysis - Division 39 of the American Psychological Association is the "psychological home" of psychodynamic theory and research. http://www.apadivisions.org/division-39/
Web: Library of Congress Exhibit – Freud: Conflict and Culture - This is a terrific website full of photos, original manuscripts, and links to various Freud artifacts. Toward the end of Section Three (From the Individual to Society) there is a link to Freud's 1938 BBC radio address; play it and you'll get to hear Freud's voice. http://www.loc.gov/exhibits/freud/
Discussion Questions
1. What is psychic causality?
2. What are the main differences between the preconscious and the unconscious in Freud's topographic model?
3. What are the three key structures in the structural model of the mind—and what does each structure do?
4. Which ego defense do you think is more adaptive: reaction formation or sublimation? Why?
5. How do people raised in individualistic societies differ from those raised in more sociocentric societies with respect to their self-concept—how do they perceive and describe themselves?
6. According to object relations theory, how do early relationships with our parents and other significant figures affect later friendships and romantic relationships?
7. Which field has the potential to benefit more from the emerging new discipline of neuropsychoanalysis: neuroscience, or psychoanalysis? Why?
Vocabulary
Ego defenses
Mental strategies, rooted in the ego, that we use to manage anxiety when we feel threatened (some examples include repression, denial, sublimation, and reaction formation).
Neuropsychoanalysis
An integrative, interdisciplinary domain of inquiry seeking to integrate psychoanalytic and neuropsychological ideas and findings to enhance both areas of inquiry (you can learn more by visiting the webpage of the International Neuropsychoanalysis Society at http://www.neuropsa.org.uk/).
Object relations theory
A modern offshoot of the psychodynamic perspective, this theory contends that personality can be understood as reflecting mental images of significant figures (especially the parents) that we form early in life in response to interactions taking place within the family; these mental images serve as templates (or "scripts") for later interpersonal relationships.
Primacy of the Unconscious
The hypothesis—supported by contemporary empirical research—that the vast majority of mental activity takes place outside conscious awareness.
Psychic causality
The assumption that nothing in mental life happens by chance—that there is no such thing as a "random" thought or feeling.
Psychosexual stage model
Probably the most controversial aspect of psychodynamic theory, the psychosexual stage model contends that early in life we progress through a sequence of developmental stages (oral, anal, Oedipal, latency, and genital), each with its own unique mode of sexual gratification.
Structural model
Developed to complement and extend the topographic model, the structural model of the mind posits the existence of three interacting mental structures called the id, ego, and superego.
Topographic model
Freud's first model of the mind, which contended that the mind could be divided into three regions: conscious, preconscious, and unconscious. (The name "topographic" comes from topography, the mapping of the features of a terrain.)
By M. Brent Donnellan Michigan State University
This module describes different ways to address questions about personality stability across the lifespan. Definitions of the major types of personality stability are provided, and evidence concerning the different kinds of stability and change is reviewed. The mechanisms thought to produce personality stability and personality change are identified and explained.
Learning objectives
• Define heterotypic stability, homotypic stability, absolute stability, and differential stability.
• Describe evidence concerning the absolute and differential stability of personality attributes across the lifespan.
• Explain the maturity, cumulative continuity, and corresponsive principles of personality development.
• Explain person–environment transactions, and distinguish between active, reactive, and evocative person–environment transactions.
• Identify the four processes that promote personality stability (attraction, selection, manipulation, and attrition), and provide examples of these processes.
• Describe the mechanisms behind the possibility of personality transformation.
Introduction
Personality psychology is about how individuals differ from each other in their characteristic ways of thinking, feeling, and behaving. Some of the most interesting questions about personality attributes involve issues of stability and change. Are shy children destined to become shy adults? Are the typical personality attributes of adults different from the typical attributes of adolescents? Do people become more self-controlled and better able to manage their negative emotions as they become adults? What mechanisms explain personality stability, and what mechanisms account for personality change?
Defining Different Kinds of Personality Stability
Something frustrating happens when you attempt to learn about personality stability[1]: As with many topics in psychology, there are a number of different ways to conceptualize and quantify personality stability (e.g., Caspi & Bem, 1990; Roberts, Wood, & Caspi, 2008). This means there are multiple ways to consider questions about personality stability. Thus, the simple (and obviously frustrating) answer to most blanket questions about personality stability is that it depends on what one means by personality stability. To provide a more satisfying answer to questions about stability, I will first describe the different ways psychologists conceptualize and evaluate personality stability. I will make an important distinction between heterotypic and homotypic stability, and will then describe absolute and differential stability, two ways of considering homotypic stability. I will also draw your attention to the important concept of individual differences in personality development.
Heterotypic stability refers to the psychological coherence of an individual's thoughts, feelings, and behaviors across development. Questions about heterotypic stability concern the degree of consistency in underlying personality attributes. The tricky part of studying heterotypic stability is that the underlying psychological attribute can have different behavioral expressions at different ages. (You may already know that the prefix "hetero" means something like "different" in Greek.) Shyness is a good example of such an attribute because shyness is expressed differently by toddlers and young children than by adults. The shy toddler might cling to a caregiver in a crowded setting and burst into tears when separated from this caregiver.
The shy adult, on the other hand, may avoid making eye contact with strangers and seem aloof and distant at social gatherings. It would be highly unusual to observe an adult burst into tears in a crowded setting. The observable behaviors typically associated with shyness "look" different at different ages. Researchers can study heterotypic continuity only once they have a theory that specifies the different behavioral manifestations of the psychological attribute at different points in the lifespan. As it stands, there is evidence that attributes such as shyness and aggression exhibit heterotypic stability across the lifespan (Caspi, Bem, & Elder, 1989). Individuals who act shy as children often act shy as adults, but the degree of correspondence is far from perfect because many things can intervene between childhood and adulthood to alter how an individual develops. Nonetheless, the important point is that the patterns of behavior observed in childhood sometimes foreshadow adult personality attributes.
Homotypic stability concerns the amount of similarity in the same observable personality characteristics across time. (The prefix "homo" means something like "same" in Greek.) For example, researchers might ask whether stress reaction, the tendency to become easily distressed by the normal challenges of life, exhibits homotypic stability from age 25 to age 45. The assumption is that this attribute has the same manifestations at these different ages. Researchers make further distinctions between absolute stability and differential stability when considering homotypic stability.
Absolute stability refers to the consistency of the level of the same personality attribute across time. If an individual received a score of 45 on a hypothetical measure of stress reaction at age 20 and again at age 40, researchers would conclude there was evidence of absolute stability. Questions about absolute stability can be considered at the group level or the individual level. At the group level, it is common for personality researchers to compare average scores on personality measures for groups of different ages. For example, it is possible to investigate whether the average 40-year-old adult has a lower (or higher) level of stress reaction than the average 20-year-old. The answer to this question would tell researchers something about typical patterns of personality development. It is important to consider absolute stability from both the group and individual perspectives. The individual level is interesting because different people might have different patterns of absolute change over time. One person might report consistently low levels of stress reaction throughout adulthood, whereas another person may report dramatic increases in stress reaction during her 30s and 40s. These different individual patterns can be present even if the overall trend is for a decline in stress reaction with age. Personality psychology is about individual differences, and whether an individual's attributes change or remain the same across time is itself an important individual difference. Indeed, there are intriguing hints that the rate and direction of change in characteristics such as stress reaction (or neuroticism) predict mortality (Mroczek & Spiro, 2007).
Differential stability refers to the consistency of a personality attribute in terms of an individual's rank-ordering.
A typical question about differential stability might be whether a 20-year-old who is low in stress reaction relative to her same-aged peers develops into a 40-year-old who is also low in stress reaction compared to her peers. Differential stability is often interesting because many psychological attributes show average changes across the lifespan. Regardless of average changes with age, however, it is common to assume that more trait-like attributes have a high degree of differential stability. Consider athletic performance as an attribute that may exhibit differential stability. The average 35-year-old is likely to run a 5K race faster than the average 55-year-old. Nonetheless, individuals who are fast relative to their peers in their 30s might also be fast relative to their peers in their 50s. Likewise, even if most people decline in stress reaction as they age, it is still useful to investigate whether there is consistency over time in their relative standing on this attribute.
Basic Findings about Absolute and Differential Stability
Absolute stability. There are two common ways to investigate average levels of personality attributes at different ages. The simplest approach is to conduct a cross-sectional study and compare different age groups on a given attribute assessed at the same time. For instance, researchers might collect data from a sample of individuals ranging in age from 18 to 99 years and compare stress reaction scores for groups of different ages. A more complicated design involves following the same group of individuals and assessing their personalities at multiple time points (often two). This is a longitudinal study, and it is a much better way to study personality stability than a cross-sectional study. If all of the individuals in the sample are roughly the same age at the start of the study, they would all be considered members of the same birth cohort. One of the chief drawbacks of a cross-sectional study is that individuals who are of different ages are also members of different birth cohorts. Thus, researchers have no way of knowing whether any personality differences observed in a cross-sectional study are attributable to the influence of age per se or to birth cohort. A longitudinal study is better able to isolate age effects (i.e., differences in personality related to maturation and development) from cohort effects (i.e., differences in personality related to being born at a particular point in history) than a cross-sectional study, because cohort is a constant (i.e., an unchanging value) in a longitudinal study when all participants start the study at roughly the same age.
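To make the logic of the two designs concrete, here is a minimal Python sketch using simulated data. Everything in it (the sample sizes, means, and the built-in 5-point decline) is hypothetical and chosen purely for illustration; the point is to show why a cross-sectional comparison confounds age with birth cohort, while a longitudinal design holds cohort constant.

```python
# Toy illustration of cross-sectional versus longitudinal designs.
# All numbers are hypothetical; they are not results from any cited study.
import numpy as np

rng = np.random.default_rng(0)

# Cross-sectional design: different people, each measured once, at different ages.
stress_age20 = rng.normal(loc=50, scale=10, size=200)  # today's 20-year-olds
stress_age40 = rng.normal(loc=45, scale=10, size=200)  # today's 40-year-olds
print(f"Cross-sectional means: age 20 = {stress_age20.mean():.1f}, "
      f"age 40 = {stress_age40.mean():.1f}")
# Any difference here mixes age effects with cohort effects, because
# today's 40-year-olds also belong to a cohort born 20 years earlier.

# Longitudinal design: one birth cohort measured at age 20 and again at age 40.
wave1 = rng.normal(loc=50, scale=10, size=200)
wave2 = wave1 - 5 + rng.normal(loc=0, scale=5, size=200)  # built-in 5-point decline
change = wave2 - wave1
print(f"Longitudinal mean change: {change.mean():+.1f}")
# Cohort is constant here, so the average change isolates age/maturation, and
# the per-person `change` scores reveal individual differences in change.
```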
A number of large-scale, cross-sectional studies have evaluated age differences in personality (Anusic, Lucas, & Donnellan, 2012; Lucas & Donnellan, 2009; McCrae & Costa, 2003; Soto, John, Gosling, & Potter, 2011; Srivastava, John, Gosling, & Potter, 2003), as have a number of longitudinal studies (Lucas & Donnellan, 2011; Specht, Egloff, & Schmukle, 2011; Terracciano, McCrae, Brant, & Costa, 2005; Wortman, Lucas, & Donnellan, in press). Fortunately, many of the general trends from these different designs converge on the same basic set of findings. Most notably, Roberts, Walton, and Viechtbauer (2006) combined the results of 92 longitudinal studies to provide an overview of absolute changes in personality across the lifespan. They used the Big Five taxonomy (e.g., John, Naumann, & Soto, 2008) to categorize the different personality attributes examined in the individual studies and thereby make sense of the vast literature. The Big Five domains include extraversion (attributes such as assertive, confident, independent, outgoing, and sociable), agreeableness (attributes such as cooperative, kind, modest, and trusting), conscientiousness (attributes such as hardworking, dutiful, self-controlled, and goal-oriented), neuroticism (attributes such as anxious, tense, moody, and easily angered), and openness (attributes such as artistic, curious, inventive, and open-minded). The Big Five is one of the most common ways of organizing the vast range of personality attributes that seem to distinguish one person from the next. This organizing framework made it possible for Roberts et al. (2006) to draw broad conclusions from the literature.
In general, average levels of extraversion (especially the attributes linked to self-confidence and independence), agreeableness, and conscientiousness appear to increase with age, whereas neuroticism appears to decrease with age (Roberts et al., 2006). Openness also declines with age, especially after mid-life (Roberts et al., 2006). These changes are often viewed as positive trends given that higher levels of agreeableness and conscientiousness and lower levels of neuroticism are associated with seemingly desirable outcomes such as increased relationship stability and quality, greater success at work, better health, a reduced risk of criminality and mental health problems, and even decreased mortality (e.g., Kotov, Gamez, Schmidt, & Watson, 2010; Miller & Lynam, 2001; Ozer & Benet-Martínez, 2006; Roberts, Kuncel, Shiner, Caspi, & Goldberg, 2007). This pattern of positive average changes in personality attributes is known as the maturity principle of adult personality development (Caspi, Roberts, & Shiner, 2005). The basic idea is that attributes associated with positive adaptation and attributes associated with the successful fulfillment of adult roles tend to increase during adulthood in terms of their average levels.
Beyond providing insights into the general outline of adult personality development, Roberts et al. (2006) found that young adulthood (the period between the ages of 18 and the late 20s) was the most active time in the lifespan for observing average changes, although average differences in personality attributes were observed across the lifespan. Such a result might be surprising in light of the intuition that adolescence is a time of personality change and maturation. However, young adulthood is typically a time in the lifespan that includes a number of life changes in terms of finishing school, starting a career, committing to romantic partnerships, and parenthood (Donnellan, Conger, & Burzette, 2007; Rindfuss, 1991). Finding that young adulthood is an active time for personality development provides circumstantial evidence that adult roles might generate pressures for certain patterns of personality development. Indeed, this is one potential explanation for the maturity principle of personality development.
It should be emphasized again that average trends are summaries that do not necessarily apply to all individuals. Some people do not conform to the maturity principle. The possibility of exceptions to general trends is the reason it is necessary to study individual patterns of personality development.
The methods for this kind of research are becoming increasingly popular (e.g., Vaidya, Gray, Haig, Mroczek, & Watson, 2008), and existing studies suggest that personality changes differ across people (Roberts & Mroczek, 2008). These new research methods work best when researchers collect more than two waves of longitudinal data covering longer spans of time. This kind of research design is still somewhat uncommon in psychological studies, but it will likely characterize the future of research on personality stability.
Differential stability. The evaluation of differential stability requires a longitudinal study. The simplest strategy is to follow a large sample of participants of the same age and measure their personality attributes at two points separated by a meaningful span of time. The researcher then calculates the correlation between scores at the first assessment and scores at the second assessment (a coefficient sometimes called a test-retest correlation or even a stability coefficient). As you know, a correlation coefficient is a numerical summary of the linear association between two variables. Correlations around .10 or –.10 are often called "small" associations, whereas correlations around .50 and –.50 (or larger) are often called "large" associations (Cohen, 1988). Roberts and DelVecchio (2000) summarized 3,217 test-retest correlations for a wide range of personality attributes reported in 152 longitudinal studies. They used statistical methods to equate the different test-retest correlations to a common interval of about seven years, which allowed them to compare results across studies that followed participants for differing lengths of time. Roberts and DelVecchio found that differential stability increased with age: the correlations ranged from about .30 for samples involving young children to about .70 for samples involving older adults. Ferguson (2010) updated and replicated this basic pattern. This pattern of increasing stability with age is called the cumulative continuity principle of personality development (Caspi et al., 2005). This general pattern holds for both women and men and applies to a wide range of different personality attributes, ranging from extraversion to openness and curiosity. It is important to emphasize, however, that the observed correlations are never perfect at any age (i.e., the correlations do not reach 1.0). This indicates that personality changes can occur at any time in the lifespan; it just seems that greater inconsistency is observed in childhood and adolescence than in adulthood.
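The arithmetic behind these stability coefficients is simple enough to show in a few lines of code. The following Python sketch uses simulated, purely hypothetical scores to compute a test-retest correlation. The rescaling step at the end rests on a simplifying autoregressive assumption that is offered only to convey the intuition of interval-equating; it is not the actual meta-analytic procedure Roberts and DelVecchio (2000) used.

```python
# Toy computation of a test-retest (stability) correlation.
# The scores are simulated and purely hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 300

# Simulate standardized stress-reaction scores for the same n people at two
# waves 10 years apart, with a built-in stability of about .60.
time1 = rng.normal(size=n)
time2 = 0.6 * time1 + np.sqrt(1 - 0.6**2) * rng.normal(size=n)

# The test-retest correlation is simply Pearson's r between the two waves.
r_10yr = np.corrcoef(time1, time2)[0, 1]
print(f"10-year test-retest correlation: r = {r_10yr:.2f}")

# Rough heuristic for equating intervals: if stability compounds
# multiplicatively over time (a first-order autoregressive assumption),
# an observed r over t years implies an annual stability of r ** (1 / t),
# which can then be projected onto a common 7-year interval.
r_annual = r_10yr ** (1 / 10)
r_7yr = r_annual ** 7
print(f"Implied 7-year correlation: r = {r_7yr:.2f}")
```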
Key Messages So Far
It is useful to summarize the key ideas of this module so far. The starting point was the realization that there are several different ways to define and measure personality stability. Heterotypic stability refers to the consistency of the underlying psychological attribute, which may have different behavioral manifestations at different ages. Homotypic stability, on the other hand, refers to the consistency of the same observable manifestations of a personality attribute. This type of stability is commonly studied in the current literature, and absolute and differential stability are the focus of many studies. A consideration of the broad literature on personality stability yields two major conclusions.
1. Average levels of personality attributes seem to change in predictable ways across the lifespan, in line with the maturity principle of personality development. Traits that are correlated with positive outcomes (such as conscientiousness) seem to increase from adolescence to adulthood. This perspective on personality stability is gained from considering absolute stability in the form of average levels of personality attributes at different ages.
2. Personality attributes are relatively enduring attributes that become increasingly consistent during adulthood, in line with the cumulative continuity principle. This perspective on stability is gained from considering differential stability in the form of test-retest correlations from longitudinal studies.
In general, the picture that emerges from the literature is that personality traits are relatively enduring attributes that become more stable from childhood to adulthood. Nonetheless, the stability of personality attributes is not perfect at any period in the lifespan. This is an important conclusion because it challenges two extreme perspectives that have been influential in psychological research. More than 100 years ago, the famous psychologist William James remarked that character (personality) was "set like plaster" for most people by age 30. This perspective implies near-perfect stability of personality in adulthood. In contrast, other psychologists have sometimes denied there was any stability to personality at all. Their perspective is that individual thoughts and feelings are simply responses to transitory situational influences that are unlikely to show much consistency across the lifespan. As discussed so far, current research does not support either of these extreme perspectives. Nonetheless, the existence of some degree of stability raises important questions about the exact processes and mechanisms that produce personality stability (and personality change).
The How and Why of Personality Stability and Change: Different Kinds of Interplay Between Individuals and Their Environments
Personality stability is the result of the interplay between the individual and her/his environment. Psychologists use the term person–environment transactions (e.g., Roberts et al., 2008) to capture the mutually transforming interplay between individuals and their contextual circumstances. Several different types of these transactions have been described by psychological researchers. Active person–environment transactions occur when individuals seek out certain kinds of environments and experiences that are consistent with their personality characteristics. Risk-taking individuals may spend their leisure time very differently than more cautious individuals: some prefer extreme sports, whereas others prefer less intense experiences. Reactive person–environment transactions occur when individuals react differently to the same objective situation because of their personalities. A large social gathering represents a psychologically different context to the highly extraverted person compared with the highly introverted person. Evocative person–environment transactions occur whenever individuals draw out, or evoke, certain kinds of responses from their social environments because of their personality attributes. A warm and secure individual invites different kinds of responses from peers than a cold and aloof individual. Current researchers make distinctions between the mechanisms likely to produce personality stability and the mechanisms likely to produce changes (Roberts, 2006; Roberts et al., 2008).
Brent Roberts coined the helpful acronym ASTMA to aid in remembering many of these mechanisms: attraction (A), selection (S), manipulation (M), and attrition (A) tend to produce personality stability, whereas transformation (T) explains personality change. Individuals sometimes select careers, friends, social clubs, and lifestyles because of their personality attributes. This is the active process of attraction—individuals are attracted to environments because of their personality attributes. Situations that match our personalities seem to feel "right" (e.g., Cesario, Grant, & Higgins, 2004). On the flip side of this process, gatekeepers, such as employers, admissions officers, and even potential relationship partners, often select individuals because of their personalities. Extraverted and outgoing individuals are likely to make better salespeople than quiet individuals who are uncomfortable with social interactions. All in all, certain individuals are "admitted" by gatekeepers into particular kinds of environments because of their personalities. Likewise, individuals with characteristics that are a bad fit with a particular environment may leave such settings or be asked to leave by gatekeepers. A lazy employee will not last long at a demanding job. These examples capture the process of attrition (dropping out). The processes of selection and attrition reflect evocative person–environment transactions. Last, individuals can actively manipulate their environments to match their personalities. An outgoing person will find ways to introduce more social interactions into the workday, whereas a shy individual may shun the proverbial water cooler to avoid having contact with others. These four processes of attraction, selection, attrition, and manipulation explain how a kind of matching occurs between personality attributes and environmental conditions for many individuals. This positive matching typically produces personality consistency because the "press" of the situation reinforces the attributes of the person. This observation is at the core of the corresponsive principle of personality development (Caspi et al., 2005; Roberts, Caspi, & Moffitt, 2003). Preexisting personality attributes and environmental contexts work in concert to promote personality continuity. The idea is that environments often reinforce those personality attributes that were partially responsible for the initial environmental conditions in the first place. For example, ambitious and confident individuals might be attracted to and selected for more demanding jobs (Roberts et al., 2003). These kinds of jobs often require drive, dedication, and achievement striving, thereby accentuating dispositional tendencies toward ambition and confidence. Additional considerations related to person–environment transactions may help to further explain personality stability. Individuals gain more autonomy to select their own environments as they transition from childhood to adulthood (Scarr & McCartney, 1983). This might help explain why the differential stability of personality attributes increases from adolescence into adulthood. Reactive and evocative person–environment transactions also facilitate personality stability. The overarching idea is that personality attributes shape how individuals respond to situations and shape the kinds of responses individuals elicit from their environments. These responses and reactions can generate self-fulfilling cycles.
For example, aggressive individuals seem to interpret ambiguous social cues as threatening (something called a hostile attribution bias, or a hostile attribution of intent; see Crick & Dodge, 1996; Orobio de Castro, Veerman, Koops, Bosch, & Monshouwer, 2002). If a stranger runs into you and you spill your hot coffee all over a clean shirt, how do you interpret the situation? Do you believe the other person was being aggressive, or were you just unlucky? A rude, caustic, or violent response might invite a similar response from the individual who ran into you. The basic point is that personality attributes help shape reactions to and responses from the social world, and these processes often (but not always) end up reinforcing dispositional tendencies. Although a number of mechanisms account for personality continuity by generating a match between the individual's characteristics and the environment, personality change or transformation is nonetheless possible. Recall that differential stability is not perfect. The simplest mechanism for producing change is a cornerstone of behaviorism: Patterns of behavior that produce positive consequences (pleasure) are repeated, whereas patterns of behavior that produce negative consequences (pain) will diminish (Thorndike, 1933). Social settings may have the power to transform personality if the individual is exposed to different rewards and punishments and the setting places limitations on how a person can reasonably behave (Caspi & Moffitt, 1993). For example, environmental contexts that limit agency and have very clear reward structures, such as the military, might be particularly powerful contexts for producing lasting personality changes (e.g., Jackson, Thoemmes, Jonkmann, Lüdtke, & Trautwein, 2012). It is also possible that individuals might change their personality attributes by actively striving to change their behaviors and emotional reactions with help from outsiders. This idea lies at the heart of psychotherapy. As it stands, the conditions that produce lasting personality changes are an active area of research. Personality researchers have historically sought to demonstrate the existence of personality stability, and they are now turning their full attention to the conditions that facilitate personality change. There are currently a few examples of interventions that end up producing short-term personality changes (Jackson, Hill, Payne, Roberts, & Stine-Morrow, 2012), and this is an exciting area for future research (Edmonds, Jackson, Fayard, & Roberts, 2008). Insights about personality change are important for creating effective interventions designed to foster positive human development. Finding ways to promote self-control, emotional stability, creativity, and an agreeable disposition would likely lead to improvements for both individuals and society as a whole, because these attributes predict a range of consequential life outcomes (Ozer & Benet-Martínez, 2006; Roberts et al., 2007).
Conclusion
There are multiple ways to evaluate personality stability. The existing evidence suggests that personality attributes are relatively enduring attributes that show predictable average-level changes across the lifespan. Personality stability is produced by a complicated interplay between individuals and their social settings.
Many personality attributes are linked to life experiences in a mutually reinforcing cycle: Personality attributes seem to shape environmental contexts, and those contexts often then accentuate and reinforce those very personality attributes. Even so, personality change or transformation is possible because individuals respond to their environments. Individuals may also want to change their personalities. Personality researchers are now beginning to address important questions about the possibility of lasting personality changes through intervention efforts.
[1] Throughout most of this module I will use the term stability to refer to continuity, stability/change, and consistency/inconsistency.
Discussion Questions
1. Why is it difficult to give a simple answer to the question of whether personality is stable across the lifespan?
2. What happens during young adulthood that might explain findings about average changes in personality attributes?
3. Why does differential stability increase during adulthood?
4. What are some concrete examples of the ASTMA processes?
5. Can you explain the corresponsive principle of personality development? Provide several clear examples.
6. Do you think dramatic personality changes are likely to happen in adulthood? Why or why not?
7. What kinds of environments might be particularly powerful for changing personality? What specific features of these environments seem to make them powerful for producing change?
8. Is it easy to change your personality in adulthood? What steps do you think are needed to produce noticeable and lasting changes in your personality? What steps are needed to change the personalities of others?
9. Do you find that the evidence that personality attributes are relatively enduring reflects a largely positive aspect of adult development, or a more unpleasant aspect? Why?
Vocabulary
Absolute stability
Consistency in the level or amount of a personality attribute over time.
Active person–environment transactions
The interplay between individuals and their contextual circumstances that occurs whenever individuals play a key role in seeking out, selecting, or otherwise manipulating aspects of their environment.
Age effects
Differences in personality between groups of different ages that are related to maturation and development instead of birth cohort differences.
Attraction
A connection between personality attributes and aspects of the environment that occurs because individuals with particular traits are drawn to certain environments.
Attrition
A connection between personality attributes and aspects of the environment that occurs because individuals with particular traits drop out from certain environments.
Birth cohort
Individuals born in a particular year or span of time.
Cohort effects
Differences in personality that are related to historical and social factors unique to individuals born in a particular year.
Corresponsive principle
The idea that personality traits often become matched with environmental conditions such that an individual's social context acts to accentuate and reinforce their personality attributes.
Cross-sectional study/design
A research design that uses a group of individuals with different ages (and birth cohorts) assessed at a single point in time.
Cumulative continuity principle
The generalization that personality attributes show increasing stability with age and experience.
Differential stability
Consistency in the rank-ordering of personality across two or more measurement occasions.
Evocative person–environment transactions
The interplay between individuals and their contextual circumstances that occurs whenever attributes of the individual draw out particular responses from others in their environment.
Group level
A focus on summary statistics that apply to aggregates of individuals when studying personality development. An example is considering whether the average score of a group of 50-year-olds is higher than the average score of a group of 21-year-olds when considering a trait like conscientiousness.
Heterotypic stability
Consistency in the underlying psychological attribute across development regardless of any changes in how the attribute is expressed at different ages.
Homotypic stability
Consistency of the exact same thoughts, feelings, and behaviors across development.
Hostile attribution bias
The tendency of some individuals to interpret ambiguous social cues and interactions as examples of aggressiveness, disrespect, or antagonism.
Individual level
A focus on individual-level statistics that reflect whether individuals show stability or change when studying personality development. An example is evaluating how many individuals increased in conscientiousness versus how many decreased in conscientiousness when considering the transition from adolescence to adulthood.
Longitudinal study/design
A research design that follows the same group of individuals at multiple time points.
Manipulation
A connection between personality attributes and aspects of the environment that occurs whenever individuals with particular traits actively shape their environments.
Maturity principle
The generalization that personality attributes associated with the successful fulfillment of adult roles increase with age and experience.
Person–environment transactions
The interplay between individuals and their contextual circumstances that ends up shaping both personality and the environment.
Reactive person–environment transactions
The interplay between individuals and their contextual circumstances that occurs whenever attributes of the individual shape how a person perceives and responds to their environment.
Selection
A connection between personality attributes and aspects of the environment that occurs whenever individuals with particular attributes choose particular kinds of environments.
Stress reaction
The tendency to become easily distressed by the normal challenges of life.
Transformation
The term for personality changes associated with experience and life events.
• 4.1: Affective Neuroscience
This module provides a brief overview of the neuroscience of emotion. It integrates findings from human and animal research to describe the brain networks and associated neurotransmitters involved in basic affective systems.

• 4.2: Functions of Emotions
Emotions play a crucial role in our lives because they have important functions. This module describes those functions, dividing the discussion into three areas: the intrapersonal, the interpersonal, and the social and cultural functions of emotions. All in all, we will see that emotions are a crucially important aspect of our psychological composition, having meaning and function to each of us individually, to our relationships with others in groups, and to our societies as a whole.

• 4.3: Emotion Experience and Well-Being
Emotions don’t just feel good or bad; they also contribute crucially to people’s well-being and health. In general, experiencing positive emotions is good for us, whereas experiencing negative emotions is bad for us. However, recent research on emotions and well-being suggests this simple conclusion is incomplete and sometimes even wrong. Taking a closer look at this research, the present module presents a more complex picture of the relationship between emotion and well-being.

• 4.4: Emotional Intelligence
In this module, we review the construct of emotional intelligence by examining its underlying theoretical model, measurement tools, validity, and applications in real-world settings. We use empirical research from the past few decades to support and discuss competing definitions of emotional intelligence and possible future directions for the field.

• 4.5: Drive States
Our thoughts and behaviors are strongly influenced by affective experiences known as drive states. These drive states motivate us to fulfill goals that are beneficial to our survival and reproduction. This module provides an overview of key drive states, including information about their neurobiology and their psychological effects.

• 4.6: Motives and Goals
Your decisions and behaviors are often the result of a goal or motive you possess. This module provides an overview of the main theories and findings on goals and motivation. We address the origins, manifestations, and types of goals, and the various factors that influence motivation in goal pursuit. We further address goal conflict and, specifically, the exercise of self-control in protecting long-term goals from momentary temptations.

• 4.7: Knowledge Emotions: Feelings that Foster Learning, Exploring, and Reflecting
When people think of emotions they usually think of the obvious ones, such as happiness, fear, anger, and sadness. This module looks at the knowledge emotions, a family of emotional states that foster learning, exploring, and reflecting. The knowledge emotions thus don’t gear up the body like fear, anger, and happiness do, but they do gear up the mind—a critical task for humans, who must learn essentially everything that they know.

• 4.8: Culture and Emotion
How do people’s cultural ideas and practices shape their emotions (and other types of feelings)? In this module, we will discuss findings from studies comparing North American (United States, Canada) and East Asian (Chinese, Japanese, Korean) contexts. These studies reveal both cultural similarities and differences in various aspects of emotional life. Throughout, we will highlight the scientific and practical importance of these findings and conclude with recommendations for future research.
Chapter 4: Emotions and Motivation

By Eddie Harmon-Jones and Cindy Harmon-Jones
University of New South Wales

This module provides a brief overview of the neuroscience of emotion. It integrates findings from human and animal research to describe the brain networks and associated neurotransmitters involved in basic affective systems.

learning objectives

• Define affective neuroscience.
• Describe neuroscience techniques used to study emotions in humans and animals.
• Name five emotional systems and their associated neural structures and neurotransmitters.
• Give examples of exogenous chemicals (e.g., drugs) that influence affective systems, and discuss their effects.
• Discuss multiple affective functions of the amygdala and the nucleus accumbens.
• Name several specific human emotions, and discuss their relationship to the affective systems of nonhuman animals.

Affective Neuroscience: What is it?

Affective neuroscience examines how the brain creates emotional responses. Emotions are psychological phenomena that involve changes to the body (e.g., facial expression), changes in autonomic nervous system activity, feeling states (subjective responses), and urges to act in specific ways (motivations; Izard, 2010). Affective neuroscience aims to understand how matter (brain structures and chemicals) creates one of the most fascinating aspects of mind, the emotions. Affective neuroscience uses unbiased, observable measures that provide credible evidence to other sciences and laypersons on the importance of emotions. It also leads to biologically based treatments for affective disorders (e.g., depression).

The human brain and its responses, including emotions, are complex and flexible. In comparison, nonhuman animals possess simpler nervous systems and more basic emotional responses. Invasive neuroscience techniques, such as electrode implantation, lesioning, and hormone administration, can be more easily used in animals than in humans. Human neuroscience must rely primarily on noninvasive techniques such as electroencephalography (EEG) and functional magnetic resonance imaging (fMRI), and on studies of individuals with brain lesions caused by accident or disease. Thus, animal research provides useful models for understanding affective processes in humans. Affective circuits found in other species, particularly social mammals such as rats, dogs, and monkeys, function similarly to human affective networks, although nonhuman animals’ brains are more basic.

In humans, emotions and their associated neural systems have additional layers of complexity and flexibility. Compared to animals, humans experience a vast variety of nuanced and sometimes conflicting emotions. Humans also respond to these emotions in complex ways, such that conscious goals, values, and other cognitions influence behavior in addition to emotional responses. However, in this module we focus on the similarities between organisms, rather than the differences. We often use the term “organism” to refer to the individual who is experiencing an emotion or showing evidence of particular neural activations. An organism could be a rat, a monkey, or a human.

Across species, emotional responses are organized around the organism’s survival and reproductive needs. Emotions influence perception, cognition, and behavior to help organisms survive and thrive (Farb, Chapman, & Anderson, 2013). Networks of structures in the brain respond to different needs, with some overlap between different emotions.
Specific emotions are not located in a single structure of the brain. Instead, emotional responses involve networks of activation, with many parts of the brain activated during any emotional process. In fact, the brain circuits involved in emotional reactions include nearly the entire brain (Berridge & Kringelbach, 2013). Brain circuits located deep within the brain below the cerebral cortex are primarily responsible for generating basic emotions (Berridge & Kringelbach, 2013; Panksepp & Biven, 2012). In the past, research attention was focused on specific brain structures that will be reviewed here, but future research may find that additional areas of the brain are also important in these processes.

Basic Emotions

Desire: The neural systems of reward seeking

One of the most important affective neuronal systems relates to feelings of desire, or the appetite for rewards. Researchers refer to these appetitive processes using terms such as “wanting” (Berridge & Kringelbach, 2008), “seeking” (Panksepp & Biven, 2012), or “behavioural activation sensitivity” (Gray, 1987). When the appetitive system is aroused, the organism shows enthusiasm, interest, and curiosity. These neural circuits motivate the animal to move through its environment in search of rewards such as appetizing foods, attractive sex partners, and other pleasurable stimuli. When the appetitive system is underaroused, the organism appears depressed and helpless.

Much evidence for the structures involved in this system comes from animal research using direct brain stimulation. When an electrode is implanted in the lateral hypothalamus or in cortical or mesencephalic regions to which the hypothalamus is connected, animals will press a lever to deliver electrical stimulation, suggesting that they find the stimulation pleasurable. The regions in the desire system also include the amygdala, nucleus accumbens, and frontal cortex (Panksepp & Biven, 2012). The neurotransmitter dopamine, produced in the mesolimbic and mesocortical dopamine circuits, activates these regions. It creates a sense of excitement, meaningfulness, and anticipation. These structures are also sensitive to drugs such as cocaine and amphetamines, chemicals that have similar effects to dopamine (Panksepp & Biven, 2012).

Research in both humans and nonhuman animals shows that the left frontal cortex (compared to the right frontal cortex) is more active during appetitive emotions such as desire and interest. Researchers first noted that persons who had suffered damage to the left frontal cortex developed depression, whereas those with damage to the right frontal cortex developed mania (Goldstein, 1939). The relationship between left frontal activation and approach-related emotions has been confirmed in healthy individuals using EEG and fMRI (Berkman & Lieberman, 2010). For example, increased left frontal activation occurs in 2- to 3-day-old infants when sucrose is placed on their tongues (Fox & Davidson, 1986), and in hungry adults as they view pictures of desirable desserts (Gable & Harmon-Jones, 2008). In addition, greater left frontal activity in appetitive situations has been found to relate to dopamine (Wacker, Mueller, Pizzagalli, Hennig, & Stemmler, 2013).

“Liking”: The neural circuits of pleasure and enjoyment

Surprisingly, the amount of desire an individual feels toward a reward need not correspond to how much he or she likes that reward.
This is because the neural structures involved in the enjoyment of rewards are different from the structures involved in the desire for the rewards. “Liking” (e.g., enjoyment of a sweet liquid) can be measured in babies and nonhuman animals by measuring licking speed, tongue protrusions, and happy facial expressions, whereas “wanting” (desire) is shown by the willingness to work hard to obtain a reward (Berridge & Kringelbach, 2008). Liking has been distinguished from wanting in research on topics such as drug abuse. For example, drug addicts often desire drugs even when they know that the ones available will not provide pleasure (Stewart, de Wit, & Eikelboom, 1984).

Research on liking has focused on a small area within the nucleus accumbens and on the posterior half of the ventral pallidum. These brain regions are sensitive to opioids and endocannabinoids. Stimulation of other regions of the reward system increases wanting, but does not increase liking, and in some cases even decreases liking. The research on the distinction between desire and enjoyment contributes to the understanding of human addiction, particularly why individuals often continue to frantically pursue rewards such as cocaine, opiates, gambling, or sex, even when they no longer experience pleasure from obtaining these rewards due to habituation.

The experience of pleasure also involves the orbitofrontal cortex. Neurons in this region fire when monkeys taste, or merely see pictures of, desirable foods. In humans, this region is activated by pleasant stimuli including money, pleasant smells, and attractive faces (Gottfried, O’Doherty, & Dolan, 2002; O’Doherty, Deichmann, Critchley, & Dolan, 2002; O’Doherty, Kringelbach, Rolls, Hornak, & Andrews, 2001; O’Doherty, Winston, Critchley, Perrett, Burt, & Dolan, 2003).

Fear: The neural system of freezing and fleeing

Fear is an unpleasant emotion that motivates avoidance of potentially harmful situations. Slight stimulation of the fear-related areas in the brain causes animals to freeze, whereas intense stimulation causes them to flee. The fear circuit extends from the central amygdala to the periaqueductal gray in the midbrain. These structures are sensitive to glutamate, corticotropin-releasing factor, adrenocorticotropic hormone, cholecystokinin, and several different neuropeptides. Benzodiazepines and other tranquilizers inhibit activation in these areas (Panksepp & Biven, 2012).

The role of the amygdala in fear responses has been extensively studied. Perhaps because fear is so important to survival, two pathways send signals to the amygdala from the sensory organs. When an individual sees a snake, for example, the sensory information travels from the eye to the thalamus and then to the visual cortex. The visual cortex sends the information on to the amygdala, provoking a fear response. However, the thalamus also quickly sends the information straight to the amygdala, so that the organism can react before consciously perceiving the snake (LeDoux, Farb, & Ruggiero, 1990). The pathway from the thalamus to the amygdala is fast but less accurate than the slower pathway from the visual cortex. Damage to the amygdala or areas of the ventral hippocampus interferes with fear conditioning in both humans and nonhuman animals (LeDoux, 1996).

Rage: The circuits of anger and attack

Anger or rage is an arousing, unpleasant emotion that motivates organisms to approach and attack (Harmon-Jones, Harmon-Jones, & Price, 2013).
Anger can be evoked through goal frustration, physical pain, or physical restraint. In territorial animals, anger is provoked by a stranger entering the organism’s home territory (Blanchard & Blanchard, 2003). The neural networks for anger and fear are near one another, but separate (Panksepp & Biven, 2012). They extend from the medial amygdala, through specific parts of the hypothalamus, and into the periaqueductal gray of the midbrain. The anger circuits are linked to the appetitive circuits, such that lack of an anticipated reward can provoke rage. In addition, when humans are angered, they show increased left frontal cortical activation, supporting the idea that anger is an approach-related emotion (Harmon-Jones et al., 2013). The neurotransmitters involved in rage are not yet well understood, but Substance P may play an important role (Panksepp & Biven, 2012). Other neurochemicals that may be involved in anger include testosterone (Peterson & Harmon-Jones, 2012) and arginine-vasopressin (Heinrichs, von Dawans, & Domes, 2009). Several chemicals inhibit the rage system, including opioids and high doses of antipsychotics, such as chlorpromazine (Panksepp & Biven, 2012).

Love: The neural systems of care and attachment

For social animals such as humans, attachment to other members of the same species produces the positive emotions of attachment: love, warm feelings, and affection. The emotions that motivate nurturing behavior (e.g., maternal care) are distinguishable from those that motivate staying close to an attachment figure in order to receive care and protection (e.g., infant attachment). Important regions for maternal nurturing include the dorsal preoptic area (Numan & Insel, 2003) and the bed nucleus of the stria terminalis (Panksepp, 1998). These regions overlap with the areas involved in sexual desire, and are sensitive to some of the same neurotransmitters, including oxytocin, arginine-vasopressin, and endogenous opioids (endorphins and enkephalins).

Grief: The neural networks of loneliness and panic

The neural networks involved in infant attachment are also sensitive to separation. These regions produce the painful emotions of grief, panic, and loneliness. When infant humans or other infant mammals are separated from their mothers, they produce distress vocalizations, or crying. The attachment circuits are those that cause organisms to produce distress vocalizations when electrically stimulated. The attachment system begins in the midbrain periaqueductal gray, very close to the area that produces physical pain responses, suggesting that it may have originated from the pain circuits (Panksepp, 1998). Separation distress can also be evoked by stimulating the dorsomedial thalamus, ventral septum, dorsal preoptic region, and areas in the bed nucleus of the stria terminalis (near sexual and maternal circuits; Panksepp, Normansell, Herman, Bishop, & Crepeau, 1988). These regions are sensitive to endogenous opiates, oxytocin, and prolactin. All of these neurotransmitters prevent separation distress. Opiate drugs such as morphine and heroin, as well as nicotine, artificially produce feelings of pleasure and gratification, similar to those normally produced during positive social interactions. This may explain why these drugs are addictive. Panic attacks appear to be an intense form of separation distress triggered by the attachment system, and panic can be effectively relieved by opiates. Testosterone also reduces separation distress, perhaps by reducing attachment needs.
Consistent with this, panic attacks are more common in women than in men.

Plasticity: Experiences can alter the brain

The responses of specific neural regions may be modified by experience. For example, the front shell of the nucleus accumbens is generally involved in appetitive behaviors, such as eating, and the back shell is generally involved in fearful defensive behaviors (Reynolds & Berridge, 2001, 2002). Research using human neuroimaging has also revealed this front–back distinction in the functions of the nucleus accumbens (Seymour, Daw, Dayan, Singer, & Dolan, 2007). However, when rats are exposed to stressful environments, their fear-generating regions expand toward the front, filling almost 90% of the nucleus accumbens shell. On the other hand, when rats are exposed to preferred home environments, their fear-generating regions shrink and the appetitive regions expand toward the back, filling approximately 90% of the shell (Reynolds & Berridge, 2008).

Brain structures have multiple functions

Although much affective neuroscience research has emphasized whole structures, such as the amygdala and nucleus accumbens, it is important to note that many of these structures are more accurately referred to as complexes. They include distinct groups of nuclei that perform different tasks. At present, human neuroimaging techniques such as fMRI are unable to examine the activity of individual nuclei in the way that invasive animal neuroscience can. For instance, the amygdala of the nonhuman primate can be divided into 13 nuclei and cortical areas (Freese & Amaral, 2009). These regions of the amygdala perform different functions. The central nucleus sends outputs involving brainstem areas that result in innate emotional expressions and associated physiological responses. The basal nucleus is connected with striatal areas that are involved with actions such as running toward safety. Furthermore, it is not possible to make one-to-one maps of emotions onto brain regions. For example, extensive research has examined the involvement of the amygdala in fear, but research has also shown that the amygdala is active during uncertainty (Whalen, 1998) as well as positive emotions (Anderson et al., 2003; Schulkin, 1990).

Conclusion

Research in affective neuroscience has contributed to knowledge regarding emotional, motivational, and behavioral processes. The study of the basic emotional systems of nonhuman animals provides information about the organization and development of more complex human emotions. Although much still remains to be discovered, current findings in affective neuroscience have already influenced our understanding of drug use and abuse, psychological disorders such as panic disorder, and complex human emotions such as desire and enjoyment, grief and love.

Outside Resources

Video: A 1-hour interview with Jaak Panksepp, the father of affective neuroscience
Video: A 15-minute interview with Kent Berridge on pleasure in the brain
Video: A 5-minute interview with Joseph LeDoux on the amygdala and fear
Web: Brain anatomy interactive 3D model
http://www.pbs.org/wnet/brain/3d/index.html

Discussion Questions

1. The neural circuits of “liking” are different from the circuits of “wanting.” How might this relate to the problems people encounter when they diet, fight addictions, or try to change other habits?
2. The structures and neurotransmitters that produce pleasure during social contact also produce panic and grief when organisms are deprived of social contact. How does this contribute to an understanding of love?
3. Research shows that stressful environments increase the area of the nucleus accumbens that is sensitive to fear, whereas preferred environments increase the area that is sensitive to rewards. How might these changes be adaptive?

Vocabulary

Affect
An emotional process; includes moods, subjective feelings, and discrete emotions.

Amygdala
Two almond-shaped structures located in the medial temporal lobes of the brain.

Hypothalamus
A brain structure located below the thalamus and above the brain stem.

Neuroscience
The study of the nervous system.

Nucleus accumbens
A region of the basal forebrain located in front of the preoptic region.

Orbital frontal cortex
A region of the frontal lobes of the brain above the eye sockets.

Periaqueductal gray
The gray matter in the midbrain near the cerebral aqueduct.

Preoptic region
A part of the anterior hypothalamus.

Stria terminalis
A band of fibers that runs along the top surface of the thalamus.

Thalamus
A structure in the midline of the brain located between the midbrain and the cerebral cortex.

Visual cortex
The part of the brain that processes visual information, located in the back of the brain.
By Hyisung Hwang and David Matsumoto
San Francisco State University

Emotions play a crucial role in our lives because they have important functions. This module describes those functions, dividing the discussion into three areas: the intrapersonal, the interpersonal, and the social and cultural functions of emotions. The section on the intrapersonal functions of emotion describes the roles that emotions play within each of us individually; the section on the interpersonal functions of emotion describes the meanings of emotions to our relationships with others; and the section on the social and cultural functions of emotion describes the roles and meanings that emotions have to the maintenance and effective functioning of our societies and cultures at large. All in all, we will see that emotions are a crucially important aspect of our psychological composition, having meaning and function to each of us individually, to our relationships with others in groups, and to our societies as a whole.

learning objectives

• Gain an appreciation of the importance of emotion in human life.
• Understand the functions and meanings of emotion in three areas of life: the intrapersonal, interpersonal, and social–cultural.
• Give examples of the role and function of emotion in each of the three areas described.

Introduction

It is impossible to imagine life without emotion. We treasure our feelings—the joy at a ball game, the pleasure of the touch of a loved one, or the fun with friends on a night out. Even negative emotions are important, such as the sadness when a loved one dies, the anger when violated, the fear that overcomes us in a scary or unknown situation, or the guilt or shame toward others when our sins are made public. Emotions color life experiences and give those experiences meaning and flavor. In fact, emotions play many important roles in people’s lives and have been the topic of scientific inquiry in psychology for well over a century (Cannon, 1927; Darwin, 1872; James, 1890).

This module explores why we have emotions and why they are important. Doing so requires us to understand the function of emotions, and this module does so below by dividing the discussion into three sections. The first concerns the intrapersonal functions of emotion, which refer to the role that emotions play within each of us individually. The second concerns the interpersonal functions of emotion, which refer to the role emotions play between individuals within a group. The third concerns the social and cultural functions of emotion, which refer to the role that emotions play in the maintenance of social order within a society. All in all, we will see that emotions inform us of who we are, what our relationships with others are like, and how to behave in social interactions. Emotions give meaning to events; without emotions, those events would be mere facts. Emotions help coordinate interpersonal relationships. And emotions play an important role in the cultural functioning of keeping human societies together.

Intrapersonal Functions of Emotion

Emotions Help us Act Quickly with Minimal Conscious Awareness

Emotions are rapid information-processing systems that help us act with minimal thinking (Tooby & Cosmides, 2008). Problems associated with birth, battle, death, and seduction have occurred throughout evolutionary history, and emotions evolved to aid humans in adapting to those problems rapidly and with minimal conscious cognitive intervention.
If we did not have emotions, we could not make rapid decisions concerning whether to attack, defend, flee, care for others, reject food, or approach something useful, all of which were functionally adaptive in our evolutionary history and helped us to survive. For instance, drinking spoiled milk or eating rotten eggs has negative consequences for our welfare. The emotion of disgust, however, helps us immediately take action by not ingesting them in the first place or by vomiting them out. This response is adaptive because it aids, ultimately, in our survival and allows us to act immediately without much thinking. In some instances, taking the time to sit and rationally think about what to do, calculating cost–benefit ratios in one’s mind, is a luxury that might cost one one’s life. Emotions evolved so that we can act without that depth of thinking.

Emotions Prepare the Body for Immediate Action

Emotions prepare us for behavior. When triggered, emotions orchestrate systems such as perception, attention, inference, learning, memory, goal choice, motivational priorities, physiological reactions, motor behaviors, and behavioral decision making (Cosmides & Tooby, 2000; Tooby & Cosmides, 2008). Emotions simultaneously activate certain systems and deactivate others in order to prevent the chaos of competing systems operating at the same time, allowing for coordinated responses to environmental stimuli (Levenson, 1999). For instance, when we are afraid, our bodies shut down temporarily unneeded digestive processes, resulting in saliva reduction (a dry mouth); blood flows disproportionately to the lower half of the body; the visual field expands; and air is breathed in, all preparing the body to flee. Emotions initiate a system of components that includes subjective experience, expressive behaviors, physiological reactions, action tendencies, and cognition, all for the purposes of specific actions; the term “emotion” is, in reality, a metaphor for these reactions.

One common misunderstanding many people have when thinking about emotions, however, is the belief that emotions must always directly produce action. This is not true. Emotion certainly prepares the body for action, but whether people actually engage in action depends on many factors, such as the context within which the emotion has occurred, the target of the emotion, the perceived consequences of one’s actions, previous experiences, and so forth (Baumeister, Vohs, DeWall, & Zhang, 2007; Matsumoto & Wilson, 2008). Thus, emotions are just one of many determinants of behavior, albeit an important one.

Emotions Influence Thoughts

Emotions are also connected to thoughts and memories. Memories are not just facts that are encoded in our brains; they are colored with the emotions felt at the times those facts occurred (Wang & Ross, 2007). Thus, emotions serve as the neural glue that connects those disparate facts in our minds. That is why it is easier to remember happy thoughts when happy, and angry times when angry. Emotions also serve as the affective basis of many attitudes, values, and beliefs that we have about the world and the people around us; without emotions, those attitudes, values, and beliefs would be just statements without meaning, for it is emotions that give those statements meaning. Emotions influence our thinking processes, sometimes in constructive ways, sometimes not. It is difficult to think critically and clearly when we feel intense emotions, but easier when we are not overwhelmed with emotions (Matsumoto, Hirayama, & LeRoux, 2006).
Emotions Motivate Future Behaviors

Because emotions prepare our bodies for immediate action, influence thoughts, and can be felt, they are important motivators of future behavior. Many of us strive to experience the feelings of satisfaction, joy, pride, or triumph in our accomplishments and achievements. At the same time, we also work very hard to avoid strong negative feelings; for example, once we have felt the emotion of disgust when drinking the spoiled milk, we generally work very hard to avoid having those feelings again (e.g., checking the expiration date on the label before buying the milk, smelling the milk before drinking it, watching if the milk curdles in one’s coffee before drinking it). Emotions, therefore, not only influence immediate actions but also serve as an important motivational basis for future behaviors.

Interpersonal Functions of Emotion

Emotions are expressed both verbally through words and nonverbally through facial expressions, voices, gestures, body postures, and movements. We are constantly expressing emotions when interacting with others, and others can reliably judge those emotional expressions (Elfenbein & Ambady, 2002; Matsumoto, 2001); thus, emotions have signal value to others and influence others and our social interactions. Emotions and their expressions communicate information to others about our feelings, intentions, relationship with the target of the emotions, and the environment. Because emotions have this communicative signal value, they help solve social problems by evoking responses from others, by signaling the nature of interpersonal relationships, and by providing incentives for desired social behavior (Keltner, 2003).

Emotional Expressions Facilitate Specific Behaviors in Perceivers

Because facial expressions of emotion are universal social signals, they contain meaning not only about the expressor’s psychological state but also about that person’s intent and subsequent behavior. This information affects what the perceiver is likely to do. People observing fearful faces, for instance, are more likely to produce approach-related behaviors, whereas people who observe angry faces are more likely to produce avoidance-related behaviors (Marsh, Ambady, & Kleck, 2005). Even subliminal presentation of smiles produces increases in how much beverage people pour and consume and how much they are willing to pay for it; presentation of angry faces decreases these behaviors (Winkielman, Berridge, & Wilbarger, 2005). Also, emotional displays evoke specific, complementary emotional responses from observers; for example, anger evokes fear in others (Dimberg & Ohman, 1996; Esteves, Dimberg, & Ohman, 1994), whereas distress evokes sympathy and aid (Eisenberg et al., 1989).

Emotional Expressions Signal the Nature of Interpersonal Relationships

Emotional expressions provide information about the nature of the relationships among interactants. Some of the most important and provocative findings in this area come from studies involving married couples (Gottman & Levenson, 1992; Gottman, Levenson, & Woodin, 2001). In this research, married couples visited a laboratory after having not seen each other for 24 hours, and then engaged in intimate conversations about daily events or issues of conflict. Discrete expressions of contempt, especially by the men, and disgust, especially by the women, predicted later marital dissatisfaction and even divorce.
Emotional Expressions Provide Incentives for Desired Social Behavior

Facial expressions of emotion are important regulators of social interaction. In the developmental literature, this concept has been investigated under the concept of social referencing (Klinnert, Campos, & Sorce, 1983); that is, the process whereby infants seek out information from others to clarify a situation and then use that information to act. To date, the strongest demonstration of social referencing comes from work on the visual cliff. In the first study to investigate this concept, Campos and colleagues (Sorce, Emde, Campos, & Klinnert, 1985) placed mothers on the far end of the “cliff” from the infant. Mothers first smiled to the infants and placed a toy on top of the safety glass to attract them; infants invariably began crawling to their mothers. When the infants were in the center of the table, however, the mother then posed an expression of fear, sadness, anger, interest, or joy. The results were clearly different for the different faces: no infant crossed the table when the mother showed fear; only 6% did when the mother posed anger; 33% crossed when the mother posed sadness; and approximately 75% of the infants crossed when the mother posed joy or interest.

Other studies provide similar support for facial expressions as regulators of social interaction. In one study (Bradshaw, 1986), experimenters posed facial expressions of neutral, anger, or disgust toward babies as they moved toward an object and measured the amount of inhibition the babies showed in touching the object. The results for 10- and 15-month-olds were the same: anger produced the greatest inhibition, followed by disgust, with neutral the least. This study was later replicated (Hertenstein & Campos, 2004) using joy and disgust expressions, altering the method so that the infants were not allowed to touch the toy (compared with a distractor object) until one hour after exposure to the expression. At 14 months of age, significantly more infants touched the toy when they saw joyful expressions, but fewer touched the toy when the infants saw disgust.

Social and Cultural Functions of Emotion

If you stop to think about the many things we take for granted in our daily lives, we cannot help but come to the conclusion that modern human life is a colorful tapestry of many groups and individual lives woven together in a complex yet functional way. For example, when you’re hungry, you might go to the local grocery store and buy some food. Ever stop to think about how you’re able to do that? You might buy a banana that was grown in southeast Asia by farmers who planted the tree, cared for it, and picked the fruit. They probably handed that fruit off to a distribution chain that allowed multiple people somewhere to use tools such as cranes, trucks, cargo bins, ships, or airplanes (that were also created by multiple people somewhere) to bring that banana to your store. The store had people to care for that banana until you came and got it and to barter with you for it (with your money). You may have gotten to the store riding a vehicle that was produced somewhere else in the world by others, and you were probably wearing clothes produced by some other people somewhere else.

Thus, human social life is complex. Individuals are members of multiple groups, with multiple social roles, norms, and expectations, and people move rapidly in and out of the multiple groups of which they are members.
Moreover, much of human social life is unique because it revolves around cities, where many people of disparate backgrounds come together. This creates the enormous potential for social chaos, which can easily occur if individuals are not coordinated well and relationships not organized systematically.

One of the important functions of culture is to provide this necessary coordination and organization. Doing so allows individuals and groups to negotiate the social complexity of human social life, thereby maintaining social order and preventing social chaos. Culture does this by providing its members with a meaning and information system, shared by the group and transmitted across generations, that allows the group to meet basic needs of survival, pursue happiness and well-being, and derive meaning from life (Matsumoto & Juang, 2013). Culture is what allowed the banana from southeast Asia to appear on your table.

Cultural transmission of the meaning and information system to its members is, therefore, a crucial aspect of culture. One of the ways this transmission occurs is through the development of worldviews (including attitudes, values, beliefs, and norms) related to emotions (Matsumoto & Hwang, 2013; Matsumoto et al., 2008). Worldviews related to emotions provide guidelines for desirable emotions that facilitate norms for regulating individual behaviors and interpersonal relationships. Our cultural backgrounds tell us which emotions are ideal to have, and which are not (Tsai, Knutson, & Fung, 2006). The cultural transmission of information related to emotions occurs in many ways, from childrearers to children, as well as from the cultural products available in our world, such as books, movies, ads, and the like (Schönpflug, 2009; Tsai, Louie, Chen, & Uchida, 2007).

Cultures also inform us about what to do with our emotions—that is, how to manage or modify them—when we experience them. One of the ways in which this is done is through cultural display rules (Friesen, 1972). These are rules, learned early in life, that specify the management and modification of our emotional expressions according to social circumstances. Thus, we learn that “big boys don’t cry” or to laugh at the boss’s jokes even though they’re not funny. By affecting how individuals express their emotions, culture also influences how people experience them.

Because one of the major functions of culture is to maintain social order so as to ensure group efficiency and thus survival, cultures create worldviews, rules, guidelines, and norms concerning emotions; after all, emotions have important intra- and interpersonal functions, as described above, and are important motivators of behavior. Norms concerning emotion and its regulation in all cultures serve the purpose of maintaining social order. Cultural worldviews and norms help us manage and modify our emotional reactions (and thus behaviors) by helping us to have certain kinds of emotional experiences in the first place and by managing our reactions and subsequent behaviors once we have them. By doing so, our culturally moderated emotions can help us engage in socially appropriate behaviors, as defined by our cultures, and thus reduce social complexity and increase social order, avoiding social chaos. All of this allows us to live relatively harmonious and constructive lives in groups.
If cultural worldviews and norms about emotions did not exist, people would just run amok, having all kinds of emotional experiences, expressing their emotions, and then behaving in all sorts of unpredictable and potentially harmful ways. Were emotions not regulated in culturally defined ways for the common, social good, it would be very difficult for groups and societies to function effectively, and even for humans to survive as a species. Thus, emotions play a critical role in the successful functioning of any society and culture.

Outside Resources

Alberta, G. M., Rieckmann, T. R., & Rush, J. D. (2000). Issues and recommendations for teaching an ethnic/culture-based course. Teaching of Psychology, 27, 102–107. doi:10.1207/S15328023TOP2702_05
http://top.sagepub.com/content/27/2/102.short

CrashCourse (2014, August 4). Feeling all the feels: Crash course psychology #25. [Video file]. Retrieved from:

Hughes, A. (2011). Exercises and demonstrations to promote student engagement in motivation and emotion courses. In R. Miller, E. Balcetis, S. Burns, D. Daniel, B. Saville, & W. Woody (Eds.), Promoting student engagement: Volume 2: Activities, exercises and demonstrations for psychology courses (pp. 79–82). Washington, DC: Society for the Teaching of Psychology, American Psychological Association.
http://teachpsych.org/ebooks/pse2011/vol2/index.php

Johnston, E., & Olson, L. (2015). The feeling brain: The biology and psychology of emotions. New York, NY: W.W. Norton & Company.
http://books.wwnorton.com/books/The-Feeling-Brain/

NPR News: Science Of Sadness And Joy: 'Inside Out' Gets Childhood Emotions Right
www.npr.org/sections/health-s...emotions-right

Online Psychology Laboratory: Motivation and Emotion resources
opl.apa.org/Resources.aspx#Motivation

Web: See how well you can read other people’s facial expressions of emotion
http://www.humintell.com/free-demos/

Discussion Questions

1. When emotions occur, why do they simultaneously activate certain physiological and psychological systems in the body and deactivate others?
2. Why is it difficult for people to act rationally and think happy thoughts when they are angry? Conversely, why is it difficult to remember sad memories or have sad thoughts when people are happy?
3. You’re walking down a deserted street when you come across a stranger who looks scared. What would you say? What would you do? Why?
4. You’re walking down a deserted street when you come across a stranger who looks angry. What would you say? What would you do? Why?
5. Think about the messages children receive from their environment (such as from parents, mass media, the Internet, Hollywood movies, billboards, and storybooks). In what ways do these messages influence the kinds of emotions that children should and should not feel?

Vocabulary

Cultural display rules
These are rules that are learned early in life that specify the management and modification of emotional expressions according to social circumstances. Cultural display rules can work in a number of different ways. For example, they can require individuals to express emotions “as is” (i.e., as they feel them), to exaggerate their expressions to show more than what is actually felt, to tone down their expressions to show less than what is actually felt, to conceal their feelings by expressing something else, or to show nothing at all.
Interpersonal
This refers to the relationship or interaction between two or more individuals in a group. Thus, the interpersonal functions of emotion refer to the effects of one’s emotion on others, or to the relationship between oneself and others.

Intrapersonal
This refers to what occurs within oneself. Thus, the intrapersonal functions of emotion refer to the effects of emotion on individuals that occur physically inside their bodies and psychologically inside their minds.

Social and cultural
Society refers to a system of relationships between individuals and groups of individuals; culture refers to the meaning and information afforded to that system that is transmitted across generations. Thus, the social and cultural functions of emotion refer to the effects that emotions have on the functioning and maintenance of societies and cultures.

Social referencing
This refers to the process whereby individuals look for information from others to clarify a situation, and then use that information to act. Thus, individuals will often use the emotional expressions of others as a source of information to make decisions about their own behavior.
By Brett Ford and Iris B. Mauss
University of California, Berkeley

Emotions don’t just feel good or bad; they also contribute crucially to people’s well-being and health. In general, experiencing positive emotions is good for us, whereas experiencing negative emotions is bad for us. However, recent research on emotions and well-being suggests this simple conclusion is incomplete and sometimes even wrong. Taking a closer look at this research, the present module presents a more complex picture of the relationship between emotion and well-being. At least three aspects of the emotional experience appear to affect how a given emotion is linked with well-being: the intensity of the emotion experienced, the fluctuation of the emotion experienced, and the context in which the emotion is experienced. While it is generally good to experience more positive emotion and less negative emotion, this is not always the best guide to the good life.

learning objectives

• Describe the general pattern of associations between emotion experience and well-being.
• Identify at least three aspects of emotion experience beyond positivity and negativity of the emotion that affect the link between emotion experience and well-being.

How we feel adds much of the flavor to life’s highest—and lowest—moments. Can you think of an important moment in your life that didn’t involve strong feelings? In fact, it might be hard to recall any times when you had no feeling at all. Given how saturated human life is with feelings, and given how profoundly feelings affect us, it is not surprising that much theorizing and research has been devoted to uncovering how we can optimize our feelings, or “emotion experiences,” as they are referred to in psychological research.

Feelings contribute to well-being

So, which emotions are the “best” ones to feel? Take a moment to think about how you might answer this question. At first glance, the answer might seem obvious. Of course, we should experience as much positive emotion and as little negative emotion as possible! Why? Because it is pleasant to experience positive emotions and it is unpleasant to experience negative emotions (Russell & Barrett, 1999). The conclusion that positive feelings are good and negative feelings are bad might seem so obvious as not to even warrant the question, much less bona fide psychological research. In fact, the very labels of “positive” and “negative” imply the answer to this question. However, for the purposes of this module, it may be helpful to think of “positive” and “negative” as descriptive terms used to discuss two different types of experiences, rather than a true value judgment. Thus, whether positive or negative emotions are good or bad for us is an empirical question.

As it turns out, this empirical question has been on the minds of theorists and researchers for many years. Such psychologists as Alice Isen, Charles Carver, Michael Scheier, and, more recently, Barbara Fredrickson, Dacher Keltner, Sonja Lyubomirsky, and others began asking whether the effects of feelings could go beyond the obvious momentary pleasure or displeasure. In other words, can emotions do more for us than simply make us feel good or bad? This is not necessarily a new question; variants of it have appeared in the texts of thinkers such as Charles Darwin (1872) and Aristotle (1999). However, modern psychological research has provided empirical evidence that feelings are not just inconsequential byproducts.
Rather, each emotion experience, however fleeting, has effects on cognition, behavior, and the people around us. For example, feeling happy is not only pleasant, but is also useful to feel when in social situations because it helps us be friendly and collaborative, thus promoting our positive relationships. Over time, the argument goes, these effects add up to have tangible effects on people’s well-being (good mental and physical health).

A variety of research has been inspired by the notion that our emotions are involved in, and maybe even causally contribute to, our well-being. This research has shown that people who experience more frequent positive emotions and less frequent negative emotions have higher well-being (e.g., Fredrickson, 1998; Lyubomirsky, King, & Diener, 2005), including increased life satisfaction (Diener, Sandvik, & Pavot, 1991), increased physical health (Tugade, Fredrickson, & Barrett, 2004; Veenhoven, 2008), greater resilience to stress (Folkman & Moskowitz, 2000; Tugade & Fredrickson, 2004), better social connection with others (Fredrickson, 1998), and even longer lives (Veenhoven, 2008). Notably, the effect of positive emotion on longevity is about as powerful as the effect of smoking! Perhaps most importantly, some research directly supports that emotional experiences cause these various outcomes rather than being just a consequence of them (Fredrickson, Cohn, Coffey, Pek, & Finkel, 2008; Lyubomirsky et al., 2005).

At this point, you might be tempted to conclude that you should always strive to experience as much positive emotion and as little negative emotion as possible. However, recent research suggests that this conclusion may be premature. This is because it neglects three central aspects of the emotion experience. First, it neglects the intensity of the emotion: Positive and negative emotions might not have the same effect on well-being at all intensities. Second, it neglects how emotions fluctuate over time: Stable emotion experiences might have quite different effects from experiences that change a lot. Third, it neglects the context in which the emotion is experienced: The context in which we experience an emotion might profoundly affect whether the emotion is good or bad for us. So, to address the question “Which emotions should we feel?” we must answer, “It depends!” We next consider each of the three aspects of feelings, and how they influence the link between feelings and well-being.

The intensity of the emotion matters

Experiencing more frequent positive emotions is generally beneficial. But does this mean that we should strive to feel positive emotion as intensely as possible? Recent research suggests that this unqualified conclusion might be wrong. In fact, experiencing very high levels of positive emotion may be harmful (Gruber, 2011; Oishi, Diener, & Lucas, 2007). For instance, experiencing very high levels of positive emotion makes individuals more likely to engage in risky behaviors, such as binge eating and drug use (Cyders & Smith, 2008; Martin et al., 2002). Furthermore, intense positive emotion is associated with the experience of mania (Gruber et al., 2009; Johnson, 2005). It appears that the experience of positive emotions follows an inverted U-shaped curve in relation to well-being: more positive emotion is linked with increased well-being, but only up to a point, after which even more positive emotion is linked with decreased well-being (Grant & Schwartz, 2011).
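To make the shape of this claim concrete, the inverted U can be sketched as a simple quadratic. This is only an illustrative functional form (the coefficients $a$, $b$, and $c$ are hypothetical placeholders), not a model estimated in the studies cited here:

$$W(P) \approx a + bP - cP^{2}, \qquad b, c > 0,$$

where $W$ stands for well-being and $P$ for the level of positive emotion. At low levels of $P$ the linear term dominates, so well-being rises with positive emotion; the curve peaks at $P^{*} = b/(2c)$; beyond that point the quadratic term dominates, and additional positive emotion predicts lower well-being.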
These empirical findings underscore the sentiment put forth long ago by the philosopher Aristotle: Moderation is key to leading a good life (1999). Too much positive emotion may pose a problem for well-being. Might too little negative emotion similarly be cause for concern? Although there is limited empirical research on this subject, initial research supports this idea. For example, people who aim not to feel negative emotion are at risk for worse well-being and adaptive functioning, including lower life satisfaction, lower social support, worse college grades, and feelings of worse physical health (Tamir & Ford, 2012a). Similarly, feeling too little embarrassment in response to a social faux pas may damage someone’s social connections if they aren’t motivated by their embarrassment to make amends (Keltner & Buswell, 1997). Low levels of negative emotion also seem to be involved in some forms of psychopathology. For instance, blunted sadness in response to a sad situation is a characteristic of major depressive disorder (Rottenberg, Gross, & Gotlib, 2005), and feeling too little fear is a hallmark of psychopathy (Marsh et al., 2008; Patrick, 1994).

In sum, this first section suggests that the conclusion “Of course we should experience as much positive emotion and as little negative emotion as possible” is sometimes wrong. As it turns out, there can be too much of a good thing and too little of a bad thing.

The fluctuation of the emotion matters

Emotions naturally vary—or fluctuate—over time (Davidson, 1998). We probably all know someone whose emotions seem to fly everywhere—one minute they’re ecstatic, the next they’re upset. We might also know a person who is pretty even-keeled, moderately happy, with only modest fluctuations across time. When looking only at average emotion experience, say across a month, both of these people might appear identical: moderately happy. However, underlying these identical averages are two very different patterns of fluctuation across time. Might these emotion fluctuations across time—beyond average intensity—have implications for well-being?

Overall, the available research suggests that how much emotions fluctuate does indeed matter. In general, greater fluctuations are associated with worse well-being. For example, higher fluctuation of positive emotions—measured either within a single day or across two weeks—was linked with lower well-being and greater depression (Gruber, Kogan, Quoidbach, & Mauss, 2013). Fluctuation in negative emotions, in turn, has been linked with increased depressive symptoms (Peeters, Berkhof, Delespaul, Rottenberg, & Nicolson, 2003), borderline personality disorder (Trull et al., 2008), and neuroticism (Eid & Diener, 1999). These associations tend to hold even when controlling for average levels of positive or negative emotion, which means that beyond the overall intensity of positive or negative emotion, the fluctuation of one’s emotions across time is associated with well-being. While it is not entirely clear why fluctuations are linked to worse well-being, one explanation is that strong fluctuations are indicative of emotional instability (Kuppens, Oravecz, & Tuerlinckx, 2010).
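As a minimal sketch of the "identical averages, different fluctuations" idea above, the short Python example below contrasts two hypothetical weeks of daily happiness ratings. The numbers are invented for illustration (not data from the studies cited), and the standard deviation is used as one common way to summarize fluctuation:

```python
# Two hypothetical people rate their happiness (1-10) once a day for a week.
# The ratings are made up for illustration; they are not data from the cited studies.
from statistics import mean, stdev

even_keeled = [6, 5, 6, 6, 5, 6, 6]   # modest day-to-day ups and downs
volatile    = [9, 2, 8, 3, 9, 2, 7]   # large day-to-day swings

for label, ratings in [("even-keeled", even_keeled), ("volatile", volatile)]:
    # Identical averages (~5.7) but very different fluctuation (SD ~0.5 vs. ~3.3)
    print(f"{label:12s} mean = {mean(ratings):.1f}, SD = {stdev(ratings):.1f}")
```

Both people look identical on average, yet research such as Gruber et al. (2013) suggests it is the second, high-fluctuation pattern that tends to go with lower well-being.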
Of course, this should not be taken to mean that we should rigidly feel the exact same way every minute of every day, regardless of context. After all, psychological flexibility—or the ability to adapt to changing situational demands and experience emotions accordingly—has generally demonstrated beneficial links with well-being (Bonanno, Papa, Lalande, Westphal, & Coifman, 2004; Kashdan & Rottenberg, 2010). The question remains, however: What exact amount of emotional fluctuation constitutes unhealthy instability, and what amount constitutes healthy flexibility?

Again, then, we must qualify the conclusion that it is always better to experience more positive emotions and less negative emotions. The degree to which emotions fluctuate across time plays an important role. Overall, relative stability (but not rigidity) in emotion experience appears to be optimal for well-being.

The context of the emotion experience matters

This module has already discussed two features of emotion experiences that affect how they relate to well-being: the intensity of the emotion and the fluctuation of the emotion over time. However, neither of these features takes into account the context in which the emotion is experienced. At least three different contexts may critically affect the links between emotion and well-being: (1) the external environment in which the emotion is being experienced, (2) the other emotional responses (e.g., physiology, facial behavior) that are currently activated, and (3) the other emotions that are currently being experienced.

The external environment

Emotions don’t occur within a vacuum. Instead, they are usually elicited by and experienced within specific situations that come in many shapes and sizes—from birthday parties to funerals, job interviews to mundane movie nights. The situation in which an emotion is experienced has strong implications for whether a given emotion is the “best” emotion to feel. Take happiness, for example. Feeling happiness at a birthday party may be a great idea. However, having the exact same experience of happiness at a funeral would likely not bode well for your well-being.

When considering how the environment influences the link between emotion and well-being, it is important to understand that each emotion has its own function. For example, although fear is a negative emotion, fear helps us notice and avoid threats to our safety (Öhman & Mineka, 2001), and may thus be the “best” emotion to feel in dangerous situations. Happiness can help people cooperate with others, and may thus be the best emotion to feel when we need to collaborate (e.g., Van Kleef, van Dijk, Steinel, & van Beest, 2008). Anger can energize people to compete or fight with others, and may thus be advantageous to experience in confrontations (e.g., Tamir & Ford, 2012b; Van Kleef et al., 2008). It might be disadvantageous to experience happiness (a positive emotion) when we need to fight with someone; in this situation, it might be better to experience anger (a negative emotion). This suggests that emotions’ implications for well-being are not determined only by whether they are positive or negative but also by whether they are well-matched to their context.

In support of this general idea, people who experience emotions that fit the context at hand are more likely to recover from depression and trauma (Bonanno et al., 2004; Rottenberg, Kasch, Gross, & Gotlib, 2002).
Research has also found that participants who want to feel emotions that match the context at hand (e.g., anger when confronting someone)—even if that emotion is negative—are more likely to experience greater well-being (Tamir & Ford, 2012a). Conversely, people who pursue emotions without regard to context—even if those emotions are positive, like happiness—are more likely to experience lower subjective well-being, more depression, greater loneliness, and even worse grades (Ford & Tamir, 2012; Mauss et al., 2012; Mauss, Tamir, Anderson, & Savino, 2011; Tamir & Ford, 2012a). In sum, this research demonstrates that regardless of whether an emotion is positive or negative, the context in which it is experienced critically influences whether the emotion helps or hinders well-being. Other emotional responses The subjective experience of an emotion—what an emotion feels like—is only one aspect of an emotion. Other aspects include behaviors, facial expressions, and physiological activation (Levenson, 1992). For example, if you feel excited about having made a new friend, you might want to be near that person, you might smile, and your heart might be beating faster as you do so. Often, these different responses travel together, meaning that when we feel an emotion we typically have corresponding behaviors and physiological responses (e.g., Ekman, 1972; Levenson, 1992). The degree to which responses travel together has sometimes been referred to as emotion coherence (Mauss, Levenson, McCarter, Wilhelm, & Gross, 2005). However, these different responses do not co-occur in all instances and for all people (Bradley & Lang, 2000; Mauss et al., 2005; for review, see Fridlund, Ekman, & Oster, 1987). For example, some people may choose not to express an emotion they are feeling internally (English & John, 2013), which would result in lower coherence. Does coherence—above and beyond emotion experience per se—matter for people’s well-being? To examine this question, one study measured participants’ emotion coherence by showing them a funny film clip of stand-up comedy while recording their experience of positive emotion as well as their behavioral displays of positive emotion (Mauss, Shallcross, et al., 2011). As predicted, participants differed quite a bit in their coherence. Some showed almost perfect coherence between their behavior and experience, whereas for others, behavior and experience hardly corresponded at all. Interestingly, the more that participants’ behavior and experience cohered in the laboratory session, the lower their levels of depressive symptoms and the higher their levels of well-being 6 months later. This effect held even when statistically controlling for the overall intensity of positive emotion experienced. In other words, experiencing high levels of positive emotion aided well-being only if it was accompanied by corresponding positive facial expressions. But why would coherence of different emotional responses predict well-being? One of the key functions of an emotion is social communication (Keltner & Haidt, 1999), and arguably, successful social communication depends on whether an individual’s emotions are being accurately communicated to others. When someone’s emotional behavior doesn’t match their experience, it may disrupt communication because it could make the individual appear confusing or inauthentic to others.
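To see how coherence might be quantified, consider the following sketch. It illustrates the measurement logic only: the numbers are made up, and the actual studies used continuous rating dials and more elaborate time-series statistics. Coherence is operationalized here as the within-person correlation between moment-to-moment reports of amusement and observer-coded smiling while watching a film clip.

```python
# Minimal sketch: emotion coherence as a within-person correlation between
# felt emotion and expressive behavior, sampled at the same moments.
# All ratings below are hypothetical (0-9 scales).

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of numbers."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

experience = [1, 3, 5, 7, 6, 8, 4, 2]   # self-reported amusement over time
behavior_a = [1, 2, 5, 8, 6, 7, 4, 1]   # coded smiling that tracks experience
behavior_b = [5, 4, 4, 5, 4, 5, 5, 4]   # nearly flat smiling over time

print("high coherence:", round(pearson_r(experience, behavior_a), 2))
print("low coherence: ", round(pearson_r(experience, behavior_b), 2))
```

In the first case, smiling rises and falls with felt amusement (r of about .97); in the second, outward behavior barely tracks inner experience (r of about .22), as might happen for someone who habitually suppresses expression.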
In support of this communication account, the study described above found that lower coherence was associated with worse well-being because people with lower coherence felt less socially connected to others (Mauss, Shallcross, et al., 2011). These findings are also consistent with a large body of research examining the extent to which people mask the outward display of an emotional experience, or suppression. This research has demonstrated that people who habitually use suppression not only experience worse well-being (Gross & John, 2003), but they also seem to be particularly worse off with regard to their social relationships (Srivastava, Tamir, McGonigal, John, & Gross, 2009). These findings underscore the importance of examining whether an individual’s experience is traveling together with his or her emotional responses, above and beyond overall levels of subjective experience. Thus, to understand how emotion experiences predict well-being, it is important not only to consider the experience of an emotion, but also the other emotional responses currently activated. Other emotions Up until now, we have treated emotional experiences as though people can only experience one emotion at a time. However, it should be kept in mind that positive and negative emotions are not simply the opposite of one another. Instead, they tend to be independent of one another, which means that a person can feel positive and negative emotions at the same time (Larsen, McGraw, Mellers, & Cacioppo, 2004). For example, how does it feel to win a prize when you expected a greater prize? Given “what might have been,” situations like this can elicit both happiness and sadness. Or, take “schadenfreude” (a German term for deriving pleasure from someone else’s misfortune), or “aviman” (an Indian term for prideful, loving anger), or “nostalgia” (an English term for affectionate sadness about something from the past): these terms capture the notion that people can feel both positively and negatively within the same emotional experience. And as it turns out, the other emotions that someone feels (e.g., sadness) during the experience of an emotion (e.g., happiness) influence whether that emotion experience has a positive or negative effect on well-being. Importantly, the extent to which someone experiences different emotions at the same time—or mixed emotions—may be beneficial for their well-being. Early support for this theory was provided by a study of bereaved spouses. In the study, participants were asked to talk about their recently deceased spouse, which undoubtedly elicited strong negative emotions. However, some participants expressed positive emotions in addition to the negative ones, and it was those participants who recovered more quickly from their loss (Bonanno & Keltner, 1997). A recent study provides additional support for the benefits of mixed emotions, finding that adults whose experience of mixed emotions increased over a span of 10 years were physically healthier than adults whose experience of mixed emotions did not increase over time (Hershfield, Scheibe, Sims, & Carstensen, 2013). Indeed, individuals who can experience positive emotions even in the face of negative emotions are more likely to cope successfully with stressful situations (Larsen, Hemenover, Norris, & Cacioppo, 2003). Why would mixed emotions be beneficial for well-being? Stressful situations often elicit negative emotions, and recall that negative emotions have some benefits, as we outlined above.
However, so do positive emotions, and thus having the ability to “take the good with the bad” might be another key component of well-being. Again, experiencing more positive emotion and less negative emotion may not always be optimal. Sometimes, a combination of both may be best. Conclusion Are emotions just fleeting experiences with no consequence beyond our momentary comfort or discomfort? A variety of research answers with a firm “no”—emotions are integral predictors of our well-being. This module examined how, exactly, emotion experience might be linked to well-being. The obvious answer to this question is: of course, experiencing as much positive emotion and as little negative emotion as possible is good for us. But although this is true in general, recent research suggests that this obvious answer is incomplete and sometimes even wrong. As philosopher Robert Solomon said, “Living well is not just maximizing the good feelings and minimizing the bad. (…) A happy life is not necessarily filled with happy moments” (2007, p. 86). Outside Resources Journal: If you are interested in direct access to research on emotion, take a look at the journal Emotion http://www.apa.org/pubs/journals/emo/index.aspx Video: Check out videos of expert emotion researchers discussing their work http://www.youtube.com/playlist?list...n43G_Y5otqKzJA Video: See psychologist Daniel Gilbert and other experts discussing current research on emotion in the PBS series This Emotional Life http://video.pbs.org/program/this-emotional-life/ Discussion Questions 1. Much research confirms the relative benefits of positive emotions and relative costs of negative emotions. Could positive emotions be detrimental, or could negative emotions be beneficial? Why or why not? 2. We described some contexts that influence the effects of emotional experiences on well-being. What other contexts might influence the links between emotions and well-being? Age? Gender? Culture? How so? 3. How could you design an experiment that tests (A) when and why it is beneficial to feel a negative emotion such as sadness, (B) how the coherence of emotion behavior and emotion experience is linked to well-being, or (C) how likely a person is to feel mixed (as compared with simple) emotions? Vocabulary Emotion An experiential, physiological, and behavioral response to a personally meaningful stimulus. Emotion coherence The degree to which emotional responses (subjective experience, behavior, physiology, etc.) converge with one another. Emotion fluctuation The degree to which emotions vary or change in intensity over time. Well-being The experience of mental and physical health and the absence of disorder.
By Marc Brackett, Sarah Delaney, and Peter Salovey Yale University In this module, we review the construct of emotional intelligence by examining its underlying theoretical model, measurement tools, validity, and applications in real-world settings. We use empirical research from the past few decades to support and discuss competing definitions of emotional intelligence and possible future directions for the field. learning objectives • Understand the theoretical foundations of emotional intelligence and the relationship between emotion and cognition. • Distinguish between mixed and ability models of emotional intelligence. • Understand various methods for measuring emotional intelligence. • Describe emotional intelligence’s evolution as a theoretical, success-oriented, and achievement-based framework. • Identify and define key concepts of emotional intelligence (including emotion regulation, expression of emotion, understanding emotion, etc.) and the ways they contribute to decision making, relationship building, and overall well-being. Introduction Imagine you are waiting in line to buy tickets to see your favorite band. Knowing tickets are limited and prices will rise quickly, you showed up 4 hours early. Unfortunately, so did everyone else. The line stretches for blocks and hasn’t moved since you arrived. It starts to rain. You are now close to Will Call when you notice three people jump ahead of you to join their friends, who appear to have been saving a spot for them. They talk loudly on their cellphones as you inch forward, following the slow procession of others waiting in line. You finally reach the ticket counter only to have the clerk tell you the show is sold out. You notice the loud group off to the side, waving their tickets in the air. At this exact moment, a fiery line of emotion shoots through your whole body. Your heart begins to race, and you feel the urge to either slam your hands on the counter or scream in the face of those you believe have slighted you. What are these feelings, and what will you do with them? Emotional intelligence (EI) involves the idea that cognition and emotion are interrelated. From this notion stems the belief that emotions influence decision making, relationship building, and everyday behavior. After spending hours waiting eagerly in the pouring rain and having nothing to show for it, is it even possible to squelch such intense feelings of anger at this injustice? From an EI perspective, emotions are active mental processes that can be managed, so long as individuals develop the knowledge and skills to do so. But how, exactly, do we reason with our emotions? In other words, how intelligent is our emotion system? To begin, we’ll briefly review the concept of standard, or general, intelligence. The late American psychologist David Wechsler claimed that intelligence is the “global capacity of an individual to think rationally, act purposefully, and deal effectively with their environment” (Wechsler, 1944). If we choose to accept this definition, then intelligence is an operational process through which we learn to utilize our internal abilities in order to better navigate our surroundings—a process that is most certainly similar to, if not impacted by, our emotions. In 1990, Drs. Peter Salovey and John D. Mayer first explored and defined EI. They explained EI as “the ability to monitor one’s own and others’ feelings and emotions, to discriminate among them and use this information to guide one’s thinking and actions” (Salovey & Mayer, 1990).
According to these researchers, all individuals possess the ability to leverage their emotions to enhance thinking, judgment, and behavior. This module aims to unpack this theory by exploring the growing empirical research on EI, as well as what can be learned about its impact on our daily lives. History of EI Traditionally, many psychologists and philosophers viewed cognition and emotion as separate domains, with emotion posing a threat to productive and rational thinking. Have you ever been told not to let your emotions get in the way of your decisions? This separation of passion and reason stretches as far back as early ancient Greece (Lyons, 1999). Additionally, mid-20th century scholars explained emotions as mentally destabilizing forces (Young, 1943). Yet there are traces throughout history of thinkers who questioned this separation and considered how emotion and cognition intersect. In 350 B.C.E., the famous Greek philosopher Aristotle wrote, “some men . . . if they have first perceived and seen what is coming and have first roused themselves and their calculative faculty, are not defeated by their emotion, whether it be pleasant or painful” (Aristotle, trans. 2009, Book VII, Chapter 7, Section 8). Still, our social interactions and experiences suggest this belief has undergone centuries of disregard, both in Western and Eastern cultures. These are the same interactions that teach us to “toughen up” and keep our emotions hidden. So, how did we arrive at EI—a scientific theory that claims all individuals have access to a “calculative faculty” through emotion? In the early 1970s, many scientists began to recognize the limitations of the Intelligence Quotient (IQ)—the standardized assessment of intelligence. In particular, they noticed its inability to explain differences among individuals that could not be attributed to cognitive ability alone. These frustrations led to the advancement of more inclusive theories of intelligence such as Gardner’s multiple intelligences theory (1983/1993) and Sternberg’s triarchic theory of intelligence (1985). Researchers also began to explore the influence of moods and emotions on thought processes, including judgment (Isen, Shalker, Clark, & Karp, 1978) and memory (Bower, 1981). It was through these theoretical explorations and empirical studies that the concept of EI began to take shape. Today, the field of EI is extensive, encompassing varying perspectives and measurement tools. Some attribute this growth to Daniel Goleman’s popularization of the construct in his 1995 book, Emotional Intelligence: Why It Can Matter More Than IQ. Generating public appeal, he focused on EI’s connection to personal and professional success. Goleman’s model of EI includes a blend of emotion-related skills, traditional cognitive intelligence, and distinct personality traits. This embellished conceptualization of EI, followed by an increase in EI literature, contributed, at least in part, to conflicting definitional and measurement models within the field. Models and Measures of EI Many researchers would agree that EI theory will only be as successful as its form of measurement. Today, there are three primary models of EI: the ability model (Mayer & Salovey, 1997; Salovey & Mayer, 1990), mixed models (Bar-On, 2006; Boyatzis & Sala, 2004), and the trait EI model (Petrides & Furnham, 2003).
Ability models approach EI as a standard intelligence that utilizes a distinct set of mental abilities that (1) are intercorrelated, (2) relate to other extant intelligences, and (3) develop with age and experience (Mayer, Caruso, & Salovey, 1999; Mayer, Salovey, Caruso, & Sitarenios, 2003). In contrast, both mixed and trait models define and measure EI as a set of perceived abilities, skills, and personality traits. Ability Models: Mayer and Salovey’s Four-Branch Model of EI In this section, we describe the EI (Four-Branch) model espoused by Mayer and Salovey (1997). This model proposes that four fundamental emotion-related abilities comprise EI: (1) perception/expression of emotion, (2) use of emotion to facilitate thinking, (3) understanding of emotion, and (4) management of emotion in oneself and others. 1. Perception of Emotion Perception of emotion refers to people’s capacity to identify emotions in themselves and others using facial expressions, tone of voice, and body language (Brackett et al., 2013). Those skilled in the perception of emotion also are able to express emotion accordingly and communicate emotional needs. For example, let’s return to our opening scenario. After being turned away at the ticket booth, you slowly settle into the reality that you cannot attend the concert. A group of your classmates, however, managed to buy tickets and are discussing their plans at your lunch table. When they ask if you are excited for the opening band, you shrug and pick at your food. If your classmates are skilled at perception of emotion, then they will read your facial expression and body language and determine that you might be masking your true feelings of disappointment, frustration, or disengagement from the conversation. As a result, they might ask you if something is wrong or choose not to talk about the concert in your presence. 2. Use of Emotion to Facilitate Thinking Using emotion to enhance cognitive activities and adapt to various situations is the second component of EI. People who are skilled in this area understand that some emotional states are better suited to targeted outcomes than others. Frustration over the concert tickets may put you in a helpful mindset as you are about to play a football game or begin a wrestling match. The high levels of adrenaline associated with frustration may boost your energy and strength, helping you compete. These same emotions, however, will likely impede your ability to sit at your school desk and solve algebra problems or write an essay. Individuals who have developed and practiced this area of EI actively generate emotions that support certain tasks or objectives. For example, a teacher skilled in this domain may recognize that her students need to experience positive emotions, like joy or excitement, in order to succeed when doing creative work such as brainstorming or collaborative art projects. She may plan accordingly by scheduling these activities for after recess, knowing students will likely come into the classroom cheerful and happy from playing outside. Making decisions based on the impact that emotional experiences may have on actions and behavior is an essential component of EI. 3. Understanding of Emotion EI also includes the ability to differentiate between emotional states, as well as their specific causes and trajectories. Feelings of sadness or disappointment can result from the loss of a person or object, such as your concert tickets. Standing in the rain, by most standards, is merely a slight annoyance.
However, waiting in the rain for hours in a large crowd will likely result in irritation or frustration. Feeling like you have been treated unfairly when someone cuts in line and takes the tickets you feel you deserved can cause your unpleasantness to escalate into anger and resentment. People skilled in this area are aware of this emotional trajectory and also have a strong sense of how multiple emotions can work together to produce another. For instance, it is possible that you may feel contempt for the people who cut in front of you in line. However, this feeling of contempt does not arise from anger alone. Rather, it is the combination of anger and disgust prompted by the fact that these individuals, unlike you, have disobeyed the rules. Successfully discriminating between negative emotions is an important skill related to understanding of emotion, and it may lead to more effective emotion management (Feldman Barrett, Gross, Christensen, & Benvenuto, 2001). 4. Management of Emotion Emotion management includes the ability to remain open to a wide range of emotions, recognize the value of feeling certain emotions in specific situations, and understand which short- and long-term strategies are most efficient for emotion regulation (Gross, 1998). Anger seems an appropriate response to falling short of a goal (concert tickets) that you pursued both fairly and patiently. In fact, you may even find it valuable to allow yourself the experience of this feeling. However, this feeling will certainly need to be managed in order to prevent aggressive, unwanted behavior. Coming up with strategies, such as taking a deep breath and waiting until you feel calm before letting the group ahead of you know they cut in line, will allow you to regulate your anger and prevent the situation from escalating. Using this strategy may even let you gain insight into other perspectives—perhaps you learn they had already purchased their tickets and were merely accompanying their friends. Measuring EI with Performance Measures While self-report tests are common in psychology, ability models of EI require a different approach: performance measures. Performance measures require respondents to demonstrate their four emotion skills (Mayer & Salovey, 1997) by solving emotion-related problems. Among these measures, the Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT) (Mayer, Salovey, & Caruso, 2002) is the most commonly used. The MSCEIT is a 141-item test comprising a total of eight tasks, two for each of the four emotion abilities. To measure emotion management, for example, respondents are asked to read through scenarios involving emotionally charged conflicts and then asked to evaluate the effectiveness of different resolutions. For a comprehensive review of the MSCEIT and other performance-assessment tools, please see Rivers, Brackett, Salovey, and Mayer (2007). Mixed and Trait Models of EI Unlike ability models, mixed models offer a broad definition of EI that combines mental abilities with personality traits such as optimism, motivation, and stress tolerance (see Cherniss, 2010, for a review). The two most widely used mixed models are the Boyatzis-Goleman model (Boyatzis & Sala, 2004) and the Bar-On model of emotional-social intelligence (Bar-On, 2006). The Boyatzis-Goleman model divides EI competencies into four groups: self-awareness, self-management, social awareness, and relationship management.
Similarly, the Bar-On model offers five main components of EI: intrapersonal skills, interpersonal skills, adaptability, stress management, and mood. Developers of the trait EI model (Petrides & Furnham, 2003) explain EI as a constellation of self-perceived, emotion-related personality traits. Mixed and Trait Model Assessment: Self-Report Self-report assessments—surveys that ask respondents to report their own emotional skills—are most often associated with mixed and trait models. Self-report measures are usually quick to administer. However, many researchers argue that their vulnerability to social-desirability biases and faking is problematic (Day & Carroll, 2008). In addition, there is wide speculation concerning the potential for inaccurate judgments of personal ability and skill on the part of respondents (e.g., Paulhus, Lysy, & Yik, 1998). Self-report measures have been shown to lack discriminant validity from existing personality measures and have very low correlations with ability measures of EI (Brackett & Mayer, 2003; Brackett, Rivers, Shiffman, Lerner, & Salovey, 2006). According to Mayer and colleagues (2008), self-report tests may show reliability for individual personalities, but should not be considered EI because performance tests are the gold standard for measuring intelligence. Although tensions between ability and mixed or trait model approaches appear to divide the field, competing definitions and measurements can only enhance the quality of research devoted to EI and its impact on real-world outcomes. Room for Debate While mixed and trait models shed some light on the concept of EI, many researchers feel these approaches undermine the EI construct as a discrete and measurable mental ability. EI, when conceptualized as an ability, most accurately describes the relationship between cognition and emotion by accounting for changes in individual outcomes that are often missed when focusing solely on cognitive intelligence or personality traits (O’Boyle, Humphrey, Pollack, Hawver, & Story, 2010). What’s more, among adults, personality traits provide little room for malleability, making development in these areas difficult even when combined with emotional skills. For example, characteristics such as agreeableness and neuroticism, while contributing to personal and professional success, are seen as innate traits that are likely to remain static over time. Distinguishing EI from personality traits helps us better target the skills that can improve desirable outcomes (Brackett et al., 2013). Approaching EI with language that provides the opportunity for personal growth is crucial to its application. Because the ability model aligns with this approach, the remainder of this module will focus on ability EI and the ways in which it can be applied both in professional and academic settings. Outcomes Historically, emotions have been thought to have no place in the classroom or workplace (Sutton & Wheatley, 2003). Yet today, we know empirical research supports the belief that EI has the potential to influence decision making, health, relationships, and performance in both professional and academic settings (e.g., Brackett et al., 2013; Brackett, Rivers, & Salovey, 2011). Workplace Research conducted in the workplace supports positive links between EI and enhanced job performance, occupational well-being, and leadership effectiveness.
In one study, EI was associated with performance indicators such as company rank, percent merit increase, ratings of interpersonal facilitation, and affect and attitudes at work (Lopes, Grewal, Kadis, Gall, & Salovey, 2006). Similar correlations have been found between EI and a variety of managerial simulations involving problem solving, determining employee layoffs, adjusting claims, and negotiating successfully (Day & Carroll, 2004; Feyerherm & Rice, 2002; Mueller & Curhan, 2006). Emotion management is seen as most likely to affect job performance by influencing social and business interactions across a diverse range of industries (O’Boyle et al., 2010). Leaders in the workplace also benefit from high EI. Experts in the field of organizational behavior are beginning to view leadership as a process of social interactions in which leaders motivate, influence, guide, and empower followers to achieve organizational goals (Bass & Riggio, 2006). This is known as transformational leadership—where leaders create a vision and then inspire others to work in this direction (Bass, 1985). In a sample of 24 managers, MSCEIT scores correlated positively with a leader’s ability to inspire followers to emulate their own actions and attend to the needs and problems of each individual (Leban & Zulauf, 2004). Schools When applied in educational settings, theoretical foundations of EI are often integrated into social and emotional learning (SEL) programs. SEL is the process of merging thinking, feeling, and behaving. These skills enable individuals to be aware of themselves and of others, make responsible decisions, and manage their own behaviors and those of others (Elias et al., 1997; Elbertson, Brackett, & Weissberg, 2010). SEL programs are designed to enhance the climate of a classroom, school, or district, with the ultimate goal of enhancing children’s social and emotional skills and improving their academic outcomes (Greenberg et al., 2003). Adopting curricula that focus on these elements is believed to enable success in academics, relationships, and, ultimately, in life (Becker & Luthar, 2002; Catalano, Berglund, Ryan, Lonczak, & Hawkins, 2004). Take a moment to think about the role of a teacher. How might emotions impact the climate of a classroom? If a teacher enters a classroom feeling anxious, disgruntled, or unenthused, these states will most likely be noticed, and felt, by the students. If not managed well, these negative emotions can hurt the classroom dynamic and prevent student learning (Travers, 2001). Research suggests that the abilities to perceive, use, understand, and manage emotions are imperative for effective teaching (Reyes, Brackett, Rivers, White, & Salovey, 2012; Brackett, Reyes, Rivers, Elbertson, & Salovey, 2011; Hargreaves, 2001). In a study that examined the relationship between emotion regulation and both job satisfaction and burnout among secondary-school teachers, researchers found that emotion regulation among teachers was associated with positive affect, support from principals, job satisfaction, and feelings of personal accomplishment (Brackett, Palomera, Mojsa-Kaja, Reyes, & Salovey, 2010). EI, when embedded into SEL programs, has been shown to contribute positively to personal and academic success in students (Durlak, Weissberg, Dymnicki, Taylor, & Schellinger, 2011).
Research also shows that strong emotion regulation can help students pay attention in class, adjust to the school environment, and manage academic anxiety (Lopes & Salovey, 2004; Mestre, Guil, Lopes, Salovey, & Gil-Olarte, 2006). A recent randomized controlled trial of RULER* also found that, after one year, schools that used RULER—compared with those that used only the standard curriculum—were rated by independent observers as having higher degrees of warmth and connectedness between teachers and students, more autonomy and leadership, less bullying among students, and teachers who focused more on students’ interests and motivations (Rivers, Brackett, Reyes, Elbertson, & Salovey, 2013). *RULER - Recognize emotions in oneself and in other people. Understand the causes and consequences of a wide range of emotions. Label emotions using a sophisticated vocabulary. Express emotions in socially appropriate ways. Regulate emotions effectively. Limitations and Future Directions There is a need for further development in EI theory and measurement, as well as more empirical research on its associated outcomes (Mayer, Salovey, & Caruso, 2008). Despite its prominent role as the signature performance assessment of EI, the MSCEIT has a number of limitations. For example, it does not allow for the assessment of several abilities, including the expression of emotion and the monitoring of, or reflection on, one’s own emotions (Brackett et al., 2013). Researchers must also address growing criticisms, particularly those that stretch beyond the measurement debate and question the validity of the EI construct when defined too broadly (Locke, 2005). In order to advance EI research, there is a great need for investigators to address these issues by reconciling disparate definitions and refining existing measures. Potential considerations for future research in the field should include deeper investigation into the genetic (versus acquired) and fluid (versus crystallized) aspects of EI. The cultural implications and differences of EI also are important to consider. Studies should expand beyond the United States and Europe in order for the theory of EI to be cross-culturally valid and for its applications and outcomes to be achieved more universally. Greater attention should also be paid to developmental trajectories, gender differences, and how EI operates in the workplace and educational settings (Brackett et al., 2013). Although further explorations and research in the field of EI are needed, current findings indicate a fundamental relationship between emotion and cognition. Returning to our opening question, what will you do when denied concert tickets? One of the more compelling aspects of EI is that it grants us reign over our own emotions—forces once thought to rule the self by denying individual agency. But with this power comes responsibility. If you are enraged about not getting tickets to the show, perhaps you can take a few deep breaths, go for a walk, and wait until your physiological indicators (shaky hands or accelerated heartbeat) subside. Once you’ve removed yourself, your feeling of rage may lessen to annoyance. Lowering the intensity level of this feeling (a process known as down-regulating) will help redirect your focus to the situation itself, rather than to the activated emotion. In this sense, emotion regulation allows you to objectively view the point of conflict without dismissing your true feelings. Merely down-regulating the emotional experience facilitates better problem solving.
Now that you are less activated, what is the best approach? Should you talk to the ticket clerk? Ask to see the sales manager? Or do you let the group know how you felt when they cut the line? All of these options present better solutions than impulsively acting out rage. As discussed in this module, research shows that the cultivation and development of EI contribute to more productive, supportive, and healthy experiences. Whether we’re waiting in a crowded public place, delivering lesson plans, or engaging in conversation with friends, we are the ultimate decision makers when it comes to how we want to feel and, in turn, behave. By engaging the right mental processes and strategies, we can better understand, regulate, and manage our emotional states in order to live the lives we desire. Outside Resources Article: Are you emotionally intelligent? Here’s how to know for sure. Inc.com Retrieved from: http://www.inc.com/travis-bradberry/...-for-sure.html Article: Grant, A. (2014, January 2). The dark side of emotional intelligence, The Atlantic. Retrieved from: http://www.theatlantic.com/health/ar...igence/282720/ Article: Gregoire, C. (2014, January 23). How emotionally intelligent are you? Here’s how to tell. Huffington Post. Retrieved from: http://www.huffingtonpost.com/2013/1...n_4371920.html Book: Goleman, D. (1995). Emotional intelligence. New York, NY: Bantam. Book: Goleman, D. (1998). Working with emotional intelligence. New York, NY: Bantam. Discussion Questions 1. What are the four emotional abilities that comprise EI, and how do they relate to each other? 2. What are three possible implications for using ability-based and mixed or trait-based models of EI? 3. Discuss the ways in which EI can contribute positively to the workplace and classroom settings. Vocabulary Ability model An approach that views EI as a standard intelligence that utilizes a distinct set of mental abilities that (1) are intercorrelated, (2) relate to other extant intelligences, and (3) develop with age and experience (Mayer & Salovey, 1997). Emotional intelligence The ability to monitor one’s own and others’ feelings and emotions, to discriminate among them and to use this information to guide one’s thinking and actions (Salovey & Mayer, 1990). EI includes four specific abilities: perceiving, using, understanding, and managing emotions. Four-Branch Model An ability model developed by Drs. Peter Salovey and John Mayer that includes four main components of EI, arranged in hierarchical order, beginning with basic psychological processes and advancing to integrative psychological processes. The branches are (1) perception of emotion, (2) use of emotion to facilitate thinking, (3) understanding emotion, and (4) management of emotion. Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT) A 141-item performance assessment of EI that measures the four emotion abilities (as defined by the four-branch model of EI) with a total of eight tasks. Mixed and Trait Models Approaches that view EI as a combination of self-perceived emotion skills, personality traits, and attitudes. Performance assessment A method of measurement associated with ability models of EI that evaluates the test taker’s ability to solve emotion-related problems. Self-report assessment A method of measurement associated with mixed and trait models of EI, which evaluates the test taker’s perceived emotion-related skills, distinct personality traits, and other characteristics.
Social and emotional learning (SEL) The real-world application of EI in an educational setting and/or classroom that involves curricula that teach the process of integrating thinking, feeling, and behaving in order to become aware of the self and of others, make responsible decisions, and manage one’s own behaviors and those of others (Elias et al., 1997).
By Sudeep Bhatia and George Loewenstein Carnegie Mellon University Our thoughts and behaviors are strongly influenced by affective experiences known as drive states. These drive states motivate us to fulfill goals that are beneficial to our survival and reproduction. This module provides an overview of key drive states, including information about their neurobiology and their psychological effects. learning objectives • Identify the key properties of drive states • Describe biological goals accomplished by drive states • Give examples of drive states • Outline the neurobiological basis of drive states such as hunger and arousal • Discuss the main moderators and determinants of drive states such as hunger and arousal Introduction What is the longest you’ve ever gone without eating? A couple of hours? An entire day? How did it feel? Humans rely critically on food for nutrition and energy, and the absence of food can create drastic changes, not only in physical appearance, but in thoughts and behaviors. If you’ve ever fasted for a day, you probably noticed how hunger can take over your mind, directing your attention to foods you could be eating (a cheesy slice of pizza, or perhaps some sweet, cold ice cream), and motivating you to obtain and consume these foods. And once you have eaten and your hunger has been satisfied, your thoughts and behaviors return to normal. Hunger is a drive state, an affective experience (something you feel, like the sensation of being tired or hungry) that motivates organisms to fulfill goals that are generally beneficial to their survival and reproduction. Like other drive states, such as thirst or sexual arousal, hunger has a profound impact on the functioning of the mind. It affects psychological processes, such as perception, attention, emotion, and motivation, and influences the behaviors that these processes generate. Key Properties of Drive States Drive states differ from other affective or emotional states in terms of the biological functions they accomplish. Whereas all affective states possess valence (i.e., they are positive or negative) and serve to motivate approach or avoidance behaviors (Zajonc, 1998), drive states are unique in that they generate behaviors that result in specific benefits for the body. For example, hunger directs individuals to eat foods that increase blood sugar levels in the body, while thirst causes individuals to drink fluids that increase water levels in the body. Different drive states have different triggers. Most drive states respond to both internal and external cues, but the combinations of internal and external cues, and the specific types of cues, differ between drives. Hunger, for example, depends on internal, visceral signals as well as sensory signals, such as the sight or smell of tasty food. Different drive states also result in different cognitive and emotional states, and are associated with different behaviors. Yet despite these differences, there are a number of properties common to all drive states. Homeostasis Humans, like all organisms, need to maintain a stable state in their various physiological systems. For example, the excessive loss of body water results in dehydration, a dangerous and potentially fatal state. However, too much water can be damaging as well. Thus, a moderate and stable level of body fluid is ideal. The tendency of an organism to maintain this stability across all the different physiological systems in the body is called homeostasis. Homeostasis is maintained via two key factors. 
First, the state of the system being regulated must be monitored and compared to an ideal level, or a set point. Second, there need to be mechanisms for moving the system back to this set point—that is, to restore homeostasis when deviations from it are detected. To better understand this, think of the thermostat in your own home. It detects when the current temperature in the house is different from the temperature you have it set at (i.e., the set point). Once the thermostat recognizes the difference, the heating or air conditioning turns on to bring the overall temperature back to the designated level. Many homeostatic mechanisms, such as blood circulation and immune responses, are automatic and nonconscious. Others, however, involve deliberate action. Most drive states motivate action to restore homeostasis using both “punishments” and “rewards.” Imagine that these homeostatic mechanisms are like molecular parents. When you behave poorly by departing from the set point (such as not eating or being somewhere too cold), they raise their voice at you. You experience this as the bad feelings, or “punishments,” of hunger, thirst, or feeling too cold or too hot. However, when you behave well (such as eating nutritious foods when hungry), these homeostatic parents reward you with the pleasure that comes from any activity that moves the system back toward the set point. For example, when body temperature declines below the set point, any activity that helps to restore homeostasis (such as putting one’s hand in warm water) feels pleasurable; and likewise, when body temperature rises above the set point, anything that cools it feels pleasurable. The Narrowing of Attention As drive states intensify, they direct attention toward elements, activities, and forms of consumption that satisfy the biological needs associated with the drive. Hunger, for example, draws attention toward food. Outcomes and objects that are not related to satisfying hunger lose their value (Easterbrook, 1959). For instance, has anyone ever invited you to do a fun activity while you were hungry? Likely your response was something like: “I’m not doing anything until I eat first.” Indeed, at a sufficient level of intensity, individuals will sacrifice almost any quantity of goods that do not address the needs signaled by the drive state. For example, cocaine addicts, according to Gawin (1991, p. 1581), “report that virtually all thoughts are focused on cocaine during binges; nourishment, sleep, money, loved ones, responsibility, and survival lose all significance.” Drive states also produce a second form of attention-narrowing: a collapsing of time-perspective toward the present. That is, they make us impatient. While this form of attention-narrowing is particularly pronounced for the outcomes and behaviors directly related to the biological function being served by the drive state at issue (e.g., “I need food now”), it applies to general concerns for the future as well. Ariely and Loewenstein (2006), for example, investigated the impact of sexual arousal on the thoughts and behaviors of a sample of male undergraduates. These undergraduates were lent laptop computers that they took to their private residences, where they answered a series of questions, both in normal states and in states of high sexual arousal. Ariely and Loewenstein found that being sexually aroused made people extremely impatient for both sexual outcomes and for outcomes in other domains, such as those involving money.
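The module itself offers no formal model of this impatience, but a standard way to formalize it in behavioral economics is temporal discounting. The sketch below uses the common hyperbolic form V = A / (1 + kD), where A is a delayed reward, D its delay, and k an impatience parameter; treating an intense drive state as raising k is a hypothetical modeling choice for illustration only, not a claim made by the studies described here.

```python
# Minimal sketch (illustrative assumption, not from the module): impatience
# modeled with hyperbolic discounting, V = A / (1 + k * D). A larger k means
# steeper discounting of delayed rewards, i.e., more impatience.

def present_value(amount, delay_days, k):
    """Subjective value today of `amount` received after `delay_days` days."""
    return amount / (1 + k * delay_days)

for state, k in [("calm", 0.01), ("intense drive state", 0.05)]:
    value = present_value(100, delay_days=30, k=k)
    print(f"{state:>20}: $100 in 30 days feels worth about ${value:.2f} now")
```

Under these made-up parameters, the same $100 promised in a month is worth about $77 to the calm person but only $40 to the person in an intense drive state, which is one way to read the finding described next, in which craving increased impatience even for rewards unrelated to the drive itself.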
In another study, Giordano et al. (2002) found that heroin addicts were more impatient with respect to heroin when they were craving it than when they were not. More surprisingly, they were also more impatient toward money (they valued delayed money less) when they were actively craving heroin. Yet a third form of attention-narrowing involves thoughts and outcomes related to the self versus others. Intense drive states tend to narrow one’s focus inwardly and to undermine altruism—or the desire to do good for others. People who are hungry, in pain, or craving drugs tend to be selfish. Indeed, popular interrogation methods involve depriving individuals of sleep, food, or water, so as to trigger intense drive states that lead the subject of the interrogation to divulge information that may betray comrades, friends, and family (Biderman, 1960). Two Illustrative Drive States Thus far we have considered drive states abstractly. We have discussed the ways in which they relate to other affective and motivational mechanisms, as well as their main biological purpose and general effects on thought and behavior. Yet, despite serving the same broader goals, different drive states are often remarkably different in terms of their specific properties. To understand some of these specific properties, we will explore two different drive states that play very important roles in determining behavior, and in ensuring human survival: hunger and sexual arousal. Hunger Hunger is a classic example of a drive state, one that results in thoughts and behaviors related to the consumption of food. Hunger is generally triggered by low glucose levels in the blood (Rolls, 2000), and behaviors resulting from hunger aim to restore homeostasis regarding those glucose levels. Various other internal and external cues can also cause hunger. For example, when fats are broken down in the body for energy, this initiates a chemical cue that the body should search for food (Greenberg, Smith, & Gibbs, 1990). External cues include the time of day, estimated time until the next feeding (hunger increases immediately prior to food consumption), and the sight, smell, taste, and even touch of food and food-related stimuli. Note that while hunger is a generic feeling, it has nuances that can provoke the eating of specific foods that correct for nutritional imbalances we may not even be conscious of. For example, a couple who was lost adrift at sea found they inexplicably began to crave the eyes of fish. Only later, after they had been rescued, did they learn that fish eyes are rich in vitamin C—a very important nutrient that they had been depleted of while lost in the ocean (Walker, 2014). The hypothalamus (located in the lower, central part of the brain) plays a very important role in eating behavior. It is responsible for synthesizing and secreting various hormones. The lateral hypothalamus (LH) is concerned largely with hunger and, in fact, lesions (i.e., damage) of the LH can eliminate the desire for eating entirely—to the point that animals starve themselves to death unless kept alive by force feeding (Anand & Brobeck, 1951). Additionally, artificially stimulating the LH, using electrical currents, can generate eating behavior if food is available (Andersson, 1951). Activation of the LH can not only increase the desirability of food but can also reduce the desirability of nonfood-related items.
For example, Brendl, Markman, and Messner (2003) found that participants who were given a handful of popcorn to trigger hunger not only had higher ratings of food products, but also had lower ratings of nonfood products—compared with participants whose appetites were not similarly primed. That is, because eating had become more important, other nonfood products lost some of their value. Hunger is only part of the story of when and why we eat. A related process, satiation, refers to the decline of hunger and the eventual termination of eating behavior. Whereas the feeling of hunger gets you to start eating, the feeling of satiation gets you to stop. Perhaps surprisingly, hunger and satiation are two distinct processes, controlled by different circuits in the brain and triggered by different cues. Distinct from the LH, which plays an important role in hunger, the ventromedial hypothalamus (VMH) plays an important role in satiety. Though lesions of the VMH can cause an animal to overeat to the point of obesity, the relationship between the LH and the VMH is quite complicated. Rats with VMH lesions can also be quite finicky about their food (Teitelbaum, 1955). Other brain areas, besides the LH and VMH, also play important roles in eating behavior. The sensory cortices (visual, olfactory, and taste), for example, are important in identifying food items. These areas provide informational value, however, not hedonic evaluations. That is, these areas help tell a person what is good or safe to eat, but they don’t provide the pleasure (or hedonic) sensations that actually eating the food produces. While many sensory functions are roughly stable across different psychological states, other functions, such as the detection of food-related stimuli, are enhanced when the organism is in a hungry drive state. After identifying a food item, the brain also needs to determine its reward value, which affects the organism’s motivation to consume the food. The reward value ascribed to a particular item is, not surprisingly, sensitive to the level of hunger experienced by the organism. The hungrier you are, the greater the reward value of the food. Neurons in the areas where reward values are processed, such as the orbitofrontal cortex, fire more rapidly at the sight or taste of food when the organism is hungry than when it is satiated.
For females, though, the preoptic area fulfills different roles, such as functions involved with eating behaviors; instead, a different region of the brain, the ventromedial hypothalamus (the lower, central part), plays a similar role for females as the preoptic area does for males. Neurons in the ventromedial hypothalamus regulate the secretion of estradiol, an estrogen hormone that regulates sexual receptivity (or the willingness to accept a sexual partner). In many mammals, these neurons send impulses to the periaqueductal gray (a region in the midbrain), which is responsible for defensive behaviors, such as freezing immobility, running, increases in blood pressure, and other motor responses. Typically, these defensive responses might keep the female rat from interacting with the male one. However, during sexual arousal, these defensive responses are weakened and lordosis behavior, a physical sexual posture that serves as an invitation to mate, is initiated (Kow & Pfaff, 1998). Thus, while the preoptic area encourages males to engage in sexual activity, the ventromedial hypothalamus fulfills that role for females. Other differences between males and females involve overlapping functions of neural modules. These neural modules often provide clues about the biological roles played by sexual arousal and sexual activity in males and females. Areas of the brain that are important for male sexuality overlap to a great extent with areas that are also associated with aggression. In contrast, areas important for female sexuality overlap extensively with those that are also connected to nurturance (Panksepp, 2004). One region of the brain that seems to play an important role in sexual pleasure for both males and females is the septal nucleus, an area that receives reciprocal connections from many other brain regions, including the hypothalamus and the amygdala (a region of the brain primarily involved with emotions). This region shows considerable activity, in terms of rhythmic spiking, during sexual orgasm. It is also one of the brain regions that rats will most reliably voluntarily self-stimulate (Olds & Milner, 1954). In humans, placing a small amount of acetylcholine into this region, or stimulating it electrically, has been reported to produce a feeling of imminent orgasm (Heath, 1964).
Fear, on the other hand, is induced by perceived threats in the external environment. Drug cravings are triggered both by internal homeostatic mechanisms and by external visual, olfactory, and contextual cues. Other drive states, such as those pertaining to maternity, are triggered by specific events in the organism’s life. Differences such as these make the study of drive states a scientifically interesting and important endeavor. Drive states are rich in their diversity, and many questions involving their neurocognitive underpinnings, environmental determinants, and behavioral effects have yet to be answered. One final thing to consider, not discussed in this module, relates to the real-world consequences of drive states. Hunger, sexual arousal, and other drive states are all psychological mechanisms that have evolved gradually over millions of years. We share these drive states not only with our human ancestors but with other animals, such as monkeys, dogs, and rats. It is not surprising then that these drive states, at times, lead us to behave in ways that are ill-suited to our modern lives. Consider, for example, the obesity epidemic that is affecting countries around the world. Like other diseases of affluence, obesity is a product of drive states that are too easily fulfilled: homeostatic mechanisms that once worked well when food was scarce now backfire when meals rich in fat and sugar are readily available. Unrestricted sexual arousal can have similarly perverse effects on our well-being. Countless politicians have sacrificed their entire life’s work (not to mention their marriages) by indulging adulterous sexual impulses toward colleagues, staffers, prostitutes, and others over whom they have social or financial power. It is not an overstatement to say that many problems of the 21st century, from school massacres to obesity to drug addiction, are influenced by the mismatch between our drive states and our uniquely modern ability to fulfill them at a moment’s notice. Outside Resources Web: An open textbook chapter on homeostasis http://en.wikibooks.org/wiki/Human_P...gy/Homeostasis Web: Motivation and emotion in psychology http://allpsych.com/psychology101/mo...n_emotion.html Web: The science of sexual arousal http://www.apa.org/monitor/apr03/arousal.aspx Discussion Questions 1. The ability to maintain homeostasis is important for an organism’s survival. What are the ways in which homeostasis ensures survival? Do different drive states accomplish homeostatic goals differently? 2. Drive states result in the narrowing of attention toward the present and toward the self. Which drive states lead to the most pronounced narrowing of attention toward the present? Which drive states lead to the most pronounced narrowing of attention toward the self? 3. What are important differences between hunger and sexual arousal, and in what ways do these differences reflect the biological needs that hunger and sexual arousal have evolved to address? 4. Some of the properties of sexual arousal vary across males and females. What other drive states affect males and females differently? Are there drive states that vary with other differences in humans (e.g., age)? Vocabulary Drive state Affective experiences that motivate organisms to fulfill goals that are generally beneficial to their survival and reproduction. Homeostasis The tendency of an organism to maintain a stable state across all the different physiological systems in the body.
Homeostatic set point: An ideal level against which the state of the system being regulated is monitored and compared.
Hypothalamus: A portion of the brain involved in a variety of functions, including the secretion of various hormones and the regulation of hunger and sexual arousal.
Lordosis: A physical sexual posture in females that serves as an invitation to mate.
Preoptic area: A region in the anterior hypothalamus involved in generating and regulating male sexual behavior.
Reward value: A neuropsychological measure of an outcome’s affective importance to an organism.
Satiation: The state of being full to satisfaction and no longer desiring to take on more.
By Ayelet Fishbach and Maferima Touré-Tillery
University of Chicago, Northwestern University
Your decisions and behaviors are often the result of a goal or motive you possess. This module provides an overview of the main theories and findings on goals and motivation. We address the origins, manifestations, and types of goals, and the various factors that influence motivation in goal pursuit. We further address goal conflict and, specifically, the exercise of self-control in protecting long-term goals from momentary temptations.

learning objectives
• Define the basic terminology related to goals, motivation, self-regulation, and self-control.
• Describe the antecedents and consequences of goal activation.
• Describe the factors that influence motivation in the course of goal pursuit.
• Explain the process underlying goal activation, self-regulation, and self-control.
• Give examples of goal activation effects, self-regulation processes, and self-control processes.

Introduction
Every New Year, many people make resolutions—or goals—that go unsatisfied: eat healthier; pay better attention in class; lose weight. As much as we know our lives would improve if we actually achieved these goals, people quite often don’t follow through. But what if that didn’t have to be the case? What if every time we made a goal, we actually accomplished it? Each day, our behavior is the result of countless goals—maybe not goals in the way we think of them, like getting that beach body or being the first person to land on Mars. But even with “mundane” goals, like getting food from the grocery store, or showing up to work on time, we are often enacting the same psychological processes involved with achieving loftier dreams. To understand how we can better attain our goals, let’s begin with defining what a goal is and what underlies it, psychologically. A goal is the cognitive representation of a desired state, or, in other words, our mental idea of how we’d like things to turn out (Fishbach & Ferguson, 2007; Kruglanski, 1996). This desired end state of a goal can be clearly defined (e.g., stepping on the surface of Mars), or it can be more abstract and represent a state that is never fully completed (e.g., eating healthy). Underlying all of these goals, though, is motivation, or the psychological driving force that enables action in the pursuit of that goal (Lewin, 1935). Motivation can stem from two places. First, it can come from the benefits associated with the process of pursuing a goal (intrinsic motivation). For example, you might be driven by the desire to have a fulfilling experience while working on your Mars mission. Second, motivation can also come from the benefits associated with achieving a goal (extrinsic motivation), such as the fame and fortune that come with being the first person on Mars (Deci & Ryan, 1985). One easy way to consider intrinsic and extrinsic motivation is through the eyes of a student. Does the student work hard on assignments because the act of learning is pleasing (intrinsic motivation)? Or does the student work hard to get good grades, which will help land a good job (extrinsic motivation)? Social psychologists recognize that goal pursuit and the motivations that underlie it do not depend solely on an individual’s personality. Rather, they are products of personal characteristics and situational factors. Indeed, cues in a person’s immediate environment—including images, words, sounds, and the presence of other people—can activate, or prime, a goal.
This activation can be conscious, such that the person is aware of the environmental cues influencing his/her pursuit of a goal. However, this activation can also occur outside a person’s awareness, and lead to nonconscious goal pursuit. In this case, the person is unaware of why s/he is pursuing a goal and may not even realize that s/he is pursuing it. In this module, we review key aspects of goals and motivation. First, we discuss the origins and manifestation of goals. Then, we review factors that influence individuals’ motivation in the course of pursuing a goal (self-regulation). Finally, we discuss what motivates individuals to keep following their goals when faced with other conflicting desires—for example, when a tempting opportunity to socialize on Facebook presents itself in the course of studying for an exam (self-control).

The Origins and Manifestation of Goals
Goal Adoption
What makes us commit to a goal? Researchers tend to agree that commitment stems from the sense that a goal is both valuable and attainable, and that we adopt goals that are highly likely to bring positive outcomes (i.e., one’s commitment = the value of the goal × the expectancy it will be achieved; see the worked illustration at the end of this section) (Fishbein & Ajzen, 1974; Liberman & Förster, 2008). This process of committing to a goal can occur without much conscious deliberation. For example, people infer value and attainability, and will nonconsciously determine their commitment based on those factors, as well as the outcomes of past goals. Indeed, people often learn about themselves the same way they learn about other people—by observing their behaviors (in this case, their own) and drawing inferences about their preferences. For example, after taking a kickboxing class, you might infer from your efforts that you are indeed committed to staying physically fit (Fishbach, Zhang, & Koo, 2009).

Goal Priming
We don’t always act on our goals in every context. For instance, sometimes we’ll order a salad for lunch, in keeping with our dietary goals, while other times we’ll order only dessert. So, what makes people adhere to a goal in any given context? Cues in the immediate environment (e.g., objects, images, sounds—anything that primes a goal) can have a remarkable influence on the pursuit of goals to which people are already committed (Bargh, 1990; Custers, Aarts, Oikawa, & Elliot, 2009; Förster, Liberman, & Friedman, 2007). How do these cues work? In memory, goals are organized in associative networks. That is, each goal is connected to other goals, concepts, and behaviors. Particularly, each goal is connected to corresponding means—activities and objects that help us attain the goal (Kruglanski et al., 2002). For example, the goal to stay physically fit may be associated with several means, including a nearby gym, one’s bicycle, or even a training partner. Cues related to the goal or means (e.g., an ad for running shoes, a comment about weight loss) can activate or prime the pursuit of that goal. For example, the presence of one’s training partner, or even seeing the word “workout” in a puzzle, can activate the goal of staying physically fit and, hence, increase a person’s motivation to exercise. Soon after goal priming, the motivation to act on the goal peaks and then slowly declines, after some delay, as the person moves away from the primer or after s/he pursues the goal (Bargh, Gollwitzer, Lee-Chai, Barndollar, & Trotschel, 2001).
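To make the multiplicative commitment rule described above concrete, here is a small worked illustration. The numbers are hypothetical and chosen purely for exposition; they are not drawn from Fishbein and Ajzen (1974) or any particular study.

    commitment = value × expectancy
    High-value but unlikely goal:      9 × 0.2 = 1.8
    Moderate-value but likely goal:    6 × 0.8 = 4.8

Because commitment is a product rather than a sum, a goal that scores near zero on either component yields little commitment no matter how high the other component is. On these illustrative numbers, the moderately valued but attainable goal wins.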
Consequences of Goal Activation
The activation of a goal and the accompanying increase in motivation can influence many aspects of behavior and judgment, including how people perceive, evaluate, and feel about the world around them. Indeed, motivational states can even alter something as fundamental as visual perception. For example, Balcetis and Dunning (2006) showed participants an ambiguous figure (e.g., “I3”) and asked them whether they saw the letter B or the number 13. The researchers found that when participants had the goal of seeing a letter (e.g., because seeing a number required the participants to drink a gross-tasting juice), they in fact saw a B. It wasn’t that the participants were simply lying, either; their goal literally changed how they perceived the world! Goals can also exert a strong influence on how people evaluate the objects (and people) around them. When pursuing a goal such as quenching one’s thirst, people evaluate goal-relevant objects (e.g., a glass) more positively than objects that are not relevant to the goal (e.g., a pencil). Furthermore, those with the goal of quenching their thirst rate the glass more positively than people who are not pursuing the goal (Ferguson & Bargh, 2004). As discussed earlier, priming a goal can lead to behaviors like this (consistent with the goal), even though the person isn’t necessarily aware of why (i.e., the source of the motivation). For example, after research participants saw words related to achievement (in the context of solving a word search), they automatically performed better on a subsequent achievement test—without being at all aware that the achievement words had influenced them (Bargh & Chartrand, 1999; Srull & Wyer, 1979).

Self-Regulation in Goal Pursuit
Many of the behaviors we like to engage in are inconsistent with achieving our goals. For example, you may want to be physically fit, but you may also really like German chocolate cake. Self-regulation refers to the process through which individuals alter their perceptions, feelings, and actions in the pursuit of a goal. For example, filling up on fruits at a dessert party is one way someone might alter his or her actions to help with goal attainment. In the following section, we review the main theories and findings on self-regulation.

From Deliberation to Implementation
Self-regulation involves two basic stages, each with its own distinct mindset. First, a person must decide which of many potential goals to pursue at a given point in time (deliberative phase). While in the deliberative phase, a person often has a mindset that fosters an effective assessment of goals. That is, one tends to be open-minded and realistic about available goals to pursue. However, such scrutiny of one’s choices sometimes hinders action. For example, in the deliberative phase about how to spend time, someone might consider improving health, academic performance, or developing a hobby. At the same time, though, this deliberation involves considering realistic obstacles, such as one’s busy schedule, which may discourage the person from believing the goals can likely be achieved (and thus keep the person from working toward any of them). However, after deciding which goal to follow, the second stage involves planning specific actions related to the goal (implemental phase). In the implemental phase, a person tends to have a mindset conducive to the effective implementation of a goal through immediate action—i.e., with the planning done, we’re ready to jump right into attaining our goal.
Unfortunately, though, this mindset often leads to closed-mindedness and unrealistically positive expectations about the chosen goal (Gollwitzer, Heckhausen, & Steller, 1990; Kruglanski et al., 2000; Thaler & Shefrin, 1981). For example, in order to follow a health goal, a person might register for a gym membership and start exercising. In doing so, s/he assumes this is all that’s needed to achieve the goal (closed-mindedness), and that after a few weeks it should be accomplished (unrealistic expectations).

Regulation of Ought- and Ideal-Goals
In addition to two phases in goal pursuit, research also distinguishes between two distinct self-regulatory orientations (or perceptions of effectiveness) in pursuing a goal: prevention and promotion. A prevention focus emphasizes safety, responsibility, and security needs, and views goals as “oughts.” That is, for those who are prevention-oriented, a goal is viewed as something they should be doing, and they tend to focus on avoiding potential problems (e.g., exercising to avoid health threats). This self-regulatory focus leads to a vigilant strategy aimed at avoiding losses (the presence of negatives) and approaching non-losses (the absence of negatives). On the other hand, a promotion focus views goals as “ideals,” and emphasizes hopes, accomplishments, and advancement needs. Here, people view their goals as something they want to do that will bring them added pleasure (e.g., exercising because being healthy allows them to do more activities). This type of orientation leads to the adoption of an eager strategy concerned with approaching gains (the presence of positives) and avoiding non-gains (the absence of positives). To compare these two strategies, consider the goal of saving money. Prevention-focused people will save money because they believe it’s what they should be doing (an ought), and because they’re concerned about not having any money (avoiding a harm). Promotion-focused people, on the other hand, will save money because they want to have extra funds (a desire) so they can do new and fun activities (attaining an advancement). Although these two strategies result in very similar behaviors, emphasizing potential losses will motivate individuals with a prevention focus, whereas emphasizing potential gains will motivate individuals with a promotion focus. And these orientations—responding better to either a prevention or promotion focus—differ across individuals (chronic regulatory focus) and situations (momentary regulatory focus; Higgins, 1997).

A Cybernetic Process of Self-Regulation
Self-regulation depends on feelings that arise from comparing actual progress to expected progress. During goal pursuit, individuals calculate the discrepancy between their current state (i.e., all goal-related actions completed so far) and their desired end state (i.e., what they view as “achieving the goal”). After determining this difference, the person then acts to close that gap (Miller, Galanter, & Pribram, 1960; Powers, 1973). In this cybernetic process of self-regulation (an internal system directing how a person should control behavior), a higher-than-expected rate of closing the discrepancy creates a signal in the form of positive feelings. For example, if you’re nearly finished with a class project (i.e., a low discrepancy between your progress and what it will take to completely finish), you feel good about yourself.
However, these positive feelings tend to make individuals “coast,” or reduce their efforts on the focal goal, and shift their focus to other goals (e.g., you’re almost done with your project for one class, so you start working on a paper for another). By contrast, a lower-than-expected rate of closing the gap elicits negative feelings, which leads to greater effort investment on the focal goal (Carver & Scheier, 1998). If it is the day before a project is due and you’ve hardly started it, you will likely feel anxious and stop all other activities to make progress on your project. (A minimal computational sketch of this feedback loop appears just before the “Self-Control as an Innate Ability” section below.)

Highlighting One Goal or Balancing Between Goals
When we’ve completed steps toward achieving our goal, looking back on the behaviors or actions that helped us make such progress can have implications for future behaviors and actions (see The Dynamics of Self-Regulation framework; Fishbach et al., 2009). Remember, commitment results from the perceived value and attainability of a goal, whereas progress describes the perception of a reduced discrepancy between the current state and desired end state (i.e., the cybernetic process). After achieving a goal, when people interpret their previous actions as a sign of commitment to it, they tend to highlight the pursuit of that goal, prioritizing it and putting more effort toward it. However, when people interpret their previous actions as a sign of progress, they tend to balance between the goal and other goals, putting less effort into the focal goal. For example, if buying a product on sale reinforces your commitment to the goal of saving money, you will continue to behave financially responsibly. However, if you perceive the same action (buying the sale item) as evidence of progress toward the goal of saving money, you might feel like you can “take a break” from your goal, justifying splurging on a subsequent purchase. Several factors can influence the meanings people assign to previous goal actions. For example, the more confident a person is about a commitment to a goal, the more likely s/he is to infer progress rather than commitment from his/her actions (Koo & Fishbach, 2008).

Conflicting Goals and Self-Control
In the pursuit of our ordinary and extraordinary goals (e.g., staying physically or financially healthy, landing on Mars), we inevitably come across other goals (e.g., eating delicious food, exploring Earth) that might get in the way of our lofty ambitions. In such situations, we must exercise self-control to stay on course. Self-control is the capacity to control impulses, emotions, desires, and actions in order to resist a temptation (e.g., going on a shopping spree) and protect a valued goal (e.g., staying financially sound). As such, self-control is a process of self-regulation in contexts involving a clear trade-off between long-term interests (e.g., health, financial, or Martian) and some form of immediate gratification (Fishbach & Converse, 2010; Rachlin, 2000; Read, Loewenstein, & Rabin, 1999; Thaler & Shefrin, 1981). For example, whereas reading each page of a textbook requires self-regulation, doing so while resisting the tempting sounds of friends socializing in the next room requires self-control. And although you may tend to believe self-control is just a personal characteristic that varies across individuals, it is like a muscle, in that it becomes drained by being used but is also strengthened in the process.
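For readers who find it easier to grasp a feedback loop as an explicit procedure, here is a minimal sketch in Python of the cybernetic process described above. It is only an illustration of the idea; the specific numbers, update rules, and function name are hypothetical assumptions, not part of the formal models of Miller, Galanter, and Pribram (1960) or Carver and Scheier (1998).

    # A toy sketch of the cybernetic self-regulation loop: compare the rate of
    # progress to the expected rate, generate a feeling, and adjust effort.
    # All quantities here are illustrative assumptions.
    def self_regulate(goal=1.0, expected_rate=0.05, effort=0.05, max_steps=40):
        progress = 0.0
        for step in range(max_steps):
            if goal - progress <= 0:            # discrepancy closed: goal achieved
                print(f"step {step}: goal reached")
                return
            progress += effort                  # acting on the goal closes the gap
            feeling = effort - expected_rate    # faster than expected -> positive feeling
            if feeling > 0:
                effort *= 0.8                   # positive feelings -> "coasting" (less effort)
            else:
                effort *= 1.25                  # negative feelings -> greater effort investment
            print(f"step {step}: progress {progress:.2f}, "
                  f"feeling {'positive' if feeling > 0 else 'negative'}")

    self_regulate()

Running the sketch shows effort oscillating around the expected rate: when progress outpaces expectations the simulated person coasts, and when it lags, effort ramps back up, which is the same push and pull the prose above describes.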
Self-Control as an Innate Ability
Mischel, Shoda, and Rodriguez (1989) identified enduring individual differences in self-control and found that the persistent capacity to postpone immediate gratification for the sake of future interests leads to greater cognitive and social competence over the course of a lifetime. In a famous series of lab experiments (first conducted by Mischel & Baker, 1975), preschoolers 3–5 years old were asked to choose between getting a smaller treat immediately (e.g., a single marshmallow) or waiting as long as 15 minutes to get a better one (e.g., two marshmallows). Some children were better able to exercise self-control than others, resisting the temptation to take the available treat and waiting for the better one. Following up with these preschoolers ten years later, the researchers found that the children who were able to wait longer in the experiment for the second marshmallow (vs. those who more quickly ate the single marshmallow) performed better academically and socially, and had better psychological coping skills as adolescents.

Self-Control as a Limited Resource
Beyond personal characteristics, the ability to exercise self-control can fluctuate from one context to the next. In particular, previous exertion of self-control (e.g., choosing not to eat a donut) drains individuals of the limited physiological and psychological resources required to continue the pursuit of a goal (e.g., later in the day, again resisting a sugary treat). Ego-depletion refers to this exhaustion of resources from resisting a temptation. That is, just like bicycling for two hours would exhaust someone before a basketball game, exerting self-control reduces individuals’ capacity to exert more self-control in a subsequent task—whether that task is in the same domain (e.g., resisting a donut and then continuing to eat healthy) or a different one (e.g., resisting a donut and then continuing to be financially responsible; Baumeister, Bratslavsky, Muraven, & Tice, 1998; Vohs & Heatherton, 2000). For example, in a study by Baumeister et al. (1998), research participants who forced themselves to eat radishes instead of tempting chocolates were subsequently less persistent (i.e., gave up sooner) at attempting an unsolvable puzzle task compared to the participants who had not exerted self-control to resist the chocolates.

A Prerequisite to Self-Control: Identification
Although factors such as resources and personal characteristics contribute to the successful exercise of self-control, identifying the self-control conflict inherent to a particular situation is an important—and often overlooked—prerequisite. For example, if you have a long-term goal of getting better sleep but don’t perceive that staying up late on a Friday night is inconsistent with this goal, you won’t have a self-control conflict. The successful pursuit of a goal in the face of temptation requires that individuals first identify that they are having impulses that need to be controlled. However, individuals often fail to identify self-control conflicts because many everyday temptations seem to have very minimal negative consequences: one bowl of ice cream is unlikely to destroy a person’s health, but what about 200 bowls of ice cream over the course of a few months? People are more likely to identify a self-control conflict, and exercise self-control, when they think of a choice as part of a broader pattern of repeated behavior rather than as an isolated choice.
For example, rather than seeing one bowl of ice cream as an isolated behavioral decision, the person should try to recognize that this “one bowl of ice cream” is actually part of a nightly routine. Indeed, when considering broader decision patterns, consistent temptations become more problematic for long-term interests (Rachlin, 2000; Read, Loewenstein, & Kalyanaraman, 1999). Moreover, conflict identification is more likely if people see their current choices as similar to their future choices.

Self-Control Processes: Counteracting Temptation
The protection of a valued goal involves several cognitive and behavioral strategies ultimately aimed at “counteracting” the pull of temptations and pushing oneself toward goal-related alternatives (Fishbach & Trope, 2007). One such cognitive process involves decreasing the value of temptations and increasing the value of goal-consistent objects or actions. For example, health-conscious individuals might tell themselves a sugary treat is less appealing than a piece of fruit in order to direct their choice toward the latter. Other behavioral strategies include a precommitment to pursue goals and forgo temptation (e.g., leaving one’s credit card at home before going to the mall), establishing rewards for goals and penalties for temptations, or physically approaching goals and distancing oneself from temptations (e.g., pushing away a dessert plate). These self-control processes can benefit individuals’ long-term interests, either consciously or without conscious awareness. Thus, at times, individuals automatically activate goal-related thoughts in response to temptation, and inhibit temptation-related thoughts in the presence of goal cues (Fishbach, Friedman, & Kruglanski, 2003).

Conclusion
People often make New Year’s resolutions with the idea that attaining one’s goals is simple: “I just have to choose to eat healthier, right?” However, after going through this module and learning a social-cognitive approach to the main theories and findings on goals and motivation, we see that even the most basic decisions take place within a much larger and more complex mental framework. From the principles of goal priming and how goals influence perceptions, feelings, and actions, to the factors of self-regulation and self-control, we have learned the phases, orientations, and fluctuations involved in the course of everyday goal pursuit. Looking back on prior goal failures, it may seem impossible to achieve some of our desires. But, through understanding our own mental representation of our goals (i.e., the values and expectancies behind them), we can cognitively modify our behavior to achieve our dreams. And who knows?—maybe you will be the first person to step on Mars.

Discussion Questions
1. What is the difference between a goal and motivation?
2. What is the difference between self-regulation and self-control?
3. How do positive and negative feelings inform goal pursuit in a cybernetic self-regulation process?
4. Describe the characteristics of the deliberative mindset that allows individuals to decide between different goals. How might these characteristics hinder the implemental phase of self-regulation?
5. You just read a module on “Goals and Motivation,” and you believe it is a sign of commitment to the goal of learning about social psychology. Define commitment in this context. How would interpreting your efforts as a sign of commitment influence your motivation to read more about social psychology?
By contrast, how would interpreting your efforts as a sign of progress influence your motivation to read more?
6. Mel and Alex are friends. Mel has a prevention-focus self-regulatory orientation, whereas Alex has a promotion focus. They are both training for a marathon and are looking for motivational posters to hang in their respective apartments. While shopping, they find a poster with the following Confucius quote: “The will to win, the desire to succeed, the urge to reach your full potential ... . These are the keys that will unlock the door to personal excellence.” Who is this poster more likely to help stay motivated for the marathon (Mel or Alex)? Why? Find or write a quote that might help the other friend.
7. Give an example in which an individual fails to exercise self-control. What are some factors that can cause such a self-control failure?

Vocabulary
Balancing between goals: Shifting between a focal goal and other goals or temptations by putting less effort into the focal goal—usually with the intention of coming back to the focal goal at a later point in time.
Commitment: The sense that a goal is both valuable and attainable.
Conscious goal activation: When a person is fully aware of contextual influences and resulting goal-directed behavior.
Deliberative phase: The first of the two basic stages of self-regulation, in which individuals decide which of many potential goals to pursue at a given point in time.
Ego-depletion: The exhaustion of physiological and/or psychological resources following the completion of effortful self-control tasks, which subsequently leads to a reduction in the capacity to exert more self-control.
Extrinsic motivation: Motivation stemming from the benefits associated with achieving a goal, such as obtaining a monetary reward.
Goal: The cognitive representation of a desired state (outcome).
Goal priming: The activation of a goal following exposure to cues in the immediate environment related to the goal or its corresponding means (e.g., images, words, sounds).
Highlighting a goal: Prioritizing a focal goal over other goals or temptations by putting more effort into the focal goal.
Implemental phase: The second of the two basic stages of self-regulation, in which individuals plan specific actions related to their selected goal.
Intrinsic motivation: Motivation stemming from the benefits associated with the process of pursuing a goal, such as having a fulfilling experience.
Means: Activities or objects that contribute to goal attainment.
Motivation: The psychological driving force that enables action in the course of goal pursuit.
Nonconscious goal activation: When activation occurs outside a person’s awareness, such that the person is unaware of the reasons behind her goal-directed thoughts and behaviors.
Prevention focus: One of two self-regulatory orientations, emphasizing safety, responsibility, and security needs, and viewing goals as “oughts.” This self-regulatory focus seeks to avoid losses (the presence of negatives) and approach non-losses (the absence of negatives).
Progress: The perception of reducing the discrepancy between one’s current state and one’s desired state in goal pursuit.
Promotion focus: One of two self-regulatory orientations, emphasizing hopes, accomplishments, and advancement needs, and viewing goals as “ideals.” This self-regulatory focus seeks to approach gains (the presence of positives) and avoid non-gains (the absence of positives).
Self-control: The capacity to control impulses, emotions, desires, and actions in order to resist a temptation and adhere to a valued goal.
Self-regulation: The processes through which individuals alter their emotions, desires, and actions in the course of pursuing a goal.
By Paul Silvia
University of North Carolina at Greensboro
When people think of emotions they usually think of the obvious ones, such as happiness, fear, anger, and sadness. This module looks at the knowledge emotions, a family of emotional states that foster learning, exploring, and reflecting. Surprise, interest, confusion, and awe come from events that are unexpected, complicated, and mentally challenging, and they motivate learning in its broadest sense, be it learning over the course of seconds (finding the source of a loud crash, as in surprise) or over a lifetime (engaging with hobbies, pastimes, and intellectual pursuits, as in interest). The module reviews research on each emotion, with an emphasis on causes, consequences, and individual differences. As a group, the knowledge emotions motivate people to engage with new and puzzling things rather than avoid them. Over time, engaging with new things, ideas, and people broadens someone’s experiences and cultivates expertise. The knowledge emotions thus don’t gear up the body like fear, anger, and happiness do, but they do gear up the mind—a critical task for humans, who must learn essentially everything that they know.

learning objectives
• Identify the four knowledge emotions.
• Describe the patterns of appraisals that bring about these emotions.
• Discuss how the knowledge emotions promote learning.
• Apply the knowledge emotions to enhancing learning and education, and to one’s own life.

Introduction
What comes to mind when you think of emotions? It’s probably the elation of happiness, the despair of sadness, or the freak-out fright of fear. Emotions such as happiness, anger, sadness, and fear are important emotions, but human emotional experience is vast—people are capable of experiencing a wide range of feelings. This module considers the knowledge emotions, a profoundly important family of emotions associated with learning, exploring, and reflecting. The family of knowledge emotions has four main members: surprise, interest, confusion, and awe. These are considered knowledge emotions for two reasons. First, the events that bring them about involve knowledge: These emotions happen when something violates what people expected or believed. Second, these emotions are fundamental to learning: Over time, they build useful knowledge about the world.

Some Background About Emotions
Before jumping into the knowledge emotions, we should consider what emotions do and when emotions happen. According to functionalist theories of emotion, emotions help people manage important tasks (Keltner & Gross, 1999; Parrott, 2001). Fear, for example, mobilizes the body to fight or flee; happiness rewards achieving goals and builds attachments to other people. What do knowledge emotions do? As we’ll see in detail later, they motivate learning, viewed in its broadest sense, during times when the environment is puzzling or erratic. Sometimes the learning is on a short time scale. Surprise, for example, makes people stop what they are doing, pay attention to the surprising thing, and evaluate whether it is dangerous (Simons, 1996). After a couple of seconds, people have learned what they needed to know and get back to what they were doing. But sometimes the learning takes place over the lifespan. Interest, for example, motivates people to learn about things over days, weeks, and years. Finding something interesting motivates “for its own sake” learning and is probably the major engine of human competence (Izard, 1977; Silvia, 2006).
What causes emotions to happen in the first place? Although it usually feels like something in the world—a good hug, a snake slithering across the driveway, a hot-air balloon shaped like a question mark—causes an emotion directly, emotion theories contend that emotions come from how we think about what is happening in the world, not what is literally happening. After all, if things in the world directly caused emotions, everyone would always have the same emotion in response to something. Appraisal theories (Ellsworth & Scherer, 2003; Lazarus, 1991) propose that each emotion is caused by a group of appraisals, which are evaluations and judgments of what events in the world mean for our goals and well-being: Is this relevant to me? Does it further or hinder my goals? Can I deal with it or do something about it? Did someone do it on purpose? Different emotions come from different answers to these appraisal questions. With that as a background, in the following sections we’ll consider the nature, causes, and effects of each knowledge emotion. Afterward, we will consider some of their practical implications.

Surprise
Nothing gets people’s attention like something startling. Surprise, a simple emotion, hijacks a person’s mind and body and focuses them on a source of possible danger (Simons, 1996). When there’s a loud, unexpected crash, people stop, freeze, and orient to the source of the noise. Their minds are wiped clean—after something startling, people usually can’t remember what they had been talking about—and attention is focused on what just happened. By focusing all the body’s resources on the unexpected event, surprise helps people respond quickly (Simons, 1996). Surprise has only one appraisal: A single “expectedness check” (Scherer, 2001) seems to be involved. When an event is “high contrast”—it sticks out against the background of what people expected to perceive or experience—people become surprised (Berlyne, 1960; Teigen & Keren, 2003). Figure 4.7.1 shows this pattern visually: Surprise is high when unexpectedness is high. Emotions are momentary states, but people vary in their propensity to experience them. Just as some people experience happiness, anger, and fear more readily, some people are much more easily surprised than others. At one end, some people are hard to surprise; at the other end, people are startled by minor noises, flashes, and changes. Like other individual differences in emotion, extreme levels of surprise propensity can be dysfunctional. When people have extreme surprise responses to mundane things—known as hyperstartling (Simons, 1996) and hyperekplexia (Bakker, van Dijk, van den Maagdenberg, & Tijssen, 2006)—everyday tasks such as driving or swimming become dangerous.

Interest
People are curious creatures. Interest—an emotion that motivates exploration and learning (Silvia, 2012)—is one of the most commonly experienced emotions in everyday life (Izard, 1977). Humans must learn virtually everything they know, from how to cook pasta to how the brain works, and interest is an engine of this massive undertaking of learning across the lifespan. The function of interest is to engage people with things that are new, odd, or unfamiliar. Unfamiliar things can be scary or unsettling, which makes people avoid them. But if people always avoided new things they would learn and experience nothing.
It’s hard to imagine what life would be like if people weren’t curious to try new things: We would never feel like watching a different movie, trying a different restaurant, or meeting new people. Interest is thus a counterweight to anxiety—by making unfamiliar things appealing, it motivates people to experience and think about new things. As a result, interest is an intrinsically motivated form of learning. When curious, people want to learn something for its own sake, to know it for the simple pleasure of knowing it, not for an external reward, such as learning to get money, impress a peer, or receive the approval of a teacher or parent. Figure 4.7.1 shows the two appraisals that create interest. Like surprise, interest involves appraisals of novelty: Things that are unexpected, unfamiliar, novel, and complex can evoke interest (Berlyne, 1960; Hidi & Renninger, 2006; Silvia, 2008). But unlike surprise, interest involves an additional appraisal of coping potential. In appraisal theories, coping potential refers to people’s evaluations of their ability to manage what is happening (Lazarus, 1991). When coping potential is high, people feel capable of handling the challenge at hand. For interest, this challenge is mental: Something odd and unexpected happened, and people can either feel able to understand it or not. When people encounter something that they appraise as both novel (high novelty and complexity) and comprehensible (high coping potential), they will find it interesting (Silvia, 2005). The primary effect of interest is exploration: People will explore and think about the new and intriguing thing, be it an interesting object, person, or idea. By stimulating people to reflect and learn, interest builds knowledge and, in the long run, deep expertise. Consider, for example, the sometimes scary amount of knowledge people have about their hobbies. People who find cars, video games, high fashion, and soccer intrinsically interesting know an amazing amount about their passions—it would be hard to learn so much so quickly if people found it boring. A huge amount of research shows that interest promotes learning that is faster, deeper, better, and more enjoyable (Hidi, 2001; Silvia, 2006). When people find material more interesting, they engage with it more deeply and learn it more thoroughly. This is true for simple kinds of learning—sentences and paragraphs are easier to remember when they are interesting (Sadoski, 2001; Schiefele, 1999)—and for broader academic success—people get better grades and feel more intellectually engaged in classes they find interesting (Krapp, 1999, 2002; Schiefele, Krapp, & Winteler, 1992). Individual differences in interest are captured by trait curiosity (Kashdan, 2004; Kashdan et al., 2009). People low in curiosity prefer activities and ideas that are tried and true and familiar; people high in curiosity, in contrast, prefer things that are offbeat and new. Trait curiosity is a facet of openness to experience, a broader trait that is one of the five major factors of personality (McCrae, 1996; McCrae & Sutin, 2009). Not surprisingly, being high in openness to experience involves exploring new things and finding quirky things appealing. Research shows that curious, open people ask more questions in class, own and read more books, eat a wider range of food, and—not surprisingly, given their lifetime of engaging with new things—are a bit higher in intelligence (DeYoung, 2011; Kashdan & Silvia, 2009; Peters, 1978; Raine, Reynolds, Venables, & Mednick, 2002).
Confusion
Sometimes the world is weird. Interest is a wonderful resource when people encounter new and unfamiliar things, but those things aren’t always comprehensible. Confusion happens when people are learning something that is both unfamiliar and hard to understand. In the appraisal space shown in Figure 4.7.1, confusion comes from appraising an event as high in novelty, complexity, and unfamiliarity as well as appraising it as hard to comprehend (Silvia, 2010, 2013). Confusion, like interest, promotes thinking and learning. This isn’t an obvious idea—our intuitions would suggest that confusion makes people frustrated and thus more likely to tune out and quit. But as odd as it sounds, making students confused can help them learn better. In an approach to learning known as impasse-driven learning (VanLehn, Siler, Murray, Yamauchi, & Baggett, 2003), making students confused motivates them to think through a problem instead of passively sitting and listening to what a teacher is saying. By actively thinking through the problem, students are learning actively and thus learning the material more deeply. In one experiment, for example, students learned about scientific research methods from two virtual reality tutors (D’Mello, Lehman, Pekrun, & Graesser, in press). The tutors sometimes contradicted each other, however, which made the students confused. Measures of simple learning (memory for basic concepts) and deep learning (being able to transfer an idea to a new area) showed that students who had to work through confusion learned more deeply—they were better at correctly applying what they learned to new problems. In a study of facial expressions, Rozin and Cohen (2003) demonstrated what all college teachers know: It’s easy to spot confusion on someone’s face. When people are confused, they usually furrow, scrunch, or lower their eyebrows and purse or bite their lips (Craig, D’Mello, Witherspoon, & Graesser, 2008; Durso, Geldbach, & Corballis, 2012). In a clever application of these findings, researchers have developed artificial intelligence (AI) teaching and tutoring systems that can detect expressions of confusion (Craig et al., 2008). When the AI system detects confusion, it can ask questions and give hints that help the student work through the problem. Not much is known about individual differences related to confusion, but differences in how much people know are important. In one research study, people viewed short film clips from movies submitted to a local film festival (Silvia & Berg, 2011). Some of the people were film experts, such as professors and graduate students in media studies and film theory; others were novices, such as the rest of us who simply watch movies for fun. The experts found the clips much more interesting and much less confusing than the novices did. A similar study discovered that experts in the arts found experimental visual art more interesting and less confusing than novices did (Silvia, 2013).

Awe
Awe—a state of fascination and wonder—is the deepest and probably least common of the knowledge emotions. When people are asked to describe profound experiences, such as the experience of beauty or spiritual transformation, awe is usually mentioned (Cohen, Gruber, & Keltner, 2010). People are likely to report experiencing awe when they are alone, engaged with art and music, or in nature (Shiota, Keltner, & Mossman, 2007). Awe comes from two appraisals (Keltner & Haidt, 2003). First, people appraise something as vast, as beyond the normal scope of their experience.
Thus, like the other knowledge emotions, awe involves appraising an event as inconsistent with one’s existing knowledge, but the degree of inconsistency is huge, usually when people have never encountered something like the event before (Bonner & Friedman, 2011). Second, people engage in accommodation, which is changing their beliefs—about themselves, other people, or the world in general—to make room for the new experience. When something is massive (in size, scope, sound, creativity, or anything else) and when people change their beliefs to accommodate it, they’ll experience awe. A mild, everyday form of awe is chills, sometimes known as shivers or thrills. Chills involve getting goosebumps on the skin, especially the scalp, neck, back, and arms, usually as a wave that starts at the head and moves downward. Chills are part of strong awe experiences, but people often experience them in response to everyday events, such as compelling music and movies (Maruskin, Thrash, & Elliot, 2012; Nusbaum & Silvia, 2011). Music that evokes chills, for example, tends to be loud, to have a wide frequency range (such as both low and high frequencies), and to include major dynamic shifts, such as a shift from quiet to loud or a shift from few to many instruments (Huron & Margulis, 2010). Like the other knowledge emotions, awe motivates people to engage with something outside the ordinary. Awe is thus a powerful educational tool. In science education, it is common to motivate learning by inspiring wonder. One example comes from a line of research on astronomy education, which seeks to educate the public about astronomy by using awe-inspiring images of deep space (Arcand, Watzke, Smith, & Smith, 2010). When people see beautiful and striking color images of supernovas, black holes, and planetary nebulas, they usually report feelings of awe and wonder. These feelings then motivate them to learn about what they are seeing and their scientific importance (Smith et al., 2011). Regarding individual differences, some people experience awe much more often than others. One study that developed a brief scale to measure awe—the items included statements such as “I often feel awe” and “I feel wonder almost every day”—found that people who often experience awe are much higher in openness to experience (a trait associated with openness to new things and a wide emotional range) and in extraversion (a trait associated with positive emotionality) (Shiota, Keltner, & John, 2006). Similar findings appear when people are asked how often they experience awe in response to the arts (Nusbaum & Silvia, in press). For example, people who say that they often “feel a sense of awe and wonder” when listening to music are much higher in openness to experience (Silvia & Nusbaum, 2011).

Implications of the Knowledge Emotions
Learning about the knowledge emotions expands our ideas about what emotions are and what they do. Emotions clearly play important roles in everyday challenges such as responding to threats and building relationships. But emotions also aid in other, more intellectual challenges for humans. Compared with other animals, we are born with little knowledge but have the potential for enormous intelligence. Emotions such as surprise, interest, confusion, and awe first signal that something out of the ordinary has happened that deserves our attention. They then motivate us to engage with the new things that strain our understanding of the world and how it works.
Emotions surely aid fighting and fleeing, but for most of the hours of most of our days, they mostly aid in learning, exploring, and reflecting.

Outside Resources
Video: A talk with Todd Kashdan, a well-known scholar in the field of curiosity and positive psychology, centered on curiosity
Video: More from Todd
Web: Aesthetics and Astronomy, a project that uses wonder and beauty to foster knowledge about the science of space http://astroart.cfa.harvard.edu/
Web: The Emotion Computing Group, an interdisciplinary team that studies how to measure confusion and harness it for deeper learning, among other intriguing things https://sites.google.com/site/memphi...computing/Home

Discussion Questions
1. Research shows that people learn more quickly and deeply when they are interested. Can you think of examples from your own life when you learned from interest versus from extrinsic rewards (e.g., good grades, approval from parents and peers)? Was learning more enjoyable or effective in one case?
2. How would you redesign a psychology lecture to harness the power of the knowledge emotions? How could you use interest, confusion, and awe to grab students’ attention and motivate them to reflect and learn?
3. Psychology, like all the sciences, is fueled by wonder. For psychology, the wonder is about human nature and behavior. What, to you, is the most wondrous, amazing, and awe-inspiring idea or finding from the science of psychology? Does reflecting on this amazing fact motivate you to want to know more about it?
4. Many people only want to know something if it is practical—if it helps them get a job, make friends, find a mate, or earn money. But emotions such as interest and awe, by motivating learning for its own sake, often engage people in things that seem frivolous, silly, or impractical. What does this say about learning? Is some knowledge necessarily more valuable than other kinds?

Vocabulary
Accommodation: Changing one’s beliefs about the world and how it works in light of new experience.
Appraisal structure: The set of appraisals that bring about an emotion.
Appraisal theories: Theories of emotion contending that emotions are caused by patterns of appraisals—evaluations and judgments that relate what is happening in the environment to people’s values, goals, and beliefs—such as whether an event furthers or hinders a goal and whether an event can be coped with.
Awe: An emotion associated with profound, moving experiences. Awe comes about when people encounter an event that is vast (far from normal experience) but that can be accommodated in existing knowledge.
Chills: A feeling of goosebumps, usually on the arms, scalp, and neck, that is often experienced during moments of awe.
Confusion: An emotion associated with conflicting and contrary information, such as when people appraise an event as unfamiliar and as hard to understand. Confusion motivates people to work through the perplexing information and thus fosters deeper learning.
Coping potential: People’s beliefs about their ability to handle challenges.
Facial expressions: Part of the expressive component of emotions, facial expressions of emotion communicate inner feelings to others.
Functionalist theories of emotion: Theories of emotion that emphasize the adaptive role of an emotion in handling common problems throughout evolutionary history.
Impasse-driven learning: An approach to instruction that motivates active learning by having learners work through perplexing barriers.
Interest: An emotion associated with curiosity and intrigue; interest motivates engaging with new things and learning more about them. It is one of the earliest emotions to develop and a resource for intrinsically motivated learning across the life span.
Intrinsically motivated learning: Learning that is “for its own sake”—such as learning motivated by curiosity and wonder—instead of learning to gain rewards or social approval.
Knowledge emotions: A family of emotions associated with learning, reflecting, and exploring. These emotions come about when unexpected and unfamiliar events happen in the environment. Broadly speaking, they motivate people to explore unfamiliar things, which builds knowledge and expertise over the long run.
Openness to experience: One of the five major factors of personality, this trait is associated with higher curiosity, creativity, emotional breadth, and open-mindedness. People high in openness to experience are more likely to experience interest and awe.
Surprise: An emotion rooted in expectancy violation that orients people toward the unexpected event.
Trait curiosity: Stable individual differences in how easily and how often people become curious.
By Jeanne Tsai
Stanford University
How do people’s cultural ideas and practices shape their emotions (and other types of feelings)? In this module, we will discuss findings from studies comparing North American (United States, Canada) and East Asian (Chinese, Japanese, Korean) contexts. These studies reveal both cultural similarities and differences in various aspects of emotional life. Throughout, we will highlight the scientific and practical importance of these findings and conclude with recommendations for future research.

Learning Objectives
• Review the history of cross-cultural studies of emotion
• Learn about recent empirical findings and theories of culture and emotion
• Understand why cultural differences in emotion matter
• Explore current and future directions in culture and emotion research

Take a moment and imagine you are traveling in a country you’ve never been to before. Everything—the sights, the smells, the sounds—seems strange. People are speaking a language you don’t understand and wearing clothes unlike yours. But they greet you with a smile and you sense that, despite the differences you observe, deep down inside these people have the same feelings as you. But is this true? Do people from opposite ends of the world really feel the same emotions? While most scholars agree that members of different cultures may vary in the foods they eat, the languages they speak, and the holidays they celebrate, there is disagreement about the extent to which culture shapes people’s emotions and feelings—including what people feel, what they express, and what they do during an emotional event. Understanding how culture shapes people’s emotional lives and what impact emotion has on psychological health and well-being in different cultures will not only advance the study of human behavior but will also benefit multicultural societies. Across a variety of settings—academic, business, medical—people worldwide are coming into more contact with people from foreign cultures. In order to communicate and function effectively in such situations, we must understand the ways cultural ideas and practices shape our emotions.

Historical Background
In the 1950s and 1960s, social scientists tended to fall into either one of two camps. The universalist camp claimed that, despite cultural differences in customs and traditions, at a fundamental level all humans feel similarly. These universalists believed that emotions evolved as a response to the environments of our primordial ancestors, so they are the same across all cultures. Indeed, people often describe their emotions as “automatic,” “natural,” “physiological,” and “instinctual,” supporting the view that emotions are hard-wired and universal. The social constructivist camp, however, claimed that despite a common evolutionary heritage, different groups of humans evolved to adapt to their distinctive environments. And because human environments vary so widely, people’s emotions are also culturally variable. For instance, Lutz (1988) argued that many Western views of emotion assume that emotions are “singular events situated within individuals.” However, people from Ifaluk (a small atoll in Micronesia) view emotions as “exchanges between individuals” (p. 212). Social constructivists contended that because cultural ideas and practices are all-encompassing, people are often unaware of how their feelings are shaped by their culture.
Therefore, emotions can feel automatic, natural, physiological, and instinctual, and yet still be primarily culturally shaped. In the 1970s, Paul Ekman conducted one of the first scientific studies to address the universalist–social constructivist debate. He and Wallace Friesen devised a system to measure people’s facial muscle activity, called the Facial Action Coding System (FACS; Ekman & Friesen, 1978). Using FACS, Ekman and Friesen analyzed people’s facial expressions and identified specific facial muscle configurations associated with specific emotions, such as happiness, anger, sadness, fear, and disgust. Ekman and Friesen then took photos of people posing with these different expressions (Figure 1). With the help of colleagues at different universities around the world, Ekman and Friesen showed these pictures to members of vastly different cultures, gave them a list of emotion words (translated into the relevant languages), and asked them to match the facial expressions in the photos with their corresponding emotion words on the list (Ekman & Friesen, 1971; Ekman et al., 1987). Across cultures, participants “recognized” the emotional facial expressions, matching each picture with its “correct” emotion word at levels greater than chance. This led Ekman and his colleagues to conclude that there are universally recognized emotional facial expressions. At the same time, though, they found considerable variability across cultures in recognition rates. For instance, whereas 95% of U.S. participants associated a smile with “happiness,” only 69% of Sumatran participants did. Similarly, 86% of U.S. participants associated wrinkling of the nose with “disgust,” but only 60% of Japanese did (Ekman et al., 1987). Ekman and colleagues interpreted this variation as demonstrating cultural differences in “display rules,” or rules about what emotions are appropriate to show in a given situation (Ekman, 1972). Indeed, since this initial work, Matsumoto and his colleagues have demonstrated widespread cultural differences in display rules (Safdar et al., 2009). One prominent example of such differences is biting one’s tongue. In India, this signals embarrassment; however, in the U.S. this expression has no such meaning (Haidt & Keltner, 1999). These findings suggest both cultural similarities and differences in the recognition of emotional facial expressions (although see Russell, 1994, for criticism of this work). Interestingly, since the mid-2000s, increasing research has demonstrated cultural differences not only in display rules, but also in the degree to which people focus on the face (versus other aspects of the social context; Masuda, Ellsworth, Mesquita, Leu, Tanida, & Van de Veerdonk, 2008), and on different features of the face (Yuki, Maddux, & Matsuda, 2007) when perceiving others’ emotions. For example, people from the United States tend to focus on the mouth when interpreting others’ emotions, whereas people from Japan tend to focus on the eyes. But how does culture shape other aspects of emotional life—such as how people emotionally respond to different situations, how they want to feel generally, and what makes them happy? Today, most scholars agree that emotions and other related states are multifaceted, and that cultural similarities and differences exist for each facet. Thus, rather than classifying emotions as either universal or socially-constructed, scholars are now attempting to identify the specific similarities and differences of emotional life across cultures.
These endeavors are yielding new insights into the effects of culture on emotion.
Current Research and Theory
Given the wide range of cultures and facets of emotion in the world, for the remainder of the module we will limit our scope to the two cultural contexts that have received the most empirical attention from social scientists: North America (United States, Canada) and East Asia (China, Japan, and Korea). Social scientists have focused on North American and East Asian contexts because they differ in obvious ways, including their geographical locations, histories, languages, and religions. Moreover, since the 1980s large-scale studies have revealed that North American and East Asian contexts differ in their overall values and attitudes, such as the prioritization of personal vs. group needs (individualism vs. collectivism; Hofstede, 2001). Whereas North American contexts encourage members to prioritize personal over group needs (to be “individualistic”), East Asian contexts encourage members to prioritize group over personal needs (to be “collectivistic”).
Cultural Models of Self in North American and East Asian Contexts
In a landmark paper, cultural psychologists Markus and Kitayama (1991) proposed that previously observed differences in individualism and collectivism translated into different models of the self—or one’s personal concept of who s/he is as a person. Specifically, the researchers argued that in North American contexts, the dominant model of the self is an independent one, in which being a person means being distinct from others and behaving accordingly across situations. In East Asian contexts, however, the dominant model of the self is an interdependent one, in which being a person means being fundamentally connected to others and being responsive to situational demands. For example, in a classic study (Cousins, 1989), American and Japanese students were administered the Twenty Statements Test, in which they were asked to complete the sentence stem, “I am ______,” twenty times. U.S. participants were more likely than Japanese participants to complete the stem with psychological attributes (e.g., friendly, cheerful); Japanese participants, on the other hand, were more likely to complete the stem with references to social roles and responsibilities (e.g., a daughter, a student). These different models of the self result in different principles for interacting with others. An independent model of self teaches people to express themselves and try to influence others (i.e., change their environments to be consistent with their own beliefs and desires). In contrast, an interdependent model of self teaches people to suppress their own beliefs and desires and adjust to others’ (i.e., fit in with their environment) (Heine, Lehman, Markus, & Kitayama, 1999; Morling, Kitayama, & Miyamoto, 2002; Weisz, Rothbaum, & Blackburn, 1984). Markus and Kitayama (1991) argue that these different models of self have significant implications for how people in Western and East Asian contexts feel.
Cultural Similarities and Differences in Emotion: Comparisons of North American and East Asian Contexts
A considerable body of empirical research suggests that these different models of self shape various aspects of emotional dynamics. Next we will discuss several ways culture shapes emotion, starting with emotional response.
People’s Physiological Responses to Emotional Events Are Similar Across Cultures, but Culture Influences People’s Facial Expressive Behavior
How does culture influence people’s responses to emotional events? Studies of emotional response tend to focus on three components: physiology (e.g., how fast one’s heart beats), subjective experience (e.g., feeling intensely happy or sad), and facial expressive behavior (e.g., smiling or frowning). Although only a few studies have simultaneously measured these different aspects of emotional response, those that do tend to observe more similarities than differences in physiological responses between cultures. That is, regardless of culture, people tend to respond similarly in terms of physiological (or bodily) responses. For instance, in one study, European American and Hmong (pronounced “muhng”) American participants were asked to relive various emotional episodes in their lives (e.g., when they lost something or someone they loved; when something good happened) (Tsai, Chentsova-Dutton, Freire-Bebeau, & Przymus, 2002). At the level of physiological arousal (e.g., heart rate), there were no differences in how the participants responded. However, their facial expressive behavior told a different story. When reliving events that elicited happiness, pride, and love, European Americans smiled more frequently and more intensely than did their Hmong counterparts—though all participants reported feeling happy, proud, and in love at similar levels of intensity. And similar patterns have emerged in studies comparing European Americans with Chinese Americans during different emotion-eliciting tasks (Tsai et al., 2002; Tsai, Levenson, & McCoy, 2006; Tsai, Levenson, & Carstensen, 2000). Thus, while the physiological aspects of emotional responses appear to be similar across cultures, their accompanying facial expressions are more culturally distinctive. Again, these differences in facial expressions during positive emotional events are consistent with findings from cross-cultural studies of display rules, and stem from the models of the self discussed above: In North American contexts that promote an independent self, individuals tend to express their emotions to influence others. Conversely, in East Asian contexts that promote an interdependent self, individuals tend to control and suppress their emotions to adjust to others.
People Suppress Their Emotions Across Cultures, but Culture Influences the Consequences of Suppression for Psychological Well-Being
If the cultural ideal in North American contexts is to express oneself, then suppressing emotions (not showing how one feels) should have negative consequences. This is the assumption underlying hydraulic models of emotion: the idea that emotional suppression and repression impair psychological functioning (Freud, 1910). Indeed, significant empirical research shows that suppressing emotions can have negative consequences for psychological well-being in North American contexts (Gross, 1998). However, Soto and colleagues (2011) find that the relationship between suppression and psychological well-being varies by culture. For European Americans, emotional suppression is indeed associated with higher levels of depression and lower levels of life satisfaction. (Remember, in these individualistic societies, the expression of emotion is a fundamental aspect of positive interactions with others.)
For Hong Kong Chinese, on the other hand, emotional suppression is needed to adjust to others (in this interdependent community, suppressing emotions is how one appropriately interacts with others); it is simply a part of normal life and is therefore not associated with depression or lower life satisfaction. These findings are consistent with research suggesting that factors related to clinical depression vary between European Americans and Asian Americans. European Americans diagnosed with depression show dampened or muted emotional responses (Bylsma, Morris, & Rottenberg, 2008). For instance, when shown sad or amusing film clips, depressed European Americans respond less intensely than their nondepressed counterparts. However, other studies have shown that depressed East Asian Americans (i.e., people of East Asian descent who live in the United States) demonstrate similar or increased emotional responses compared with their nondepressed counterparts (Chentsova-Dutton et al., 2007; Chentsova-Dutton, Tsai, & Gotlib, 2010). In other words, depressed European Americans show reduced emotional expressions, but depressed East Asian Americans do not—and, in fact, may express more emotion. Thus, muted responses (which resemble suppression) are associated with depression in European American contexts, but not in East Asian contexts.
People Feel Good During Positive Events, but Culture Influences Whether People Feel Bad During Positive Events
What about people’s subjective emotional experiences? Do people across cultures feel the same emotions in similar situations, despite how they show them? Recent studies indicate that culture affects whether people are likely to feel bad during good events. In North American contexts, people rarely feel bad after good experiences. However, a number of research teams have observed that, compared with people in North American contexts, people in East Asian contexts are more likely to feel bad and good (“mixed” emotions) during positive events (e.g., feeling worried after winning an important competition; Miyamoto, Uchida, & Ellsworth, 2010). This may be because, compared with North Americans, East Asians engage in more dialectical thinking (i.e., they are more tolerant of contradiction and change). Therefore, they accept that positive and negative feelings can occur simultaneously. In addition, whereas North Americans value maximizing positive states and minimizing negative ones, East Asians value a greater balance between the two (Sims, Tsai, Wang, Fung, & Zhang, 2013). To better understand this, think about how you would feel after getting the top score on a test that’s graded on a curve. In North American contexts, such success is considered an individual achievement and worth celebrating. But what about the other students who will now receive a lower grade because you “raised the curve” with your good grade? In East Asian contexts, not only would students be more thoughtful of the overall group’s success, but they would also be more comfortable acknowledging both the positive (their own success on the test) and the negative (their classmates’ lower grades). Again, these differences can be linked to cultural differences in models of the self. An interdependent model encourages people to think about how their accomplishments might affect others (e.g., make others feel bad or jealous). Thus, awareness of negative emotions during positive events may discourage people from expressing their excitement and standing out (as in East Asian contexts).
Such emotional suppression helps individuals feel in sync with those around them. An independent model, however, encourages people to express themselves and stand out, so when something good happens, they have no reason to feel bad. So far, we have reviewed research that demonstrates cultural similarities in physiological responses and in the ability to suppress emotions. We have also discussed the cultural differences in facial expressive behavior and the likelihood of experiencing negative feelings during positive events. Next, we will explore how culture shapes people’s ideal or desired states.
People Want to Feel Good Across Cultures, but Culture Influences the Specific Good States People Want to Feel (Their “Ideal Affect”)
Everyone welcomes positive feelings, but cultures vary in the specific types of positive affective states (see Figure 4.8.2) their people favor. An affective state can be described along two dimensions: valence, which ranges from pleasant to unpleasant (e.g., happy to sad), and arousal, which ranges from high to low (e.g., energetic to passive). Although people of all cultures experience this range of affective states, they can vary in their preferences for each. For example, people in North American contexts lean toward feeling excited, enthusiastic, energetic, and other “high arousal positive” states. People in East Asian contexts, however, generally prefer feeling calm, peaceful, and other “low arousal positive” states (Tsai, Knutson, & Fung, 2006). These cultural differences have been observed in young children between the ages of 3 and 5, college students, and adults between the ages of 60 and 80 (Tsai, Louie, Chen, & Uchida, 2007; Tsai, Sims, Thomas, & Fung, 2013), and are reflected in widely distributed cultural products. For example, wherever you look in American contexts—women’s magazines, children’s storybooks, company websites, and even Facebook profiles (Figure 3)—you will find more open, excited smiles and fewer closed, calm smiles compared to Chinese contexts (Chim, Moon, Ang, & Tsai, 2013; Tsai, 2007; Tsai, Louie, et al., 2007). Again, these differences in ideal affect (i.e., the emotional states that people believe are best) correspond to the independent and interdependent models described earlier: Independent selves want to influence others, which requires action (doing something), and action involves high arousal states. Conversely, interdependent selves want to adjust to others, which requires suspending action and attending to others—both of which involve low arousal states. Thus, the more that individuals and cultures want to influence others (as in North American contexts), the more they value excitement, enthusiasm, and other high arousal positive states. And, the more that individuals and cultures want to adjust to others (as in East Asian contexts), the more they value calm, peacefulness, and other low arousal positive states (Tsai, Miao, Seppala, Fung, & Yeung, 2007). Because one’s ideal affect functions as a guide for behavior and a way of evaluating one’s emotional states, cultural differences in ideal affect can result in different emotional lives. For example, several studies have shown that people engage in activities (e.g., recreational pastimes, musical styles) consistent with their cultural ideal affect.
That is, people from North American contexts (who value high arousal affective states) tend to prefer thrilling activities like skydiving, whereas people from East Asian contexts (who value low arousal affective states) prefer tranquil activities like lounging on the beach (Tsai, 2007). In addition, people base their conceptions of well-being and happiness on their ideal affect. Therefore, European Americans are more likely to define well-being in terms of excitement, whereas Hong Kong Chinese are more likely to define well-being in terms of calmness. Indeed, among European Americans, the less people experience high arousal positive states, the more depressed they are. But, among Hong Kong Chinese—you guessed it!—the less people experience low arousal positive states, the more depressed they are (Tsai, Knutson, & Fung, 2006).
People Base Their Happiness on Similar Factors Across Cultures, but Culture Influences the Weight Placed on Each Factor
What factors make people happy or satisfied with their lives? We have seen that discrepancies between how people actually feel (actual affect) and how they want to feel (ideal affect)—as well as people’s suppression of their ideal affect—are associated with depression. But happiness is based on other factors as well. For instance, Kwan, Bond, & Singelis (1997) found that while European American and Hong Kong Chinese participants both based life satisfaction on how they felt about themselves (self-esteem) and their relationships (relationship harmony), their weighting of each factor was different. That is, European Americans based their life satisfaction primarily on self-esteem, whereas Hong Kong Chinese based their life satisfaction equally on self-esteem and relationship harmony. Consistent with these findings, Oishi and colleagues (1999) found in a study of 39 nations that self-esteem was more strongly correlated with life satisfaction in more individualistic nations compared to more collectivistic ones. Researchers also found that in individualistic cultures people rated life satisfaction based on their emotions more than on social definitions (or norms). In other words, rather than using social norms as a guideline for what constitutes an ideal life, people in individualistic cultures tend to evaluate their satisfaction according to how they feel emotionally. In collectivistic cultures, however, people’s life satisfaction tends to be based on a balance between their emotions and norms (Suh, Diener, Oishi, & Triandis, 1998). Similarly, other researchers have recently found that people in North American contexts are more likely to feel negative when they have poor mental and physical health, while people in Japanese contexts do not show this association (Curhan et al., 2013). Again, these findings are consistent with cultural differences in models of the self. In North American, independent contexts, feelings about the self matter more, whereas in East Asian, interdependent contexts, feelings about others matter as much as or even more than feelings about the self.
Why Do Cultural Similarities And Differences In Emotion Matter?
Understanding cultural similarities and differences in emotion is obviously critical to understanding emotions in general, and the flexibility of emotional processes more specifically. Given the central role that emotions play in our interactions, understanding cultural similarities and differences is especially critical to preventing potentially harmful miscommunications.
Although misunderstandings are unintentional, they can result in negative consequences—as we’ve seen historically for ethnic minorities in many cultures. For instance, across a variety of North American settings, Asian Americans are often characterized as too “quiet” and “reserved,” and these low arousal states are often misinterpreted as expressions of disengagement or boredom—rather than expressions of the ideal of calmness. Consequently, Asian Americans may be perceived as “cold,” “stoic,” and “unfriendly,” fostering stereotypes of Asian Americans as “perpetual foreigners” (Cheryan & Monin, 2005). Indeed, this may be one reason Asian Americans are often overlooked for top leadership positions (Hyun, 2005). In addition to averting cultural miscommunications, recognizing cultural similarities and differences in emotion may provide insights into other paths to psychological health and well-being. For instance, findings from a recent series of studies suggest that calm states are easier to elicit than excited states, so one way of increasing happiness in cultures that value excitement may be to increase the value placed on calm states (Chim, Tsai, Hogan, & Fung, 2013).
Current Directions In Culture And Emotion Research
What About Other Cultures?
In this brief review, we’ve focused primarily on comparisons between North American and East Asian contexts because most of the research in cultural psychology has focused on these comparisons. However, there are obviously a multitude of other cultural contexts in which emotional differences likely exist. For example, although Western contexts are similar in many ways, specific Western contexts (e.g., American vs. German) also differ from each other in substantive ways related to emotion (Koopmann-Holm & Matsumoto, 2011). Thus, future research examining other cultural contexts is needed. Such studies may also reveal additional, uninvestigated dimensions or models that have broad implications for emotion. In addition, because more and more people are being raised within multiple cultural contexts (e.g., for many Chinese Americans, a Chinese immigrant culture at home and mainstream American culture at school), more research is needed to examine how people negotiate and integrate these different cultures in their emotional lives (for examples, see De Leersnyder, Mesquita, & Kim, 2011; Perunovic, Heller, & Rafaeli, 2007).
How Are Cultural Differences in Beliefs About Emotion Transmitted?
According to Kroeber and Kluckhohn (1952), cultural ideas are reflected in and reinforced by practices, institutions, and products. As an example of this phenomenon—and illustrating the point regarding cultural differences in ideal affect—bestselling children’s storybooks in the United States often contain more exciting and less calm content (smiles and activities) than do bestselling children’s storybooks in Taiwan (Tsai, Louie, et al., 2007). To investigate this further, the researchers randomly assigned European American, Asian American, and Taiwanese Chinese preschoolers to be read either stories with exciting content or stories with calm content. Across all of these cultures, the kids who were read stories with exciting content were afterward more likely to value excited states, whereas those who were read stories with calm content were more likely to value calm states. As a test, after hearing the stories, the kids were shown a list of toys and asked to select their favorites.
Those who heard the exciting stories wanted to play with more arousing toys (like a drum that beats loud and fast), whereas those who heard the calm stories wanted to play with less arousing toys (like a drum that beats quiet and slow). These findings suggest that regardless of ethnic background, direct exposure to storybook content alters children’s ideal affect. More studies are needed to assess whether a similar process occurs when children and adults are chronically exposed to various types of cultural products. As well, future studies should examine other ways cultural ideas regarding emotion are transmitted (e.g., via interactions with parents and teachers).
Could These Cultural Differences Be Due to Temperament?
An alternative explanation for cultural differences in emotion is that they are due to temperamental factors—that is, biological predispositions to respond in certain ways. (Might European Americans just be more emotional than East Asians because of genetics?) Indeed, most models of emotion acknowledge that both culture and temperament play roles in emotional life, yet few if any models indicate how. Nevertheless, most researchers believe that despite genetic differences in founder populations (i.e., the migrants from a population who leave to create their own societies), culture has a greater impact on emotions. For instance, one theoretical framework, Affect Valuation Theory, proposes that cultural factors shape how people want to feel (“ideal affect”) more than how they actually feel (“actual affect”); conversely, temperamental factors influence how people actually feel more than how they want to feel (Tsai, 2007) (see Figure 4.8.4). To test this hypothesis, European American, Asian American, and Hong Kong Chinese participants completed measures of temperament (i.e., stable dispositions, such as neuroticism or extraversion), actual affect (i.e., how people actually feel in given situations), ideal affect (i.e., how people would like to feel in given situations), and influential cultural values (i.e., personal beliefs transmitted through culture). When researchers analyzed the participants’ responses, they found that differences in ideal affect between cultures were associated more with cultural factors than with temperamental factors (Tsai, Knutson, & Fung, 2006). However, when researchers examined actual affect, they found this to be reversed: actual affect was more strongly associated with temperamental factors than with cultural factors. Not all of the studies described above have ruled out a temperamental explanation, though, and more studies are needed to rule out the possibility that the observed group differences are due to genetic factors instead of, or in addition to, cultural factors. Moreover, future studies should examine whether the links between temperament and emotions might vary across cultures, and how cultural and temperamental factors work together to shape emotion.
Summary
Based on studies comparing North American and East Asian contexts, there is clear evidence for cultural similarities and differences in emotions, and most of the differences can be traced to different cultural models of the self. Consider your own concept of self for a moment. What kinds of pastimes do you prefer—activities that make you excited, or ones that make you calm? What kinds of feelings do you strive for? What is your ideal affect?
Because emotions seem and feel so instinctual to us, it’s hard to imagine that the way we experience them and the ones we desire are anything other than biologically programmed into us. However, as current research has shown (and as future research will continue to explore), there are myriad ways in which culture, both consciously and unconsciously, shapes people’s emotional lives.
Outside Resources
Audio Interview: The Really Big Questions “What Are Emotions?” Interview with Paul Ekman, Martha Nussbaum, Dominique Moisi, and William Reddy http://www.trbq.org/index.php?option...d=16&Itemid=43
Book: Ed Diener and Robert Biswas-Diener: Happiness: Unlocking the Mysteries of Psychological Wealth
Book: Eric Weiner: The Geography of Bliss
Book: Eva Hoffman: Lost in Translation: Life in a New Language
Book: Hazel Markus: Clash: 8 Cultural Conflicts That Make Us Who We Are
Video: Social Psychology Alive psychology.stanford.edu/~tsai...psychalive.wmv
Video: The Really Big Questions “Culture and Emotion,” Dr. Jeanne Tsai
Video: Tsai’s description of cultural differences in emotion
Web: Acculturation and Culture Collaborative at Leuven http://ppw.kuleuven.be/home/english/...p/acc-research
Web: Culture and Cognition at the University of Michigan culturecognition.isr.umich.edu/
Web: Experts In Emotion Series, Dr. June Gruber, Department of Psychology, Yale University www.yalepeplab.com/teaching/p...pertseries.php
Web: Georgetown Culture and Emotion Lab http://georgetownculturelab.wordpress.com/
Web: Paul Ekman’s website http://www.paulekman.com
Web: Penn State Culture, Health, and Emotion Lab http://www.personal.psu.edu/users/m/...m280/sotosite/
Web: Stanford Culture and Emotion Lab www-psych.stanford.edu/~tsailab/index.htm
Web: Wesleyan Culture and Emotion Lab http://culture-and-emotion.research.wesleyan.edu/
Discussion Questions
1. What cultural ideas and practices related to emotion were you exposed to when you were a child? What cultural ideas and practices related to emotion are you currently exposed to as an adult? How do you think they shape your emotional experiences and expressions?
2. How can researchers avoid inserting their own beliefs about emotion in their research?
3. Most of the studies described above are based on self-report measures. What are some of the advantages and disadvantages of using self-report measures to understand the cultural shaping of emotion? How might the use of other behavioral methods (e.g., neuroimaging) address some of these limitations?
4. Do the empirical findings described above change your beliefs about emotion? How?
5. Imagine you are a manager of a large American company that is beginning to do work in China and Japan. How will you apply your current knowledge about culture and emotion to prevent misunderstandings between you and your Chinese and Japanese employees?
Vocabulary
Affect
Feelings that can be described in terms of two dimensions, the dimensions of arousal and valence (Figure 2). For example, high arousal positive states refer to excitement, elation, and enthusiasm. Low arousal positive states refer to calm, peacefulness, and relaxation. Whereas “actual affect” refers to the states that people actually feel, “ideal affect” refers to the states that people ideally want to feel.
Culture
Shared, socially transmitted ideas (e.g., values, beliefs, attitudes) that are reflected in and reinforced by institutions, products, and rituals.
Emotions
Changes in subjective experience, physiological responding, and behavior in response to a meaningful event.
Emotions tend to occur on the order of seconds (in contrast to moods, which may last for days).
Feelings
A general term used to describe a wide range of states that include emotions, moods, and traits and that typically involve changes in subjective experience, physiological responding, and behavior in response to a meaningful event. Emotions typically occur on the order of seconds, whereas moods may last for days, and traits are tendencies to respond a certain way across various situations.
Independent self
A model or view of the self as distinct from others and as stable across different situations. The goal of the independent self is to express and assert the self, and to influence others. This model of self is prevalent in many individualistic, Western contexts (e.g., the United States, Australia, Western Europe).
Interdependent self
A model or view of the self as connected to others and as changing in response to different situations. The goal of the interdependent self is to suppress personal preferences and desires, and to adjust to others. This model of self is prevalent in many collectivistic, East Asian contexts (e.g., China, Japan, Korea).
Social constructivism
Social constructivism proposes that knowledge is first created and learned within a social context and is then adopted by individuals.
Universalism
Universalism proposes that there are single objective standards, independent of culture, in basic domains such as learning, reasoning, and emotion that are a part of all human experience.
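To make the two-dimensional affect space in the vocabulary above concrete, here is a minimal toy sketch in Python. It is not any published lab’s scoring procedure, and the state coordinates are invented purely for illustration; the point is only that "ideal affect" and "actual affect" can be treated as positions in the same valence-by-arousal space, so that the kind of ideal–actual mismatch linked to depression in the studies above can be expressed as a distance.

```python
# Toy illustration of the valence-by-arousal affect space described above.
# The coordinates are invented for illustration only; real studies of ideal
# affect use self-report rating scales, not fixed points.
from math import dist  # Euclidean distance between two points (Python 3.8+)

# Each state is (valence, arousal), both scaled here from -1 to +1.
states = {
    "excited": (0.8, 0.8),    # high arousal positive
    "calm":    (0.8, -0.6),   # low arousal positive
    "sad":     (-0.7, -0.4),  # low arousal negative
}

ideal = states["excited"]   # e.g., a context that prizes high arousal positive states
actual = states["calm"]     # how a person actually tends to feel

# The larger the ideal-actual discrepancy, the worse the fit between how a
# person feels and how they want to feel.
print(f"ideal-actual discrepancy: {dist(ideal, actual):.2f}")
```

On this view, different cultural contexts place the ideal point in different regions of the same space, which is why the same actual feelings can fit one context well and another poorly.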
• 5.1: Conditioning and Learning
Basic principles of learning are always operating and always influencing human behavior. This module discusses the two most fundamental forms of learning -- classical (Pavlovian) and instrumental (operant) conditioning. This module describes some of the most important things you need to know about classical and instrumental conditioning, and it illustrates some of the many ways they help us understand normal and disordered behavior in humans.
• 5.2: Factors Influencing Learning
Learning is a complex process that defies easy definition and description. This module reviews some of the philosophical issues involved with defining learning and describes in some detail the characteristics of learners and of encoding activities that seem to affect how well people can acquire new memories, knowledge, or skills. At the end, we consider a few basic principles that guide whether a particular attempt at learning will be successful or not.
• 5.3: Memory (Encoding, Storage, Retrieval)
“Memory” is a single term that reflects a number of different abilities: holding information briefly while working with it (working memory), remembering episodes of one’s life (episodic memory), and our general knowledge of facts of the world (semantic memory), among other types. Remembering episodes involves three processes: encoding information (learning it, by perceiving it and relating it to past knowledge), storing it (maintaining it over time), and then retrieving it (accessing the information).
• 5.4: Forgetting and Amnesia
This module explores the causes of everyday forgetting and considers pathological forgetting in the context of amnesia. Forgetting is viewed as an adaptive process that allows us to be efficient in terms of the information we retain.
Chapter 5: Learning and Memory
By Mark E. Bouton University of Vermont
Basic principles of learning are always operating and always influencing human behavior. This module discusses the two most fundamental forms of learning -- classical (Pavlovian) and instrumental (operant) conditioning. Through them, we respectively learn to associate 1) stimuli in the environment, or 2) our own behaviors, with significant events, such as rewards and punishments. The two types of learning have been intensively studied because they have powerful effects on behavior, and because they provide methods that allow scientists to analyze learning processes rigorously. This module describes some of the most important things you need to know about classical and instrumental conditioning, and it illustrates some of the many ways they help us understand normal and disordered behavior in humans. The module concludes by introducing the concept of observational learning, which is a form of learning that is largely distinct from classical and operant conditioning.
learning objectives
• Distinguish between classical (Pavlovian) conditioning and instrumental (operant) conditioning.
• Understand some important facts about each that tell us how they work.
• Understand how they work separately and together to influence human behavior in the world outside the laboratory.
• Students will be able to list the four aspects of observational learning according to Social Learning Theory.
Two Types of Conditioning
Although Ivan Pavlov won a Nobel Prize for studying digestion, he is much more famous for something else: working with a dog, a bell, and a bowl of saliva.
Many people are familiar with the classic study of “Pavlov’s dog,” but rarely do they understand the significance of its discovery. In fact, Pavlov’s work helps explain why some people get anxious just looking at a crowded bus, why the sound of a morning alarm is so hated, and even why we swear off certain foods we’ve only tried once. Classical (or Pavlovian) conditioning is one of the fundamental ways we learn about the world around us. But it is far more than just a theory of learning; it is also arguably a theory of identity. For, once you understand classical conditioning, you’ll recognize that your favorite music, clothes, even political candidate, might all be a result of the same process that makes a dog drool at the sound of a bell. Around the turn of the 20th century, scientists who were interested in understanding the behavior of animals and humans began to appreciate the importance of two very basic forms of learning. One, which was first studied by the Russian physiologist Ivan Pavlov, is known as classical, or Pavlovian, conditioning. In his famous experiment, Pavlov rang a bell and then gave a dog some food. After repeating this pairing multiple times, the dog eventually treated the bell as a signal for food, and began salivating in anticipation of the treat. This kind of result has been reproduced in the lab using a wide range of signals (e.g., tones, light, tastes, settings) paired with many different events besides food (e.g., drugs, shocks, illness; see below). We now believe that this same learning process is engaged, for example, when humans associate a drug they’ve taken with the environment in which they’ve taken it; when they associate a stimulus (e.g., a symbol for vacation, like a big beach towel) with an emotional event (like a burst of happiness); and when they associate the flavor of a food with getting food poisoning. Although classical conditioning may seem “old” or “too simple” a theory, it is still widely studied today for at least two reasons: First, it is a straightforward test of associative learning that can be used to study other, more complex behaviors. Second, because classical conditioning is always occurring in our lives, its effects on behavior have important implications for understanding normal and disordered behavior in humans. In a general way, classical conditioning occurs whenever neutral stimuli are associated with psychologically significant events. With food poisoning, for example, although having fish for dinner may not normally be something to be concerned about (i.e., a “neutral stimulus”), if it causes you to get sick, you will now likely associate that neutral stimulus (the fish) with the psychologically significant event of getting sick. These paired events are often described using terms that can be applied to any situation. The dog food in Pavlov’s experiment is called the unconditioned stimulus (US) because it elicits an unconditioned response (UR). That is, without any kind of “training” or “teaching,” the stimulus produces a natural or instinctual reaction. In Pavlov’s case, the food (US) automatically makes the dog drool (UR). Other examples of unconditioned stimuli include loud noises (US) that startle us (UR), or a hot shower (US) that produces pleasure (UR). On the other hand, a conditioned stimulus produces a conditioned response. A conditioned stimulus (CS) is a signal that has no importance to the organism until it is paired with something that does have importance.
For example, in Pavlov’s experiment, the bell is the conditioned stimulus. Before the dog has learned to associate the bell (CS) with the presence of food (US), hearing the bell means nothing to the dog. However, after multiple pairings of the bell with the presentation of food, the dog starts to drool at the sound of the bell. This drooling in response to the bell is the conditioned response (CR). Although it can be confusing, the conditioned response is almost always the same as the unconditioned response. However, it is called the conditioned response because it is conditional on (or, depends on) being paired with the conditioned stimulus (e.g., the bell). To help make this clearer, consider becoming really hungry when you see the logo for a fast food restaurant. There’s a good chance you’ll start salivating. Although it is the actual eating of the food (US) that normally produces the salivation (UR), simply seeing the restaurant’s logo (CS) can trigger the same reaction (CR). Another example you are probably very familiar with involves your alarm clock. If you’re like most people, waking up early usually makes you unhappy. In this case, waking up early (US) produces a natural sensation of grumpiness (UR). Rather than waking up early on your own, though, you likely have an alarm clock that plays a tone to wake you. Before setting your alarm to that particular tone, let’s imagine you had neutral feelings about it (i.e., the tone had no prior meaning for you). However, now that you use it to wake up every morning, you psychologically “pair” that tone (CS) with your feelings of grumpiness in the morning (UR). After enough pairings, this tone (CS) will automatically produce your natural response of grumpiness (CR). Thus, this linkage between the unconditioned stimulus (US; waking up early) and the conditioned stimulus (CS; the tone) is so strong that the unconditioned response (UR; being grumpy) will become a conditioned response (CR; e.g., hearing the tone at any point in the day—whether waking up or walking down the street—will make you grumpy). Modern studies of classical conditioning use a very wide range of CSs and USs and measure a wide range of conditioned responses. Although classical conditioning is a powerful explanation for how we learn many different things, there is a second form of conditioning that also helps explain how we learn. First studied by Edward Thorndike, and later extended by B. F. Skinner, this second type of conditioning is known as instrumental or operant conditioning. Operant conditioning occurs when a behavior (as opposed to a stimulus) is associated with the occurrence of a significant event. In the best-known example, a rat in a laboratory learns to press a lever in a cage (called a “Skinner box”) to receive food. Because the rat has no “natural” association between pressing a lever and getting food, the rat has to learn this connection. At first, the rat may simply explore its cage, climbing on top of things, burrowing under things, in search of food. Eventually, while poking around its cage, the rat accidentally presses the lever, and a food pellet drops in. This voluntary behavior is called an operant behavior, because it “operates” on the environment (i.e., it is an action that the animal itself makes). Now, once the rat recognizes that it receives a piece of food every time it presses the lever, the behavior of lever-pressing becomes reinforced.
That is, the food pellets serve as reinforcers because they strengthen the rat’s desire to engage with the environment in this particular manner. In a parallel example, imagine that you’re playing a street-racing video game. As you drive through one city course multiple times, you try a number of different streets to get to the finish line. On one of these trials, you discover a shortcut that dramatically improves your overall time. You have learned this new path through operant conditioning. That is, by engaging with your environment (operant responses), you performed a sequence of behaviors that was positively reinforced (i.e., you found the shortest distance to the finish line). And now that you’ve learned how to drive this course, you will perform that same sequence of driving behaviors (just as the rat presses on the lever) to receive your reward of a faster finish. Operant conditioning research studies how the effects of a behavior influence the probability that it will occur again. For example, the effects of the rat’s lever-pressing behavior (i.e., receiving a food pellet) influence the probability that it will keep pressing the lever. For, according to Thorndike’s law of effect, when a behavior has a positive (satisfying) effect or consequence, it is likely to be repeated in the future. However, when a behavior has a negative (painful/annoying) consequence, it is less likely to be repeated in the future. Effects that increase behaviors are referred to as reinforcers, and effects that decrease them are referred to as punishers. An everyday example that helps to illustrate operant conditioning is striving for a good grade in class—which could be considered a reward for students (i.e., it produces a positive emotional response). In order to get that reward (similar to the rat learning to press the lever), the student needs to modify his/her behavior. For example, the student may learn that speaking up in class gets him/her participation points (a reinforcer), so the student speaks up repeatedly. However, the student also learns that s/he shouldn’t speak up about just anything; talking about topics unrelated to school actually costs points. Therefore, through the student’s freely chosen behaviors, s/he learns which behaviors are reinforced and which are punished. An important distinction of operant conditioning is that it provides a method for studying how consequences influence “voluntary” behavior. The rat’s decision to press the lever is voluntary, in the sense that the rat is free to make and repeat that response whenever it wants. Classical conditioning, on the other hand, is just the opposite—depending instead on “involuntary” behavior (e.g., the dog doesn’t choose to drool; it just does). So, whereas the rat must actively participate and perform some kind of behavior to attain its reward, the dog in Pavlov’s experiment is a passive participant. One of the lessons of operant conditioning research, then, is that voluntary behavior is strongly influenced by its consequences. The illustration on the left summarizes the basic elements of classical and instrumental conditioning. The two types of learning differ in many ways. However, modern thinkers often emphasize the fact that they differ—as illustrated here—in what is learned. In classical conditioning, the animal behaves as if it has learned to associate a stimulus with a significant event. In operant conditioning, the animal behaves as if it has learned to associate a behavior with a significant event.
Another difference is that the response in the classical situation (e.g., salivation) is elicited by a stimulus that comes before it, whereas the response in the operant case is not elicited by any particular stimulus. Instead, operant responses are said to be emitted. The word “emitted” further conveys the idea that operant behaviors are essentially voluntary in nature. Understanding classical and operant conditioning provides psychologists with many tools for understanding learning and behavior in the world outside the lab. This is in part because the two types of learning occur continuously throughout our lives. It has been said that “much like the laws of gravity, the laws of learning are always in effect” (Spreat & Spreat, 1982).
Useful Things to Know about Classical Conditioning
Classical Conditioning Has Many Effects on Behavior
A classical CS (e.g., the bell) does not merely elicit a simple, unitary reflex. Pavlov emphasized salivation because that was the only response he measured. But his bell almost certainly elicited a whole system of responses that functioned to get the organism ready for the upcoming US (food) (see Timberlake, 2001). For example, in addition to salivation, CSs (such as the bell) that signal that food is near also elicit the secretion of gastric acid, pancreatic enzymes, and insulin (which gets blood glucose into cells). All of these responses prepare the body for digestion. Additionally, the CS elicits approach behavior and a state of excitement. And presenting a CS for food can also cause animals whose stomachs are full to eat more food if it is available. In fact, food CSs are so prevalent in modern society that humans are likewise inclined to eat or feel hungry in response to cues associated with food, such as the sound of a bag of potato chips opening, the sight of a well-known logo (e.g., Coca-Cola), or the feel of the couch in front of the television. Classical conditioning is also involved in other aspects of eating. Flavors associated with certain nutrients (such as sugar or fat) can become preferred without arousing any awareness of the pairing. For example, protein is a US that your body automatically craves more of once you start to consume it (UR): since proteins are highly concentrated in meat, the flavor of meat becomes a CS (or cue that proteins are on the way), which perpetuates the cycle of craving for yet more meat (this automatic bodily reaction is now a CR). In a similar way, flavors associated with stomach pain or illness become avoided and disliked. For example, a person who gets sick after drinking too much tequila may acquire a profound dislike of the taste and odor of tequila—a phenomenon called taste aversion conditioning. The fact that flavors are often associated with so many consequences of eating is important for animals (including rats and humans) that are frequently exposed to new foods. And it is clinically relevant. For example, drugs used in chemotherapy often make cancer patients sick. As a consequence, patients often acquire aversions to foods eaten just before treatment, or even aversions to such things as the waiting room of the chemotherapy clinic itself (see Bernstein, 1991; Scalera & Bavieri, 2009). Classical conditioning occurs with a variety of significant events. If an experimenter sounds a tone just before applying a mild shock to a rat’s feet, the tone will elicit fear or anxiety after one or two pairings.
Similar fear conditioning plays a role in creating many anxiety disorders in humans, such as phobias and panic disorders, where people associate cues (such as closed spaces, or a shopping mall) with panic or other emotional trauma (see Mineka & Zinbarg, 2006). Here, rather than a physical response (like drooling), the CS triggers an emotion. Another interesting effect of classical conditioning can occur when we ingest drugs. That is, when a drug is taken, it can be associated with the cues that are present at the same time (e.g., rooms, odors, drug paraphernalia). In this regard, if someone associates a particular smell with the sensation induced by the drug, whenever that person smells the same odor afterward, it may cue responses (physical and/or emotional) related to taking the drug itself. But drug cues have an even more interesting property: They elicit responses that often “compensate” for the upcoming effect of the drug (see Siegel, 1989). For example, morphine itself suppresses pain; however, if someone is used to taking morphine, a cue that signals the “drug is coming soon” can actually make the person more sensitive to pain. Because the person knows a pain suppressant will soon be administered, the body becomes more sensitive, anticipating that “the drug will soon take care of it.” Remarkably, such conditioned compensatory responses in turn decrease the impact of the drug on the body—because the body has become more sensitive to pain. This conditioned compensatory response has many implications. For instance, a drug user will be most “tolerant” to the drug in the presence of cues that have been associated with it (because such cues elicit compensatory responses). As a result, overdose is usually not due to an increase in dosage, but to taking the drug in a new place without the familiar cues—which would have otherwise allowed the user to tolerate the drug (see Siegel, Hinson, Krank, & McCully, 1982). Conditioned compensatory responses (which include heightened pain sensitivity and decreased body temperature, among others) might also cause discomfort, thus motivating the drug user to continue using the drug to reduce them. This is one of several ways classical conditioning might be a factor in drug addiction and dependence. A final effect of classical cues is that they motivate ongoing operant behavior (see Balleine, 2005). For example, if a rat has learned via operant conditioning that pressing a lever will give it a drug, in the presence of cues that signal the “drug is coming soon” (like the sound of the lever squeaking), the rat will work harder to press the lever than if those cues weren’t present (i.e., there is no squeaking lever sound). Similarly, in the presence of food-associated cues (e.g., smells), a rat (or an overeater) will work harder for food. And finally, even in the presence of negative cues (like something that signals fear), a rat, a human, or any other organism will work harder to avoid those situations that might lead to trauma. Classical CSs thus have many effects that can contribute to significant behavioral phenomena.
The Learning Process
As mentioned earlier, classical conditioning provides a method for studying basic learning processes. Somewhat counterintuitively, though, studies show that pairing a CS and a US together is not sufficient for an association to be learned between them. Consider an effect called blocking (see Kamin, 1969). In this effect, an animal first learns to associate one CS—call it stimulus A—with a US.
In the illustration above, the sound of a bell (stimulus A) is paired with the presentation of food. Once this association is learned, in a second phase, a second stimulus—stimulus B—is presented alongside stimulus A, such that the two stimuli are paired with the US together. In the illustration, a light is added and turned on at the same time the bell is rung. However, because the animal has already learned the association between stimulus A (the bell) and the food, the animal doesn’t learn an association between stimulus B (the light) and the food. That is, the conditioned response only occurs during the presentation of stimulus A, because the earlier conditioning of A “blocks” the conditioning of B when B is added to A. The reason? Stimulus A already predicts the US, so the US is not surprising when it occurs with Stimulus B. Learning depends on such a surprise, or a discrepancy between what occurs on a conditioning trial and what is already predicted by cues that are present on the trial. To learn something through classical conditioning, there must first be some prediction error, or the chance that a conditioned stimulus won’t lead to the expected outcome. With the example of the bell and the light, because the bell always leads to the reward of food, there’s no “prediction error” that the addition of the light helps to correct. However, if the researcher suddenly requires that the bell and the light both occur in order to receive the food, presenting the bell alone will now produce a prediction error (food is expected but does not arrive), and the animal will have to learn the new contingency. Blocking and other related effects indicate that the learning process tends to take in the most valid predictors of significant events and ignore the less useful ones. This is common in the real world. For example, imagine that your supermarket puts big star-shaped stickers on products that are on sale. Quickly, you learn that items with the big star-shaped stickers are cheaper. However, imagine you go into a similar supermarket that not only uses these stickers, but also uses bright orange price tags to denote a discount. Because of blocking (i.e., you already know that the star-shaped stickers indicate a discount), you don’t have to learn the color system, too. The star-shaped stickers tell you everything you need to know (i.e., there’s no prediction error for the discount), and thus the color system is irrelevant. Classical conditioning is strongest if the CS and US are intense or salient. It is also best if the CS and US are relatively new and the organism hasn’t been frequently exposed to them before. And it is especially strong if the organism’s biology has prepared it to associate a particular CS and US. For example, rats and humans are naturally inclined to associate an illness with a flavor, rather than with a light or tone. Because foods are most commonly experienced by taste, if there is a particular food that makes us ill, associating the flavor (rather than the appearance—which may be similar to other foods) with the illness will better ensure we avoid that food in the future, and thus avoid getting sick. This sorting tendency, which is set up by evolution, is called preparedness. There are many factors that affect the strength of classical conditioning, and these have been the subject of much research and theory (see Rescorla & Wagner, 1972; Pearce & Bouton, 2001).
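To make the prediction-error idea concrete, here is a minimal sketch in Python of the Rescorla-Wagner learning rule (the model cited above). On each trial, the associative strength of every cue that is present changes in proportion to the prediction error: the difference between what happens and what all present cues already predict. The learning rate and trial counts below are illustrative assumptions, not values from any particular experiment.

```python
# A minimal simulation of the Rescorla-Wagner model (Rescorla & Wagner, 1972).
# Learning is driven by prediction error: the difference between the outcome
# that occurs (lam) and the outcome already predicted by all cues present on
# the trial (the summed associative strengths of those cues).

def rw_trial(V, cues, lam, alpha=0.3):
    """Update associative strengths V (a dict) for the cues present on one trial.

    lam: maximum conditioning the US supports (1.0 = US present, 0.0 = absent).
    alpha: learning rate (an illustrative value; in the full model each cue
    has its own salience parameter).
    """
    prediction = sum(V[c] for c in cues)  # what the animal already expects
    error = lam - prediction              # prediction error ("surprise")
    for c in cues:
        V[c] += alpha * error             # every present cue shares the update
    return V

V = {"bell": 0.0, "light": 0.0}

# Phase 1: the bell alone is paired with food (US present, lam = 1.0).
for _ in range(30):
    rw_trial(V, ["bell"], lam=1.0)

# Phase 2: the bell + light compound is paired with food.
for _ in range(30):
    rw_trial(V, ["bell", "light"], lam=1.0)

print(f"bell:  {V['bell']:.2f}")   # close to 1.0: strong conditioning
print(f"light: {V['light']:.2f}")  # close to 0.0: blocked, bell already predicts food

# Extinction (next section): the bell now occurs without food (lam = 0.0),
# so the prediction error is negative and the bell's strength declines.
for _ in range(30):
    rw_trial(V, ["bell"], lam=0.0)
print(f"bell after extinction: {V['bell']:.2f}")
```

Because the bell already predicts the food by the start of Phase 2, the prediction error is near zero and the light gains almost no strength: blocking. The same rule also captures the extinction effect discussed next, since presenting the bell without food produces a negative prediction error. Note, however, that the model treats extinction as unlearning, whereas the spontaneous recovery and renewal effects described below suggest that the original learning survives.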
Behavioral neuroscientists have also used classical conditioning to investigate many of the basic brain processes that are involved in learning (see Fanselow & Poulos, 2005; Thompson & Steinmetz, 2009).
Erasing Classical Learning
After conditioning, the response to the CS can be eliminated if the CS is presented repeatedly without the US. This effect is called extinction, and the response is said to become “extinguished.” For example, if Pavlov kept ringing the bell but never gave the dog any food afterward, eventually the dog’s CR (drooling) would no longer happen when it heard the CS (the bell), because the bell would no longer be a predictor of food. Extinction is important for many reasons. For one thing, it is the basis for many therapies that clinical psychologists use to eliminate maladaptive and unwanted behaviors. Take the example of a person who has a debilitating fear of spiders: one approach might include systematic exposure to spiders. Whereas initially the person has a CR (e.g., extreme fear) every time s/he sees the CS (e.g., the spider), after repeatedly being shown pictures of spiders in neutral conditions, pretty soon the CS no longer predicts the CR (i.e., the person doesn’t have the fear reaction when seeing spiders, having learned that spiders no longer serve as a “cue” for that fear). Here, repeated exposure to spiders without an aversive consequence causes extinction. Psychologists must accept one important fact about extinction, however: it does not necessarily destroy the original learning (see Bouton, 2004). For example, imagine you strongly associate the smell of chalkboards with the agony of middle school detention. Now imagine that, after years of encountering chalkboards, the smell of them no longer recalls the agony of detention (an example of extinction). However, one day, after entering a new building for the first time, you suddenly catch a whiff of a chalkboard and WHAM!, the agony of detention returns. This is called spontaneous recovery: following a lapse in exposure to the CS after extinction has occurred, sometimes re-exposure to the CS (e.g., the smell of chalkboards) can evoke the CR again (e.g., the agony of detention). Another related phenomenon is the renewal effect: After extinction, if the CS is tested in a new context, such as a different room or location, the CR can also return. In the chalkboard example, the action of entering a new building—where you don’t expect to smell chalkboards—suddenly renews the sensations associated with detention. These effects have been interpreted to suggest that extinction inhibits rather than erases the learned behavior, and this inhibition is mainly expressed in the context in which it is learned (see “context” in the Key Vocabulary section below). This does not mean that extinction is a bad treatment for behavior disorders. Instead, clinicians can increase its effectiveness by using basic research on learning to help defeat these relapse effects (see Craske et al., 2008). For example, conducting extinction therapies in contexts where patients might be most vulnerable to relapsing (e.g., at work) might be a good strategy for enhancing the therapy’s success.
Useful Things to Know about Instrumental Conditioning
Most of the things that affect the strength of classical conditioning also affect the strength of instrumental learning—whereby we learn to associate our actions with their outcomes. As noted earlier, the “bigger” the reinforcer (or punisher), the stronger the learning.
And, if an instrumental behavior is no longer reinforced, it will also be extinguished. Most of the rules of associative learning that apply to classical conditioning also apply to instrumental learning, but other facts about instrumental learning are also worth knowing.

Instrumental Responses Come Under Stimulus Control

As you know, the classic operant response in the laboratory is lever-pressing in rats, reinforced by food. However, things can be arranged so that lever-pressing only produces pellets when a particular stimulus is present. For example, lever-pressing can be reinforced only when a light in the Skinner box is turned on; when the light is off, no food is released from lever-pressing. The rat soon learns to discriminate between the light-on and light-off conditions, and presses the lever only in the presence of the light (responses in light-off are extinguished). In everyday life, think about waiting in the turn lane at a traffic light. Although you know that green means go, only when you have the green arrow do you turn. In this regard, the operant behavior is now said to be under stimulus control. And, as is the case with the traffic light, in the real world, stimulus control is probably the rule.

The stimulus controlling the operant response is called a discriminative stimulus. It can be associated directly with the response or the reinforcer (see below). However, it usually does not elicit the response the way a classical CS does. Instead, it is said to "set the occasion for" the operant response. For example, a canvas put in front of an artist does not elicit painting behavior or compel her to paint. It allows, or sets the occasion for, painting to occur.

Stimulus-control techniques are widely used in the laboratory to study perception and other psychological processes in animals. For example, the rat would not be able to respond appropriately to light-on and light-off conditions if it could not see the light. Following this logic, experiments using stimulus-control methods have tested how well animals see colors, hear ultrasounds, and detect magnetic fields. That is, researchers arrange for a response the animals already know (such as pressing the lever) to be reinforced only in the presence of the new stimulus. In this way, the researchers can test whether the animals can learn to press the lever only when an ultrasound is played, for example. These methods can also be used to study "higher" cognitive processes. For example, pigeons can learn to peck at different buttons in a Skinner box when pictures of flowers, cars, chairs, or people are shown on a miniature TV screen (see Wasserman, 1995). Pecking button 1 (and no other) is reinforced in the presence of a flower image, button 2 in the presence of a chair image, and so on. Pigeons can learn the discrimination readily, and, under the right conditions, will even peck the correct buttons associated with pictures of new flowers, cars, chairs, and people they have never seen before. The birds have learned to categorize the sets of stimuli. Stimulus-control methods can be used to study how such categorization is learned.

Operant Conditioning Involves Choice

Another thing to know about operant conditioning is that the response always requires choosing one behavior over others. The student who goes to the bar on Thursday night chooses to drink instead of staying at home and studying. The rat chooses to press the lever instead of sleeping or scratching its ear in the back of the box.
The alternative behaviors are each associated with their own reinforcers. And the tendency to perform a particular action depends on both the reinforcers earned for it and the reinforcers earned for its alternatives. To investigate this idea, choice has been studied in the Skinner box by making two levers available for the rat (or two buttons available for the pigeon), each of which has its own reinforcement or payoff rate. A thorough study of choice in situations like this has led to a rule called the quantitative law of effect (see Herrnstein, 1970), which can be understood without going into quantitative detail: the law acknowledges the fact that the effects of reinforcing one behavior depend crucially on how much reinforcement is earned for the behavior's alternatives. (Stated roughly, the rate of a behavior is proportional to the reinforcement it earns relative to the total reinforcement available from all behaviors in the situation.) For example, if a pigeon learns that pecking one light yields two food pellets, whereas the other light yields only one, the pigeon will come to peck the first light exclusively. However, what happens if the first light is more strenuous to reach than the second one? Will the cost of energy outweigh the bonus of food? Or will the extra food be worth the work? In general, a given reinforcer will be less reinforcing if there are many alternative reinforcers in the environment. For this reason, alcohol, sex, or drugs may be less powerful reinforcers if the person's environment is full of other sources of reinforcement, such as achievement at work or love from family members.

Cognition in Instrumental Learning

Modern research also indicates that reinforcers do more than merely strengthen or "stamp in" the behaviors they are a consequence of, as was Thorndike's original view. Instead, animals learn about the specific consequences of each behavior, and will perform a behavior depending on how much they currently want—or "value"—its consequence.

This idea is best illustrated by a phenomenon called the reinforcer devaluation effect (see Colwill & Rescorla, 1986). A rat is first trained to perform two instrumental actions (e.g., pressing a lever on the left, and on the right), each paired with a different reinforcer (e.g., a sweet sucrose solution, and a food pellet). At the end of this training, the rat tends to press both levers, alternating between the sucrose solution and the food pellet. In a second phase, one of the reinforcers (e.g., the sucrose) is separately paired with illness. This conditions a taste aversion to the sucrose. In a final test, the rat is returned to the Skinner box and allowed to press either lever freely. No reinforcers are presented during this test (i.e., no sucrose or food comes from pressing the levers), so behavior during testing can only result from the rat's memory of what it has learned earlier. Importantly, the rat chooses not to perform the response that once produced the reinforcer it now has an aversion to (e.g., it won't press the sucrose lever). This means that the rat has learned and remembered the reinforcer associated with each response, and can combine that knowledge with the knowledge that the reinforcer is now "bad." Reinforcers do not merely stamp in responses; the animal learns much more than that. The behavior is said to be "goal-directed" (see Dickinson & Balleine, 1994), because it is influenced by the current value of its associated goal (i.e., how much the rat wants or doesn't want the reinforcer).

Things can get more complicated, however, if the rat performs the instrumental actions frequently and repeatedly.
That is, if the rat has spent many months learning the value of pressing each of the levers, the act of pressing them becomes automatic and routine. And here, this once goal-directed action (i.e., the rat pressing the lever for the goal of getting sucrose or food) can become a habit. Thus, if a rat spends many months performing the lever-pressing behavior (turning such behavior into a habit), then even when the sucrose is later paired with illness, the rat will continue to press that lever (see Holland, 2004). After all that practice, the instrumental response (pressing the lever) is no longer sensitive to reinforcer devaluation. The rat continues to respond automatically, regardless of the fact that the sucrose from this lever makes it sick.

Habits are very common in human experience, and can be useful. You do not need to relearn each day how to make your coffee in the morning or how to brush your teeth. Instrumental behaviors can eventually become habitual, letting us get the job done while being free to think about other things.

Putting Classical and Instrumental Conditioning Together

Classical and operant conditioning are usually studied separately. But outside of the laboratory they almost always occur at the same time. For example, a person who is reinforced for drinking alcohol or eating excessively learns these behaviors in the presence of certain stimuli—a pub, a set of friends, a restaurant, or possibly the couch in front of the TV. These stimuli are also available for association with the reinforcer. In this way, classical and operant conditioning are always intertwined.

The figure below summarizes this idea, and helps review what we have discussed in this module. Generally speaking, any reinforced or punished operant response (R) is paired with an outcome (O) in the presence of some stimulus or set of stimuli (S). The figure illustrates the types of associations that can be learned in this very general scenario. For one thing, the organism will learn to associate the response and the outcome (R – O). This is instrumental conditioning. The learning process here is probably similar to classical conditioning, with all its emphasis on surprise and prediction error. And, as we discussed while considering the reinforcer devaluation effect, once R – O is learned, the organism will be ready to perform the response if the outcome is desired or valued. The value of the reinforcer can also be influenced by other reinforcers earned for other behaviors in the situation. These factors are at the heart of instrumental learning.

Second, the organism can also learn to associate the stimulus with the reinforcing outcome (S – O). This is the classical conditioning component, and, as we have seen, it can have many consequences for behavior. For one thing, the stimulus will come to evoke a system of responses that help the organism prepare for the reinforcer (not shown in the figure): the drinker may undergo changes in body temperature; the eater may salivate and have an increase in insulin secretion. In addition, the stimulus will evoke approach (if the outcome is positive) or retreat (if the outcome is negative). Presenting the stimulus will also prompt the instrumental response.

The third association in the diagram is the one between the stimulus and the response (S – R). As discussed earlier, after a lot of practice, the stimulus may begin to elicit the response directly.
This is habit learning, whereby the response occurs relatively automatically, without much mental processing of the relation between the action and the outcome, or of the outcome's current value. The final link in the figure is between the stimulus and the response-outcome association [S – (R – O)]. More than just entering into a simple association with the R or the O, the stimulus can signal that the R – O relationship is now in effect. This is what we mean when we say that the stimulus can "set the occasion" for the operant response: it sets the occasion for the response-reinforcer relationship. Through this mechanism, the painter might begin to paint when given the right tools and the opportunity enabled by the canvas. The canvas signals that the behavior of painting will now be reinforced by positive consequences.

The figure provides a framework that you can use to understand almost any learned behavior you observe in yourself, your family, or your friends. If you would like to understand it more deeply, consider taking a course on learning in the future, which will give you a fuller appreciation of how classical learning, instrumental learning, habit learning, and occasion setting actually work and interact.

Observational Learning

Not all forms of learning are accounted for entirely by classical and operant conditioning. Imagine a child walking up to a group of children playing a game on the playground. The game looks fun, but it is new and unfamiliar. Rather than joining the game immediately, the child opts to sit back and watch the other children play a round or two. Observing the others, the child takes note of the ways in which they behave while playing the game. By watching the behavior of the other kids, the child can figure out the rules of the game and even some strategies for doing well at it. This is called observational learning.

Observational learning is a component of Albert Bandura's Social Learning Theory (Bandura, 1977), which posits that individuals can learn novel responses via observation of key others' behaviors. Observational learning does not necessarily require reinforcement, but instead hinges on the presence of others, referred to as social models. Social models are typically of higher status or authority compared to the observer; examples include parents, teachers, and police officers. In the example above, the children who already know how to play the game could be thought of as authorities—and are therefore social models—even though they are the same age as the observer. By observing how the social models behave, an individual is able to learn how to act in a certain situation. Other examples of observational learning might include a child learning to place her napkin in her lap by watching her parents at the dinner table, or a customer learning where to find the ketchup and mustard after observing other customers at a hot dog stand.

Bandura theorizes that the observational learning process consists of four parts. The first is attention: quite simply, one must pay attention to what s/he is observing in order to learn. The second part is retention: to learn, one must be able to retain the behavior s/he is observing in memory. The third part of observational learning, initiation, acknowledges that the learner must be able to execute (or initiate) the learned behavior. Lastly, the observer must possess the motivation to engage in observational learning.
In our vignette, the child must want to learn how to play the game in order to properly engage in observational learning.

Researchers have conducted countless experiments designed to explore observational learning, the most famous of which is Albert Bandura's "Bobo doll experiment." In this experiment (Bandura, Ross, & Ross, 1961), Bandura had children individually observe an adult social model interact with a clown doll ("Bobo"). For one group of children, the adult interacted aggressively with Bobo: punching it, kicking it, throwing it, and even hitting it in the face with a toy mallet. Another group of children watched the adult interact with other toys, displaying no aggression toward Bobo. In both instances the adult left and the children were then allowed to interact with Bobo on their own. Bandura found that children exposed to the aggressive social model were significantly more likely to behave aggressively toward Bobo, hitting and kicking him, compared to those exposed to the non-aggressive model. The researchers concluded that the children in the aggressive group used their observations of the adult social model's behavior to determine that aggressive behavior toward Bobo was acceptable.

While reinforcement was not required to elicit the children's behavior in Bandura's first experiment, it is important to acknowledge that consequences do play a role within observational learning. A later adaptation of this study (Bandura, Ross, & Ross, 1963) demonstrated that children in the aggression group showed less aggressive behavior if they witnessed the adult model receive punishment for aggressing against Bobo. Bandura referred to this process as vicarious reinforcement, as the children did not experience the reinforcement or punishment directly, yet were still influenced by observing it.

Conclusion

We have covered three primary explanations for how we learn to behave and interact with the world around us. Considering your own experiences, how well do these theories apply to you? Maybe when reflecting on your personal sense of fashion, you realize that you tend to select clothes others have complimented you on (operant conditioning). Or maybe, thinking back on a new restaurant you tried recently, you realize you chose it because its commercials play happy music (classical conditioning). Or maybe you are now always on time with your assignments, because you saw how others were punished when they were late (observational learning). Regardless of the activity, behavior, or response, there's a good chance your "decision" to do it can be explained by one of the theories presented in this module.

Outside Resources

Article: Rescorla, R. A. (1988). Pavlovian conditioning: It's not what you think it is. American Psychologist, 43, 151–160.

Book: Bouton, M. E. (2007). Learning and behavior: A contemporary synthesis. Sunderland, MA: Sinauer Associates.

Book: Bouton, M. E. (2009). Learning theory. In B. J. Sadock, V. A. Sadock, & P. Ruiz (Eds.), Kaplan & Sadock's comprehensive textbook of psychiatry (9th ed., Vol. 1, pp. 647–658). New York, NY: Lippincott Williams & Wilkins.

Book: Domjan, M. (2010). The principles of learning and behavior (6th ed.). Belmont, CA: Wadsworth.

Video: Albert Bandura discusses the Bobo Doll Experiment.

Discussion Questions

1. Describe three examples of Pavlovian (classical) conditioning that you have seen in your own behavior, or that of your friends or family, in the past few days.
2. Describe three examples of instrumental (operant) conditioning that you have seen in your own behavior, or that of your friends or family, in the past few days.
3. Drugs can be potent reinforcers. Discuss how Pavlovian conditioning and instrumental conditioning can work together to influence drug taking.
4. In the modern world, processed foods are highly available and have been engineered to be highly palatable and reinforcing. Discuss how Pavlovian and instrumental conditioning can work together to explain why people often eat too much.
5. How does blocking challenge the idea that pairings of a CS and US are sufficient to cause Pavlovian conditioning? What is important in creating Pavlovian learning?
6. How does the reinforcer devaluation effect challenge the idea that reinforcers merely "stamp in" the operant response? What does the effect tell us that animals actually learn in operant conditioning?
7. With regard to social learning, do you think people learn violence from observing violence in movies? Why or why not?
8. What do you think you have learned through social learning? Who are your social models?

Vocabulary

Blocking
In classical conditioning, the finding that no conditioning occurs to a stimulus if it is combined with a previously conditioned stimulus during conditioning trials. Suggests that information, surprise value, or prediction error is important in conditioning.

Categorize
To sort or arrange different items into classes or categories.

Classical conditioning
The procedure in which an initially neutral stimulus (the conditioned stimulus, or CS) is paired with an unconditioned stimulus (or US). The result is that the conditioned stimulus begins to elicit a conditioned response (CR). Classical conditioning is nowadays considered important as both a behavioral phenomenon and as a method to study simple associative learning. Same as Pavlovian conditioning.

Conditioned compensatory response
In classical conditioning, a conditioned response that opposes, rather than is the same as, the unconditioned response. It functions to reduce the strength of the unconditioned response. Often seen in conditioning when drugs are used as unconditioned stimuli.

Conditioned response (CR)
The response that is elicited by the conditioned stimulus after classical conditioning has taken place.

Conditioned stimulus (CS)
An initially neutral stimulus (like a bell, light, or tone) that elicits a conditioned response after it has been associated with an unconditioned stimulus.

Context
Stimuli that are in the background whenever learning occurs. For instance, the Skinner box or room in which learning takes place is the classic example of a context. However, "context" can also be provided by internal stimuli, such as the sensory effects of drugs (e.g., being under the influence of alcohol has stimulus properties that provide a context) and mood states (e.g., being happy or sad). It can also be provided by a specific period in time—the passage of time is sometimes said to change the "temporal context."

Discriminative stimulus
In operant conditioning, a stimulus that signals whether the response will be reinforced. It is said to "set the occasion" for the operant response.

Extinction
Decrease in the strength of a learned behavior that occurs when the conditioned stimulus is presented without the unconditioned stimulus (in classical conditioning) or when the behavior is no longer reinforced (in instrumental conditioning). The term describes both the procedure (the US or reinforcer is no longer presented) as well as the result of the procedure (the learned response declines). Behaviors that have been reduced in strength through extinction are said to be "extinguished."

Fear conditioning
A type of classical or Pavlovian conditioning in which the conditioned stimulus (CS) is associated with an aversive unconditioned stimulus (US), such as a foot shock. As a consequence of learning, the CS comes to evoke fear. The phenomenon is thought to be involved in the development of anxiety disorders in humans.

Goal-directed behavior
Instrumental behavior that is influenced by the animal's knowledge of the association between the behavior and its consequence and the current value of the consequence. Sensitive to the reinforcer devaluation effect.

Habit
Instrumental behavior that occurs automatically in the presence of a stimulus and is no longer influenced by the animal's knowledge of the value of the reinforcer. Insensitive to the reinforcer devaluation effect.

Instrumental conditioning
Process in which animals learn about the relationship between their behaviors and their consequences. Also known as operant conditioning.

Law of effect
The idea that instrumental or operant responses are influenced by their effects. Responses that are followed by a pleasant state of affairs will be strengthened and those that are followed by discomfort will be weakened. Nowadays, the term refers to the idea that operant or instrumental behaviors are lawfully controlled by their consequences.

Observational learning
Learning by observing the behavior of others.

Operant
A behavior that is controlled by its consequences. The simplest example is the rat's lever-pressing, which is controlled by the presentation of the reinforcer.

Operant conditioning
See instrumental conditioning.

Pavlovian conditioning
See classical conditioning.

Prediction error
When the outcome of a conditioning trial is different from that which is predicted by the conditioned stimuli that are present on the trial (i.e., when the US is surprising). Prediction error is necessary to create Pavlovian conditioning (and associative learning generally). As learning occurs over repeated conditioning trials, the conditioned stimulus increasingly predicts the unconditioned stimulus, and prediction error declines. Conditioning works to correct or reduce prediction error.

Preparedness
The idea that an organism's evolutionary history can make it easy to learn a particular association. Because of preparedness, you are more likely to associate the taste of tequila, and not the circumstances surrounding drinking it, with getting sick. Similarly, humans are more likely to associate images of spiders and snakes than flowers and mushrooms with aversive outcomes like shocks.

Punisher
A stimulus that decreases the strength of an operant behavior when it is made a consequence of the behavior.

Quantitative law of effect
A mathematical rule that states that the effectiveness of a reinforcer at strengthening an operant response depends on the amount of reinforcement earned for all alternative behaviors. A reinforcer is less effective if there is a lot of reinforcement in the environment for other behaviors.

Reinforcer
Any consequence of a behavior that strengthens the behavior or increases the likelihood that it will be performed again.

Reinforcer devaluation effect
The finding that an animal will stop performing an instrumental response that once led to a reinforcer if the reinforcer is separately made aversive or undesirable.

Renewal effect
Recovery of an extinguished response that occurs when the context is changed after extinction. Especially strong when the change of context involves return to the context in which conditioning originally occurred. Can occur after extinction in either classical or instrumental conditioning.

Social Learning Theory
The theory that people can learn new responses and behaviors by observing the behavior of others.

Social models
Authorities that are the targets for observation and who model behaviors.

Spontaneous recovery
Recovery of an extinguished response that occurs with the passage of time after extinction. Can occur after extinction in either classical or instrumental conditioning.

Stimulus control
When an operant behavior is controlled by a stimulus that precedes it.

Taste aversion learning
The phenomenon in which a taste is paired with sickness, and this causes the organism to reject—and dislike—that taste in the future.

Unconditioned response (UR)
In classical conditioning, an innate response that is elicited by a stimulus before (or in the absence of) conditioning.

Unconditioned stimulus (US)
In classical conditioning, the stimulus that elicits the response before conditioning occurs.

Vicarious reinforcement
Learning that occurs by observing the reinforcement or punishment of another person.
By Aaron Benjamin
University of Illinois at Urbana-Champaign

Learning is a complex process that defies easy definition and description. This module reviews some of the philosophical issues involved with defining learning and describes in some detail the characteristics of learners and of encoding activities that seem to affect how well people can acquire new memories, knowledge, or skills. At the end, we consider a few basic principles that guide whether a particular attempt at learning will be successful or not.

learning Objectives

• Consider what kinds of activities constitute learning.
• Name multiple forms of learning.
• List some individual differences that affect learning.
• Describe the effect of various encoding activities on learning.
• Describe three general principles of learning.

Introduction

What do you do when studying for an exam? Do you read your class notes and textbook (hopefully not for the very first time)? Do you try to find a quiet place without distraction? Do you use flash cards to test your knowledge? The choices you make reveal your theory of learning, but there is no reason for you to limit yourself to your own intuitions. There is a vast and vibrant science of learning, in which researchers from psychology, education, and neuroscience study basic principles of learning and memory.

In fact, learning is a much broader domain than you might think. Consider: Is listening to music a form of learning? More often, it seems listening to music is a way of avoiding learning. But we know that your brain's response to auditory information changes with your experience with that information, a form of learning called auditory perceptual learning (Polley, Steinberg, & Merzenich, 2006). Each time we listen to a song, we hear it differently because of our experience. When we exhibit changes in behavior without having intended to learn something, that is called implicit learning (Seger, 1994), and when we exhibit changes in our behavior that reveal the influence of past experience even though we are not attempting to use that experience, that is called implicit memory (Richardson-Klavehn & Bjork, 1988).

Other well-studied forms of learning include the types of learning that are general across species. We can't ask a slug to learn a poem or a lemur to learn to bat left-handed, but we can assess learning in other ways. For example, we can look for a change in our responses to things when we are repeatedly stimulated. If you live in a house with a grandfather clock, you know that what was once an annoying and intrusive sound is now probably barely audible to you. Similarly, poking an earthworm again and again is likely to lead to a reduction in its retraction from your touch. These phenomena are forms of nonassociative learning, in which a single repeated exposure leads to a change in behavior (Pinsker, Kupfermann, Castelluci, & Kandel, 1970). When our response lessens with exposure, it is called habituation, and when it increases (like it might with a particularly annoying laugh), it is called sensitization. Animals can also learn about relationships between things, such as when an alley cat learns that the sound of janitors working in a restaurant precedes the dumping of delicious new garbage (an example of stimulus-stimulus learning called classical conditioning), or when a dog learns to roll over to get a treat (a form of stimulus-response learning called operant conditioning).
These forms of learning will be covered in the module on Conditioning and Learning (http://noba.to/ajxhcqdr). Here, we'll review some of the conditions that affect learning, with an eye toward the type of explicit learning we do when trying to learn something. Jenkins (1979) classified experiments on learning and memory into four groups of factors (renamed here): learners, encoding activities, materials, and retrieval. In this module, we'll focus on the first two categories; the module on Memory (http://noba.to/bdc4uger) will consider other factors more generally.

Learners

People bring numerous individual differences with them into memory experiments, and many of these variables affect learning. In the classroom, motivation matters (Pintrich, 2003), though experimental attempts to induce motivation with money yield only modest benefits (Heyer & O'Kelly, 1949). Learners are, however, quite able to allocate more effort to learning prioritized materials over unimportant ones (Castel, Benjamin, Craik, & Watkins, 2002). In addition, the organization and planning skills that a learner exhibits matter a lot (Garavalia & Gredler, 2002), suggesting that the efficiency with which one organizes self-guided learning is an important component of learning. We will return to this topic soon.

One well-studied and important variable is working memory capacity. Working memory describes the form of memory we use to hold onto information temporarily. Working memory is used, for example, to keep track of where we are in the course of a complicated math problem, and what the relevant outcomes of prior steps in that problem are. Higher scores on working memory measures are predictive of better reasoning skills (Kyllonen & Christal, 1990), reading comprehension (Daneman & Carpenter, 1980), and even better control of attention (Kane, Conway, Hambrick, & Engle, 2008).

Anxiety also affects the quality of learning. For example, people with math anxiety have a smaller capacity for remembering math-related information in working memory, such as the results of carrying a digit in arithmetic (Ashcraft & Kirk, 2001). Having students write about their specific anxiety seems to reduce the worry associated with tests and increases performance on math tests (Ramirez & Beilock, 2011).

One good place to end this discussion is to consider the role of expertise. Though there probably is a finite capacity on our ability to store information (Landauer, 1986), in practice this concept is misleading. In fact, because the usual bottleneck to remembering something is our ability to access information, not our space to store it, having more knowledge or expertise actually enhances our ability to learn new information. A classic example can be seen in comparing a chess master with a chess novice on their ability to learn and remember the positions of pieces on a chessboard (Chase & Simon, 1973). In that experiment, the master remembered the location of many more pieces than the novice, even after only a very short glance. Maybe chess masters are just smarter than the average chess beginner, and have better memory? No: the advantage the expert exhibited was apparent only when the pieces were arranged in a plausible format for an ongoing chess game; when the pieces were placed randomly, both groups did equally poorly. Expertise allowed the master to chunk (Simon, 1974) multiple pieces into a smaller number of units of information—but only when that information was structured in such a way as to allow the application of that expertise.
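To make the idea of chunking concrete, here is a toy Python sketch. It is purely illustrative: the three-letter "knowledge base" and the greedy matching scheme are assumptions invented for the demonstration, standing in for the chess master's stored board patterns. It shows how the same sequence can cost twelve memory slots for a novice but only four for an "expert" whose knowledge happens to fit the material.

```python
# A toy illustration of chunking (Simon, 1974): prior knowledge lets a learner
# repackage many items into a few meaningful units. The knowledge base and the
# greedy matching rule here are illustrative assumptions, not a real model.

KNOWN_CHUNKS = {"FBI", "NBA", "CIA", "IBM"}  # patterns the "expert" already knows

def chunk(letters, knowledge):
    """Greedily group a letter string into known 3-letter chunks, else single letters."""
    units, i = [], 0
    while i < len(letters):
        if letters[i:i + 3] in knowledge:
            units.append(letters[i:i + 3])  # one meaningful unit, like a chess pattern
            i += 3
        else:
            units.append(letters[i])        # no pattern available: store letter by letter
            i += 1
    return units

sequence = "FBINBACIAIBM"
print(chunk(sequence, KNOWN_CHUNKS))  # ['FBI', 'NBA', 'CIA', 'IBM'] -> 4 units to hold
print(chunk(sequence, set()))         # 12 single letters -> far beyond a ~7-item span
```

As with the randomly placed chess pieces, scrambling the sequence so that it no longer matches any stored pattern would wipe out the "expert's" advantage.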
Encoding Activities

What we do when we're learning is very important. We've all had the experience of reading something and suddenly coming to the realization that we don't remember a single thing, even the sentence that we just read. How we go about encoding information determines a lot about how much we remember.

You might think that the most important thing is to try to learn. Interestingly, this is not true, at least not completely. Trying to learn a list of words, as compared to just evaluating each word for its part of speech (i.e., noun, verb, adjective), does help you recall the words—that is, it helps you remember and write down more of the words later. But it actually impairs your ability to recognize the words—to judge on a later list which words are the ones that you studied (Eagle & Leiter, 1964). So this is a case in which incidental learning—that is, learning without the intention to learn—is better than intentional learning. Such examples are not particularly rare and are not limited to recognition. Nairne, Pandeirada, and Thompson (2008) showed, for example, that survival processing—thinking about and rating each word in a list for its relevance in a survival scenario—led to much higher recall than intentional learning (and also higher, in fact, than other encoding activities that are known to lead to high levels of recall). Clearly, merely intending to learn something is not enough. How a learner actively processes the material plays a large role; for example, reading words and evaluating their meaning leads to better learning than reading them and evaluating the way that the words look or sound (Craik & Lockhart, 1972). These results suggest that individual differences in motivation will not have a large effect on learning unless learners also have accurate ideas about how to effectively learn material when they care to do so.

So, do learners know how to effectively encode material? People allowed to freely allocate their time to study a list of words do remember those words better than a group that doesn't have control over their own study time, though the advantage is relatively small and is limited to the subset of learners who choose to spend more time on the more difficult material (Tullis & Benjamin, 2011). In addition, learners who have an opportunity to review materials that they select for restudy often learn more than another group that is asked to restudy the materials that they didn't select for restudy (Kornell & Metcalfe, 2006). However, this advantage also appears to be relatively modest (Kimball, Smith, & Muntean, 2012) and wasn't apparent in a group of older learners (Tullis & Benjamin, 2012). Taken together, the evidence seems to support the claim that self-control of learning can be effective, but only when learners have good ideas about what an effective learning strategy is.

One factor that appears to have a big effect, and that learners do not always appear to understand, is the effect of scheduling repetitions of study. If you are studying for a final exam next week and plan to spend a total of five hours, what is the best way to distribute your study? The evidence is clear that spacing one's repetitions apart in time is superior to massing them all together (Baddeley & Longman, 1978; Bahrick, Bahrick, Bahrick, & Bahrick, 1993; Melton, 1967). Increasing the spacing between consecutive presentations appears to benefit learning yet further (Landauer & Bjork, 1978).
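As a concrete illustration, the short Python sketch below generates three possible schedules for the same five study sessions: massed, evenly spaced, and expanding. The specific dates, gap sizes, and doubling rule are assumptions made for the example; the cited research supports spacing the sessions (and possibly expanding the gaps), not these particular numbers.

```python
# A small sketch contrasting three ways to schedule five study sessions before
# an exam. The dates and the doubling rule are illustrative assumptions.
from datetime import date, timedelta

def massed(start, sessions):
    """All sessions crammed onto one day."""
    return [start] * sessions

def spaced(start, sessions, gap_days=2):
    """Sessions separated by an equal gap."""
    return [start + timedelta(days=gap_days * i) for i in range(sessions)]

def expanding(start, sessions, first_gap=1):
    """Each gap doubles (1, 2, 4, ... days), echoing the expanding retrieval
    schedules studied by Landauer & Bjork (1978)."""
    day, gap, plan = start, first_gap, [start]
    for _ in range(sessions - 1):
        day += timedelta(days=gap)
        plan.append(day)
        gap *= 2
    return plan

start = date(2024, 5, 1)    # hypothetical first study day
print(massed(start, 5))     # five sessions on May 1: weakest long-term learning
print(spaced(start, 5))     # May 1, 3, 5, 7, 9
print(expanding(start, 5))  # May 1, 2, 4, 8, 16
```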
A similar advantage is evident for the practice of interleaving multiple skills to be learned: for example, baseball batters improved more when they faced a mix of different types of pitches than when they faced the same pitches blocked by type (Hall, Domingues, & Cavazos, 1994). Students also showed better performance on a test when different types of mathematics problems were interleaved rather than blocked during learning (Taylor & Rohrer, 2010).

One final factor that merits discussion is the role of testing. Educators and students often think about testing as a way of assessing knowledge, and this is indeed an important use of tests. But tests themselves affect memory, because retrieval is one of the most powerful ways of enhancing learning (Roediger & Butler, 2013). Self-testing is an underutilized and potent means of making learning more durable.

General Principles of Learning

We've only begun to scratch the surface here of the many variables that affect the quality and content of learning (Mullin, Herrmann, & Searleman, 1993). But even within this brief examination of the differences between people and the activities they engage in, we can see some basic principles of the learning process.

The value of effective metacognition

To be able to guide our own learning effectively, we must be able to evaluate the progress of our learning accurately and choose activities that enhance learning efficiently. It is of little use to study for a long time if a student cannot discern between what material she has or has not mastered, and if additional study activities move her no closer to mastery. Metacognition describes the knowledge and skills people have in monitoring and controlling their own learning and memory. We can work to acquire better metacognition by paying attention to our successes and failures in estimating what we do and don't know, and by using testing often to monitor our progress.

Transfer-appropriate processing

Sometimes, it doesn't make sense to talk about whether a particular encoding activity is good or bad for learning. Rather, we can talk about whether that activity is good for learning as revealed by a particular test. For example, although reading words for meaning leads to better performance on a test of recall or recognition than paying attention to the pronunciation of the word, it leads to worse performance on a test that taps knowledge of that pronunciation, such as whether a previously studied word rhymes with another word (Morris, Bransford, & Franks, 1977). The principle of transfer-appropriate processing states that memory is "better" when the test taps the same type of knowledge as the original encoding activity. When thinking about how to learn material, we should always be thinking about the situations in which we are likely to need access to that material. An emergency responder who needs access to learned procedures under conditions of great stress should learn differently from a hobbyist learning to use a new digital camera.

The value of forgetting

Forgetting is sometimes seen as the enemy of learning, but, in fact, forgetting is a highly desirable part of the learning process. The main bottleneck we face in using our knowledge is being able to access it. We have all had the experience of retrieval failure—that is, not being able to remember a piece of information that we know we have, and that we can access easily once the right set of cues is provided.
Because access is difficult, it is important to jettison information that is not needed—that is, to forget it. Without forgetting, our minds would become cluttered with out-of-date or irrelevant information. And just imagine how complicated life would be if we were unable to forget the names of past acquaintances, teachers, or romantic partners.

But the value of forgetting is even greater than that. There is lots of evidence that some forgetting is a prerequisite for more learning. For example, the previously discussed benefits of distributing practice opportunities may arise in part because of the greater forgetting that takes place between those spaced learning events. It is for this reason that some encoding activities that are difficult and lead to the appearance of slow learning actually lead to superior learning in the long run (Bjork, 2011). When we opt for learning activities that enhance learning quickly, we must be aware that these are not always the same techniques that lead to durable, long-term learning.

Conclusion

To wrap things up, let's think back to the questions we began the module with. What might you now do differently when preparing for an exam? Hopefully, you will think about testing yourself frequently, developing an accurate sense of what you do and do not know, considering how you are likely to use the knowledge, and using the scheduling of tasks to your advantage. If you are learning a new skill or new material, using the scientific study of learning as a basis for the study and practice decisions you make is a good bet.

Outside Resources

Video: The First 20 hours – How to Learn Anything - Watch a video by Josh Kaufman about how we can get really good at almost anything with 20 hours of efficient practice.

Video: The Learning Scientists - Terrific YouTube channel with videos covering such important topics as interleaving, spaced repetition, and retrieval practice. https://www.youtube.com/channel/UCjbAmxL6GZXiaoXuNE7cIYg

Video: What we learn before we're born - In this video, science writer Annie Murphy Paul answers the question "When does learning begin?" She covers new research that shows how much we learn in the womb—from the lilt of our native language to our soon-to-be-favorite foods. https://www.ted.com/talks/annie_murphy_paul_what_we_learn_before_we_re_born

Web: Neuroscience News - This is a science website dedicated to neuroscience research, with this page addressing fascinating new memory research. http://neurosciencenews.com/neuroscience-terms/memory-research/

Web: The Learning Scientists - A website created by three psychologists who wanted to make scientific research on learning more accessible to students, teachers, and other educators. http://www.learningscientists.org/

Discussion Questions

1. How would you best design a computer program to help someone learn a new foreign language? Think about some of the principles of learning outlined in this module and how those principles could be instantiated in "rules" in a computer program.
2. Would you rather have a really good memory or really good metacognition? How might you train someone to develop better metacognition if he or she doesn't have a very good memory, and what would be the consequences of that training?
3. In what kinds of situations not discussed here might you find a benefit of forgetting on learning?

Vocabulary

Chunk
The process of grouping information together using our knowledge.

Classical conditioning
Describes stimulus-stimulus associative learning.
Encoding
The act of putting information into memory.

Habituation
Occurs when the response to a stimulus decreases with exposure.

Implicit learning
Occurs when we acquire information without intent and cannot easily express it.

Implicit memory
A type of long-term memory that does not require conscious thought to encode. It's the type of memory one makes without intent.

Incidental learning
Any type of learning that happens without the intention to learn.

Intentional learning
Any type of learning that happens when motivated by intention.

Metacognition
Describes the knowledge and skills people have in monitoring and controlling their own learning and memory.

Nonassociative learning
Occurs when a single repeated exposure leads to a change in behavior.

Operant conditioning
Describes stimulus-response associative learning.

Perceptual learning
Occurs when aspects of our perception change as a function of experience.

Sensitization
Occurs when the response to a stimulus increases with exposure.

Transfer-appropriate processing
A principle that states that memory performance is superior when a test taps the same cognitive processes as the original encoding activity.

Working memory
The form of memory we use to hold onto information temporarily, usually for the purposes of manipulation.
By Kathleen B. McDermott and Henry L. Roediger III
Washington University in St. Louis

"Memory" is a single term that reflects a number of different abilities: holding information briefly while working with it (working memory), remembering episodes of one's life (episodic memory), and our general knowledge of facts of the world (semantic memory), among other types. Remembering episodes involves three processes: encoding information (learning it, by perceiving it and relating it to past knowledge), storing it (maintaining it over time), and then retrieving it (accessing the information when needed). Failures can occur at any stage, leading to forgetting or to having false memories. The key to improving one's memory is to improve processes of encoding and to use techniques that guarantee effective retrieval. Good encoding techniques include relating new information to what one already knows, forming mental images, and creating associations among information that needs to be remembered. The key to good retrieval is developing effective cues that will lead the rememberer back to the encoded information. Classic mnemonic systems, known since the time of the ancient Greeks and still used by some today, can greatly improve one's memory abilities.

learning objectives

• Define and note differences between the following forms of memory: working memory, episodic memory, semantic memory, collective memory.
• Describe the three stages in the process of learning and remembering.
• Describe strategies that can be used to enhance the original learning or encoding of information.
• Describe strategies that can improve the process of retrieval.
• Describe why the classic mnemonic device, the method of loci, works so well.

Introduction

In 2013, Simon Reinhard sat in front of 60 people in a room at Washington University, where he memorized an increasingly long series of digits. On the first round, a computer generated 10 random digits—6 1 9 4 8 5 6 3 7 1—on a screen for 10 seconds. After the series disappeared, Simon typed them into his computer. His recollection was perfect. In the next phase, 20 digits appeared on the screen for 20 seconds. Again, Simon got them all correct. No one in the audience (mostly professors, graduate students, and undergraduate students) could recall the 20 digits perfectly. Then came 30 digits, studied for 30 seconds; once again, Simon didn't misplace even a single digit. For a final trial, 50 digits appeared on the screen for 50 seconds, and again, Simon got them all right. In fact, Simon would have been happy to keep going. His record in this task—called "forward digit span"—is 240 digits!

When most of us witness a performance like that of Simon Reinhard, we think one of two things: First, maybe he's cheating somehow. (No, he is not.) Second, Simon must have abilities more advanced than the rest of humankind. After all, psychologists established many years ago that the normal memory span for adults is about 7 digits, with some of us able to recall a few more and others a few less (Miller, 1956). That is why the first phone numbers were limited to 7 digits—psychologists determined that many errors occurred (costing the phone company money) when the number was increased to even 8 digits. But in normal testing, no one gets 50 digits correct in a row, much less 240. So, does Simon Reinhard simply have a photographic memory? He does not.
Instead, Simon has taught himself simple strategies for remembering that have greatly increased his capacity for remembering virtually any type of material—digits, words, faces and names, poetry, historical dates, and so on. Twelve years earlier, before he started training his memory abilities, he had a digit span of 7, just like most of us. Simon has been training his abilities for about 10 years as of this writing, and has risen to be in the top two of "memory athletes." In 2012, he came in second place in the World Memory Championships (composed of 11 tasks), held in London. He currently ranks second in the world, behind another German competitor, Johannes Mallow. In this module, we reveal what psychologists and others have learned about memory, and we also explain the general principles by which you can improve your own memory for factual material.

Varieties of Memory

For most of us, remembering digits relies on short-term memory, or working memory—the ability to hold information in our minds for a brief time and work with it (e.g., multiplying 24 x 17 without using paper would rely on working memory). Another type of memory is episodic memory—the ability to remember the episodes of our lives. If you were given the task of recalling everything you did 2 days ago, that would be a test of episodic memory; you would be required to mentally travel through the day in your mind and note the main events. Semantic memory is our storehouse of more-or-less permanent knowledge, such as the meanings of words in a language (e.g., the meaning of "parasol") and the huge collection of facts about the world (e.g., there are 196 countries in the world, and 206 bones in your body). Collective memory refers to the kind of memory that people in a group share (whether family, community, schoolmates, or citizens of a state or a country). For example, residents of small towns often strongly identify with those towns, remembering the local customs and historical events in a unique way. That is, the community's collective memory passes stories and recollections between neighbors and to future generations, forming a memory system unto itself.

Psychologists continue to debate the classification of types of memory, as well as which types rely on others (Tulving, 2007), but for this module we will focus on episodic memory. Episodic memory is usually what people think of when they hear the word "memory." For example, when people say that an older relative is "losing her memory" due to Alzheimer's disease, the type of memory-loss they are referring to is the inability to recall events, or episodic memory. (Semantic memory is actually preserved in early-stage Alzheimer's disease.) Although remembering specific events that have happened over the course of one's entire life (e.g., your experiences in sixth grade) can be referred to as autobiographical memory, we will focus primarily on the episodic memories of more recent events.

Three Stages of the Learning/Memory Process

Psychologists distinguish between three necessary stages in the learning and memory process: encoding, storage, and retrieval (Melton, 1963). Encoding is defined as the initial learning of information; storage refers to maintaining information over time; retrieval is the ability to access information when you need it. If you meet someone for the first time at a party, you need to encode her name (Lyn Goff) while you associate her name with her face. Then you need to maintain the information over time.
If you see her a week later, you need to recognize her face and have it serve as a cue to retrieve her name. Any successful act of remembering requires that all three stages be intact. However, two types of errors can also occur. Forgetting is one type: you see the person you met at the party and you cannot recall her name. The other error is misremembering (false recall or false recognition): you see someone who looks like Lyn Goff and call the person by that name (false recognition of the face). Or, you might see the real Lyn Goff, recognize her face, but then call her by the name of another woman you met at the party (misrecall of her name).

Whenever forgetting or misremembering occurs, we can ask at which stage in the learning/memory process there was a failure—though it is often difficult to answer this question with precision. One reason for this difficulty is that the three stages are not as discrete as our description implies. Rather, all three stages depend on one another. How we encode information determines how it will be stored and what cues will be effective when we try to retrieve it. And the act of retrieval itself also changes the way information is subsequently remembered, usually aiding later recall of the retrieved information. The central point for now is that the three stages—encoding, storage, and retrieval—affect one another, and are inextricably bound together.

Encoding

Encoding refers to the initial experience of perceiving and learning information. Psychologists often study recall by having participants study a list of pictures or words. Encoding in these situations is fairly straightforward. However, "real life" encoding is much more challenging. When you walk across campus, for example, you encounter countless sights and sounds—friends passing by, people playing Frisbee, music in the air. The physical and mental environments are much too rich for you to encode all the happenings around you or the internal thoughts you have in response to them. So, an important first principle of encoding is that it is selective: we attend to some events in our environment and we ignore others. A second point about encoding is that it is prolific; we are always encoding the events of our lives—attending to the world, trying to understand it. Normally this presents no problem, as our days are filled with routine occurrences, so we don't need to pay attention to everything. But if something does happen that seems strange—during your daily walk across campus, you see a giraffe—then we pay close attention and try to understand why we are seeing what we are seeing.

Right after your typical walk across campus (one without the appearance of a giraffe), you would be able to remember the events reasonably well if you were asked. You could say whom you bumped into, what song was playing from a radio, and so on. However, suppose someone asked you to recall the same walk a month later. You wouldn't stand a chance. You would likely be able to recount the basics of a typical walk across campus, but not the precise details of that particular walk. Yet, if you had seen a giraffe during that walk, the event would have been fixed in your mind for a long time, probably for the rest of your life. You would tell your friends about it, and, on later occasions when you saw a giraffe, you might be reminded of the day you saw one on campus.
Psychologists have long pinpointed distinctiveness—having an event stand out as quite different from a background of similar events—as a key to remembering events (Hunt, 2003). In addition, when vivid memories are tinged with strong emotional content, they often seem to leave a permanent mark on us. Public tragedies, such as terrorist attacks, often create vivid memories in those who witnessed them. But even those of us not directly involved in such events may have vivid memories of them, including memories of first hearing about them. For example, many people are able to recall their exact physical location when they first learned about the assassination or accidental death of a national figure. The term flashbulb memory was originally coined by Brown and Kulik (1977) to describe this sort of vivid memory of finding out an important piece of news. The name refers to how some memories seem to be captured in the mind like a flash photograph; because of the distinctiveness and emotionality of the news, they seem to become permanently etched in the mind with exceptional clarity compared to other memories.

Take a moment and think back on your own life. Is there a particular memory that seems sharper than others? A memory where you can recall unusual details, like the colors of mundane things around you, or the exact positions of surrounding objects? Although people have great confidence in flashbulb memories like these, the truth is, our objective accuracy with them is far from perfect (Talarico & Rubin, 2003). That is, even though people may have great confidence in what they recall, their memories are not as accurate (e.g., what the actual colors were; where objects were truly placed) as they tend to imagine. Nonetheless, all other things being equal, distinctive and emotional events are well-remembered.

Details do not leap perfectly from the world into a person's mind. We might say that we went to a party and remember it, but what we remember is (at best) what we encoded. As noted above, the process of encoding is selective, and in complex situations, relatively few of many possible details are noticed and encoded. The process of encoding always involves recoding—that is, taking the information from the form it is delivered to us and then converting it in a way that we can make sense of it. For example, you might try to remember the colors of a rainbow by using the acronym ROY G BIV (red, orange, yellow, green, blue, indigo, violet). The process of recoding the colors into a name can help us to remember. However, recoding can also introduce errors—when we accidentally add information during encoding, then remember that new material as if it had been part of the actual experience (as discussed below).

Psychologists have studied many recoding strategies that can be used during study to improve retention. First, research advises that, as we study, we should think of the meaning of the events (Craik & Lockhart, 1972), and we should try to relate new events to information we already know. This helps us form associations that we can use to retrieve information later. Second, imagining events also makes them more memorable; creating vivid images out of information (even verbal information) can greatly improve later recall (Bower & Reitman, 1972). Creating imagery is part of the technique Simon Reinhard uses to remember huge numbers of digits, but we can all use images to encode information more effectively.
The basic concept behind good encoding strategies is to form distinctive memories (ones that stand out), and to form links or associations among memories to help later retrieval (Hunt & McDaniel, 1993). Using study strategies such as the ones described here is challenging, but the effort is well worth the benefits of enhanced learning and retention.

We emphasized earlier that encoding is selective: people cannot encode all the information they are exposed to. However, recoding can add information that was not even seen or heard during the initial encoding phase. Several of the recoding processes, like forming associations between memories, can happen without our awareness. This is one reason people can sometimes remember events that did not actually happen—because during the process of recoding, details got added. One common way of inducing false memories in the laboratory employs a word-list technique (Deese, 1959; Roediger & McDermott, 1995). Participants hear lists of 15 words, like door, glass, pane, shade, ledge, sill, house, open, curtain, frame, view, breeze, sash, screen, and shutter. Later, participants are given a test in which they are shown a list of words and asked to pick out the ones they'd heard earlier. This second list contains some words from the first list (e.g., door, pane, frame) and some words not from the list (e.g., arm, phone, bottle). In this example, one of the words on the test is window, which—importantly—does not appear in the first list, but which is related to other words in that list. When subjects were tested, they were reasonably accurate with the studied words (door, etc.), recognizing them 72% of the time. However, when window was on the test, they falsely recognized it as having been on the list 84% of the time (Stadler, Roediger, & McDermott, 1999). The same thing happened with many other lists the authors used. This phenomenon is referred to as the DRM (for Deese-Roediger-McDermott) effect. One explanation for such results is that, while students listened to items in the list, the words triggered the students to think about window, even though window was never presented. In this way, people seem to encode events that are not actually part of their experience.

Because humans are creative, we are always going beyond the information we are given: we automatically make associations and infer from them what is happening. But, as with the word association mix-up above, sometimes we make false memories from our inferences—remembering the inferences themselves as if they were actual experiences. To illustrate this, Brewer (1977) gave people sentences to remember that were designed to elicit pragmatic inferences. Inferences, in general, refer to instances when something is not explicitly stated, but we are still able to guess the undisclosed intention. For example, if your friend told you that she didn't want to go out to eat, you may infer that she doesn't have the money to go out, or that she's too tired. With pragmatic inferences, there is usually one particular inference you're likely to make. Consider the statement participants in Brewer (1977) were given: "The karate champion hit the cinder block." After hearing or seeing this sentence, participants who were given a memory test tended to remember the statement as having been, "The karate champion broke the cinder block." This remembered statement is not necessarily a logical inference (i.e., it is perfectly reasonable that a karate champion could hit a cinder block without breaking it). Nevertheless, the pragmatic conclusion from hearing such a sentence is that the block was likely broken. The participants remembered this inference they made while hearing the sentence in place of the actual words that were in the sentence (see also McDermott & Chan, 2006).
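To make the logic of the word-list studies above concrete, here is a minimal sketch, in Python, of how a DRM-style recognition test might be scored. The lists and the participant's "old"/"new" judgments are invented for illustration; only the comparison of hits on studied words with false alarms to the critical lure mirrors the analyses described above.

```python
# A toy scoring routine for a DRM-style recognition test.
# Word lists and responses below are invented for illustration.

studied = {"door", "glass", "pane", "shade", "ledge", "sill"}
critical_lure = "window"          # related word that was never presented
unrelated = {"arm", "phone", "bottle"}

# Hypothetical "old"/"new" judgments from one participant.
responses = {
    "door": "old", "pane": "old", "glass": "new",
    "window": "old",              # false recognition of the lure
    "arm": "new", "phone": "new",
}

def rate(words, label="old"):
    """Proportion of the tested words the participant called 'old'."""
    tested = [w for w in words if w in responses]
    return sum(responses[w] == label for w in tested) / len(tested)

print(f"Hit rate (studied words):      {rate(studied):.2f}")
print(f"False alarm (critical lure):   {rate({critical_lure}):.2f}")
print(f"False alarm (unrelated words): {rate(unrelated):.2f}")
```

In published DRM results, the telltale pattern is a false-alarm rate for the critical lure that approaches (or exceeds) the hit rate for studied words, while unrelated words are rarely misrecognized.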
Encoding—the initial registration of information—is essential in the learning and memory process. Unless an event is encoded in some fashion, it will not be successfully remembered later. However, just because an event is encoded (even if it is encoded well), there's no guarantee that it will be remembered later.

Storage

Every experience we have changes our brains. That may seem like a bold, even strange, claim at first, but it's true. We encode each of our experiences within the structures of the nervous system, making new impressions in the process—and each of those impressions involves changes in the brain. Psychologists (and neurobiologists) say that experiences leave memory traces, or engrams (the two terms are synonyms). Memories have to be stored somewhere in the brain; to do so, the brain biochemically alters itself and its neural tissue. Just like you might write yourself a note to remind you of something, the brain "writes" a memory trace, changing its own physical composition to do so. The basic idea is that events (occurrences in our environment) create engrams through a process of consolidation: the neural changes that occur after learning to create the memory trace of an experience. Although neurobiologists are concerned with exactly what neural processes change when memories are created, for psychologists, the term memory trace simply refers to the physical change in the nervous system (whatever that may be, exactly) that represents our experience.

Although the concept of engram or memory trace is extremely useful, we shouldn't take the term too literally. It is important to understand that memory traces are not perfect little packets of information that lie dormant in the brain, waiting to be called forward to give an accurate report of past experience. Memory traces are not like video or audio recordings, capturing experience with great accuracy; as discussed earlier, we often have errors in our memory, which would not exist if memory traces were perfect packets of information. Thus, it is wrong to think that remembering involves simply "reading out" a faithful record of past experience. Rather, when we remember past events, we reconstruct them with the aid of our memory traces—but also with our current beliefs about what happened. For example, if you were trying to recall for the police who started a fight at a bar, you may not have a memory trace of who pushed whom first. However, let's say you remember that one of the guys held the door open for you. When thinking back to the start of the fight, this knowledge (of how one guy was friendly to you) may unconsciously influence your memory of what happened in favor of the nice guy. Thus, memory is a construction of what you actually recall and what you believe happened. In a phrase, remembering is reconstructive (we reconstruct our past with the aid of memory traces) not reproductive (a perfect reproduction or recreation of the past).

Psychologists refer to the time between learning and testing as the retention interval. Memories can consolidate during that time, aiding retention. However, experiences can also occur that undermine the memory. For example, think of what you had for lunch yesterday—a pretty easy task.
However, if you had to recall what you had for lunch 17 days ago, you may well fail (assuming you don't eat the same thing every day). The 16 lunches you've had since that one have created retroactive interference. Retroactive interference refers to new activities (i.e., the subsequent lunches) during the retention interval (i.e., the time between the lunch 17 days ago and now) that interfere with retrieving the specific, older memory (i.e., the lunch details from 17 days ago). But just as newer things can interfere with remembering older things, so can the opposite happen. Proactive interference occurs when past memories interfere with the encoding of new ones. For example, if you have ever studied a second language, oftentimes the grammar and vocabulary of your native language will pop into your head, impairing your fluency in the foreign language. Retroactive interference is one of the main causes of forgetting (McGeoch, 1932).

In the module Eyewitness Testimony and Memory Biases (http://noba.to/uy49tm37), Elizabeth Loftus describes her fascinating work on eyewitness memory, in which she shows how memory for an event can be changed via misinformation supplied during the retention interval. For example, if you witnessed a car crash but subsequently heard people describing it from their own perspective, this new information may interfere with or disrupt your own personal recollection of the crash. In fact, you may even come to remember the event happening exactly as the others described it! This misinformation effect in eyewitness memory represents a type of retroactive interference that can occur during the retention interval (see Loftus [2005] for a review). Of course, if correct information is given during the retention interval, the witness's memory will usually be improved. Although interference may arise between the occurrence of an event and the attempt to recall it, the effect itself is always expressed when we retrieve memories, the topic to which we turn next.

Retrieval

Endel Tulving argued that "the key process in memory is retrieval" (1991, p. 91). Why should retrieval be given more prominence than encoding or storage? For one thing, if information were encoded and stored but could not be retrieved, it would be useless. As discussed previously in this module, we encode and store thousands of events—conversations, sights and sounds—every day, creating memory traces. However, we later access only a tiny portion of what we've taken in. Most of our memories will never be used—in the sense of being brought back to mind, consciously. This fact seems so obvious that we rarely reflect on it. All those events that happened to you in the fourth grade that seemed so important then? Now, many years later, you would struggle to remember even a few. You may wonder if the traces of those memories still exist in some latent form. Unfortunately, with currently available methods, it is impossible to know.

Psychologists distinguish information that is available in memory from that which is accessible (Tulving & Pearlstone, 1966). Available information is the information that is stored in memory—but precisely how much and what types are stored cannot be known. That is, all we can know is what information we can retrieve—accessible information. The assumption is that accessible information represents only a tiny slice of the information available in our brains.
Most of us have had the experience of trying to remember some fact or event, giving up, and then—all of a sudden!—it comes to us at a later time, even after we've stopped trying to remember it. Similarly, we all know the experience of failing to recall a fact, but then, if we are given several choices (as in a multiple-choice test), we are easily able to recognize it. What factors determine what information can be retrieved from memory? One critical factor is the type of hints, or cues, in the environment. You may hear a song on the radio that suddenly evokes memories of an earlier time in your life, even if you were not trying to remember it when the song came on. Nevertheless, the song is closely associated with that time, so it brings the experience to mind.

The general principle that underlies the effectiveness of retrieval cues is the encoding specificity principle (Tulving & Thomson, 1973): when people encode information, they do so in specific ways. For example, take the song on the radio: perhaps you heard it while you were at a terrific party, having a great, philosophical conversation with a friend. Thus, the song became part of that whole complex experience. Years later, even though you haven't thought about that party in ages, when you hear the song on the radio, the whole experience rushes back to you. In general, the encoding specificity principle states that, to the extent a retrieval cue (the song) matches or overlaps the memory trace of an experience (the party, the conversation), it will be effective in evoking the memory. A classic experiment on the encoding specificity principle had scuba divers memorize a set of words either on land or underwater. Later, the divers were tested on the words, either in the same environment in which they had learned them or in the other one. As a result of encoding specificity, divers who took the test in the same environment in which they had learned the words recalled more words (Godden & Baddeley, 1975) than divers who took the test in the other setting. In this instance, the physical context itself provided cues for retrieval. This is why it's good to study for midterms and finals in the same room you'll be taking them in.

One caution with this principle, though, is that, for the cue to work, it can't match too many other experiences (Nairne, 2002; Watkins, 1975). Consider a lab experiment. Suppose you study 100 items; 99 are words, and one is a picture—of a penguin, item 50 in the list. Afterwards, the cue "recall the picture" would evoke "penguin" perfectly. No one would miss it. However, if the word "penguin" were placed in the same spot among the other 99 words, it would be remembered far less well. This outcome shows the power of distinctiveness that we discussed in the section on encoding: one picture is perfectly recalled from among 99 words because it stands out. Now consider what would happen if the experiment were repeated, but there were 25 pictures distributed within the 100-item list. Although the picture of the penguin would still be there, the probability that the cue "recall the picture" would bring the penguin to mind would drop correspondingly. Watkins (1975) referred to this outcome as demonstrating the cue overload principle. That is, to be effective, a retrieval cue cannot be overloaded with too many memories. For the cue "recall the picture" to be effective, it should only match one item in the target set (as in the one-picture, 99-word case).
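One intuitive way to think about cue overload is as a cue whose effectiveness is divided among everything it matches. The sketch below is a toy model invented for illustration, not a formula from the research cited above, but it captures why "recall the picture" works with one picture and fails with twenty-five.

```python
# A toy illustration of the cue overload principle: the usefulness of a
# retrieval cue drops as more studied items match it. The 1/k "division
# of effectiveness" model here is a simplification for illustration only.

def cue_effectiveness(num_matching_items: int) -> float:
    """Probability the cue retrieves any one particular matching item,
    assuming the cue's effectiveness is divided evenly among matches."""
    return 1.0 / num_matching_items

# "Recall the picture" with one picture among 99 words:
print(cue_effectiveness(1))    # 1.0 -- the cue points at exactly one item

# The same cue when 25 pictures appear in the list:
print(cue_effectiveness(25))   # 0.04 -- the cue is overloaded
```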
To sum up how memory cues function: for a retrieval cue to be effective, a match must exist between the cue and the desired target memory; furthermore, to produce the best retrieval, the cue-target relationship should be distinctive. Next, we will see how the encoding specificity principle can work in practice.

Psychologists measure memory performance by using production tests (involving recall) or recognition tests (involving the selection of correct from incorrect information, e.g., a multiple-choice test). For example, with our list of 100 words, one group of people might be asked to recall the list in any order (a free recall test), while a different group might be asked to circle the 100 studied words out of a mix with another 100, unstudied words (a recognition test). In this situation, the recognition test would likely produce better performance from participants than the recall test.

We usually think of recognition tests as being quite easy, because the cue for retrieval is a copy of the actual event that was presented for study. After all, what could be a better cue than the exact target (memory) the person is trying to access? In most cases, this line of reasoning is true; nevertheless, recognition tests do not provide perfect indexes of what is stored in memory. That is, you can fail to recognize a target staring you right in the face, yet be able to recall it later with a different set of cues (Watkins & Tulving, 1975). For example, suppose you had the task of recognizing the surnames of famous authors. At first, you might think that being given the actual last name would always be the best cue. However, research has shown this not necessarily to be true (Muter, 1984). When given names such as Tolstoy, Shaw, Shakespeare, and Lee, subjects might well say that Tolstoy and Shakespeare are famous authors, whereas Shaw and Lee are not. But, when given a cued recall test using first names, people often recall items (produce them) that they had failed to recognize before. For example, in this instance, a cue like George Bernard ________ often leads to a recall of "Shaw," even though people initially failed to recognize Shaw as a famous author's name. Yet, when given the cue "William," people may not come up with Shakespeare, because William is a common name that matches many people (the cue overload principle at work).

This strange fact—that recall can sometimes lead to better performance than recognition—can be explained by the encoding specificity principle. As a cue, George Bernard ________ matches the way the famous writer is stored in memory better than his surname, Shaw, does (even though Shaw is the target itself). Further, the match is quite distinctive with George Bernard ________, but the cue William ________ is much more overloaded (Prince William, William Yeats, William Faulkner, will.i.am). The phenomenon we have been describing is called the recognition failure of recallable words, which highlights the point that which cue will be most effective depends on how the information has been encoded (Tulving & Thomson, 1973). The point is, the cues that work best to evoke retrieval are those that recreate the event or name to be remembered, whereas sometimes even the target itself, such as Shaw in the above example, is not the best cue. Whenever we think about our past, we engage in the act of retrieval.
We usually think that retrieval is an objective act because we tend to imagine that retrieving a memory is like pulling a book from a shelf, and after we are done with it, we return the book to the shelf just as it was. However, research shows this assumption to be false; far from being a static repository of data, the memory is constantly changing. In fact, every time we retrieve a memory, it is altered. For example, the act of retrieval itself (of a fact, concept, or event) makes the retrieved memory much more likely to be retrieved again, a phenomenon called the testing effect or the retrieval practice effect (Pyc & Rawson, 2009; Roediger & Karpicke, 2006). However, retrieving some information can actually cause us to forget other information related to it, a phenomenon called retrieval-induced forgetting (Anderson, Bjork, & Bjork, 1994). Thus, the act of retrieval can be a double-edged sword—strengthening the memory just retrieved (usually by a large amount) but harming related information (though this effect is often relatively small).

As discussed earlier, retrieval of distant memories is reconstructive. We weave the concrete bits and pieces of events in with assumptions and preferences to form a coherent story (Bartlett, 1932). For example, if during your 10th birthday, your dog got to your cake before you did, you would likely tell that story for years afterward. Say, then, in later years you misremember where the dog actually found the cake, but repeat that error over and over during subsequent retellings of the story. Over time, that inaccuracy would become a basic fact of the event in your mind. Just as retrieval practice (repetition) enhances accurate memories, so will it strengthen errors or false memories (McDermott, 2006). Sometimes memories can even be manufactured just from hearing a vivid story. Consider the following episode, recounted by Jean Piaget, the famous developmental psychologist, from his childhood:

One of my first memories would date, if it were true, from my second year. I can still see, most clearly, the following scene, in which I believed until I was about 15. I was sitting in my pram . . . when a man tried to kidnap me. I was held in by the strap fastened round me while my nurse bravely tried to stand between me and the thief. She received various scratches, and I can still vaguely see those on her face. . . . When I was about 15, my parents received a letter from my former nurse saying that she had been converted to the Salvation Army. She wanted to confess her past faults, and in particular to return the watch she had been given as a reward on this occasion. She had made up the whole story, faking the scratches. I therefore must have heard, as a child, this story, which my parents believed, and projected it into the past in the form of a visual memory. . . . Many real memories are doubtless of the same order. (Norman & Schacter, 1997, pp. 187–188)

Piaget's vivid account represents a case of a pure reconstructive memory. He heard the tale told repeatedly, and doubtless told it (and thought about it) himself. The repeated telling cemented the events as though they had really happened, just as we are all open to the possibility of having "many real memories ... of the same order." The fact that one can remember precise details (the location, the scratches) does not necessarily indicate that the memory is true, a point that has been confirmed in laboratory studies, too (e.g., Norman & Schacter, 1997).
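The testing effect suggests a practical study tool: schedule repeated retrieval attempts, and retest the items you miss more often than the ones you retrieve easily. Below is a minimal sketch of a Leitner-style flashcard scheduler in Python; the box structure, intervals, and names are illustrative choices for demonstrating the idea, not anything prescribed by the research above.

```python
# A minimal Leitner-style flashcard scheduler built on the testing effect:
# each successful retrieval promotes a card to a less-frequently-reviewed
# box; a failure demotes it back to box 0 for frequent practice.
# Box numbers and review intervals are illustrative choices.

REVIEW_EVERY = {0: 1, 1: 2, 2: 4}   # box -> review every N sessions

class Deck:
    def __init__(self, cards):
        self.box = {card: 0 for card in cards}   # all cards start in box 0

    def due(self, session: int):
        return [c for c, b in self.box.items() if session % REVIEW_EVERY[b] == 0]

    def record(self, card: str, recalled: bool):
        if recalled:
            self.box[card] = min(self.box[card] + 1, max(REVIEW_EVERY))
        else:
            self.box[card] = 0   # missed items return to frequent review

deck = Deck(["encoding", "consolidation", "retrieval cue"])
deck.record("encoding", recalled=True)        # promoted to box 1
deck.record("retrieval cue", recalled=False)  # stays in box 0
print(deck.due(session=1))  # ['consolidation', 'retrieval cue']
```

The design choice is the important part: retrieval attempts, not rereading, drive the schedule, which is exactly what the testing-effect findings recommend.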
Putting It All Together: Improving Your Memory

A central theme of this module has been the importance of the encoding and retrieval processes, and their interaction. To recap: to improve learning and memory, we need to encode information in conjunction with excellent cues that will bring back the remembered events when we need them. But how do we do this? Keep in mind the two critical principles we have discussed: to maximize retrieval, we should construct meaningful cues that remind us of the original experience, and those cues should be distinctive and not associated with other memories. These two conditions are critical in maximizing cue effectiveness (Nairne, 2002).

So, how can these principles be adapted for use in many situations? Let's go back to how we started the module, with Simon Reinhard's ability to memorize huge numbers of digits. Although it was not obvious, he applied these same general memory principles, but in a more deliberate way. In fact, all mnemonic devices, or memory aids/tricks, rely on these fundamental principles. In a typical case, the person learns a set of cues and then applies these cues to learn and remember information. Consider the set of 20 items below that are easy to learn and remember (Bower & Reitman, 1972).

One is a gun. Eleven is penny-one, hot dog bun.
Two is a shoe. Twelve is penny-two, airplane glue.
Three is a tree. Thirteen is penny-three, bumble bee.
Four is a door. Fourteen is penny-four, grocery store.
Five is knives. Fifteen is penny-five, big beehive.
Six is sticks. Sixteen is penny-six, magic tricks.
Seven is oven. Seventeen is penny-seven, go to heaven.
Eight is plate. Eighteen is penny-eight, golden gate.
Nine is wine. Nineteen is penny-nine, ball of twine.
Ten is hen. Twenty is penny-ten, ballpoint pen.

It would probably take you less than 10 minutes to learn this list and practice recalling it several times (remember to use retrieval practice!). If you were to do so, you would have a set of peg words on which you could "hang" memories. In fact, this mnemonic device is called the peg word technique. If you then needed to remember some discrete items—say a grocery list, or points you wanted to make in a speech—this method would let you do so in a very precise yet flexible way. Suppose you had to remember bread, peanut butter, bananas, lettuce, and so on. The way to use the method is to form a vivid image of what you want to remember and imagine it interacting with your peg words (as many as you need). For example, for these items, you might imagine a large gun (the first peg word) shooting a loaf of bread, then a jar of peanut butter inside a shoe, then large bunches of bananas hanging from a tree, then a door slamming on a head of lettuce with leaves flying everywhere. The idea is to provide good, distinctive cues (the weirder the better!) for the information you need to remember while you are learning it. If you do this, then retrieving it later is relatively easy. You know your cues perfectly (one is gun, etc.), so you simply go through your cue word list and "look" in your mind's eye at the image stored there (bread, in this case).

This peg word method may sound strange at first, but it works quite well, even with little training (Roediger, 1980). One word of warning, though, is that the items to be remembered need to be presented relatively slowly at first, until you have practice associating each with its cue word. People get faster with time. Another interesting aspect of this technique is that it's just as easy to recall the items in backwards order as forwards. This is because the peg words provide direct access to the memorized items, regardless of order.
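If you want to experiment with the technique, the short sketch below shows its mechanical part: pairing each list item with its peg so you can form the interacting images yourself. The grocery list and the printed prompts are, of course, just examples.

```python
# A sketch of the peg word technique: pair each list item with a
# memorized peg and form a vivid interacting image. The images are
# yours to generate; the code only shows the pairing structure.

PEGS = {1: "gun", 2: "shoe", 3: "tree", 4: "door", 5: "knives",
        6: "sticks", 7: "oven", 8: "plate", 9: "wine", 10: "hen"}

groceries = ["bread", "peanut butter", "bananas", "lettuce"]

for number, item in enumerate(groceries, start=1):
    print(f"{number}: imagine the {item} interacting with the {PEGS[number]}")

# Retrieval runs in either direction: given a position, the peg cues the
# image, so the list can be walked forwards or backwards with equal ease.
print(PEGS[3])  # "tree" -> the image of bananas hanging from a tree
```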
How did Simon Reinhard remember those digits? Essentially he has a much more complex system based on these same principles. In his case, he uses "memory palaces" (elaborate scenes with discrete places) combined with huge sets of images for digits. For example, imagine mentally walking through the home where you grew up and identifying as many distinct areas and objects as possible. Simon has hundreds of such memory palaces that he uses. Next, for remembering digits, he has memorized a set of 10,000 images. Every four-digit number for him immediately brings forth a mental image. So, for example, 6187 might recall Michael Jackson. When Simon hears all the numbers coming at him, he places an image for every four digits into locations in his memory palace. He can do this at an incredibly rapid rate, faster than 4 digits per 4 seconds when they are flashed visually, as in the demonstration at the beginning of the module. As noted, his record is 240 digits, recalled in exact order. Simon also holds the world record in an event called "speed cards," which involves memorizing the precise order of a shuffled deck of cards. Simon was able to do this in 21.19 seconds! Again, he uses his memory palaces, and he encodes groups of cards as single images.
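Although the details of Simon's private image set are his own, the chunking machinery behind such digit systems is simple. The sketch below splits a digit string into four-digit groups and looks each group up in a tiny, entirely made-up image dictionary; a practiced mnemonist would have an image for every possible chunk.

```python
# A sketch of the digit-image idea behind memory-palace systems: split a
# digit string into four-digit chunks and map each chunk to a memorized
# image. The dictionary here is hypothetical; a full system would cover
# all 10,000 four-digit combinations.

IMAGES = {"6187": "Michael Jackson",   # the example pairing from the text
          "1492": "a sailing ship",    # invented mapping
          "0042": "a towel"}           # invented mapping

def chunks(digits: str, size: int = 4):
    return [digits[i:i + size] for i in range(0, len(digits), size)]

def to_images(digits: str):
    return [IMAGES.get(chunk, f"<no image yet for {chunk}>")
            for chunk in chunks(digits)]

print(to_images("618714920042"))
# ['Michael Jackson', 'a sailing ship', 'a towel'] -- each image would then
# be placed at the next location along a well-rehearsed memory palace route.
```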
Many books exist on how to improve memory using mnemonic devices, but all involve forming distinctive encoding operations and then having an infallible set of memory cues. We should add that to develop and use these memory systems beyond the basic peg system outlined above takes a great amount of time and concentration. The World Memory Championships are held every year and the records keep improving. However, for most common purposes, just keep in mind that to remember well you need to encode information in a distinctive way and to have good cues for retrieval. You can adapt a system that will meet most any purpose.

Outside Resources

Book: Brown, P. C., Roediger, H. L., & McDaniel, M. A. (2014). Make it stick: The science of successful learning. Cambridge, MA: Harvard University Press. www.amazon.com/Make-Stick-Sc.../dp/0674729013

Student Video 1: Eureka Foong's - The Misinformation Effect. This is a student-made video illustrating this phenomenon of altered memory. It was one of the winning entries in the 2014 Noba Student Video Award.

Student Video 2: Kara McCord's - Flashbulb Memories. This is a student-made video illustrating this phenomenon of autobiographical memory. It was one of the winning entries in the 2014 Noba Student Video Award.

Student Video 3: Ang Rui Xia & Ong Jun Hao's - The Misinformation Effect. Another student-made video exploring the misinformation effect. Also an award winner from 2014.

Video: Simon Reinhard breaking the world record in speed cards.

Web: Retrieval Practice, a website with research, resources, and tips for both educators and learners around the memory-strengthening skill of retrieval practice. http://www.retrievalpractice.org/

Discussion Questions

1. Mnemonists like Simon Reinhard develop mental "journeys," which enable them to use the method of loci. Develop your own journey, which contains 20 places, in order, that you know well. One example might be: the front walkway to your parents' apartment; their doorbell; the couch in their living room; etc. Be sure to use a set of places that you know well and that have a natural order to them (e.g., the walkway comes before the doorbell). Now you are more than halfway toward being able to memorize a set of 20 nouns, in order, rather quickly. As an optional second step, have a friend make a list of 20 such nouns and read them to you, slowly (e.g., one every 5 seconds). Use the method to attempt to remember the 20 items.

2. Recall a recent argument or misunderstanding you have had about memory (e.g., a debate over whether your girlfriend/boyfriend had agreed to something). In light of what you have just learned about memory, how do you think about it? Is it possible that the disagreement can be understood by one of you making a pragmatic inference?

3. Think about what you've learned in this module and about how you study for tests. On the basis of what you have learned, is there something you want to try that might help your study habits?

Vocabulary

Autobiographical memory: Memory for the events of one's life.
Consolidation: The process occurring after encoding that is believed to stabilize memory traces.
Cue overload principle: The principle stating that the more memories that are associated with a particular retrieval cue, the less effective the cue will be in prompting retrieval of any one memory.
Distinctiveness: The principle that unusual events (in a context of similar events) will be recalled and recognized better than uniform (nondistinctive) events.
Encoding: The initial experience of perceiving and learning events.
Encoding specificity principle: The hypothesis that a retrieval cue will be effective to the extent that information encoded from the cue overlaps or matches information in the engram or memory trace.
Engrams: A term indicating the change in the nervous system representing an event; also, memory trace.
Episodic memory: Memory for events in a particular time and place.
Flashbulb memory: A vivid personal memory of receiving the news of some momentous (and usually emotional) event.
Memory traces: A term indicating the change in the nervous system representing an event.
Misinformation effect: When erroneous information occurring after an event is remembered as having been part of the original event.
Mnemonic devices: Strategies for remembering large amounts of information, usually involving imagining events occurring on a journey or with some other set of memorized cues.
Recoding: The ubiquitous process during learning of taking information in one form and converting it to another form, usually one more easily remembered.
Retrieval: The process of accessing stored information.
Retroactive interference: The phenomenon whereby events that occur after some particular event of interest will usually cause forgetting of the original event.
Semantic memory: The more or less permanent store of knowledge that people have.
Storage: The stage in the learning/memory process that bridges encoding and retrieval; the persistence of memory over time.
By Nicole Dudukovic and Brice Kuhl, New York University

This module explores the causes of everyday forgetting and considers pathological forgetting in the context of amnesia. Forgetting is viewed as an adaptive process that allows us to be efficient in terms of the information we retain.

learning objectives

• Identify five reasons we forget and give examples of each.
• Describe how forgetting can be viewed as an adaptive process.
• Explain the difference between anterograde and retrograde amnesia.

Introduction

Chances are that you have experienced memory lapses and been frustrated by them. You may have had trouble remembering the definition of a key term on an exam or found yourself unable to recall the name of an actor from one of your favorite TV shows. Maybe you forgot to call your aunt on her birthday or you routinely forget where you put your cell phone. Oftentimes, the bit of information we are searching for comes back to us, but sometimes it does not. Clearly, forgetting seems to be a natural part of life. Why do we forget? And is forgetting always a bad thing?

Causes of Forgetting

One very common and obvious reason why you cannot remember a piece of information is because you did not learn it in the first place. If you fail to encode information into memory, you are not going to remember it later on. Usually, encoding failures occur because we are distracted or are not paying attention to specific details. For example, people have a lot of trouble recognizing an actual penny out of a set of drawings of very similar pennies, or lures, even though most of us have had a lifetime of experience handling pennies (Nickerson & Adams, 1979). However, few of us have studied the features of a penny in great detail, and since we have not attended to those details, we fail to recognize them later. Similarly, it has been well documented that distraction during learning impairs later memory (e.g., Craik, Govoni, Naveh-Benjamin, & Anderson, 1996). Most of the time this is not problematic, but in certain situations, such as when you are studying for an exam, failures to encode due to distraction can have serious repercussions.

Another proposed reason why we forget is that memories fade, or decay, over time. It has been known since the pioneering work of Hermann Ebbinghaus (1885/1913) that as time passes, memories get harder to recall. Ebbinghaus created more than 2,000 nonsense syllables, such as dax, bap, and rif, and studied his own memory for them, learning as many as 420 lists of 16 nonsense syllables for one experiment. He found that his memories diminished as time passed, with the most forgetting happening early on after learning. His observations and subsequent research suggested that if we do not rehearse a memory and the neural representation of that memory is not reactivated over a long period of time, the memory representation may disappear entirely or fade to the point where it can no longer be accessed. As you might imagine, it is hard to definitively prove that a memory has decayed as opposed to it being inaccessible for another reason. Critics argued that forgetting must be due to processes other than simply the passage of time, since disuse of a memory does not always guarantee forgetting (McGeoch, 1932). More recently, some memory theorists have proposed that recent memory traces may be degraded or disrupted by new experiences (Wixted, 2004). Memory traces need to be consolidated, or transferred from the hippocampus to more durable representations in the cortex, in order for them to last (McGaugh, 2000). When the consolidation process is interrupted by the encoding of other experiences, the memory trace for the original experience does not get fully developed and thus is forgotten.
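Decay is often summarized with an Ebbinghaus-style forgetting curve. One common simplified formalization models retention as an exponential function of time since learning; the specific equation and parameter value below are a conventional textbook approximation for illustration, not numbers fit by Ebbinghaus himself.

```python
# A simplified exponential forgetting curve, R = exp(-t / s), where t is
# time since learning and s is a "stability" parameter: larger s means
# slower forgetting. Both the functional form and the s value are
# illustrative conventions, not parameters reported in the sources above.

import math

def retention(t_hours: float, stability: float = 20.0) -> float:
    return math.exp(-t_hours / stability)

for t in (0, 1, 24, 24 * 7):
    print(f"after {t:4d} h: {retention(t):.2f} retained")
# Loss is steepest soon after learning and the curve then flattens,
# matching Ebbinghaus's observation that most forgetting happens early.
```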
Both encoding failures and decay account for more permanent forms of forgetting, in which the memory trace does not exist, but forgetting may also occur when a memory exists yet we temporarily cannot access it. This type of forgetting may occur when we lack the appropriate retrieval cues for bringing the memory to mind. You have probably had the frustrating experience of forgetting your password for an online site. Usually, the password has not been permanently forgotten; instead, you just need the right reminder to remember what it is. For example, if your password was "pizza0525," and you received the password hints "favorite food" and "Mom's birthday," you would easily be able to retrieve it. Retrieval hints can bring back to mind seemingly forgotten memories (Tulving & Pearlstone, 1966). One real-life illustration of the importance of retrieval cues comes from a study showing that whereas people have difficulty recalling the names of high school classmates years after graduation, they are easily able to recognize the names and match them to the appropriate faces (Bahrick, Bahrick, & Wittlinger, 1975). The names are powerful enough retrieval cues that they bring back the memories of the faces that went with them. The fact that the presence of the right retrieval cues is critical for remembering adds to the difficulty in proving that a memory is permanently forgotten as opposed to temporarily unavailable.

Retrieval failures can also occur because other memories are blocking or getting in the way of recalling the desired memory. This blocking is referred to as interference. For example, you may fail to remember the name of a town you visited with your family on summer vacation because the names of other towns you visited on that trip or on other trips come to mind instead. Those memories then prevent the desired memory from being retrieved. Interference is also relevant to the example of forgetting a password: passwords that we have used for other websites may come to mind and interfere with our ability to retrieve the desired password. Interference can be either proactive, in which old memories block the learning of new related memories, or retroactive, in which new memories block the retrieval of old related memories. For both types of interference, competition between memories seems to be key (Mensink & Raaijmakers, 1988). Your memory for a town you visited on vacation is unlikely to interfere with your ability to remember an Internet password, but it is likely to interfere with your ability to remember a different town's name.

Competition between memories can also lead to forgetting in a different way. Recalling a desired memory in the face of competition may result in the inhibition of related, competing memories (Levy & Anderson, 2002). You may have difficulty recalling the name of Kennebunkport, Maine, because other Maine towns, such as Bar Harbor, Winterport, and Camden, come to mind instead. However, if you are able to recall Kennebunkport despite strong competition from the other towns, this may actually change the competitive landscape, weakening memory for those other towns' names, leading to forgetting of them instead.
Finally, some memories may be forgotten because we deliberately attempt to keep them out of mind. Over time, by actively trying not to remember an event, we can sometimes successfully keep the undesirable memory from being retrieved, either by inhibiting the undesirable memory or by generating diversionary thoughts (Anderson & Green, 2001). Imagine that you slipped and fell in your high school cafeteria during lunch time, and everyone at the surrounding tables laughed at you. You would likely wish to avoid thinking about that event and might try to prevent it from coming to mind. One way that you could accomplish this is by thinking of other, more positive, events that are associated with the cafeteria. Eventually, this memory may be suppressed to the point that it would only be retrieved with great difficulty (Hertel & Calcaterra, 2005).

Adaptive Forgetting

We have explored five different causes of forgetting. Together they can account for the day-to-day episodes of forgetting that each of us experiences. Typically, we think of these episodes in a negative light and view forgetting as a memory failure. Is forgetting ever good? Most people would reason that forgetting that occurs in response to a deliberate attempt to keep an event out of mind is a good thing. No one wants to be constantly reminded of falling on their face in front of all of their friends. However, beyond that, it can be argued that forgetting is adaptive, allowing us to be efficient and hold onto only the most relevant memories (Bjork, 1989; Anderson & Milson, 1989).

Shereshevsky, or "S," the mnemonist studied by Alexander Luria (1968), was a man who almost never forgot. His memory appeared to be virtually limitless. He could memorize a table of 50 numbers in under 3 minutes and recall the numbers in rows, columns, or diagonals with ease. He could recall lists of words and passages that he had memorized over a decade before. Yet Shereshevsky found it difficult to function in his everyday life because he was constantly distracted by a flood of details and associations that sprung to mind. His case history suggests that remembering everything is not always a good thing. You may occasionally have trouble remembering where you parked your car, but imagine if every time you had to find your car, every single former parking space came to mind. Sorting through all of those irrelevant memories would make the task impossibly difficult. Thus, forgetting is adaptive in that it makes us more efficient. The price of that efficiency is those moments when our memories seem to fail us (Schacter, 1999).

Amnesia

Clearly, remembering everything would be maladaptive, but what would it be like to remember nothing? We will now consider a profound form of forgetting called amnesia that is distinct from more ordinary forms of forgetting. Most of us have had exposure to the concept of amnesia through popular movies and television. Typically, in these fictionalized portrayals of amnesia, a character suffers some type of blow to the head and suddenly has no idea who they are and can no longer recognize their family or remember any events from their past. After some period of time (or another blow to the head), their memories come flooding back to them. Unfortunately, this portrayal of amnesia is not very accurate. What does amnesia typically look like?

The most widely studied amnesic patient was known by his initials H. M. (Scoville & Milner, 1957).
As a teenager, H. M. suffered from severe epilepsy, and in 1953, he underwent surgery to have both of his medial temporal lobes removed to relieve his epileptic seizures. The medial temporal lobes encompass the hippocampus and surrounding cortical tissue. Although the surgery was successful in reducing H. M.'s seizures and his general intelligence was preserved, the surgery left H. M. with a profound and permanent memory deficit. From the time of his surgery until his death in 2008, H. M. was unable to learn new information, a memory impairment called anterograde amnesia. H. M. could not remember any event that occurred after his surgery, including highly significant ones, such as the death of his father. He could not remember a conversation he had a few minutes prior or recognize the face of someone who had visited him that same day. He could keep information in his short-term, or working, memory, but when his attention turned to something else, that information was lost for good. It is important to note that H. M.'s memory impairment was restricted to declarative memory, or conscious memory for facts and events. H. M. could learn new motor skills and showed improvement on motor tasks even in the absence of any memory for having performed the task before (Corkin, 2002).

In addition to anterograde amnesia, H. M. also suffered from temporally graded retrograde amnesia. Retrograde amnesia refers to an inability to retrieve old memories that occurred before the onset of amnesia. Extensive retrograde amnesia in the absence of anterograde amnesia is very rare (Kopelman, 2000). More commonly, retrograde amnesia co-occurs with anterograde amnesia and shows a temporal gradient, in which memories closest in time to the onset of amnesia are lost, but more remote memories are retained (Hodges, 1994). In the case of H. M., he could remember events from his childhood, but he could not remember events that occurred a few years before the surgery.

Amnesic patients with damage to the hippocampus and surrounding medial temporal lobes typically manifest a clinical profile similar to H. M.'s. The degrees of anterograde and retrograde amnesia depend on the extent of the medial temporal lobe damage, with greater damage associated with a more extensive impairment (Reed & Squire, 1998). Anterograde amnesia provides evidence for the role of the hippocampus in the formation of long-lasting declarative memories, as damage to the hippocampus results in an inability to create this type of new memory. Similarly, temporally graded retrograde amnesia can be seen as providing further evidence for the importance of memory consolidation (Squire & Alvarez, 1995). A memory depends on the hippocampus until it is consolidated and transferred into a more durable form that is stored in the cortex. According to this theory, an amnesic patient like H. M. could remember events from his remote past because those memories were fully consolidated and no longer depended on the hippocampus.

The classic amnesic syndrome we have considered here is sometimes referred to as organic amnesia, and it is distinct from functional, or dissociative, amnesia. Functional amnesia involves a loss of memory that cannot be attributed to brain injury or any obvious brain disease and is typically classified as a mental disorder rather than a neurological disorder (Kihlstrom, 2005). The clinical profile of dissociative amnesia is very different from that of patients who suffer from amnesia due to brain damage or deterioration.
Individuals who experience dissociative amnesia often have a history of trauma. Their amnesia is retrograde, encompassing autobiographical memories from a portion of their past. In an extreme version of this disorder, people enter a dissociative fugue state, in which they lose most or all of their autobiographical memories and their sense of personal identity. They may be found wandering in a new location, unaware of who they are and how they got there. Dissociative amnesia is controversial, as both its causes and its very existence have been called into question. The memory loss associated with dissociative amnesia is much less likely to be permanent than it is in organic amnesia.

Conclusion

Just as the case study of the mnemonist Shereshevsky illustrates what a life with a near perfect memory would be like, amnesic patients show us what a life without memory would be like. Each of the mechanisms we discussed that explain everyday forgetting—encoding failures, decay, insufficient retrieval cues, interference, and intentional attempts to forget—helps to keep us highly efficient, retaining the important information and, for the most part, forgetting the unimportant. Amnesic patients allow us a glimpse into what life would be like if we suffered from profound forgetting and perhaps show us that our everyday lapses in memory are not so bad after all.

Outside Resources

Web: Brain Case Study: Patient HM https://bigpictureeducation.com/brain-case-study-patient-hm

Web: Self-experiment, Penny demo www.indiana.edu/~p1013447/dictionary/penny.htm

Web: The Man Who Couldn't Remember http://www.pbs.org/wgbh/nova/body/corkin-hm-memory.html

Discussion Questions

1. Is forgetting good or bad? Do you agree with the authors that forgetting is an adaptive process? Why or why not?

2. Can we ever prove that something is forgotten? Why or why not?

3. Which of the five reasons for forgetting do you think explains the majority of incidences of forgetting? Why?

4. How is real-life amnesia different than amnesia that is portrayed on TV and in film?

Vocabulary

Anterograde amnesia: Inability to form new memories for facts and events after the onset of amnesia.
Consolidation: Process by which a memory trace is stabilized and transformed into a more durable form.
Decay: The fading of memories with the passage of time.
Declarative memory: Conscious memories for facts and events.
Dissociative amnesia: Loss of autobiographical memories from a period in the past in the absence of brain injury or disease.
Encoding: Process by which information gets into memory.
Interference: Other memories get in the way of retrieving a desired memory.
Medial temporal lobes: Inner region of the temporal lobes that includes the hippocampus.
Retrieval: Process by which information is accessed from memory and utilized.
Retrograde amnesia: Inability to retrieve memories for facts and events acquired before the onset of amnesia.
Temporally graded retrograde amnesia: Inability to retrieve memories from just prior to the onset of amnesia with intact memory for more remote events.
• 6.1: Research Methods in Developmental Psychology This module describes different research techniques that are used to study psychological phenomena in infants and children, research designs that are used to examine age-related changes in development, and unique challenges and special issues associated with conducting research with infants and children.
• 6.2: Cognitive Development in Childhood This module examines what cognitive development is, major theories about how it occurs, the roles of nature and nurture, whether it is continuous or discontinuous, and how research in the area is being used to improve education.
• 6.3: Social and Personality Development in Childhood Childhood social and personality development emerges through the interaction of social influences, biological maturation, and the child's representations of the social world and the self. This interaction is illustrated in a discussion of the influence of significant relationships, the development of social understanding, the growth of personality, and the development of social and emotional competence in childhood.
• 6.4: Adolescent Development Adolescence is a period that begins with puberty and ends with the transition to adulthood (approximately ages 10-20). Physical changes associated with puberty are triggered by hormones. Cognitive changes include improvements in complex and abstract thought, as well as development that happens at different rates in distinct parts of the brain, which increases adolescents' propensity for risky behavior because increases in sensation-seeking and reward motivation precede increases in cognitive control.
• 6.5: Emerging Adulthood Emerging adulthood has been proposed as a new life stage between adolescence and young adulthood, lasting roughly from ages 18 to 25. Five features make emerging adulthood distinctive: identity explorations, instability, self-focus, feeling in-between adolescence and adulthood, and a sense of broad possibilities for the future.
• 6.6: The Developing Parent This module focuses on parenthood as a developmental task of adulthood. Parents take on new roles as their children develop, transforming their identity as a parent as the developmental demands of their children change. The main influences on parenting, parent characteristics, child characteristics, and contextual factors, are described.
• 6.7: Aging Traditionally, research on aging described only the lives of people over age 65 and the very old. Contemporary theories and research recognize that biogenetic and psychological processes of aging are complex and lifelong. We consider contemporary questions about cognitive aging and changes in personality, self-related beliefs, social relationships, and subjective well-being. These four aspects of psychosocial aging are related to health and longevity.
• 6.8: Attachment Through the Life Course The purpose of this module is to provide a brief review of attachment theory—a theory designed to explain the significance of the close, emotional bonds that children develop with their caregivers and the implications of those bonds for understanding personality development. The module discusses the origins of the theory, research on individual differences in attachment security in infancy and childhood, and the role of attachment in adult relationships.

Chapter 6: Development

By Angela Lukowski and Helen Milojevich, University of California, Irvine

What do infants know about the world in which they live – and how do they grow and change with age?
These are the kinds of questions answered by developmental scientists. This module describes different research techniques that are used to study psychological phenomena in infants and children, research designs that are used to examine age-related changes in development, and unique challenges and special issues associated with conducting research with infants and children. Child development is a fascinating field of study, and many interesting questions remain to be examined by future generations of developmental scientists – maybe you will be among them!

learning objectives

• Describe different research methods used to study infant and child development
• Discuss different research designs, as well as their strengths and limitations
• Report on the unique challenges associated with conducting developmental research

Introduction

A group of children were playing hide-and-seek in the yard. Pilar raced to her hiding spot as her six-year-old cousin, Lucas, loudly counted, "… six, seven, eight, nine, ten! Ready or not, here I come!" Pilar let out a small giggle as Lucas ran over to find her – in the exact location where he had found his sister a short time before. At first glance, this behavior is puzzling: why would Pilar hide in exactly the same location where someone else was just found? Whereas older children and adults realize that it is likely best to hide in locations that have not been searched previously, young children do not have the same cognitive sophistication. But why not… and when do these abilities first develop?

Developmental psychologists investigate questions like these using research methods that are tailored to the particular capabilities of the infants and children being studied. Importantly, research in developmental psychology is more than simply examining how children behave during games of hide-and-seek – the results obtained from developmental research have been used to inform best practices in parenting, education, and policy. This module describes different research techniques that are used to study psychological phenomena in infants and children, research designs that are used to examine age-related changes in developmental processes over time, and unique challenges and special issues associated with conducting research with infants and children.

Research Methods

Infants and children—especially younger children—cannot be studied using the same research methods used in studies with adults. Researchers, therefore, have developed many creative ways to collect information about infant and child development. In this section, we highlight some of the methods that have been used by researchers who study infants and older children, separating them into three distinct categories: involuntary or obligatory responses, voluntary responses, and psychophysiological responses. We will also discuss other methods such as the use of surveys and questionnaires. At the end of this section, we give an example of how interview techniques can be used to study the beliefs and perceptions of older children and adults – a method that cannot be used with infants or very young children.

Involuntary or obligatory responses

One of the primary challenges in studying very young infants is that they have limited motor control: they cannot hold their heads up for more than short amounts of time, much less grab an interesting toy, play the piano, or turn a door knob. As a result, infants cannot actively engage with the environment in the same way as older children and adults.
For this reason, developmental scientists have designed research methods that assess involuntary or obligatory responses. These are behaviors in which people engage without much conscious thought or effort. For example, think about the last time you heard your name at a party – you likely turned your head to see who was talking without even thinking about it. Infants and young children also demonstrate involuntary responses to stimuli in the environment. When infants hear the voice of their mother, for instance, their heart rate increases – whereas if they hear the voice of a stranger, their heart rate decreases (Kisilevsky et al., 2003). Researchers study involuntary behaviors to better understand what infants know about the world around them.

One research method that capitalizes on involuntary or obligatory responses is a procedure known as habituation. In habituation studies, infants are presented with a stimulus such as a photograph of a face over and over again until they become bored with it. When infants become bored, they look away from the picture. If infants are then shown a new picture, such as a photograph of a different face, their interest returns and they look at the new picture. This is a phenomenon known as dishabituation. Habituation procedures work because infants generally look longer at novel stimuli relative to items that are familiar to them. This research technique takes advantage of involuntary or obligatory responses because infants are constantly looking around and observing their environments; they do not have to be taught to engage with the world in this way.

One classic habituation study was conducted by Baillargeon and colleagues (1985). These researchers were interested in the concept of object permanence, or the understanding that objects exist even when they cannot be seen or heard. For example, you know your toothbrush exists even though you are probably not able to see it right this second. To investigate object permanence in 5-month-old infants, the researchers used a violation of expectation paradigm. The researchers first habituated infants to an opaque screen that moved back and forth like a drawbridge (using the same procedure you just learned about in the previous paragraph). Once the infants were bored with the moving screen, they were shown two different scenarios to test their understanding of physical events. In both of these test scenarios, an opaque box was placed behind the moving screen. What differed between these two scenarios, however, was whether they confirmed or violated the solidity principle – the idea that two solid objects cannot occupy the same space at the same time. In the possible scenario, infants watched as the moving drawbridge stopped when it hit the opaque box (as would be expected based on the solidity principle). In the impossible scenario, the drawbridge appeared to move right through the space that was occupied by the opaque box! This impossible scenario violates the solidity principle in the same way as if you got out of your chair and walked through a wall, reappearing on the other side.

The results of this study revealed that infants looked longer at the impossible test event than at the possible test event. The authors suggested that the infants reacted in this way because they were surprised – the demonstration went against their expectation that two solids cannot move through one another. The findings indicated that 5-month-old infants understood that the box continued to exist even when they could not see it. Subsequent studies indicated that 3½- and 4½-month-old infants also demonstrate object permanence under similar test conditions (Baillargeon, 1987). These findings are notable because they suggest that infants understand object permanence much earlier than had been reported previously in research examining voluntary responses (although see more recent research by Cashon & Cohen, 2000).
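In violation-of-expectation studies like this one, the key dependent measure is looking time. Here is a minimal sketch of how such data might be summarized; all of the numbers below are fabricated for illustration, and real analyses would of course involve many more infants and inferential statistics.

```python
# A minimal summary of (fabricated) looking-time data from a
# violation-of-expectation study: longer looking at the impossible
# event is taken as evidence that infants' expectations were violated.

looking_times = {  # seconds of looking per infant (invented numbers)
    "possible":   [12.1, 10.4, 14.0, 11.3],
    "impossible": [19.6, 17.2, 21.5, 18.8],
}

def mean(xs):
    return sum(xs) / len(xs)

for event, times in looking_times.items():
    print(f"{event:>10}: mean looking time = {mean(times):.1f} s")

diff = mean(looking_times["impossible"]) - mean(looking_times["possible"])
print(f"difference: {diff:.1f} s longer at the impossible event")
```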
Subsequent studies indicated that 3½- and 4½-month-old infants also demonstrate object permanence under similar test conditions (Baillargeon, 1987). These findings are notable because they suggest that infants understand object permanence much earlier than had been reported previously in research examining voluntary responses (although see more recent research by Cashon & Cohen, 2000).

Voluntary responses

As infants and children age, researchers are increasingly able to study their understanding of the world through their voluntary responses. Voluntary responses are behaviors that a person completes by choice. For example, think about how you act when you go to the grocery store: you select whether to use a shopping cart or a basket, you decide which sections of the store to walk through, and you choose whether to stick to your grocery list or splurge on a treat. Importantly, these behaviors are completely up to you (and are under your control). Although they do not do a lot of grocery shopping, infants and children also have voluntary control over their actions. Children, for instance, choose which toys to play with.

Researchers study the voluntary responses of infants and young children in many ways. For example, developmental scientists study recall memory in infants and young children by looking at voluntary responses. Recall memory is memory of past events or episodes, such as what you did yesterday afternoon or on your last birthday. Whereas older children and adults are simply asked to talk about their past experiences, recall memory has to be studied in a different way in infants and very young children who cannot discuss the past using language. To study memory in these subjects, researchers use a behavioral method known as elicited imitation (Lukowski & Milojevich, in press).

In the elicited imitation procedure, infants play with toys that are designed in the lab to be unlike the kinds of things infants usually have at home. These toys (or event sequences, as researchers call them) can be put together in a certain way to produce an outcome that infants commonly enjoy. One of these events is called Find the Surprise. As shown in Figure 6.1.1, this toy has a door on the front that is held in place by a latch – and a small plastic figure is hidden on the inside. During the first part of the study, infants play with the toy in whichever way they want for a few minutes. The researcher then shows the infant how to make the toy work by (1) flipping the latch out of the way and (2) opening the door, revealing the plastic toy inside. The infant is allowed to play with the toy again either immediately after the demonstration or after a longer delay. As the infant plays, the researcher records whether the infant finds the surprise using the same procedure that was demonstrated.

Use of the elicited imitation procedure has taught developmental scientists a lot about how recall memory develops. For example, we now know that 6-month-old infants remember one step of a 3-step sequence for 24 hours (Barr, Dowden, & Hayne, 1996; Collie & Hayne, 1999). Nine-month-olds remember the individual steps that make up a 2-step event sequence for 1 month, but only 50% of infants remember to do the first step of the sequence before the second (Bauer, Wiebe, Carver, Waters, & Nelson, 2003; Bauer, Wiebe, Waters, & Bangston, 2001; Carver & Bauer, 1999).
When children are 20 months old, they remember the individual steps and temporal order of 4-step events for at least 12 months – the longest delay that has been tested to date (Bauer, Wenner, Dropik, & Wewerka, 2000).

Psychophysiology

Behavioral studies have taught us important information about what infants and children know about the world. Research on behavior alone, however, cannot tell scientists how brain development or biological changes impact (or are impacted by) behavior. For this reason, researchers may also record psychophysiological data, such as measures of heart rate, hormone levels, or brain activity. These measures may be recorded by themselves or in combination with behavioral data to better understand the bidirectional relations between biology and behavior.

One manner of understanding associations between brain development and behavioral advances is through the recording of event-related potentials, or ERPs. ERPs are recorded by fitting a research participant with a stretchy cap that contains many small sensors or electrodes. These electrodes record tiny electrical currents on the scalp of the participant in response to the presentation of particular stimuli, such as a picture or a sound (for additional information on recording ERPs from infants and children, see DeBoer, Scott, & Nelson, 2005). The recorded responses are then amplified thousands of times using specialized equipment so that they look like squiggly lines with peaks and valleys. Some of these brain responses have been linked to psychological phenomena. For example, researchers have identified a negative peak in the recorded waveform that they have called the N170 (Bentin, Allison, Puce, Perez, & McCarthy, 2010). The peak is named in this way because it is negative (hence the N) and because it occurs about 140 ms to 170 ms after a stimulus is presented (hence the 170). This peak is particularly sensitive to the presentation of faces, as it is commonly more negative when participants are presented with photographs of faces rather than with photographs of objects. In this way, researchers are able to identify brain activity associated with real-world thinking and behavior.

The use of ERPs has provided important insight as to how infants and children understand the world around them. In one study (Webb, Dawson, Bernier, & Panagiotides, 2006), researchers examined face and object processing in children with autism spectrum disorders, those with developmental delays, and those who were typically developing. The children wore electrode caps and had their brain activity recorded as they watched still photographs of faces (of their mother or of a stranger) and objects (including those that were familiar or unfamiliar to them). The researchers examined differences in face and object processing by group by observing a component of the brainwave they called the prN170 (because it was believed to be a precursor to the adult N170). Their results showed that the height of the prN170 peak (commonly called the amplitude) did not differ when faces or objects were presented to typically developing children. When considering children with autism, however, the peaks were higher when objects were presented relative to when faces were shown. Differences were also found in how long it took the brain to reach the negative peak (commonly called the latency of the response).
Whereas the peak was reached more quickly when typically developing children were presented with faces relative to objects, the opposite was true for children with autism. These findings suggest that children with autism are in some way processing faces differently than typically developing children (and, as reported in the manuscript, children with more general developmental delays).

Parent-report questionnaires

Developmental science has come a long way in assessing various aspects of infant and child development through behavior and psychophysiology – and new advances are happening every day. In many ways, however, the very youngest of research participants are still quite limited in the information they can provide about their own development. As such, researchers often ask the people who know infants and children best – commonly, their parents or guardians – to complete surveys or questionnaires about various aspects of their lives. These parent-report data can be analyzed by themselves or in combination with any collected behavioral or psychophysiological data.

One commonly used parent-report questionnaire is the Child Behavior Checklist (CBCL; Achenbach & Rescorla, 2000). Parents complete the preschooler version of this questionnaire by answering questions about child strengths, behavior problems, and disabilities, among other things. The responses provided by parents are used to identify whether the child has any behavioral issues, such as sleep difficulties, aggressive behaviors, depression, or attention deficit/hyperactivity problems.

A recent study used the CBCL-Preschool questionnaire (Achenbach & Rescorla, 2000) to examine preschooler functioning in relation to levels of stress experienced by their mothers while they were pregnant (Ronald, Pennell, & Whitehouse, 2011). Almost 3,000 pregnant women were recruited into the study during their pregnancy and were interviewed about their stressful life experiences. Later, when their children were 2 years old, mothers completed the CBCL-Preschool questionnaire. The results of the study showed that higher levels of maternal stress during pregnancy (stressful events such as a divorce or a move to a new house) were associated with increased attention deficit/hyperactivity problems in children over 2 years later. These findings suggest that stressful events experienced during prenatal development may be associated with problematic child behavioral functioning years later – although additional research is needed.

Interview techniques

Whereas infants and very young children are unable to talk about their own thoughts and behaviors, older children and adults are commonly asked to use language to discuss their thoughts and knowledge about the world. In fact, these verbal report paradigms are among the most widely used in psychological research. For instance, a researcher might present a child with a vignette or short story describing a moral dilemma, and the child would be asked to give their own thoughts and beliefs (Walrath, 2011). For example, children might react to the following: “Mr. Kohut’s wife is sick and only one medication can save her life. The medicine is extremely expensive and Mr. Kohut cannot afford it. The druggist will not lower the price. What should Mr. Kohut do, and why?” Children can provide written or verbal answers to these types of scenarios.
They can also offer their perspectives on issues ranging from attitudes towards drug use to the experience of fear while falling asleep to their memories of getting lost in public places – the possibilities are endless. Verbal reports such as interviews and surveys allow children to describe their own experience of the world.

Research Design

Now you know about some tools used to conduct research with infants and young children. Remember, research methods are the tools that are used to collect information. But it is easy to confuse research methods and research design. Research design is the strategy or blueprint for deciding how to collect and analyze information. Research design dictates which methods are used and how.

Researchers typically focus on two distinct types of comparisons when conducting research with infants and children. The first kind of comparison examines change within individuals. As the name suggests, this type of analysis measures the ways in which a specific person changes (or remains the same) over time. For example, a developmental scientist might be interested in studying the same group of infants at 12 months, 18 months, and 24 months to examine how vocabulary and grammar change over time. This kind of question would be best answered using a longitudinal research design. Another sort of comparison focuses on changes between groups. In this type of analysis, researchers study average changes in behavior between groups of different ages. Returning to the language example, a scientist might study the vocabulary and grammar used by 12-month-olds, 18-month-olds, and 24-month-olds to examine how language abilities change with age. This kind of question would be best answered using a cross-sectional research design.

Longitudinal research designs

Longitudinal research designs are used to examine behavior in the same infants and children over time. For example, when considering our example of hide-and-seek behaviors in preschoolers, a researcher might conduct a longitudinal study to examine whether 2-year-olds develop into better hiders over time. To this end, a researcher might observe a group of 2-year-old children playing hide-and-seek with plans to observe them again when they are 4 years old – and again when they are 6 years old. This study is longitudinal in nature because the researcher plans to study the same children as they age. Based on her data, the researcher might conclude that 2-year-olds develop more mature hiding abilities with age. Remember, researchers examine games such as hide-and-seek not because they are interested in the games themselves, but because they offer clues to how children think, feel, and behave at various ages.

Longitudinal studies may be conducted over the short term (over a span of months, as in Wiebe, Lukowski, & Bauer, 2010) or over much longer durations (years or decades, as in Lukowski et al., 2010). For these reasons, longitudinal research designs are optimal for studying stability and change over time. Longitudinal research also has limitations, however. For one, longitudinal studies are expensive: they require that researchers maintain continued contact with participants over time, and they necessitate that scientists have funding to conduct their work over extended durations (from infancy to when participants were 19 years old in Lukowski et al., 2010). An additional risk is attrition. Attrition occurs when participants fail to complete all portions of a study.
Participants may move, change their phone numbers, or simply become disinterested in participating over time. Researchers should account for the possibility of attrition by enrolling a larger sample into their study initially, as some participants will likely drop out over time.

The results from longitudinal studies may also be impacted by repeated assessments. Consider how well you would do on a math test if you were given the exact same exam every day for a week. Your performance would likely improve over time not necessarily because you developed better math abilities, but because you were continuously practicing the same math problems. This phenomenon is known as a practice effect. Practice effects occur when participants become better at a task over time because they have done it again and again, not because of natural psychological development.

A final limitation of longitudinal research is that the results may be impacted by cohort effects. Cohort effects occur when the results of the study are affected by the particular point in historical time during which participants are tested. As an example, think about how peer relationships in childhood have likely changed since February 2004 – the month and year Facebook was founded. Cohort effects can be problematic in longitudinal research because only one group of participants is tested at one point in time – different findings might be expected if participants of the same ages were tested at different points in historical time.

Cross-sectional designs

Cross-sectional research designs are used to examine behavior in participants of different ages who are tested at the same point in time. When considering our example of hide-and-seek behaviors in children, for example, a researcher might want to examine whether older children more often hide in novel locations (those in which another child in the same game has never hidden before) when compared to younger children. In this case, the researcher might observe 2-, 4-, and 6-year-old children as they play the game (the various age groups represent the “cross sections”). This research is cross-sectional in nature because the researcher plans to examine the behavior of children of different ages within the same study at the same time. Based on her data, the researcher might conclude that 2-year-olds more commonly hide in previously-searched locations relative to 6-year-olds.

Cross-sectional designs are useful for many reasons. Because participants of different ages are tested at the same point in time, data collection can proceed at a rapid pace. In addition, because participants are only tested at one point in time, practice effects are not an issue – children do not have the opportunity to become better at the task over time. Cross-sectional designs are also more cost-effective than longitudinal research designs because there is no need to maintain contact with and follow up on participants over time.

One of the primary limitations of cross-sectional research, however, is that the results yield information on age-related change, not development per se. That is, although the study described above can show that 6-year-olds are more advanced in their hiding behavior than 2-year-olds, the data used to come up with this conclusion were collected from different children. It could be, for instance, that this specific sample of 6-year-olds just happened to be particularly clever at hide-and-seek.
As such, the researcher cannot conclude that 2-year-olds develop into better hiders with age; she can only state that 6-year-olds, on average, are more sophisticated hiders relative to children 4 years younger.

Sequential research designs

Sequential research designs include elements of both longitudinal and cross-sectional research designs. Similar to longitudinal designs, sequential research features participants who are followed over time; similar to cross-sectional designs, sequential work includes participants of different ages. This research design is also distinct from those discussed previously in that children of different ages are enrolled into a study at various points in time, allowing researchers to examine age-related changes, track development within the same individuals as they age, and account for the possibility of cohort effects.

Consider, once again, our example of hide-and-seek behaviors. In a study with a sequential design, a researcher might enroll three separate groups of children (Groups A, B, and C). Children in Group A would be enrolled when they are 2 years old and would be tested again when they are 4 and 6 years old (similar in design to the longitudinal study described previously). Children in Group B would be enrolled when they are 4 years old and would be tested again when they are 6 and 8 years old. Finally, children in Group C would be enrolled when they are 6 years old and would be tested again when they are 8 and 10 years old.

Studies with sequential designs are powerful because they allow for both longitudinal and cross-sectional comparisons. This research design also allows for the examination of cohort effects. For example, the researcher could examine the hide-and-seek behavior of 6-year-olds in Groups A, B, and C to determine whether performance differed by group when participants were the same age. If performance differences were found, there would be evidence for a cohort effect. In the hide-and-seek example, this might mean that children from different time periods varied in the amount they giggled or how patient they were when waiting to be found. Sequential designs are also appealing because they allow researchers to learn a lot about development in a relatively short amount of time. In the previous example, a four-year research study would provide information about 8 years of developmental time by enrolling children ranging in age from two to ten years old.

Because they include elements of longitudinal and cross-sectional designs, sequential research has many of the same strengths and limitations as these other approaches. For example, sequential work may require less time and effort than longitudinal research, but more time and effort than cross-sectional research. Although practice effects may be an issue if participants are asked to complete the same tasks or assessments over time, attrition may be less problematic than what is commonly experienced in longitudinal research since participants may not have to remain involved in the study for such a long period of time.

When considering the best research design to use in their research, scientists think about their main research question and the best way to come up with an answer. Weighing the advantages and disadvantages of each of the described research designs can help you consider what sorts of studies would be best conducted using each of these different approaches.
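The logic of the three-cohort schedule described above can be summarized in a brief sketch. This is a minimal illustration of the hypothetical Groups A, B, and C – the data layout and variable names are invented for the example – showing where the longitudinal, cohort-effect, and efficiency comparisons come from:

```python
# A sketch of the sequential design described above: Groups A, B, and C
# enter the study at ages 2, 4, and 6 and are each retested twice at
# two-year intervals. The layout is illustrative, not a lab protocol.

test_ages = {
    "A": [2, 4, 6],
    "B": [4, 6, 8],
    "C": [6, 8, 10],
}

# Longitudinal comparison: follow one group across its own test ages.
print("Group A is tested at ages:", test_ages["A"])

# Cohort-effect check: an age at which every group is observed, so that
# same-age performance can be compared across cohorts.
shared = set(test_ages["A"]) & set(test_ages["B"]) & set(test_ages["C"])
print("Age tested in all three groups:", shared)  # {6}

# Efficiency: developmental span covered versus calendar time required.
all_ages = [age for ages in test_ages.values() for age in ages]
span = max(all_ages) - min(all_ages)                  # 8 years (ages 2-10)
study_years = test_ages["A"][-1] - test_ages["A"][0]  # 4 calendar years
print(f"{span} years of development studied in {study_years} years")
```

Note that the single shared test age (6 years) is what licenses the cohort-effect comparison in the text, and the span-versus-study-length arithmetic reproduces the “8 years of developmental time in a four-year study” point made above.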
Challenges Associated with Conducting Developmental Research

The previous sections describe research tools to assess development in infancy and early childhood, as well as the ways that research designs can be used to track age-related changes and development over time. Before you begin conducting developmental research, however, you must also be aware that testing infants and children comes with its own unique set of challenges. In the final section of this module, we review some of the main issues that are encountered when conducting research with the youngest of human participants. In particular, we focus our discussion on ethical concerns, recruitment issues, and participant attrition.

Ethical concerns

As a student of psychological science, you may already know that Institutional Review Boards (IRBs) review and approve all research projects that are conducted at universities, hospitals, and other institutions. An IRB is typically a panel of experts who read and evaluate proposals for research. IRB members want to ensure that the proposed research will be carried out ethically and that the potential benefits of the research outweigh the risks and harm for participants. What you may not know, though, is that the IRB considers some groups of participants to be more vulnerable or at-risk than others. Whereas university students are generally not viewed as vulnerable or at-risk, infants and young children commonly fall into this category. What makes infants and young children more vulnerable during research than young adults? One reason infants and young children are perceived as being at increased risk is their limited cognitive capabilities, which make them unable to state their willingness to participate in research or tell researchers when they would like to drop out of a study. For these reasons, infants and young children require special accommodations as they participate in the research process.

When thinking about special accommodations in developmental research, consider the informed consent process. If you have ever participated in psychological research, you may know through your own experience that adults commonly sign an informed consent statement (a contract stating that they agree to participate in research) after learning about a study. As part of this process, participants are informed of the procedures to be used in the research, along with any expected risks or benefits. Infants and young children cannot verbally indicate their willingness to participate, much less understand the balance of potential risks and benefits. As such, researchers are oftentimes required to obtain written informed consent from the parent or legal guardian of the child participant, an adult who is almost always present as the study is conducted. In fact, children are not asked to indicate whether they would like to be involved in a study at all (a process known as assent) until they are approximately seven years old. Because infants and young children also cannot easily indicate if they would like to discontinue their participation in a study, researchers must be sensitive to changes in the state of the participant (determining whether a child is too tired or upset to continue) as well as to parent desires (in some cases, parents might want to discontinue their involvement in the research). As in adult studies, researchers must always strive to protect the rights and well-being of the minor participants and their parents when conducting developmental science.
Recruitment

An additional challenge in developmental science is participant recruitment. Recruiting university students to participate in adult studies is typically easy. Many colleges and universities offer extra credit for participation in research and have locations such as bulletin boards and school newspapers where research can be advertised. Unfortunately, young children cannot be recruited by making announcements in Introduction to Psychology courses, by posting ads on campuses, or through online platforms such as Amazon Mechanical Turk.

Given these limitations, how do researchers go about finding infants and young children to be in their studies? The answer to this question varies along multiple dimensions. Researchers must consider the number of participants they need and the financial resources available to them, among other things. Location may also be an important consideration. Researchers who need large numbers of infants and children may attempt to recruit them by obtaining infant birth records from the state, county, or province in which they reside. Some areas make this information publicly available for free, whereas birth records must be purchased in other areas (and in some locations birth records may be entirely unavailable as a recruitment tool). If birth records are available, researchers can use the obtained information to call families by phone or mail them letters describing possible research opportunities.

All is not lost if this recruitment strategy is unavailable, however. Researchers can choose to pay a recruitment agency to contact and recruit families for them. Although these methods tend to be quick and effective, they can also be quite expensive. More economical recruitment options include posting advertisements and fliers in locations frequented by families, such as mommy-and-me classes, local malls, and preschools or day care centers. Researchers can also utilize online social media outlets like Facebook, which allows users to post recruitment advertisements for a small fee. Of course, each of these different recruitment techniques requires IRB approval.

Attrition

Another important consideration when conducting research with infants and young children is attrition. Although attrition is quite common in longitudinal research in particular, it is also problematic in developmental science more generally, as studies with infants and young children tend to have higher attrition rates than studies with adults. For example, high attrition rates in ERP studies oftentimes result from the demands of the task: infants are required to sit still and have a tight, wet cap placed on their heads before watching still photographs on a computer screen in a dark, quiet room. In other cases, attrition may be due to motivation (or a lack thereof). Whereas adults may be motivated to participate in research in order to receive money or extra course credit, infants and young children are not as easily enticed. In addition, infants and young children are more likely to tire easily, become fussy, and lose interest in the study procedures than are adults. For these reasons, research studies should be designed to be as short as possible – it is likely better to break up a large study into multiple short sessions rather than cram all of the tasks into one long visit to the lab. Researchers should also allow time for breaks in their study protocols so that infants can rest or have snacks as needed. Happy, comfortable participants provide the best data.
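The earlier advice to enroll a larger initial sample to offset attrition comes down to simple arithmetic. The sketch below is a hedged illustration – the 25% per-wave dropout rate and the target of 60 completers are invented numbers, not field norms – of how a researcher planning a three-wave longitudinal study might size the initial enrollment:

```python
import math

# Illustrative attrition arithmetic for a longitudinal study. The 25%
# per-wave dropout rate and the target of 60 completers are assumptions
# chosen for this example only.

attrition_per_wave = 0.25   # fraction of the sample lost at each follow-up
waves = 3                   # e.g., testing at ages 2, 4, and 6
target_completers = 60      # children needed at the final session

# The expected sample shrinks multiplicatively at each follow-up wave.
expected_retention = (1 - attrition_per_wave) ** (waves - 1)  # 0.75**2

# Enroll enough children initially so that, on average, the target remains.
initial_n = math.ceil(target_completers / expected_retention)
print(initial_n)  # 107: enroll ~107 children to expect 60 at the last wave
```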
Conclusions

Child development is a fascinating field of study – but care must be taken to ensure that researchers use appropriate methods to examine infant and child behavior, use the correct experimental design to answer their questions, and be aware of the special challenges that are part-and-parcel of developmental research. After reading this module, you should have a solid understanding of these various issues and be ready to think more critically about research questions that interest you. For example, when considering our initial example of hide-and-seek behaviors in preschoolers, you might ask questions about what other factors might contribute to hiding behaviors in children. Do children with older siblings hide in locations that were previously searched less often than children without siblings? What other abilities are associated with the development of hiding skills? Do children who use more sophisticated hiding strategies as preschoolers do better on other tests of cognitive functioning in high school? Many interesting questions remain to be examined by future generations of developmental scientists – maybe you will make one of the next big discoveries!

Outside Resources

Video: A 3.5-minute video depicting the violation of expectation paradigm.
Video: A popular TED talk by Dr. Laura Schulz on the topic of how babies make decisions.
Video: A popular TED talk by Dr. Patricia Kuhl on the topic of how babies learn language.
Video: A 2.5-minute video showing how ERP (brain activity) can be measured with children in the laboratory.
Web: A link to Angela Lukowski’s research laboratory. The site includes descriptions of the research and researchers as well as a list of publications. memorydevelopment.soceco.uci.edu/
Web: The International Congress on Infant Studies – a professional society focused on infant research http://www.infantstudies.org/
Web: The Society for Research on Adolescence – a professional society focused on research on adolescence http://www.s-r-a.org/
Web: The Society for Research in Child Development – a professional society focused on child development research http://www.srcd.org/

Discussion Questions

1. Why is it important to conduct research on infants and children?
2. What are some possible benefits and limitations of the various research methods discussed in this module?
3. Why is it important to examine cohort effects in developmental research?
4. Think about additional challenges or unique issues that might be experienced by developmental scientists. How would they handle the challenges and issues you’ve addressed?
5. Work with your peers to design a study to identify whether children who were good hiders as preschoolers are more cognitively advanced in high school. What research design would you use and why? What are the advantages and limitations of the design you selected?

Vocabulary

Assent
When minor participants are asked to indicate their willingness to participate in a study. This is usually obtained from participants who are at least 7 years old, in addition to parent or guardian consent.

Attrition
When a participant drops out of a study, or fails to complete all parts of it.

Bidirectional relations
When one variable is likely both a cause and a consequence of another variable.

Cohort effects
When research findings differ for participants of the same age tested at different points in historical time.
Cross-sectional research
A research design used to examine behavior in participants of different ages who are tested at the same point in time.

Dishabituation
When participants demonstrate increased attention (through looking or listening behavior) to a new stimulus after having been habituated to a different stimulus.

Elicited imitation
A behavioral method used to examine recall memory in infants and young children.

Event-related potentials (ERP)
The recording of participant brain activity using a stretchy cap with small electrodes or sensors as participants engage in a particular task (commonly viewing photographs or listening to auditory stimuli).

Habituation
When participants demonstrate decreased attention (through looking or listening behavior) to repeatedly-presented stimuli.

Informed consent
The process of getting permission from adults for themselves and their children to take part in research.

Institutional Review Boards (IRBs)
Committees that review and approve research procedures involving human participants and animal subjects to ensure that the research is conducted in accordance with federal, institutional, and ethical guidelines.

Interview techniques
A research method in which participants are asked to report on their experiences using language, commonly by engaging in conversation with a researcher (participants may also be asked to record their responses in writing).

Involuntary or obligatory responses
Behaviors in which individuals engage that do not require much conscious thought or effort.

Longitudinal research
A research design used to examine behavior in the same participants over short (months) or long (decades) periods of time.

Motor control
The use of thinking to direct muscles and limbs to perform a desired action.

Object permanence
The understanding that objects continue to exist even when they cannot be directly observed (e.g., that a pen continues to exist even when it is hidden under a piece of paper).

Practice effect
When participants get better at a task over time by “practicing” it through repeated assessments instead of due to actual developmental change (practice effects can be particularly problematic in longitudinal and sequential research designs).

Psychophysiological responses
Recordings of biological measures (such as heart rate and hormone levels) and neurological responses (such as brain activity) that may be associated with observable behaviors.

Recall memory
The process of remembering discrete episodes or events from the past, including encoding, consolidation and storage, and retrieval.

Research design
The strategy (or “blueprint”) for deciding how to collect and analyze research information.

Research methods
The specific tools and techniques used by researchers to collect information.

Sequential research designs
A research design that includes elements of cross-sectional and longitudinal research designs. Similar to cross-sectional designs, sequential research designs include participants of different ages within one study; similar to longitudinal designs, participants of different ages are followed over time.

Solidity principle
The idea that two solid masses should not be able to move through one another.

Verbal report paradigms
Research methods that require participants to report on their experiences, thoughts, feelings, etc., using language.

Vignette
A short story that presents a situation that participants are asked to respond to.
Violation of expectation paradigm
A research method in which infants are expected to respond in a particular way because one of two conditions violates or goes against what they should expect based on their everyday experiences (e.g., it violates our expectations that Wile E. Coyote runs off a cliff but does not immediately fall to the ground below).

Voluntary responses
Behaviors that a person has control over and completes by choice.
By Robert Siegler Carnegie Mellon University

This module examines what cognitive development is, major theories about how it occurs, the roles of nature and nurture, whether it is continuous or discontinuous, and how research in the area is being used to improve education.

learning objectives

• Be able to identify and describe the main areas of cognitive development.
• Be able to describe major theories of cognitive development and what distinguishes them.
• Understand how nature and nurture work together to produce cognitive development.
• Understand why cognitive development is sometimes viewed as discontinuous and sometimes as continuous.
• Know some ways in which research on cognitive development is being used to improve education.

Introduction

By the time you reach adulthood you have learned a few things about how the world works. You know, for instance, that you can’t walk through walls or leap into the tops of trees. You know that although you cannot see your car keys they’ve got to be around here someplace. What’s more, you know that if you want to communicate complex ideas like ordering a triple-shot soy vanilla latte with chocolate sprinkles it’s better to use words with meanings attached to them rather than simply gesturing and grunting. People accumulate all this useful knowledge through the process of cognitive development, which involves a multitude of factors, both inherent and learned.

Cognitive development refers to the development of thinking across the lifespan. Defining thinking can be problematic, because no clear boundaries separate thinking from other mental activities. Thinking obviously involves the higher mental processes: problem solving, reasoning, creating, conceptualizing, categorizing, remembering, planning, and so on. However, thinking also involves other mental processes that seem more basic and at which even toddlers are skilled—such as perceiving objects and events in the environment, acting skillfully on objects to obtain goals, and understanding and producing language. Yet other areas of human development that involve thinking are not usually associated with cognitive development, because thinking isn’t a prominent feature of them—such as personality and temperament.

As the name suggests, cognitive development is about change. Children’s thinking changes in dramatic and surprising ways. Consider DeVries’s (1969) study of whether young children understand the difference between appearance and reality. To find out, she brought an unusually even-tempered cat named Maynard to a psychology laboratory and allowed the 3- to 6-year-old participants in the study to pet and play with him. DeVries then put a mask of a fierce dog on Maynard’s head, and asked the children what Maynard was. Despite all of the children having identified Maynard previously as a cat, now most 3-year-olds said that he was a dog and claimed that he had a dog’s bones and a dog’s stomach. In contrast, the 6-year-olds weren’t fooled; they had no doubt that Maynard remained a cat. Understanding how children’s thinking changes so dramatically in just a few years is one of the fascinating challenges in studying cognitive development.

There are several main types of theories of child development. Stage theories, such as Piaget’s stage theory, focus on whether children progress through qualitatively different stages of development.
Sociocultural theories, such as that of Lev Vygotsky, emphasize how other people and the attitudes, values, and beliefs of the surrounding culture influence children’s development. Information processing theories, such as that of David Klahr, examine the mental processes that produce thinking at any one time and the transition processes that lead to growth in that thinking.

At the heart of all of these theories, and indeed of all research on cognitive development, are two main questions: (1) How do nature and nurture interact to produce cognitive development? (2) Does cognitive development progress through qualitatively distinct stages? In the remainder of this module, we examine the answers that are emerging regarding these questions, as well as ways in which cognitive developmental research is being used to improve education.

Nature and Nurture

The most basic question about child development is how nature and nurture together shape development. Nature refers to our biological endowment, the genes we receive from our parents. Nurture refers to the environments, social as well as physical, that influence our development, everything from the womb in which we develop before birth to the homes in which we grow up, the schools we attend, and the many people with whom we interact.

The nature-nurture issue is often presented as an either-or question: Is our intelligence (for example) due to our genes or to the environments in which we live? In fact, however, every aspect of development is produced by the interaction of genes and environment. At the most basic level, without genes, there would be no child, and without an environment to provide nurture, there also would be no child.

The way in which nature and nurture work together can be seen in findings on visual development. Many people view vision as something that people either are born with or that is purely a matter of biological maturation, but it also depends on the right kind of experience at the right time. For example, development of depth perception, the ability to actively perceive the distance from oneself to objects in the environment, depends on seeing patterned light and having normal brain activity in response to that patterned light during infancy (Held, 1993). If no patterned light is received, for example when a baby has severe cataracts or blindness that is not surgically corrected until later in development, depth perception remains abnormal even after the surgery.

Adding to the complexity of the nature-nurture interaction, children’s genes lead to their eliciting different treatment from other people, which influences their cognitive development. For example, infants’ physical attractiveness and temperament are influenced considerably by their genetic inheritance, but it is also the case that parents provide more sensitive and affectionate care to easygoing and attractive infants than to difficult and less attractive ones, which can contribute to the infants’ later cognitive development (Langlois et al., 1995; van den Boom & Hoeksma, 1994).

Also contributing to the complex interplay of nature and nurture is the role of children in shaping their own cognitive development. From the first days out of the womb, children actively choose to attend more to some things and less to others. For example, even 1-month-olds choose to look at their mother’s face more than at the faces of other women of the same age and general level of attractiveness (Bartrip, Morton, & de Schonen, 2001).
Children’s contributions to their own cognitive development grow larger as they grow older (Scarr & McCartney, 1983). When children are young, their parents largely determine their experiences: whether they will attend day care, the children with whom they will have play dates, the books to which they have access, and so on. In contrast, older children and adolescents choose their environments to a larger degree. Their parents’ preferences largely determine how 5-year-olds spend time, but 15-year-olds’ own preferences largely determine when, if ever, they set foot in a library. Children’s choices often have large consequences. To cite one example, the more that children choose to read, the more that their reading improves in future years (Baker, Dreher, & Guthrie, 2000). Thus, the issue is not whether cognitive development is a product of nature or nurture; rather, the issue is how nature and nurture work together to produce cognitive development.

Does Cognitive Development Progress Through Distinct Stages?

Some aspects of the development of living organisms, such as the growth of the width of a pine tree, involve quantitative changes, with the tree getting a little wider each year. Other changes, such as the life cycle of a ladybug, involve qualitative changes, with the creature becoming a totally different type of entity after a transition than before (Figure 6.2.1). The existence of both gradual, quantitative changes and relatively sudden, qualitative changes in the world has led researchers who study cognitive development to ask whether changes in children’s thinking are gradual and continuous or sudden and discontinuous.

The great Swiss psychologist Jean Piaget proposed that children’s thinking progresses through a series of four discrete stages. By “stages,” he meant periods during which children reasoned similarly about many superficially different problems, with the stages occurring in a fixed order and the thinking within different stages differing in fundamental ways. The four stages that Piaget hypothesized were the sensorimotor stage (birth to 2 years), the preoperational reasoning stage (2 to 6 or 7 years), the concrete operational reasoning stage (6 or 7 to 11 or 12 years), and the formal operational reasoning stage (11 or 12 years and throughout the rest of life).

During the sensorimotor stage, children’s thinking is largely realized through their perceptions of the world and their physical interactions with it. Their mental representations are very limited. Consider Piaget’s object permanence task, which is one of his most famous problems. If an infant younger than 9 months of age is playing with a favorite toy, and another person removes the toy from view, for example by putting it under an opaque cover and not letting the infant immediately reach for it, the infant is very likely to make no effort to retrieve it and to show no emotional distress (Piaget, 1954). This is not due to their being uninterested in the toy or unable to reach for it; if the same toy is put under a clear cover, infants below 9 months readily retrieve it (Munakata, McClelland, Johnson, & Siegler, 1997). Instead, Piaget claimed that infants less than 9 months do not understand that objects continue to exist even when out of sight.

During the preoperational stage, according to Piaget, children not only can solve this simple problem (which they actually can solve after 9 months) but also show a wide variety of other symbolic-representation capabilities, such as those involved in drawing and using language.
However, such 2- to 7-year-olds tend to focus on a single dimension, even when solving problems would require them to consider multiple dimensions. This is evident in Piaget’s (1952) conservation problems. For example, if a glass of water is poured into a taller, thinner glass, children below age 7 generally say that there now is more water than before. Similarly, if a clay ball is reshaped into a long, thin sausage, they claim that there is now more clay, and if a row of coins is spread out, they claim that there are now more coins. In all cases, the children are focusing on one dimension, while ignoring the changes in other dimensions (for example, the greater width of the glass and the clay ball).

Children overcome this tendency to focus on a single dimension during the concrete operations stage, and think logically in most situations. However, according to Piaget, they still cannot think in systematic scientific ways, even when such thinking would be useful. Thus, if asked to find out which variables influence the period that a pendulum takes to complete its arc, and given weights that they can attach to strings in order to do experiments with the pendulum to find out, most children younger than age 12 perform biased experiments from which no conclusion can be drawn, and then conclude that whatever they originally believed is correct. For example, if a boy believed that weight was the only variable that mattered, he might put the heaviest weight on the shortest string and push it the hardest, and then conclude that just as he thought, weight is the only variable that matters (Inhelder & Piaget, 1958).

Finally, in the formal operations period, children attain the reasoning power of mature adults, which allows them to solve the pendulum problem and a wide range of other problems. However, this formal operations stage tends not to occur without exposure to formal education in scientific reasoning, and it appears to be largely or completely absent from some societies that do not provide this type of education.

Although Piaget’s theory has been very influential, it has not gone unchallenged. Many more recent researchers have obtained findings indicating that cognitive development is considerably more continuous than Piaget claimed. For example, Diamond (1985) found that on the object permanence task described above, infants show earlier knowledge if the waiting period is shorter. At age 6 months, they retrieve the hidden object if the wait is no longer than 2 seconds; at 7 months, they retrieve it if the wait is no longer than 4 seconds; and so on. Even earlier, at 3 or 4 months, infants show surprise in the form of longer looking times if objects suddenly appear to vanish with no obvious cause (Baillargeon, 1987). Similarly, children’s specific experiences can greatly influence when developmental changes occur. Children of pottery makers in Mexican villages, for example, know that reshaping clay does not change the amount of clay at much younger ages than children who do not have similar experiences (Price-Williams, Gordon, & Ramirez, 1969).

So, is cognitive development fundamentally continuous or fundamentally discontinuous? A reasonable answer seems to be, “It depends on how you look at it and how often you look.” For example, under relatively facilitative circumstances, infants show early forms of object permanence by 3 or 4 months, and they gradually extend the range of times for which they can remember hidden objects as they grow older.
However, on Piaget’s original object permanence task, infants do quite quickly change toward the end of their first year from not reaching for hidden toys to reaching for them, even after they’ve experienced a substantial delay before being allowed to reach. Thus, the debate between those who emphasize discontinuous, stage-like changes in cognitive development and those who emphasize gradual continuous changes remains a lively one.

Applications to Education

Understanding how children think and learn has proven useful for improving education. One example comes from the area of reading. Cognitive developmental research has shown that phonemic awareness—that is, awareness of the component sounds within words—is a crucial skill in learning to read. To measure awareness of the component sounds within words, researchers ask children to decide whether two words rhyme, to decide whether the words start with the same sound, to identify the component sounds within words, and to indicate what would be left if a given sound were removed from a word. Kindergartners’ performance on these tasks is the strongest predictor of reading achievement in third and fourth grade, even stronger than IQ or social class background (Nation, 2008). Moreover, teaching these skills to randomly chosen 4- and 5-year-olds results in their being better readers years later (National Reading Panel, 2000).

Another educational application of cognitive developmental research involves the area of mathematics. Even before they enter kindergarten, the mathematical knowledge of children from low-income backgrounds lags far behind that of children from more affluent backgrounds. Ramani and Siegler (2008) hypothesized that this difference is due to the children in middle- and upper-income families engaging more frequently in numerical activities, for example playing numerical board games such as Chutes and Ladders. Chutes and Ladders is a game with a number in each square; children start at the number one and spin a spinner or roll a die to determine how far to move their token. Playing this game seemed likely to teach children about numbers, because in it, larger numbers are associated with greater values on a variety of dimensions. In particular, the higher the number that a child’s token reaches, the greater the distance the token will have traveled from the starting point, the greater the number of physical movements the child will have made in moving the token from one square to another, the greater the number of number-words the child will have said and heard, and the more time will have passed since the beginning of the game. These spatial, kinesthetic, verbal, and time-based cues provide a broad-based, multisensory foundation for knowledge of numerical magnitudes (the sizes of numbers), a type of knowledge that is closely related to mathematics achievement test scores (Booth & Siegler, 2006).

Playing this numerical board game for roughly 1 hour, distributed over a 2-week period, improved low-income children’s knowledge of numerical magnitudes, ability to read printed numbers, and skill at learning novel arithmetic problems. The gains lasted for months after the game-playing experience (Ramani & Siegler, 2008; Siegler & Ramani, 2009). An advantage of this type of educational intervention is that it has minimal if any cost—a parent could just draw a game on a piece of paper.

Understanding of cognitive development is advancing on many different fronts.
One exciting area is linking changes in brain activity to changes in children’s thinking (Nelson et al., 2006). Although many people believe that brain maturation is something that occurs before birth, the brain actually continues to change in large ways for many years thereafter. For example, a part of the brain called the prefrontal cortex, which is located at the front of the brain and is particularly involved with planning and flexible problem solving, continues to develop throughout adolescence (Blakemore & Choudhury, 2006). Such new research domains, as well as enduring issues such as nature and nurture, continuity and discontinuity, and how to apply cognitive development research to education, ensure that cognitive development will continue to be an exciting area of research in the coming years.

Conclusion

Research into cognitive development has shown us that minds don’t just form according to a uniform blueprint or innate intellect, but through a combination of influencing factors. For instance, if we want our kids to have a strong grasp of language we could concentrate on phonemic awareness early on. If we want them to be good at math and science we could engage them in numerical games and activities early on. Perhaps most importantly, we no longer think of brains as empty vessels waiting to be filled up with knowledge but as adaptable organs that develop all the way through early adulthood.

Outside Resources

Book: Frye, D., Baroody, A., Burchinal, M., Carver, S. M., Jordan, N. C., & McDowell, J. (2013). Teaching math to young children: A practice guide. Washington, DC: National Center for Education Evaluation and Regional Assistance (NCEE), Institute of Education Sciences, U.S. Department of Education.
Book: Goswami, U. G. (2010). The Blackwell handbook of childhood cognitive development. New York: John Wiley and Sons.
Book: Kuhn, D., & Siegler, R. S. (Vol. Eds.). (2006). Volume 2: Cognition, perception, and language. In W. Damon & R. M. Lerner (Series Eds.), Handbook of child psychology (6th ed.). Hoboken, NJ: Wiley.
Book: Miller, P. H. (2011). Theories of developmental psychology (5th ed.). New York: Worth.
Book: Siegler, R. S., & Alibali, M. W. (2004). Children’s thinking (4th ed.). Upper Saddle River, NJ: Prentice-Hall.

Discussion Questions

1. Why are there different theories of cognitive development? Why don’t researchers agree on which theory is the right one?
2. Do children’s natures differ, or do differences among children only reflect differences in their experiences?
3. Do you see development as more continuous or more discontinuous?
4. Can you think of ways other than those described in the module in which research on cognitive development could be used to improve education?

Vocabulary

Chutes and Ladders
A numerical board game that seems to be useful for building numerical knowledge.

Concrete operations stage
Piagetian stage between ages 7 and 12 when children can think logically about concrete situations but not engage in systematic scientific reasoning.

Conservation problems
Problems pioneered by Piaget in which physical transformation of an object or set of objects changes a perceptually salient dimension but not the quantity that is being asked about.

Continuous development
Ways in which development occurs in a gradual incremental manner, rather than through sudden jumps.

Depth perception
The ability to actively perceive the distance from oneself of objects in the environment.
Discontinuous development
Ways in which development occurs through sudden, qualitative shifts rather than gradually and incrementally.

Formal operations stage
Piagetian stage starting at age 12 years and continuing for the rest of life, in which adolescents may gain the reasoning powers of educated adults.

Information processing theories
Theories that focus on describing the cognitive processes that underlie thinking at any one age and cognitive growth over time.

Nature
The genes that children bring with them to life and that influence all aspects of their development.

Numerical magnitudes
The sizes of numbers.

Nurture
The environments, starting with the womb, that influence all aspects of children’s development.

Object permanence task
The Piagetian task in which infants below about 9 months of age fail to search for an object that is removed from their sight and, if not allowed to search immediately for the object, act as if they do not know that it continues to exist.

Phonemic awareness
Awareness of the component sounds within words.

Piaget’s theory
Theory that development occurs through a sequence of discontinuous stages: the sensorimotor, preoperational, concrete operational, and formal operational stages.

Preoperational reasoning stage
Period within Piagetian theory from age 2 to 7 years, in which children can represent objects through drawing and language but cannot solve logical reasoning problems, such as the conservation problems.

Qualitative changes
Large, fundamental change, as when a caterpillar changes into a butterfly; stage theories such as Piaget’s posit that each stage reflects qualitative change relative to previous stages.

Quantitative changes
Gradual, incremental change, as in the growth of a pine tree’s girth.

Sensorimotor stage
Period within Piagetian theory from birth to age 2 years, during which children come to represent the enduring reality of objects.

Sociocultural theories
Theory founded in large part by Lev Vygotsky that emphasizes how other people and the attitudes, values, and beliefs of the surrounding culture influence children’s development.
By Ross Thompson University of California, Davis

Childhood social and personality development emerges through the interaction of social influences, biological maturation, and the child's representations of the social world and the self. This interaction is illustrated in a discussion of the influence of significant relationships, the development of social understanding, the growth of personality, and the development of social and emotional competence in childhood.

learning objectives

• Provide specific examples of how the interaction of social experience, biological maturation, and the child's representations of experience and the self provide the basis for growth in social and personality development.
• Describe the significant contributions of parent–child and peer relationships to the development of social skills and personality in childhood.
• Explain how achievements in social understanding occur in childhood, and whether scientists believe that infants and young children are egocentric.
• Describe the association of temperament with personality development.
• Explain what "social and emotional competence" is and provide some examples of how it develops in childhood.

Introduction

"How have I become the kind of person I am today?" Every adult ponders this question from time to time. The answers that readily come to mind include the influences of parents, peers, temperament, a moral compass, a strong sense of self, and sometimes critical life experiences such as parental divorce. Social and personality development encompasses these and many other influences on the growth of the person. In addition, it addresses questions that are at the heart of understanding how we develop as unique people. How much are we products of nature or nurture? How enduring are the influences of early experiences? The study of social and personality development offers perspective on these and other issues, often by showing how complex and multifaceted the influences on developing children are, and thus how intricate the processes are that have made you the person you are today (Thompson, 2006a).

Understanding social and personality development requires looking at children from three perspectives that interact to shape development. The first is the social context in which each child lives, especially the relationships that provide security, guidance, and knowledge. The second is biological maturation that supports developing social and emotional competencies and underlies temperamental individuality. The third is children's developing representations of themselves and the social world. Social and personality development is best understood as the continuous interaction between these social, biological, and representational aspects of psychological development.

Relationships

This interaction can be observed in the development of the earliest relationships between infants and their parents in the first year. Virtually all infants living in normal circumstances develop strong emotional attachments to those who care for them. Psychologists believe that the development of these attachments is as biologically natural as learning to walk and not simply a byproduct of the parents' provision of food or warmth. Rather, attachments have evolved in humans because they promote children's motivation to stay close to those who care for them and, as a consequence, to benefit from the learning, security, guidance, warmth, and affirmation that close relationships provide (Cassidy, 2008).
Although nearly all infants develop emotional attachments to their caregivers (parents, relatives, nannies), their sense of security in those attachments varies. Infants become securely attached when their parents respond sensitively to them, reinforcing the infants' confidence that their parents will provide support when needed. Infants become insecurely attached when care is inconsistent or neglectful; these infants tend to respond avoidantly, resistantly, or in a disorganized manner (Belsky & Pasco Fearon, 2008). Such insecure attachments are not necessarily the result of deliberately bad parenting but are often a byproduct of circumstances. For example, an overworked single mother may find herself overstressed and fatigued at the end of the day, making fully involved childcare very difficult. In other cases, some parents are simply ill-equipped emotionally to take on the responsibility of caring for a child.

The different behaviors of securely- and insecurely-attached infants can be observed especially when the infant needs the caregiver's support. To assess the nature of attachment, researchers use a standard laboratory procedure called the "Strange Situation," which involves brief separations from the caregiver (e.g., mother) (Solomon & George, 2008). In the Strange Situation, the caregiver is instructed to leave the child to play alone in a room for a short time, then return and greet the child while researchers observe the child's response. Depending on the security of the child's attachment, he or she may reject the parent, cling to the parent, or simply welcome the parent—or, in some instances, react with an agitated combination of responses. Infants can be securely or insecurely attached with mothers, fathers, and other regular caregivers, and they can differ in their security with different people.

The security of attachment is an important cornerstone of social and personality development, because infants and young children who are securely attached have been found to develop stronger friendships with peers, more advanced emotional understanding and early conscience development, and more positive self-concepts, compared with insecurely attached children (Thompson, 2008). This is consistent with attachment theory's premise that experiences of care, resulting in secure or insecure attachments, shape young children's developing concepts of the self, as well as what people are like, and how to interact with them.

As children mature, parent-child relationships naturally change. Preschool and grade-school children are more capable, have their own preferences, and sometimes refuse or seek to compromise with parental expectations. This can lead to greater parent-child conflict, and how conflict is managed by parents further shapes the quality of parent-child relationships. In general, children develop greater competence and self-confidence when parents have high (but reasonable) expectations for children's behavior, communicate well with them, are warm and responsive, and use reasoning (rather than coercion) as preferred responses to children's misbehavior. This kind of parenting style has been described as authoritative (Baumrind, 2013). Authoritative parents are supportive and show interest in their kids' activities but are not overbearing and allow them to make constructive mistakes. By contrast, some less-constructive parent-child relationships result from authoritarian, uninvolved, or permissive parenting styles (see Table 1).
Parental roles in relation to their children change in other ways, too. Parents increasingly become mediators (or gatekeepers) of their children's involvement with peers and activities outside the family. Their communication and practice of values contributes to children's academic achievement, moral development, and activity preferences. As children reach adolescence, the parent-child relationship increasingly becomes one of "coregulation," in which both the parent(s) and the child recognize the child's growing competence and autonomy, and together they rebalance authority relations. We often see evidence of this as parents start accommodating their teenage kids' sense of independence by allowing them to get cars and jobs, attend parties, and stay out later.

Family relationships are significantly affected by conditions outside the home. For instance, the Family Stress Model describes how financial difficulties are associated with parents' depressed moods, which in turn lead to marital problems and poor parenting that contributes to poorer child adjustment (Conger, Conger, & Martin, 2010). Within the home, parental marital difficulty or divorce affects more than half the children growing up today in the United States. Divorce is typically associated with economic stresses for children and parents, the renegotiation of parent-child relationships (with one parent typically as primary custodian and the other assuming a visiting relationship), and many other significant adjustments for children. Divorce is often regarded by children as a sad turning point in their lives, although for most it is not associated with long-term problems of adjustment (Emery, 1999).

Peer Relationships

Parent-child relationships are not the only significant relationships in a child's life. Peer relationships are also important. Social interaction with another child who is similar in age, skills, and knowledge provokes the development of many social skills that are valuable for the rest of life (Bukowski, Buhrmester, & Underwood, 2011). In peer relationships, children learn how to initiate and maintain social interactions with other children. They learn skills for managing conflict, such as turn-taking, compromise, and bargaining. Play also involves the mutual, sometimes complex, coordination of goals, actions, and understanding. For example, as infants, children get their first encounter with sharing (of each other's toys); during pretend play as preschoolers they create narratives together, choose roles, and collaborate to act out their stories; and in primary school, they may join a sports team, learning to work together and support each other emotionally and strategically toward a common goal. Through these experiences, children develop friendships that provide additional sources of security and support to those provided by their parents.

However, peer relationships can be challenging as well as supportive (Rubin, Coplan, Chen, Bowker, & McDonald, 2011). Being accepted by other children is an important source of affirmation and self-esteem, but peer rejection can foreshadow later behavior problems (especially when children are rejected due to aggressive behavior). With increasing age, children confront the challenges of bullying, peer victimization, and managing conformity pressures. Social comparison with peers is an important means by which children evaluate their skills, knowledge, and personal qualities, but it may cause them to feel that they do not measure up well against others.
For example, a boy who is not athletic may feel unworthy of his football-playing peers and revert to shy behavior, isolating himself and avoiding conversation. Conversely, an athlete who doesn't "get" Shakespeare may feel embarrassed and avoid reading altogether. Also, with the approach of adolescence, peer relationships become focused on psychological intimacy, involving personal disclosure, vulnerability, and loyalty (or its betrayal)—which significantly affects a child's outlook on the world. Each of these aspects of peer relationships requires developing very different social and emotional skills than those that emerge in parent-child relationships. They also illustrate the many ways that peer relationships influence the growth of personality and self-concept.

Social Understanding

As we have seen, children's experience of relationships at home and the peer group contributes to an expanding repertoire of social and emotional skills and also to broadened social understanding. In these relationships, children develop expectations for specific people (leading, for example, to secure or insecure attachments to parents), understanding of how to interact with adults and peers, and self-concepts based on how others respond to them. These relationships are also significant forums for emotional development.

Remarkably, young children begin developing social understanding very early in life. Before the end of the first year, infants are aware that other people have perceptions, feelings, and other mental states that affect their behavior, and which are different from the child's own mental states. This can be readily observed in a process called social referencing, in which an infant looks to the mother's face when confronted with an unfamiliar person or situation (Feinman, 1992). If the mother looks calm and reassuring, the infant responds positively as if the situation is safe. If the mother looks fearful or distressed, the infant is likely to respond with wariness or distress because the mother's expression signals danger. In a remarkably insightful manner, therefore, infants show an awareness that even though they are uncertain about the unfamiliar situation, their mother is not, and that by "reading" the emotion in her face, infants can learn about whether the circumstance is safe or dangerous, and how to respond.

Although developmental scientists used to believe that infants are egocentric—that is, focused on their own perceptions and experience—they now realize that the opposite is true. Infants are aware at an early stage that people have different mental states, and this motivates them to try to figure out what others are feeling, intending, wanting, and thinking, and how these mental states affect their behavior. They are beginning, in other words, to develop a theory of mind, and although their understanding of mental states begins very simply, it rapidly expands (Wellman, 2011). For example, if an 18-month-old watches an adult try repeatedly to drop a necklace into a cup but inexplicably fail each time, they will immediately put the necklace into the cup themselves—thus completing what the adult intended, but failed, to do. In doing so, they reveal their awareness of the intentions underlying the adult's behavior (Meltzoff, 1995).
Carefully designed experimental studies show that by late in the preschool years, young children understand that another's beliefs can be mistaken rather than correct, that memories can affect how you feel, and that one's emotions can be hidden from others (Wellman, 2011). Social understanding grows significantly as children's theory of mind develops.

How do these achievements in social understanding occur? One answer is that young children are remarkably sensitive observers of other people, making connections between their emotional expressions, words, and behavior to derive simple inferences about mental states (concluding, for example, that what Mommy is looking at is in her mind) (Gopnik, Meltzoff, & Kuhl, 2001). This is especially likely to occur in relationships with people whom the child knows well, consistent with the ideas of attachment theory discussed above. Growing language skills give young children words with which to represent these mental states (e.g., "mad," "wants") and talk about them with others. Thus, in conversation with their parents about everyday experiences, children learn much about people's mental states from how adults talk about them ("Your sister was sad because she thought Daddy was coming home.") (Thompson, 2006b). Developing social understanding is, in other words, based on children's everyday interactions with others and their careful interpretations of what they see and hear. There are also some scientists who believe that infants are biologically prepared to perceive people in a special way, as organisms with an internal mental life, and this facilitates their interpretation of people's behavior with reference to those mental states (Leslie, 1994).

Personality

Parents look into the faces of their newborn infants and wonder, "What kind of person will this child become?" They scrutinize their baby's preferences, characteristics, and responses for clues of a developing personality. They are quite right to do so, because temperament is a foundation for personality growth. But temperament (defined as early-emerging differences in reactivity and self-regulation) is not the whole story. Although temperament is biologically based, it interacts with the influence of experience from the moment of birth (if not before) to shape personality (Rothbart, 2011). Temperamental dispositions are affected, for example, by the supportiveness of parental care. More generally, personality is shaped by the goodness of fit between the child's temperamental qualities and characteristics of the environment (Chess & Thomas, 1999). For example, an adventurous child whose parents regularly take her on weekend hiking and fishing trips enjoys a good "fit" between her temperament and her family's lifestyle, supporting personality growth. Personality is the result, therefore, of the continuous interplay between biological disposition and experience, as is true for many other aspects of social and personality development.

Personality develops from temperament in other ways (Thompson, Winer, & Goodvin, 2010). As children mature biologically, temperamental characteristics emerge and change over time. A newborn is not capable of much self-control, but as brain-based capacities for self-control advance, temperamental changes in self-regulation become more apparent. For example, a newborn who cries frequently doesn't necessarily have a grumpy personality; over time, with sufficient parental support and increased sense of security, the child might be less likely to cry.
In addition, personality is made up of many other features besides temperament. Children's developing self-concept, their motivations to achieve or to socialize, their values and goals, their coping styles, their sense of responsibility and conscientiousness, and many other qualities are encompassed within personality. These qualities are influenced by biological dispositions, but even more by the child's experiences with others, particularly in close relationships, that guide the growth of individual characteristics. Indeed, personality development begins with the biological foundations of temperament but becomes increasingly elaborated, extended, and refined over time. The newborn that parents gazed upon thus becomes an adult with a personality of depth and nuance.

Social and Emotional Competence

Social and personality development is built from the social, biological, and representational influences discussed above. These influences result in important developmental outcomes that matter to children, parents, and society: a young adult's capacity to engage in socially constructive actions (helping, caring, sharing with others), to curb hostile or aggressive impulses, to live according to meaningful moral values, to develop a healthy identity and sense of self, and to develop talents and achieve success in using them. These are some of the developmental outcomes that denote social and emotional competence.

These achievements of social and personality development derive from the interaction of many social, biological, and representational influences. Consider, for example, the development of conscience, which is an early foundation for moral development. Conscience consists of the cognitive, emotional, and social influences that cause young children to create and act consistently with internal standards of conduct (Kochanska, 2002). Conscience emerges from young children's experiences with parents, particularly in the development of a mutually responsive relationship that motivates young children to respond constructively to the parents' requests and expectations. Biologically based temperament is involved, as some children are temperamentally more capable of motivated self-regulation (a quality called effortful control) than are others, while some children are dispositionally more prone to the fear and anxiety that parental disapproval can evoke. Conscience development grows through a good fit between the child's temperamental qualities and how parents communicate and reinforce behavioral expectations. Moreover, as an illustration of the interaction of genes and experience, one research group found that young children with a particular gene allele (the 5-HTTLPR) were low on measures of conscience development when they had previously experienced unresponsive maternal care, but children with the same allele growing up with responsive care showed strong later performance on conscience measures (Kochanska, Kim, Barry, & Philibert, 2011).

Conscience development also expands as young children begin to represent moral values and think of themselves as moral beings. By the end of the preschool years, for example, young children develop a "moral self" by which they think of themselves as people who want to do the right thing, who feel badly after misbehaving, and who feel uncomfortable when others misbehave. In the development of conscience, young children become more socially and emotionally competent in a manner that provides a foundation for later moral conduct (Thompson, 2012).
The development of gender and gender identity is likewise an interaction among social, biological, and representational influences (Ruble, Martin, & Berenbaum, 2006). Young children learn about gender from parents, peers, and others in society, and develop their own conceptions of the attributes associated with maleness or femaleness (called gender schemas). They also negotiate biological transitions (such as puberty) that cause their sense of themselves and their sexual identity to mature.

Each of these examples of the growth of social and emotional competence illustrates not only the interaction of social, biological, and representational influences, but also how their development unfolds over an extended period. Early influences are important, but not determinative, because the capabilities required for mature moral conduct, gender identity, and other outcomes continue to develop throughout childhood, adolescence, and even the adult years.

Conclusion

As the preceding sentence suggests, social and personality development continues through adolescence and the adult years, and it is influenced by the same constellation of social, biological, and representational influences discussed for childhood. Changing social relationships and roles, biological maturation and (much later) decline, and how the individual represents experience and the self continue to form the bases for development throughout life. In this respect, when an adult looks forward rather than retrospectively to ask, "what kind of person am I becoming?"—a similarly fascinating, complex, multifaceted interaction of developmental processes lies ahead.

Outside Resources

Web: Center for the Developing Child, Harvard University
http://developingchild.harvard.edu
Web: Collaborative for Academic, Social, and Emotional Learning
http://casel.org

Discussion Questions

1. If parent–child relationships naturally change as the child matures, would you expect that the security of attachment might also change over time? What reasons would account for your expectation?
2. In what ways does a child's developing theory of mind resemble how scientists create, refine, and use theories in their work? In other words, would it be appropriate to think of children as informal scientists in their development of social understanding?
3. If there is a poor goodness of fit between a child's temperament and characteristics of parental care, what can be done to create a better match? Provide a specific example of how this might occur.
4. What are the contributions that parents offer to the development of social and emotional competence in children? Answer this question again with respect to peer contributions.

Vocabulary

Authoritative
A parenting style characterized by high (but reasonable) expectations for children's behavior, good communication, warmth and nurturance, and the use of reasoning (rather than coercion) as preferred responses to children's misbehavior.

Conscience
The cognitive, emotional, and social influences that cause young children to create and act consistently with internal standards of conduct.

Effortful control
A temperament quality that enables children to be more successful in motivated self-regulation.

Family Stress Model
A description of the negative effects of family financial difficulty on child adjustment through the effects of economic stress on parents' depressed mood, increased marital problems, and poor parenting.
Gender schemas
Organized beliefs and expectations about maleness and femaleness that guide children's thinking about gender.

Goodness of fit
The match or synchrony between a child's temperament and characteristics of parental care that contributes to positive or negative personality development. A good "fit" means that parents have accommodated to the child's temperamental attributes, and this contributes to positive personality growth and better adjustment.

Security of attachment
An infant's confidence in the sensitivity and responsiveness of a caregiver, especially when he or she is needed. Infants can be securely attached or insecurely attached.

Social referencing
The process by which one individual consults another's emotional expressions to determine how to evaluate and respond to circumstances that are ambiguous or uncertain.

Temperament
Early-emerging differences in reactivity and self-regulation, which constitute a foundation for personality development.

Theory of mind
Children's growing understanding of the mental states that affect people's behavior.
By Jennifer Lansford Duke University

Adolescence is a period that begins with puberty and ends with the transition to adulthood (approximately ages 10–20). Physical changes associated with puberty are triggered by hormones. Cognitive changes include improvements in complex and abstract thought, as well as development that happens at different rates in distinct parts of the brain and increases adolescents' propensity for risky behavior because increases in sensation-seeking and reward motivation precede increases in cognitive control. Adolescents' relationships with parents go through a period of redefinition in which adolescents become more autonomous, and aspects of parenting, such as distal monitoring and psychological control, become more salient. Peer relationships are important sources of support and companionship during adolescence yet can also promote problem behaviors. Same-sex peer groups evolve into mixed-sex peer groups, and adolescents' romantic relationships tend to emerge from these groups. Identity formation occurs as adolescents explore and commit to different roles and ideological positions. Nationality, gender, ethnicity, socioeconomic status, religious background, sexual orientation, and genetic factors shape how adolescents behave and how others respond to them, and are sources of diversity in adolescence.

learning objectives

• Describe major features of physical, cognitive, and social development during adolescence.
• Understand why adolescence is a period of heightened risk taking.
• Be able to explain sources of diversity in adolescent development.

Adolescence Defined

Adolescence is a developmental stage that has been defined as starting with puberty and ending with the transition to adulthood (approximately ages 10–20). Adolescence has lengthened historically, with evidence indicating that individuals now start puberty earlier and transition to adulthood later than in the past. Puberty today begins, on average, at age 10–11 years for girls and 11–12 years for boys. This average age of onset has decreased gradually over time since the 19th century by 3–4 months per decade, which has been attributed to a range of factors including better nutrition, obesity, increased father absence, and other environmental factors (Steinberg, 2013). Completion of formal education, financial independence from parents, marriage, and parenthood have all been markers of the end of adolescence and beginning of adulthood, and all of these transitions happen, on average, later now than in the past. In fact, the prolonging of adolescence has prompted the introduction of a new developmental period called emerging adulthood that captures these developmental changes out of adolescence and into adulthood, occurring from approximately ages 18 to 29 (Arnett, 2000).

This module will outline changes that occur during adolescence in three domains: physical, cognitive, and social. Within the social domain, changes in relationships with parents, peers, and romantic partners will be considered. Next, the module turns to adolescents' psychological and behavioral adjustment, including identity formation, aggression and antisocial behavior, anxiety and depression, and academic achievement. Finally, the module summarizes sources of diversity in adolescents' experiences and development.

Physical Changes

Physical changes of puberty mark the onset of adolescence (Lerner & Steinberg, 2009).
For both boys and girls, these changes include a growth spurt in height, growth of pubic and underarm hair, and skin changes (e.g., pimples). Boys also experience growth in facial hair and a deepening of their voice. Girls experience breast development and begin menstruating. These pubertal changes are driven by hormones, particularly an increase in testosterone for boys and estrogen for girls.

Cognitive Changes

Major changes in the structure and functioning of the brain occur during adolescence and result in cognitive and behavioral developments (Steinberg, 2008). Cognitive changes during adolescence include a shift from concrete to more abstract and complex thinking. Such changes are fostered by improvements during early adolescence in attention, memory, processing speed, and metacognition (ability to think about thinking and therefore make better use of strategies like mnemonic devices that can improve thinking). Early in adolescence, changes in the brain's dopaminergic system contribute to increases in adolescents' sensation-seeking and reward motivation. Later in adolescence, the brain's cognitive control centers in the prefrontal cortex develop, increasing adolescents' self-regulation and future orientation. The difference in timing of the development of these different regions of the brain contributes to more risk taking during middle adolescence because adolescents are motivated to seek thrills that sometimes come from risky behavior, such as reckless driving, smoking, or drinking, and have not yet developed the cognitive control to resist impulses or focus equally on the potential risks (Steinberg, 2008). One of the world's leading experts on adolescent development, Laurence Steinberg, likens this to engaging a powerful engine before the braking system is in place. The result is that adolescents are more prone to risky behaviors than are children or adults.

Social Changes

Parents

Although peers take on greater importance during adolescence, family relationships remain important too. One of the key changes during adolescence involves a renegotiation of parent–child relationships. As adolescents strive for more independence and autonomy during this time, different aspects of parenting become more salient. For example, parents' distal supervision and monitoring become more important as adolescents spend more time away from parents and in the presence of peers. Parental monitoring encompasses a wide range of behaviors such as parents' attempts to set rules and know their adolescents' friends, activities, and whereabouts, in addition to adolescents' willingness to disclose information to their parents (Stattin & Kerr, 2000). Psychological control, which involves manipulation and intrusion into adolescents' emotional and cognitive world through invalidating adolescents' feelings and pressuring them to think in particular ways (Barber, 1996), is another aspect of parenting that becomes more salient during adolescence and is related to more problematic adolescent adjustment.

Peers

As children become adolescents, they usually begin spending more time with their peers and less time with their families, and these peer interactions are increasingly unsupervised by adults. Children's notions of friendship often focus on shared activities, whereas adolescents' notions of friendship increasingly focus on intimate exchanges of thoughts and feelings. During adolescence, peer groups evolve from primarily single-sex to mixed-sex.
Adolescents within a peer group tend to be similar to one another in behavior and attitudes, which has been explained as being a function of homophily (adolescents who are similar to one another choose to spend time together in a "birds of a feather flock together" way) and influence (adolescents who spend time together shape each other's behavior and attitudes). One of the most widely studied aspects of adolescent peer influence is known as deviant peer contagion (Dishion & Tipsord, 2011), which is the process by which peers reinforce problem behavior by laughing or showing other signs of approval that then increase the likelihood of future problem behavior.

Peers can serve both positive and negative functions during adolescence. Negative peer pressure can lead adolescents to make riskier decisions or engage in more problematic behavior than they would alone or in the presence of their family. For example, adolescents are much more likely to drink alcohol, use drugs, and commit crimes when they are with their friends than when they are alone or with their family. However, peers also serve as an important source of social support and companionship during adolescence, and adolescents with positive peer relationships are happier and better adjusted than those who are socially isolated or have conflictual peer relationships.

Crowds are an emerging level of peer relationships in adolescence. In contrast to friendships (which are reciprocal dyadic relationships) and cliques (which refer to groups of individuals who interact frequently), crowds are characterized more by shared reputations or images than actual interactions (Brown & Larson, 2009). These crowds reflect different prototypic identities (such as jocks or brains) and are often linked with adolescents' social status and peers' perceptions of their values or behaviors.

Romantic relationships

Adolescence is the developmental period during which romantic relationships typically first emerge. Initially, same-sex peer groups that were common during childhood expand into mixed-sex peer groups that are more characteristic of adolescence. Romantic relationships often form in the context of these mixed-sex peer groups (Connolly, Furman, & Konarski, 2000). Although romantic relationships during adolescence are often short-lived rather than long-term committed partnerships, their importance should not be minimized. Adolescents spend a great deal of time focused on romantic relationships, and their positive and negative emotions are more tied to romantic relationships (or lack thereof) than to friendships, family relationships, or school (Furman & Shaffer, 2003). Romantic relationships contribute to adolescents' identity formation, changes in family and peer relationships, and adolescents' emotional and behavioral adjustment.

Furthermore, romantic relationships are centrally connected to adolescents' emerging sexuality. Parents, policymakers, and researchers have devoted a great deal of attention to adolescents' sexuality, in large part because of concerns related to sexual intercourse, contraception, and preventing teen pregnancies. However, sexuality involves more than this narrow focus. For example, adolescence is often when individuals who are lesbian, gay, bisexual, or transgender come to perceive themselves as such (Russell, Clarke, & Clary, 2009). Thus, romantic relationships are a domain in which adolescents experiment with new behaviors and identities.
Behavioral and Psychological Adjustment

Identity formation

Theories of adolescent development often focus on identity formation as a central issue. For example, in Erikson's (1968) classic theory of developmental stages, identity formation was highlighted as the primary indicator of successful development during adolescence (in contrast to role confusion, which would be an indicator of not successfully meeting the task of adolescence). Marcia (1966) described identity formation during adolescence as involving both decision points and commitments with respect to ideologies (e.g., religion, politics) and occupations. He described four identity statuses: foreclosure, identity diffusion, moratorium, and identity achievement. Foreclosure occurs when an individual commits to an identity without exploring options. Identity diffusion occurs when adolescents neither explore nor commit to any identities. Moratorium is a state in which adolescents are actively exploring options but have not yet made commitments. Identity achievement occurs when individuals have explored different options and then made identity commitments. Building on this work, other researchers have investigated more specific aspects of identity. For example, Phinney (1989) proposed a model of ethnic identity development that included stages of unexplored ethnic identity, ethnic identity search, and achieved ethnic identity.

Aggression and antisocial behavior

Several major theories of the development of antisocial behavior treat adolescence as an important period. Patterson's (1982) early versus late starter model of the development of aggressive and antisocial behavior distinguishes youths whose antisocial behavior begins during childhood (early starters) versus adolescence (late starters). According to the theory, early starters are at greater risk for long-term antisocial behavior that extends into adulthood than are late starters. Late starters who become antisocial during adolescence are theorized to experience poor parental monitoring and supervision, aspects of parenting that become more salient during adolescence. Poor monitoring and lack of supervision contribute to increasing involvement with deviant peers, which in turn promotes adolescents' own antisocial behavior. Late starters desist from antisocial behavior when changes in the environment make other options more appealing. Similarly, Moffitt's (1993) life-course persistent versus adolescent-limited model distinguishes between antisocial behavior that begins in childhood versus adolescence. Moffitt regards adolescent-limited antisocial behavior as resulting from a "maturity gap" between adolescents' dependence on and control by adults and their desire to demonstrate their freedom from adult constraint. However, as they continue to develop, and legitimate adult roles and privileges become available to them, there are fewer incentives to engage in antisocial behavior, leading to desistance in these antisocial behaviors.

Anxiety and depression

Developmental models of anxiety and depression also treat adolescence as an important period, especially in terms of the emergence of gender differences in prevalence rates that persist through adulthood (Rudolph, 2009). Starting in early adolescence, compared with males, females have rates of anxiety that are about twice as high and rates of depression that are 1.5 to 3 times as high (American Psychiatric Association, 2013).
Although the rates vary across specific anxiety and depression diagnoses, rates for some disorders are markedly higher in adolescence than in childhood or adulthood. For example, prevalence rates for specific phobias are about 5% in children and 3%–5% in adults but 16% in adolescents. Anxiety and depression are particularly concerning because suicide is one of the leading causes of death during adolescence. Developmental models focus on interpersonal contexts in both childhood and adolescence that foster depression and anxiety (e.g., Rudolph, 2009). Family adversity, such as abuse and parental psychopathology, during childhood sets the stage for social and behavioral problems during adolescence. Adolescents with such problems generate stress in their relationships (e.g., by resolving conflict poorly and excessively seeking reassurance) and select into more maladaptive social contexts (e.g., "misery loves company" scenarios in which depressed youths select other depressed youths as friends and then frequently co-ruminate as they discuss their problems, exacerbating negative affect and stress). These processes are intensified for girls compared with boys because girls have more relationship-oriented goals related to intimacy and social approval, leaving them more vulnerable to disruption in these relationships. Anxiety and depression then exacerbate problems in social relationships, which in turn contribute to the stability of anxiety and depression over time.

Academic achievement

Adolescents spend more waking time in school than in any other context (Eccles & Roeser, 2011). Academic achievement during adolescence is predicted by interpersonal (e.g., parental engagement in adolescents' education), intrapersonal (e.g., intrinsic motivation), and institutional (e.g., school quality) factors. Academic achievement is important in its own right as a marker of positive adjustment during adolescence but also because academic achievement sets the stage for future educational and occupational opportunities. The most serious consequence of school failure, particularly dropping out of school, is the high risk of unemployment or underemployment in adulthood that follows. High achievement can set the stage for college or future vocational training and opportunities.

Diversity

Adolescent development does not necessarily follow the same pathway for all individuals. Certain features of adolescence, particularly with respect to biological changes associated with puberty and cognitive changes associated with brain development, are relatively universal. But other features of adolescence depend largely on circumstances that are more environmentally variable. For example, adolescents growing up in one country might have different opportunities for risk taking than adolescents in a different country, and supports and sanctions for different behaviors in adolescence depend on laws and values that might be specific to where adolescents live. Likewise, different cultural norms regarding family and peer relationships shape adolescents' experiences in these domains. For example, in some countries, adolescents' parents are expected to retain control over major decisions, whereas in other countries, adolescents are expected to begin sharing in or taking control of decision making.
Even within the same country, adolescents' gender, ethnicity, immigrant status, religion, sexual orientation, socioeconomic status, and personality can shape both how adolescents behave and how others respond to them, creating diverse developmental contexts for different adolescents. For example, early puberty (that occurs before most other peers have experienced puberty) appears to be associated with worse outcomes for girls than boys, likely in part because girls who enter puberty early tend to associate with older boys, which in turn is associated with early sexual behavior and substance use. For adolescents who are ethnic or sexual minorities, discrimination sometimes presents a set of challenges that nonminorities do not face.

Finally, genetic variations contribute an additional source of diversity in adolescence. Current approaches emphasize gene-by-environment interactions, which often follow a differential susceptibility model (Belsky & Pluess, 2009). That is, particular genetic variations are considered riskier than others, but genetic variations also can make adolescents more or less susceptible to environmental factors. For example, the association between the CHRM2 genotype and adolescent externalizing behavior (aggression and delinquency) has been found in adolescents whose parents are low in monitoring behaviors (Dick et al., 2011). Thus, it is important to bear in mind that individual differences play an important role in adolescent development.

Conclusions

Adolescent development is characterized by biological, cognitive, and social changes. Social changes are particularly notable as adolescents become more autonomous from their parents, spend more time with peers, and begin exploring romantic relationships and sexuality. Adjustment during adolescence is reflected in identity formation, which often involves a period of exploration followed by commitments to particular identities. Adolescence is characterized by risky behavior, which is made more likely by changes in the brain in which reward-processing centers develop more rapidly than cognitive control systems, making adolescents more sensitive to rewards than to possible negative consequences. Despite these generalizations, factors such as country of residence, gender, ethnicity, and sexual orientation shape development in ways that lead to diversity of experiences across adolescence.

Outside Resources

Podcasts: Society for Research on Adolescence website with links to podcasts on a variety of topics, from autonomy-relatedness in adolescence, to the health ramifications of growing up in the United States.
www.s-r-a.org/sra-news/podcasts
Study: The National Longitudinal Study of Adolescent to Adult Health (Add Health) is a longitudinal study of a nationally representative sample of adolescents in grades 7-12 in the United States during the 1994-95 school year. Add Health combines data on respondents' social, economic, psychological and physical well-being with contextual data on the family, neighborhood, community, school, friendships, peer groups, and romantic relationships.
http://www.cpc.unc.edu/projects/addhealth
Video: This is a series of TED talks on topics from the mysterious workings of the adolescent brain, to videos about surviving anxiety in adolescence.
http://tinyurl.com/lku4a3k
Web: UNICEF website on adolescents around the world. UNICEF provides videos and other resources as part of an initiative to challenge common preconceptions about adolescence.
http://www.unicef.org/adolescence/index.html

Discussion Questions

1. What can parents do to promote their adolescents' positive adjustment?
2. In what ways do changes in brain development and cognition make adolescents particularly susceptible to peer influence?
3. How could interventions designed to prevent or reduce adolescents' problem behavior be developed to take advantage of what we know about adolescent development?
4. Reflecting on your own adolescence, provide examples of times when you think your experience was different from those of your peers as a function of something unique about you.
5. In what ways was your experience of adolescence different from your parents' experience of adolescence? How do you think adolescence may be different 20 years from now?

Vocabulary

Crowds
Adolescent peer groups characterized by shared reputations or images.

Deviant peer contagion
The spread of problem behaviors within groups of adolescents.

Differential susceptibility
Genetic factors that make individuals more or less responsive to environmental experiences.

Foreclosure
Individuals commit to an identity without exploration of options.

Homophily
Adolescents tend to associate with peers who are similar to themselves.

Identity achievement
Individuals have explored different options and then made commitments.

Identity diffusion
Adolescents neither explore nor commit to any roles or ideologies.

Moratorium
State in which adolescents are actively exploring options but have not yet made identity commitments.

Psychological control
Parents' manipulation of and intrusion into adolescents' emotional and cognitive world through invalidating adolescents' feelings and pressuring them to think in particular ways.
By Jeffrey Jensen Arnett Clark University

Emerging adulthood has been proposed as a new life stage between adolescence and young adulthood, lasting roughly from ages 18 to 25. Five features make emerging adulthood distinctive: identity explorations, instability, self-focus, feeling in-between adolescence and adulthood, and a sense of broad possibilities for the future. Emerging adulthood is found mainly in developed countries, where most young people obtain tertiary education and median ages of entering marriage and parenthood are around 30. There are variations in emerging adulthood within developed countries. It lasts longest in Europe, and in Asian developed countries, the self-focused freedom of emerging adulthood is balanced by obligations to parents and by conservative views of sexuality. In developing countries, although today emerging adulthood exists only among the middle-class elite, it can be expected to grow in the 21st century as these countries become more affluent.

learning objectives

• Explain where, when, and why a new life stage of emerging adulthood appeared over the past half-century.
• Identify the five features that distinguish emerging adulthood from other life stages.
• Describe the variations in emerging adulthood in countries around the world.

Introduction

Think for a moment about the lives of your grandparents and great-grandparents when they were in their twenties. How do their lives at that age compare to your life? If they were like most other people of their time, their lives were quite different than yours. What happened to change the twenties so much between their time and our own? And how should we understand the 18–25 age period today?

The theory of emerging adulthood proposes that a new life stage has arisen between adolescence and young adulthood over the past half-century in industrialized countries. Fifty years ago, most young people in these countries had entered stable adult roles in love and work by their late teens or early twenties. Relatively few people pursued education or training beyond secondary school, and, consequently, most young men were full-time workers by the end of their teens. Relatively few women worked in occupations outside the home, and the median marriage age for women in the United States and in most other industrialized countries in 1960 was around 20 (Arnett & Taber, 1994; Douglass, 2005). The median marriage age for men was around 22, and married couples usually had their first child about one year after their wedding day. All told, for most young people half a century ago, adolescence led quickly and directly to stable adult roles in love and work by their late teens or early twenties. These roles would form the structure of their adult lives for decades to come.

Now all that has changed. A higher proportion of young people than ever before—about 70% in the United States—pursue education and training beyond secondary school (National Center for Education Statistics, 2012). The early twenties are not a time of entering stable adult work but a time of immense job instability: In the United States, the average number of job changes from ages 20 to 29 is seven. The median age of entering marriage in the United States is now 27 for women and 29 for men (U.S. Bureau of the Census, 2011). Consequently, a new stage of the life span, emerging adulthood, has been created, lasting from the late teens through the mid-twenties, roughly ages 18 to 25.
The Five Features of Emerging Adulthood

Five characteristics distinguish emerging adulthood from other life stages (Arnett, 2004). Emerging adulthood is:

1. the age of identity explorations;
2. the age of instability;
3. the self-focused age;
4. the age of feeling in-between; and
5. the age of possibilities.

Perhaps the most distinctive characteristic of emerging adulthood is that it is the age of identity explorations. That is, it is an age when people explore various possibilities in love and work as they move toward making enduring choices. Through trying out these different possibilities, they develop a more definite identity, including an understanding of who they are, what their capabilities and limitations are, what their beliefs and values are, and how they fit into the society around them. Erik Erikson (1950), who was the first to develop the idea of identity, proposed that it is mainly an issue in adolescence; but that was more than 50 years ago, and today it is mainly in emerging adulthood that identity explorations take place (Côté, 2006).

The explorations of emerging adulthood also make it the age of instability. As emerging adults explore different possibilities in love and work, their lives are often unstable. A good illustration of this instability is their frequent moves from one residence to another. Rates of residential change in American society are much higher at ages 18 to 29 than at any other period of life (Arnett, 2004). This reflects the explorations going on in emerging adults' lives. Some move out of their parents' household for the first time in their late teens to attend a residential college, whereas others move out simply to be independent (Goldscheider & Goldscheider, 1999). They may move again when they drop out of college or when they graduate. They may move to cohabit with a romantic partner, and then move out when the relationship ends. Some move to another part of the country or the world to study or work. For nearly half of American emerging adults, residential change includes moving back in with their parents at least once (Goldscheider & Goldscheider, 1999). In some countries, such as in southern Europe, emerging adults remain in their parents' home rather than move out; nevertheless, they may still experience instability in education, work, and love relationships (Douglass, 2005, 2007).

Emerging adulthood is also a self-focused age. Most American emerging adults move out of their parents' home at age 18 or 19 and do not marry or have their first child until at least their late twenties (Arnett, 2004). Even in countries where emerging adults remain in their parents' home through their early twenties, as in southern Europe and in Asian countries such as Japan, they establish a more independent lifestyle than they had as adolescents (Rosenberger, 2007). Emerging adulthood is a time between adolescents' reliance on parents and adults' long-term commitments in love and work, and during these years, emerging adults focus on themselves as they develop the knowledge, skills, and self-understanding they will need for adult life. In the course of emerging adulthood, they learn to make independent decisions about everything from what to have for dinner to whether or not to get married.

Another distinctive feature of emerging adulthood is that it is an age of feeling in-between, not adolescent but not fully adult, either.
When asked, "Do you feel that you have reached adulthood?" the majority of emerging adults respond neither yes nor no but with the ambiguous "in some ways yes, in some ways no" (Arnett, 2003, 2012). It is only when people reach their late twenties and early thirties that a clear majority feels adult. Most emerging adults have the subjective feeling of being in a transitional period of life, on the way to adulthood but not there yet. This "in-between" feeling in emerging adulthood has been found in a wide range of countries, including Argentina (Facio & Micocci, 2003), Austria (Sirsch, Dreher, Mayr, & Willinger, 2009), Israel (Mayseless & Scharf, 2003), the Czech Republic (Macek, Bejček, & Vaníčková, 2007), and China (Nelson & Chen, 2007).

Finally, emerging adulthood is the age of possibilities, when many different futures remain possible, and when little about a person's direction in life has been decided for certain. It tends to be an age of high hopes and great expectations, in part because few of their dreams have been tested in the fires of real life. In one national survey of 18- to 24-year-olds in the United States, nearly all—89%—agreed with the statement, "I am confident that one day I will get to where I want to be in life" (Arnett & Schwab, 2012). This optimism in emerging adulthood has been found in other countries as well (Nelson & Chen, 2007).

International Variations

The five features proposed in the theory of emerging adulthood originally were based on research involving about 300 Americans between ages 18 and 29 from various ethnic groups, social classes, and geographical regions (Arnett, 2004). To what extent does the theory of emerging adulthood apply internationally? The answer to this question depends greatly on what part of the world is considered. Demographers make a useful distinction between the developing countries that comprise the majority of the world's population and the economically developed countries that are part of the Organization for Economic Co-operation and Development (OECD), including the United States, Canada, western Europe, Japan, South Korea, Australia, and New Zealand. The current population of OECD countries (also called developed countries) is 1.2 billion, about 18% of the total world population (UNDP, 2011). The rest of the human population resides in developing countries, which have much lower median incomes; much lower median educational attainment; and much higher incidence of illness, disease, and early death. Let us consider emerging adulthood in OECD countries first, then in developing countries.

EA in OECD Countries: The Advantages of Affluence

The same demographic changes as described above for the United States have taken place in other OECD countries as well. This is true of participation in postsecondary education as well as median ages for entering marriage and parenthood (UNdata, 2010). However, there is also substantial variability in how emerging adulthood is experienced across OECD countries. Europe is the region where emerging adulthood is longest and most leisurely. The median ages for entering marriage and parenthood are near 30 in most European countries (Douglass, 2007). Europe today is the location of the most affluent, generous, and egalitarian societies in the world—in fact, in human history (Arnett, 2007). Governments pay for tertiary education, assist young people in finding jobs, and provide generous unemployment benefits for those who cannot find work.
In northern Europe, many governments also provide housing support. Emerging adults in European societies make the most of these advantages, gradually making their way to adulthood during their twenties while enjoying travel and leisure with friends. The lives of Asian emerging adults in developed countries such as Japan and South Korea are in some ways similar to the lives of emerging adults in Europe and in some ways strikingly different. Like European emerging adults, Asian emerging adults tend to enter marriage and parenthood around age 30 (Arnett, 2011). Like European emerging adults, Asian emerging adults in Japan and South Korea enjoy the benefits of living in affluent societies with generous social welfare systems that provide support for them in making the transition to adulthood—for example, free university education and substantial unemployment benefits. However, in other ways, the experience of emerging adulthood in Asian OECD countries is markedly different from that in Europe. Europe has a long history of individualism, and today’s emerging adults carry that legacy with them in their focus on self-development and leisure during emerging adulthood. In contrast, Asian cultures have a shared cultural history emphasizing collectivism and family obligations. Although Asian cultures have become more individualistic in recent decades as a consequence of globalization, the legacy of collectivism persists in the lives of emerging adults. They pursue identity explorations and self-development during emerging adulthood, like their American and European counterparts, but within narrower boundaries set by their sense of obligations to others, especially their parents (Phinney & Baldelomar, 2011). For example, in their views of the most important criteria for becoming an adult, emerging adults in the United States and Europe consistently rank financial independence among the most important markers of adulthood. In contrast, emerging adults with an Asian cultural background especially emphasize becoming capable of supporting parents financially as among the most important criteria (Arnett, 2003; Nelson, Badger, & Wu, 2004). This sense of family obligation may curtail their identity explorations in emerging adulthood to some extent, as they pay more heed to their parents’ wishes about what they should study, what job they should take, and where they should live than emerging adults do in the West (Rosenberger, 2007). Another notable contrast between Western and Asian emerging adults is in their sexuality. In the West, premarital sex is normative by the late teens, more than a decade before most people enter marriage. In the United States and Canada, and in northern and eastern Europe, cohabitation is also normative; most people have at least one cohabiting partnership before marriage. In southern Europe, cohabiting is still taboo, but premarital sex is tolerated in emerging adulthood. In contrast, both premarital sex and cohabitation remain rare and forbidden throughout Asia. Even dating is discouraged until the late twenties, when it would be a prelude to a serious relationship leading to marriage. In cross-cultural comparisons, about three fourths of emerging adults in the United States and Europe report having had premarital sexual relations by age 20, versus fewer than one fifth in Japan and South Korea (Hatfield & Rapson, 2006).
EA in Developing Countries: Low But Rising Emerging adulthood is well established as a normative life stage in the developed countries described thus far, but it is still growing in developing countries. Demographically, in developing countries as in OECD countries, the median ages for entering marriage and parenthood have been rising in recent decades, and an increasing proportion of young people have obtained post-secondary education. Nevertheless, currently it is only a minority of young people in developing countries who experience anything resembling emerging adulthood. The majority of the population still marries around age 20 and has long finished education by the late teens. As you can see in Figure 1, rates of enrollment in tertiary education are much lower in developing countries (represented by the five countries on the right) than in OECD countries (represented by the five countries on the left). For young people in developing countries, emerging adulthood exists only for the wealthier segment of society, mainly the urban middle class, whereas the rural and urban poor—the majority of the population—have no emerging adulthood and may even have no adolescence because they enter adult-like work at an early age and also begin marriage and parenthood relatively early. What Saraswathi and Larson (2002) observed about adolescence applies to emerging adulthood as well: “In many ways, the lives of middle-class youth in India, South East Asia, and Europe have more in common with each other than they do with those of poor youth in their own countries.” However, as globalization proceeds, and economic development along with it, the proportion of young people who experience emerging adulthood will increase as the middle class expands. By the end of the 21st century, emerging adulthood is likely to be normative worldwide. Conclusion The new life stage of emerging adulthood has spread rapidly in the past half-century and is continuing to spread. Now that the transition to adulthood is later than in the past, is this change positive or negative for emerging adults and their societies? Certainly there are some negatives. It means that young people are dependent on their parents for longer than in the past, and they take longer to become full contributing members of their societies. A substantial proportion of them have trouble sorting through the opportunities available to them and struggle with anxiety and depression, even though most are optimistic. However, there are advantages to having this new life stage as well. By waiting until at least their late twenties to take on the full range of adult responsibilities, emerging adults are able to focus on obtaining enough education and training to prepare themselves for the demands of today’s information- and technology-based economy. Also, it seems likely that if young people make crucial decisions about love and work in their late twenties or early thirties rather than their late teens and early twenties, their judgment will be more mature and they will have a better chance of making choices that will work out well for them in the long run. What can societies do to enhance the likelihood that emerging adults will make a successful transition to adulthood? One important step would be to expand the opportunities for obtaining tertiary education. 
The tertiary education systems of OECD countries were constructed at a time when the economy was much different, and they have not expanded at the rate needed to serve all the emerging adults who need such education. Furthermore, in some countries, such as the United States, the cost of tertiary education has risen steeply and is unaffordable for many young people. In developing countries, tertiary education systems are even smaller and less able to accommodate their emerging adults. Across the world, societies would be wise to strive to make it possible for every emerging adult to receive tertiary education, free of charge. There could be no better investment for preparing young people for the economy of the future. Outside Resources Article: “Average Age of First-Time Moms Keeps Climbing In The U.S” - This NPR story was released in January of 2016 and discusses the rising age of first-time pregnancies in US women. The rising average age is reflective of emerging adulthood. www.npr.org/sections/health-s...ing-in-the-u-s Article: “Emerging Adulthood: A Theory of Development from the Late Teens Through the Early Twenties.” - The author of the module, Dr. Arnett, wrote this American Psychologist article. It summarizes the theory of emerging adulthood. http://jeffreyarnett.com/articles/AR...ood_theory.pdf Article: “Why are so many people in their 20s taking so long to grow up?” - This article presents an interesting perspective on the changing lifestyle of US individuals in their 20s. http://www.nytimes.com/2010/08/22/ma...anted=all&_r=0 Video: “Jeffrey Jensen Arnett: Emerging Adulthood” - This video shows an interview with the author of the module, Jeffrey Jensen Arnett. Dr. Arnett talks about his book “Emerging Adulthood” as well as emerging adulthood as a life stage. Web: Jeffrey Jensen Arnett website http://www.jeffreyarnett.com Web: Society for the Study of Emerging Adulthood. SSEA is “a multidisciplinary, international organization with a focus on theory and research related to emerging adulthood, which includes the age range of approximately 18 through 29 years. The website includes information on topics, events, and publications pertaining to emerging adults from diverse backgrounds, cultures, and countries.” http://www.ssea.org Discussion Questions 1. What kind of variations in emerging adulthood would you predict within your country? Would there be social class differences? Gender differences? Ethnic differences? 2. Looking at Figure 1, what contrasts do you observe between OECD countries and developing countries? Between males and females? What economic and cultural differences might explain these contrasts? 3. Do you agree or disagree with the author’s prediction that emerging adulthood is likely to become a life stage experienced worldwide in the decades to come? What factors are likely to determine whether this turns out to be true? Vocabulary Collectivism Belief system that emphasizes the duties and obligations that each person has toward others. Developed countries The economically advanced countries of the world, in which most of the world’s wealth is concentrated. Developing countries The less economically advanced countries that comprise the majority of the world’s population. Most are currently developing at a rapid rate. Emerging adulthood A new life stage extending from approximately ages 18 to 25, during which the foundation of an adult life is gradually constructed in love and work.
Primary features include identity explorations, instability, focus on self-development, feeling incompletely adult, and a broad sense of possibilities. Individualism Belief system that exalts freedom, independence, and individual choice as high values. OECD countries Members of the Organization for Economic Co-operation and Development, comprising the world’s wealthiest countries. Tertiary education Education or training beyond secondary school, usually taking place in a college, university, or vocational training program.
By Marissa L. Diener University of Utah This module focuses on parenthood as a developmental task of adulthood. Parents take on new roles as their children develop, transforming their identity as a parent as the developmental demands of their children change. The main influences on parenting (parent characteristics, child characteristics, and contextual factors) are described. learning objectives • Identify and describe the stages of parenthood. • Identify and describe the influences on parenting. The Development of Parents Think back to an emotional event you experienced as a child. How did your parents react to you? Did your parents get frustrated or criticize you, or did they act patiently and provide support and guidance? Did your parents provide lots of rules for you or let you make decisions on your own? Why do you think your parents behaved the way they did? Psychologists have attempted to answer these questions about the influences on parents and to understand why parents behave the way they do. Because parents are critical to a child’s development, a great deal of research has been focused on the impact that parents have on children. Less is known, however, about the development of parents themselves and the impact of children on parents. Nonetheless, parenting is a major role in an adult’s life. Parenthood is often considered a normative developmental task of adulthood. Cross-cultural studies show that adolescents around the world plan to have children. In fact, most men and women in the United States will become parents by the age of 40 years (Martinez, Daniels, & Chandra, 2012). People have children for many reasons, including emotional reasons (e.g., the emotional bond with children and the gratification the parent–child relationship brings), economic and utilitarian reasons (e.g., children provide help in the family and support in old age), and social-normative reasons (e.g., adults are expected to have children; children provide status) (Nauck, 2007). Parenthood is undergoing changes in the United States and elsewhere in the world. Children are less likely to be living with both parents, and women in the United States have fewer children than they did previously. The average fertility rate of women in the United States was about seven children in the early 1800s and has remained relatively stable at 2.1 since the 1970s (Hamilton, Martin, & Ventura, 2011; Martinez, Daniels, & Chandra, 2012). Not only are parents having fewer children, but the context of parenthood has also changed. Parenting outside of marriage has increased dramatically among most socioeconomic, racial, and ethnic groups, although college-educated women are substantially more likely to be married at the birth of a child than are mothers with less education (Dye, 2010). Parenting is occurring outside of marriage for many reasons, both economic and social. People are having children at older ages, too. Although young people are increasingly delaying childbearing, most 18- to 29-year-olds want to have children and say that being a good parent is one of the most important things in life (Wang & Taylor, 2011). Galinsky (1987) was one of the first to emphasize the development of parents themselves, how they respond to their children’s development, and how they grow as parents. Parenthood is an experience that transforms one’s identity as parents take on new roles. Children’s growth and development force parents to change their roles.
They must develop new skills and abilities in response to children’s development. Galinsky identified six stages of parenthood that focus on different tasks and goals (see Table 2). 1. The Image-Making Stage As prospective parents think about and form images of their roles as parents and prepare for the changes an infant will bring, they enter the image-making stage. Future parents develop their ideas about what it will be like to be a parent and the type of parent they want to be. Individuals may evaluate their relationships with their own parents as a model of their roles as parents. 2. The Nurturing Stage The second stage, the nurturing stage, occurs at the birth of the baby. A parent’s main goal during this stage is to develop an attachment relationship with their baby. Parents must adapt their romantic relationships, their relationships with their other children, and their relationships with their own parents to include the new infant. Some parents feel attached to the baby immediately, but for other parents, this occurs more gradually. Parents may have imagined their infant in specific ways, but they now have to reconcile those images with their actual baby. In incorporating their relationship with their child into their other relationships, parents often have to reshape their conceptions of themselves and their identity. Parenting responsibilities are the most demanding during infancy because infants are completely dependent on caregiving. 3. The Authority Stage The authority stage lasts from the time children are about 2 years old until they are 4 or 5 years old. In this stage, parents make decisions about how much authority to exert over their children’s behavior. Parents must establish rules to guide their child’s behavior and development. They have to decide how strictly they should enforce rules and what to do when rules are broken. 4. The Interpretive Stage The interpretive stage lasts from children’s entry into school (preschool or kindergarten) until the beginning of adolescence. Parents interpret their children’s experiences as children are increasingly exposed to the world outside the family. Parents answer their children’s questions, provide explanations, and determine what behaviors and values to teach. They decide what experiences to provide their children, in terms of schooling, neighborhood, and extracurricular activities. By this time, parents have experience in the parenting role and often reflect on their strengths and weaknesses as parents, review their images of parenthood, and determine how realistic they have been. Parents have to negotiate how involved to be with their children, when to step in, and when to encourage children to make choices independently. 5. The Interdependent Stage Parents of teenagers are in the interdependent stage. They must redefine their authority and renegotiate their relationship with their adolescent as the children increasingly make decisions independent of parental control and authority. On the other hand, parents do not permit their adolescent children to have complete autonomy over their decision-making and behavior, and thus adolescents and parents must adapt their relationship to allow for greater negotiation and discussion about rules and limits. 6. The Departure Stage During the departure stage of parenting, parents evaluate the entire experience of parenting. They prepare for their child’s departure, redefine their identity as the parent of an adult child, and assess their parenting accomplishments and failures.
This stage forms a transition to a new era in parents’ lives. This stage usually spans a long time period from when the oldest child moves away (and often returns) until the youngest child leaves. The parenting role must be redefined as a less central role in a parent’s identity. Despite the interest in the development of parents among lay people and helping professionals, little research has examined developmental changes in parents’ experience and behaviors over time. Thus, it is not clear whether these theoretical stages are generalizable to parents of different races, ages, and religions, nor do we have empirical data on the factors that influence individual differences in these stages. On a practical note, how-to books and websites geared toward parental development should be evaluated with caution, as not all advice provided is supported by research. Influences on Parenting Parenting is a complex process in which parents and children influence one another. There are many reasons that parents behave the way they do. The multiple influences on parenting are still being explored. Proposed influences on parental behavior include 1) parent characteristics, 2) child characteristics, and 3) contextual and sociocultural characteristics (Belsky, 1984; Demick, 1999) (see Figure 6.6.1). Parent Characteristics Parents bring unique traits and qualities to the parenting relationship that affect their decisions as parents. These characteristics include the age of the parent, gender, beliefs, personality, developmental history, knowledge about parenting and child development, and mental and physical health. Parents’ personalities affect parenting behaviors. Mothers and fathers who are more agreeable, conscientious, and outgoing are warmer and provide more structure to their children. Parents who are more agreeable, less anxious, and less negative also support their children’s autonomy more than parents who are anxious and less agreeable (Prinzie, Stams, Dekovic, Reijntjes, & Belsky, 2009). Parents who have these personality traits appear to be better able to respond to their children positively and provide a more consistent, structured environment for their children. Parents’ developmental histories, or their experiences as children, also affect their parenting strategies. Parents may learn parenting practices from their own parents. Fathers whose own parents provided monitoring, consistent and age-appropriate discipline, and warmth were more likely to provide this constructive parenting to their own children (Kerr, Capaldi, Pears, & Owen, 2009). Patterns of negative parenting and ineffective discipline are also passed down from one generation to the next. However, parents who are dissatisfied with their own parents’ approach may be more likely to change their parenting methods with their own children. Child Characteristics Parenting is bidirectional. Not only do parents affect their children, but children also influence their parents. Child characteristics, such as gender, birth order, temperament, and health status, affect parenting behaviors and roles. For example, an infant with an easy temperament may enable parents to feel more effective, as they are easily able to soothe the child and elicit smiling and cooing. On the other hand, a cranky or fussy infant elicits fewer positive reactions from his or her parents and may result in parents feeling less effective in the parenting role (Eisenberg et al., 2008).
Over time, parents of more difficult children may become more punitive and less patient with their children (Clark, Kochanska, & Ready, 2000; Eisenberg et al., 1999; Kiff, Lengua, & Zalewski, 2011). Parents who have a fussy, difficult child are less satisfied with their marriages and have greater challenges in balancing work and family roles (Hyde, Else-Quest, & Goldsmith, 2004). Thus, child temperament is one of the child characteristics that influences how parents behave with their children. Another child characteristic is the gender of the child. Parents respond differently to boys and girls. Parents often assign different household chores to their sons and daughters. Girls are more often responsible for caring for younger siblings and household chores, whereas boys are more likely to be asked to perform chores outside the home, such as mowing the lawn (Grusec, Goodnow, & Cohen, 1996). Parents also talk differently with their sons and daughters, providing more scientific explanations to their sons and using more emotion words with their daughters (Crowley, Callanan, Tenenbaum, & Allen, 2001). Contextual Factors and Sociocultural Characteristics The parent–child relationship does not occur in isolation. Sociocultural characteristics, including economic hardship, religion, politics, neighborhoods, schools, and social support, also influence parenting. Parents who experience economic hardship are more easily frustrated, depressed, and sad, and these emotional characteristics affect their parenting skills (Conger & Conger, 2002). Culture also influences parenting behaviors in fundamental ways. Although promoting the development of skills necessary to function effectively in one’s community is a universal goal of parenting, the specific skills necessary vary widely from culture to culture. Thus, parents have different goals for their children that partially depend on their culture (Tamis-LeMonda et al., 2008). For example, parents vary in how much they emphasize goals of independence and individual achievement versus goals of maintaining harmonious relationships and being embedded in a strong network of social relationships. These differences in parental goals are influenced by culture and by immigration status. Other important contextual characteristics, such as the neighborhood, school, and social networks, also affect parenting, even though these settings don’t always include both the child and the parent (Bronfenbrenner, 1989). For example, Latina mothers who perceived their neighborhood as more dangerous showed less warmth with their children, perhaps because of the greater stress associated with living in a threatening environment (Gonzales et al., 2011). Many contextual factors influence parenting. Conclusion Many factors influence parenting decisions and behaviors. These factors include characteristics of the parent, such as gender and personality, as well as characteristics of the child, such as age. The context is also important. The interaction among all these factors creates many different patterns of parenting behavior. Furthermore, parenting influences not just a child’s development, but also the development of the parent. As parents are faced with new challenges, they change their parenting strategies and construct new aspects of their identity. The goals and tasks of parents change over time as their children develop. Outside Resources Article: “Is a Child's Behavior Always a Reflection of His Parents?” - This article is written by Dr.
Peggy Drexler and discusses the notion that child behavior is not always a reflection of parenting. http://www.huffingtonpost.com/peggy-...b_1886367.html Article: “Parent behavior toward first and second children” - This journal article describes similarities and differences in how parents behave with their first- and second-born children. This is an interesting read to learn more about parenting behavior and how it changes based on a child characteristic: birth order. http://psycnet.apa.org/psycinfo/1954-08594-001 Org: American Psychological Association (APA), Parenting - Parenting is a psychology topic explored by APA. They state that “Parenting practices around the world share three major goals: ensuring children’s health and safety, preparing children for life as productive adults and transmitting cultural values. A high-quality parent-child relationship is critical for healthy development.” This webpage links to articles to support these goals. http://www.apa.org/topics/parenting/ Org: Society for Research in Child Development (SRCD) - SRCD works to coordinate and integrate research in human development. It aims to assist in the dissemination of research findings and in this way can be a great resource to teachers and students. http://www.srcd.org Web: American Psychological Association - Information and Resources on Parenting http://www.apa.org/topics/parenting/index.aspx Web: NPR, Parenting - National Public Radio has presented interesting stories on many topics related to child development. The page linked here has many stories on parenting. http://www.npr.org/tags/126952921/parenting Web: PBS Parents: Child Development - PBS has some interesting resources for parents, including articles, games, and products. This is a good resource for students looking for some friendly, less psychology-based sources that they can read or share with their own families. http://www.pbs.org/parents/child-development/ Discussion Questions 1. Reflect on the way you were raised. Consider the parenting behaviors (e.g., rules, discipline strategies, warmth, and support) used in your household when you were a child. Why do you think your parents behaved this way? How do these factors fit with the influences on parenting described here? Provide specific examples of multiple influences on parenting. 2. Think about different parents and grandparents you know. Do the challenges they face as parents differ based on the age of their children? Do your observations fit with Galinsky’s stages of parenting? 3. What type of parent do you envision yourself becoming? If you are a parent, how do you parent your child/children? How do you think this is similar to or different from the way you were raised? What influences exist in your life that will make you parent differently from your own parents? Vocabulary Authority stage Stage from approximately 2 years to age 4 or 5 when parents create rules and figure out how to effectively guide their children’s behavior. Bidirectional The idea that parents influence their children, but their children also influence the parents; the direction of influence goes both ways, from parent to child, and from child to parent. Departure stage Stage at which parents prepare for a child to depart and evaluate their successes and failures as parents. Image-making stage Stage during pregnancy when parents consider what it means to be a parent and plan for changes to accommodate a child.
Interdependent stage Stage during teenage years when parents renegotiate their relationship with their adolescent children to allow for shared power in decision-making. Interpretive stage Stage from age 4 or 5 to the start of adolescence when parents help their children interpret their experiences with the social world beyond the family. Nurturing stage Stage from birth to around 18-24 months in which parents develop an attachment relationship with their child and adapt to the new baby. Temperament A child’s innate personality; biologically based personality, including qualities such as activity level, emotional reactivity, sociability, mood, and soothability.
By Tara Queen and Jacqui Smith University of Michigan Traditionally, research on aging described only the lives of people over age 65 and the very old. Contemporary theories and research recognize that biogenetic and psychological processes of aging are complex and lifelong. Functioning in each period of life is influenced by what happened earlier and, in turn, affects subsequent change. We all age in specific social and historical contexts. Together, these multiple influences on aging make it difficult to define when middle-age or old age begins. This module describes central concepts and research about adult development and aging. We consider contemporary questions about cognitive aging and changes in personality, self-related beliefs, social relationships, and subjective well-being. These four aspects of psychosocial aging are related to health and longevity. learning objectives • Explain research approaches to studying aging. • Describe cognitive, psychosocial, and physical changes that occur with age. • Provide examples of how age-related changes in these domains are observed in the context of everyday life. Introduction We are currently living in an aging society (Rowe, 2009). Indeed, by 2030 when the last of the Baby Boomers reach age 65, the U.S. older population will be double that of 2010. Furthermore, because of increases in average life expectancy, each new generation can expect to live longer than their parents’ generation and certainly longer than their grandparents’ generation. As a consequence, it is time for individuals of all ages to rethink their personal life plans and consider prospects for a long life. When is the best time to start a family? Will the education gained up to age 20 be sufficient to cope with future technological advances and marketplace needs? What is the right balance between work, family, and leisure throughout life? What's the best age to retire? How can I age successfully and enjoy life to the fullest when I'm 80 or 90? In this module we will discuss several different domains of psychological research on aging that will help answer these important questions. Overview: Life Span and Life Course Perspectives on Aging Just as young adults differ from one another, older adults are also not all the same. In each decade of adulthood, we observe substantial heterogeneity in cognitive functioning, personality, social relationships, lifestyle, beliefs, and satisfaction with life. This heterogeneity reflects differences in rates of biogenetic and psychological aging and the sociocultural contexts and history of people's lives (Bronfenbrenner, 1979; Fingerman, Berg, Smith, & Antonucci, 2011). Theories of aging describe how these multiple factors interact and change over time. They describe why functioning differs on average between young, middle-aged, young-old, and very old adults and why there is heterogeneity within these age groups. Life course theories, for example, highlight the effects of social expectations and the normative timing of life events and social roles (e.g., becoming a parent, retirement). They also consider the lifelong cumulative effects of membership in specific cohorts (generations) and sociocultural subgroups (e.g., race, gender, socioeconomic status) and exposure to historical events (e.g., war, revolution, natural disasters; Elder, Johnson, & Crosnoe, 2003; Settersten, 2005). Life span theories complement the life-course perspective with a greater focus on processes within the individual (e.g., the aging brain). 
This approach emphasizes the patterning of lifelong intra- and inter-individual differences in the shape (gain, maintenance, loss), level, and rate of change (Baltes, 1987, 1997). Both life course and life span researchers generally rely on longitudinal studies to examine hypotheses about different patterns of aging associated with the effects of biogenetic, life history, social, and personal factors. Cross-sectional studies provide information about age-group differences, but these are confounded with cohort, time of study, and historical effects. Cognitive Aging Researchers have identified areas of both losses and gains in cognition in older age. Cognitive ability and intelligence are often measured using standardized tests and validated measures. The psychometric approach has identified two categories of intelligence that show different rates of change across the life span (Schaie & Willis, 1996). Fluid intelligence refers to information processing abilities, such as logical reasoning, remembering lists, spatial ability, and reaction time. Crystallized intelligence encompasses abilities that draw upon experience and knowledge. Measures of crystallized intelligence include vocabulary tests, solving number problems, and understanding texts. With age, systematic declines are observed on cognitive tasks requiring self-initiated, effortful processing, without the aid of supportive memory cues (Park, 2000). Older adults tend to perform more poorly than young adults on memory tasks that involve recall of information, where individuals must retrieve information they learned previously without the help of a list of possible choices. For example, older adults may have more difficulty recalling facts such as names or contextual details about where or when something happened (Craik, 2000). What might explain these deficits as we age? As we age, working memory, or our ability to simultaneously store and use information, becomes less efficient (Craik & Bialystok, 2006). The ability to process information quickly also decreases with age. This slowing of processing speed may explain age differences on many different cognitive tasks (Salthouse, 2004). Some researchers have argued that inhibitory functioning, or the ability to focus on certain information while suppressing attention to less pertinent information, declines with age and may explain age differences in performance on cognitive tasks (Hasher & Zacks, 1988). Finally, it is well established that our hearing and vision decline as we age. Longitudinal research suggests that deficits in sensory functioning explain age differences in a variety of cognitive abilities (Baltes & Lindenberger, 1997). Fewer age differences are observed when memory cues are available, such as for recognition memory tasks, or when individuals can draw upon acquired knowledge or experience. For example, older adults often perform as well as, if not better than, young adults on tests of word knowledge or vocabulary. With age often comes expertise, and research has pointed to areas where aging experts perform as well as or better than younger individuals. For example, older typists were found to compensate for age-related declines in speed by looking farther ahead at printed text (Salthouse, 1984). Compared to younger players, older chess experts are able to focus on a smaller set of possible moves, leading to greater cognitive efficiency (Charness, 1981).
Accrued knowledge of everyday tasks, such as grocery prices, can help older adults to make better decisions than young adults (Tentori, Osherson, Hasher, & May, 2001). How do changes or maintenance of cognitive ability affect older adults’ everyday lives? Researchers have studied cognition in the context of several different everyday activities. One example is driving. Although older adults often have more years of driving experience, cognitive declines related to reaction time or attentional processes may pose limitations under certain circumstances (Park & Gutchess, 2000). Research on interpersonal problem solving suggests that older adults use more effective strategies than younger adults to navigate through social and emotional problems (Blanchard-Fields, 2007). In the context of work, researchers rarely find that older individuals perform worse on the job (Park & Gutchess, 2000). Similar to everyday problem solving, older workers may develop more efficient strategies and rely on expertise to compensate for cognitive decline. Personality and Self-Related Processes Research on adult personality examines normative age-related increases and decreases in the expression of the so-called “Big Five” traits—extraversion, neuroticism, conscientiousness, agreeableness, and openness to new experience. Does personality change throughout adulthood? Previously the answer was no, but contemporary research shows that although some people’s personalities are relatively stable over time, others’ are not (Lucas & Donnellan, 2011; Roberts & Mroczek, 2008). Longitudinal studies reveal average changes during adulthood in the expression of some traits (e.g., neuroticism and openness decrease with age and conscientiousness increases) and individual differences in these patterns due to idiosyncratic life events (e.g., divorce, illness). Longitudinal research also suggests that adult personality traits, such as conscientiousness, predict important life outcomes including job success, health, and longevity (Friedman, Tucker, Tomlinson-Keasey, Schwartz, Wingard, & Criqui, 1993; Roberts, Kuncel, Shiner, Caspi, & Goldberg, 2007). In contrast to the relative stability of personality traits, theories about the aging self propose changes in self-related knowledge, beliefs, and autobiographical narratives. Responses to questions such as “Tell me something about yourself. Who are you?” and “What are your hopes for the future?” provide insight into the characteristics and life themes that an individual considers to uniquely distinguish him- or herself from others. These self-descriptions enhance self-esteem and guide behavior (Markus & Nurius, 1986; McAdams, 2006). Theory suggests that as we age, themes that were relatively unimportant in young and middle adulthood gain in salience (e.g., generativity, health) and that people view themselves as improving over time (Ross & Wilson, 2003). Reorganizing personal life narratives and self-descriptions is a major task of midlife and young-old age due to transformations in professional and family roles and obligations. In advanced old age, self-descriptions are often characterized by a life review and reflections about having lived a long life. Birren and Schroots (2006), for example, found the process of life review in late life helped individuals confront and cope with the challenges of old age.
One aspect of the self that particularly interests life span and life course psychologists is the individual’s perception and evaluation of their own aging and identification with an age group. Subjective age is a multidimensional construct that indicates how old (or young) a person feels and into which age group a person categorizes him- or herself. After early adulthood, most people say that they feel younger than their chronological age and the gap between subjective age and actual age generally increases. On average, after age 40 people report feeling 20% younger than their actual age (e.g., Rubin & Berntsen, 2006). Asking people how satisfied they are with their own aging assesses an evaluative component of age identity. Whereas some aspects of age identity are positively valued (e.g., acquiring seniority in a profession or becoming a grandparent), others may be less valued, depending on societal context. Perceived physical age (i.e., the age one looks in a mirror) is one aspect that requires considerable self-related adaptation in social and cultural contexts that value young bodies. Feeling younger and being satisfied with one’s own aging are expressions of positive self-perceptions of aging. They reflect the operation of self-related processes that enhance well-being. Levy (2009) found that older individuals who are able to adapt to and accept changes in their appearance and physical capacity in a positive way report higher well-being, have better health, and live longer. Social Relationships Social ties to family, friends, mentors, and peers are primary resources of information, support, and comfort. Individuals develop and age together with family and friends and interact with others in the community. Across the life course, social ties are accumulated, lost, and transformed. Already in early life, there are multiple sources of heterogeneity in the characteristics of each person's social network of relationships (e.g., size, composition, and quality). Life course and life span theories and research about age-related patterns in social relationships focus on understanding changes in the processes underlying social connections. Antonucci's Convoy Model of Social Relations (2001; Kahn & Antonucci, 1980), for example, suggests that the social connections that people accumulate are held together by exchanges in social support (e.g., tangible and emotional). The frequency, types, and reciprocity of the exchanges change with age and in response to need, and in turn, these exchanges impact the health and well-being of the givers and receivers in the convoy. In many relationships, it is not the actual objective exchange of support that is critical but instead the perception that support is available if needed (Uchino, 2009). Carstensen’s Socioemotional Selectivity Theory (1993; Carstensen, Isaacowitz, & Charles, 1999) focuses on changes in motivation for actively seeking social contact with others. She proposes that with increasing age our motivational goals change from information gathering to emotion regulation. To optimize the experience of positive affect, older adults actively restrict their social life to prioritize time spent with emotionally close significant others. In line with this, older marriages are found to be characterized by enhanced positive and reduced negative interactions and older partners show more affectionate behavior during conflict discussions than do middle-aged partners (Carstensen, Gottman, & Levenson, 1995). 
Research showing that older adults have smaller networks compared to young adults and tend to avoid negative interactions also supports this theory. Similar selective processes are also observed when time horizons for interactions with close partners shrink temporarily for young adults (e.g., impending geographical separations). Much research focuses on the specific effects of long-term social relationships on health in later life. Older married individuals who receive positive social and emotional support from their partner generally report better health than their unmarried peers (Antonucci, 2001; Umberson, Williams, Powers, Liu, & Needham, 2006; Waite & Gallagher, 2000). Despite the overall positive health effects of being married in old age (compared with being widowed, divorced, or single), living as a couple can have a “dark side” if the relationship is strained or if one partner is the primary caregiver. The consequences of positive and negative aspects of relationships are complex (Birditt & Antonucci, 2008; Rook, 1998; Uchino, 2009). For example, in some circumstances, criticism from a partner may be perceived as valid and useful feedback, whereas in others it is considered unwarranted and hurtful. In long-term relationships, habitual negative exchanges might have diminished effects. Parent-child and sibling relationships are often the most long-term and emotion-laden social ties. Across the life span, the parent-child tie, for example, is characterized by a paradox of solidarity, conflict, and ambivalence (Fingerman, Chen, Hay, Cichy, & Lefkowitz, 2006). Emotion and Well-being As we get older, the likelihood of losing loved ones or experiencing declines in health increases. Does the experience of such losses result in decreases in well-being in older adulthood? Researchers have found that well-being differs across the life span and that the patterns of these differences depend on how well-being is measured. Measures of global subjective well-being assess individuals’ overall perceptions of their lives. This can include questions about life satisfaction or judgments of whether individuals are currently living the best life possible. What factors may contribute to how people respond to these questions? Age, health, personality, social support, and life experiences have been shown to influence judgments of global well-being. It is important to note that predictors of well-being may change as we age. What is important to life satisfaction in young adulthood can be different in later adulthood (George, 2010). Early research on well-being argued that life events such as marriage or divorce can temporarily influence well-being, but people quickly adapt and return to a neutral baseline (called the hedonic treadmill; Diener, Lucas, & Scollon, 2006). More recent research suggests otherwise. Using longitudinal data, researchers have examined well-being prior to, during, and after major life events such as widowhood, marriage, and unemployment (Lucas, 2007). Different life events influence well-being in different ways, and individuals do not often adapt back to baseline levels of well-being. Events such as unemployment may have a lasting negative influence on well-being as people age. Research suggests that global well-being is highest in early and later adulthood and lowest in midlife (Stone, Schwartz, Broderick, & Deaton, 2010).
Hedonic well-being refers to the emotional component of well-being and includes measures of positive (e.g., happiness, contentment) and negative affect (e.g., stress, sadness). The pattern of positive affect across the adult life span is similar to that of global well-being, with experiences of positive emotions such as happiness and enjoyment being highest in young and older adulthood. Experiences of negative affect, particularly stress and anger, tend to decrease with age. Experiences of sadness are lowest in early and later adulthood compared to midlife (Stone et al., 2010). Other research finds that older adults report more positive and less negative affect than middle-aged and younger adults (Magai, 2008; Mroczek, 2001). It should be noted that both global well-being and positive affect tend to taper off during late older adulthood, and these declines may be accounted for by increases in health-related losses during these years (Charles & Carstensen, 2010). Psychological well-being aims to evaluate the positive aspects of psychosocial development, as opposed to factors of ill-being, such as depression or anxiety. Ryff’s model of psychological well-being proposes six core dimensions of positive well-being. Older adults tend to report higher environmental mastery (feelings of competence and control in managing everyday life) and autonomy (independence), lower personal growth and purpose in life, and levels of positive relations with others similar to those of younger individuals (Ryff, 1995). Links between health and interpersonal flourishing, or having high-quality connections with others, may be important in understanding how to optimize quality of life in old age (Ryff & Singer, 2000). Successful Aging and Longevity Increases in average life expectancy in the 20th century and evidence from twin studies suggesting that genes account for only 25% of the variance in human life spans have opened new questions about implications for individuals and society (Christensen, Doblhammer, Rau, & Vaupel, 2009). What environmental and behavioral factors contribute to a healthy long life? Is it possible to intervene to slow processes of aging or to minimize cognitive decline, prevent dementia, and ensure life quality at the end of life (Fratiglioni, Paillard-Borg, & Winblad, 2004; Hertzog, Kramer, Wilson, & Lindenberger, 2009; Lang, Baltes, & Wagner, 2007)? Should interventions focus on late life, midlife, or indeed begin in early life? Suggestions that pathological change (e.g., dementia) is not an inevitable component of aging and that pathology could at least be delayed until the very end of life led to theories about successful aging and proposals about targets for intervention. Rowe and Kahn (1997) defined three criteria of successful aging: (a) the relative avoidance of disease, disability, and risk factors like high blood pressure, smoking, or obesity; (b) the maintenance of high physical and cognitive functioning; and (c) active engagement in social and productive activities. Although such definitions of successful aging are value-laden, research and behavioral interventions have subsequently been guided by this model. For example, research has suggested that age-related declines in cognitive functioning across the adult life span may be slowed through physical exercise and lifestyle interventions (Kramer & Erickson, 2007).
It is recognized, however, that societal and environmental factors also play a role and that there is much room for social change and technical innovation to accommodate the needs of the Baby Boomers and later generations as they age in the next decades. Outside Resources Web: Columbia Aging Society http://www.agingsocietynetwork.org/ Web: Columbia International Longevity Center www.mailman.columbia.edu/acad...ledge-transfer Web: National Institute on Aging http://www.nia.nih.gov/ Web: Stanford Center on Longevity http://longevity3.stanford.edu/ Discussion Questions 1. How do age stereotypes and intergenerational social interactions shape quality of life in older adults? What are the implications of the research of Levy and others? 2. Researchers suggest that there is both stability and change in Big Five personality traits after age 30. What is stable? What changes? 3. Describe the Social Convoy Model of Antonucci. What are the implications of this model for older adults? 4. Memory declines during adulthood. Is this statement correct? What does research show? 5. Is dementia inevitable in old age? What factors are currently thought to be protective? 6. What are the components of successful aging described by Rowe and Kahn (1998) and others? What outcomes are used to evaluate successful aging? Vocabulary Age identity How old or young people feel compared to their chronological age; after early adulthood, most people feel younger than their chronological age. Autobiographical narratives A qualitative research method used to understand characteristics and life themes that an individual considers to uniquely distinguish him- or herself from others. Average life expectancy Mean number of years that 50% of people in a specific birth cohort are expected to survive. This is typically calculated from birth but is also sometimes re-calculated for people who have already reached a particular age (e.g., 65). Cohort Group of people typically born in the same year or historical period, who share common experiences over time; sometimes called a generation (e.g., Baby Boom Generation). Convoy Model of Social Relations Theory that proposes that the frequency, types, and reciprocity of social exchanges change with age. These social exchanges impact the health and well-being of the givers and receivers in the convoy. Cross-sectional studies Research method that provides information about age group differences; age differences are confounded with cohort differences and effects related to history and time of study. Crystallized intelligence Type of intellectual ability that relies on the application of knowledge, experience, and learned information. Fluid intelligence Type of intelligence that relies on the ability to use information processing resources to reason logically and solve novel problems. Global subjective well-being Individuals’ perceptions of and satisfaction with their lives as a whole. Hedonic well-being Component of well-being that refers to emotional experiences, often including measures of positive (e.g., happiness, contentment) and negative affect (e.g., stress, sadness). Heterogeneity Inter-individual and subgroup differences in level and rate of change over time. Inhibitory functioning Ability to focus on a subset of information while suppressing attention to less relevant information. Intra- and inter-individual differences Different patterns of development observed within an individual (intra-) or between individuals (inter-).
Life course theories Theory of development that highlights the effects of social expectations of age-related life events and social roles; additionally considers the lifelong cumulative effects of membership in specific cohorts and sociocultural subgroups and exposure to historical events. Life span theories Theory of development that emphasizes the patterning of lifelong within- and between-person differences in the shape, level, and rate of change trajectories. Longitudinal studies Research method that collects information from individuals at multiple time points over time, allowing researchers to track cohort differences in age-related change to determine cumulative effects of different life experiences. Processing speed The time it takes individuals to perform cognitive operations (e.g., process information, react to a signal, switch attention from one task to another, find a specific target object in a complex picture). Psychometric approach Approach to studying intelligence that examines performance on tests of intellectual functioning. Recall Type of memory task where individuals are asked to remember previously learned information without the help of external cues. Recognition Type of memory task where individuals are asked to remember previously learned information with the assistance of cues. Self-perceptions of aging An individual’s perceptions of their own aging process; positive perceptions of aging have been shown to be associated with greater longevity and health. Social network Network of people with whom an individual is closely connected; social networks provide emotional, informational, and material support and offer opportunities for social engagement. Socioemotional Selectivity Theory Theory proposed to explain the reduction of social partners in older adulthood; posits that older adults focus on meeting emotional over information-gathering goals, and adaptively select social partners who meet this need. Subjective age A multidimensional construct that indicates how old (or young) a person feels and into which age group a person categorizes him- or herself. Successful aging Includes three components: avoiding disease, maintaining high levels of cognitive and physical functioning, and having an actively engaged lifestyle. Working memory Memory system that allows for information to be simultaneously stored and utilized or manipulated.
By R. Chris Fraley University of Illinois at Urbana-Champaign The purpose of this module is to provide a brief review of attachment theory—a theory designed to explain the significance of the close, emotional bonds that children develop with their caregivers and the implications of those bonds for understanding personality development. The module discusses the origins of the theory, research on individual differences in attachment security in infancy and childhood, and the role of attachment in adult relationships. learning objectives • Explain the way the attachment system works and its evolutionary significance. • Identify three commonly studied attachment patterns and what is known about the development of those patterns. • Describe what is known about the consequences of secure versus insecure attachment in adult relationships. Introduction Some of the most rewarding experiences in people’s lives involve the development and maintenance of close relationships. For example, some of the greatest sources of joy involve falling in love, starting a family, being reunited with distant loved ones, and sharing experiences with close others. And, not surprisingly, some of the most painful experiences in people’s lives involve the disruption of important social bonds, such as separation from a spouse, losing a parent, or being abandoned by a loved one. Why do close relationships play such a profound role in human experience? Attachment theory is one approach to understanding the nature of close relationships. In this module, we review the origins of the theory, the core theoretical principles, and some ways in which attachment influences human behavior, thoughts, and feelings across the life course. Attachment Theory: A Brief History and Core Concepts Attachment theory was originally developed in the 1940s by John Bowlby, a British psychoanalyst who was attempting to understand the intense distress experienced by infants who had been separated from their parents. Bowlby (1969) observed that infants would go to extraordinary lengths to prevent separation from their parents or to reestablish proximity to a missing parent. For example, he noted that children who had been separated from their parents would often cry, call for their parents, refuse to eat or play, and stand at the door in desperate anticipation of their parents’ return. At the time of Bowlby’s initial writings, psychoanalytic writers held that these expressions were manifestations of immature defense mechanisms that were operating to repress emotional pain. However, Bowlby observed that such expressions are common to a wide variety of mammalian species and speculated that these responses to separation may serve an evolutionary function (see Focus Topic 1). Focus Topic 1: Harlow's research on contact comfort When Bowlby was originally developing his theory of attachment, there were alternative theoretical perspectives on why infants were emotionally attached to their primary caregivers (most often, their biological mothers). Bowlby and other theorists, for example, believed that there was something important about the responsiveness and contact provided by mothers. Other theorists, in contrast, argued that young infants feel emotionally connected to their mothers because mothers satisfy more basic needs, such as the need for food. 
That is, the child comes to feel emotionally connected to the mother because she is associated with the reduction of primary drives, such as hunger, rather than the reduction of drives that might be relational in nature. In a classic set of studies, psychologist Harry Harlow placed young monkeys in cages that contained two artificial, surrogate “mothers” (Harlow, 1958). One of those surrogates was a simple wire contraption; the other was a wire contraption covered in cloth. Both of the surrogate mothers were equipped with a feeding tube so that Harlow and his colleagues had the option to allow the surrogate to deliver or not deliver milk. Harlow found that the young macaques spent a disproportionate amount of time with the cloth surrogate as opposed to the wire surrogate. Moreover, this was true even when the infants were fed by the wire surrogate rather than the cloth surrogate. This suggests that the strong emotional bond that infants form with their primary caregivers is rooted in something more than whether the caregiver provides food per se. Harlow’s research is now regarded as one of the first experimental demonstrations of the importance of “contact comfort” in the establishment of infant–caregiver bonds. Drawing on evolutionary theory, Bowlby (1969) argued that these behaviors are adaptive responses to separation from a primary attachment figure—a caregiver who provides support, protection, and care. Because human infants, like other mammalian infants, cannot feed or protect themselves, they are dependent upon the care and protection of “older and wiser” adults for survival. Bowlby argued that, over the course of evolutionary history, infants who were able to maintain proximity to an attachment figure would be more likely to survive to a reproductive age. According to Bowlby, a motivational system, what he called the attachment behavioral system, was gradually “designed” by natural selection to regulate proximity to an attachment figure. The attachment system functions much like a thermostat that continuously monitors the ambient temperature of a room, comparing that temperature against a desired state and adjusting behavior (e.g., activating the furnace) accordingly. In the case of the attachment system, Bowlby argued that the system continuously monitors the accessibility of the primary attachment figure. If the child perceives the attachment figure to be nearby, accessible, and attentive, then the child feels loved, secure, and confident and, behaviorally, is likely to explore his or her environment, play with others, and be sociable. If, however, the child perceives the attachment figure to be inaccessible, the child experiences anxiety and, behaviorally, is likely to exhibit attachment behaviors ranging from simple visual searching on the low extreme to active searching, following, and vocal signaling on the other. These attachment behaviors continue either until the child is able to reestablish a desirable level of physical or psychological proximity to the attachment figure or until the child exhausts himself or herself or gives up, as may happen in the context of a prolonged separation or loss. Individual Differences in Infant Attachment Although Bowlby believed that these basic dynamics captured the way the attachment system works in most children, he recognized that there are individual differences in the way children appraise the accessibility of the attachment figure and how they regulate their attachment behavior in response to threats.
However, it was not until his colleague, Mary Ainsworth, began to systematically study infant–parent separations that a formal understanding of these individual differences emerged. Ainsworth and her students developed a technique called the strange situation—a laboratory task for studying infant–parent attachment (Ainsworth, Blehar, Waters, & Wall, 1978). In the strange situation, 12-month-old infants and their parents are brought to the laboratory and, over a period of approximately 20 minutes, are systematically separated from and reunited with one another. In the strange situation, most children (about 60%) behave in the way implied by Bowlby’s normative theory. Specifically, they become upset when the parent leaves the room, but, when he or she returns, they actively seek the parent and are easily comforted by him or her. Children who exhibit this pattern of behavior are often called secure. Other children (about 20% or less) are ill at ease initially and, upon separation, become extremely distressed. Importantly, when reunited with their parents, these children have a difficult time being soothed and often exhibit conflicting behaviors that suggest they want to be comforted, but that they also want to “punish” the parent for leaving. These children are often called anxious-resistant. The third pattern of attachment that Ainsworth and her colleagues documented is often labeled avoidant. Avoidant children (about 20%) do not consistently behave as if they are stressed by the separation but, upon reunion, actively avoid seeking contact with their parent, sometimes turning their attention to play objects on the laboratory floor. Ainsworth’s work was important for at least three reasons. First, she provided one of the first empirical demonstrations of how attachment behavior is organized in unfamiliar contexts. Second, she provided the first empirical taxonomy of individual differences in infant attachment patterns. According to her research, at least three types of children exist: those who are secure in their relationship with their parents, those who are anxious-resistant, and those who are anxious-avoidant. Finally, she demonstrated that these individual differences were correlated with infant–parent interactions in the home during the first year of life. Children who appear secure in the strange situation, for example, tend to have parents who are responsive to their needs. Children who appear insecure in the strange situation (i.e., anxious-resistant or avoidant) often have parents who are insensitive to their needs, or inconsistent or rejecting in the care they provide. Antecedents of Attachment Patterns In the years that have followed Ainsworth’s ground-breaking research, researchers have investigated a variety of factors that may help determine whether children develop secure or insecure relationships with their primary attachment figures. As mentioned above, one of the key determinants of attachment patterns is the history of sensitive and responsive interactions between the caregiver and the child. In short, when the child is uncertain or stressed, the ability of the caregiver to provide support to the child is critical for his or her psychological development. It is assumed that such supportive interactions help the child learn to regulate his or her emotions, give the child the confidence to explore the environment, and provide the child with a safe haven during stressful circumstances. 
Evidence for the role of sensitive caregiving in shaping attachment patterns comes from longitudinal and experimental studies. For example, Grossmann, Grossmann, Spangler, Suess, and Unzner (1985) studied parent–child interactions in the homes of 54 families, up to three times during the first year of the child’s life. At 12 months of age, infants and their mothers participated in the strange situation. Grossmann and her colleagues found that children who were classified as secure in the strange situation at 12 months of age were more likely than children classified as insecure to have mothers who provided responsive care to their children in the home environment. Van den Boom (1994) developed an intervention that was designed to enhance maternal sensitive responsiveness. When the infants were 9 months of age, the mothers in the intervention group were rated as more responsive and attentive in their interaction with their infants compared to mothers in the control group. In addition, their infants were rated as more sociable, self-soothing, and more likely to explore the environment. At 12 months of age, children in the intervention group were more likely to be classified as secure than insecure in the strange situation. Attachment Patterns and Child Outcomes Attachment researchers have studied the association between children’s attachment patterns and their adaptation over time. Researchers have learned, for example, that children who are classified as secure in the strange situation are more likely to have high functioning relationships with peers, to be evaluated favorably by teachers, and to persist with more diligence in challenging tasks. In contrast, insecure-avoidant children are more likely to be construed as “bullies” or to have a difficult time building and maintaining friendships (Weinfield, Sroufe, Egeland, & Carlson, 2008). Attachment in Adulthood Although Bowlby was primarily focused on understanding the nature of the infant–caregiver relationship, he believed that attachment characterized human experience across the life course. It was not until the mid-1980s, however, that researchers began to take seriously the possibility that attachment processes may be relevant to adulthood. Hazan and Shaver (1987) were two of the first researchers to explore Bowlby’s ideas in the context of romantic relationships. According to Hazan and Shaver, the emotional bond that develops between adult romantic partners is partly a function of the same motivational system—the attachment behavioral system—that gives rise to the emotional bond between infants and their caregivers. Hazan and Shaver noted that in both kinds of relationship, people (a) feel safe and secure when the other person is present; (b) turn to the other person during times of sickness, distress, or fear; (c) use the other person as a “secure base” from which to explore the world; and (d) speak to one another in a unique language, often called “motherese” or “baby talk.” (See Focus Topic 2) Focus Topic 2: Attachment and social media Social media websites and mobile communication services are coming to play an increasing role in people’s lives. Many people use Facebook, for example, to keep in touch with family and friends, to update their loved ones regarding things going on in their lives, and to meet people who share similar interests. Moreover, modern cellular technology allows people to get in touch with their loved ones much more easily than was possible a mere 20 years ago.
From an attachment perspective, these innovations in communications technology are important because they allow people to stay connected virtually to their attachment figures—regardless of the physical distance that might exist between them. Recent research has begun to examine how attachment processes play out in the use of social media. Oldmeadow, Quinn, and Kowert (2013), for example, studied a diverse sample of individuals and assessed their attachment security and their use of Facebook. Oldmeadow and colleagues found that the use of Facebook may serve attachment functions. For example, people were more likely to report using Facebook to connect with others when they were experiencing negative emotions. In addition, the researchers found that people who were more anxious in their attachment orientation were more likely to use Facebook frequently, but people who were more avoidant used Facebook less and were less open on the site. On the basis of these parallels, Hazan and Shaver (1987) argued that adult romantic relationships, like infant–caregiver relationships, are attachments. According to Hazan and Shaver, individuals gradually transfer attachment-related functions from parents to peers as they develop. Thus, although young children tend to use their parents as their primary attachment figures, as they reach adolescence and young adulthood, they come to rely more upon close friends and/or romantic partners for basic attachment-related functions. Thus, although a young child may turn to his or her mother for comfort, support, and guidance when distressed, scared, or ill, young adults may be more likely to turn to their romantic partners for these purposes under similar situations. Hazan and Shaver (1987) asked a diverse sample of adults to read the three paragraphs below and indicate which paragraph best characterized the way they think, feel, and behave in close relationships: 1. I am somewhat uncomfortable being close to others; I find it difficult to trust them completely, difficult to allow myself to depend on them. I am nervous when anyone gets too close, and often, others want me to be more intimate than I feel comfortable being. 2. I find it relatively easy to get close to others and am comfortable depending on them and having them depend on me. I don’t worry about being abandoned or about someone getting too close to me. 3. I find that others are reluctant to get as close as I would like. I often worry that my partner doesn’t really love me or won’t want to stay with me. I want to get very close to my partner, and this sometimes scares people away. Conceptually, these descriptions were designed to represent what Hazan and Shaver considered to be adult analogues of the kinds of attachment patterns Ainsworth described in the strange situation (avoidant, secure, and anxious, respectively). Hazan and Shaver (1987) found that the distribution of the three patterns was similar to that observed in infancy. In other words, about 60% of adults classified themselves as secure (paragraph 2), about 20% described themselves as avoidant (paragraph 1), and about 20% described themselves as anxious-resistant (paragraph 3). Moreover, they found that people who described themselves as secure, for example, were more likely to report having had warm and trusting relationships with their parents when they were growing up. In addition, they were more likely to have positive views of romantic relationships.
Based on these findings, Hazan and Shaver (1987) concluded that the same kinds of individual differences that exist in infant attachment also exist in adulthood. Research on Attachment in Adulthood Attachment theory has inspired a large amount of literature in social, personality, and clinical psychology. In the sections below, I provide a brief overview of some of the major research questions and what researchers have learned about attachment in adulthood. Who Ends Up with Whom? When people are asked what kinds of psychological or behavioral qualities they are seeking in a romantic partner, a large majority of people indicate that they are seeking someone who is kind, caring, trustworthy, and understanding—the kinds of attributes that characterize a “secure” caregiver (Chappell & Davis, 1998). But we know that people do not always end up with others who meet their ideals. Are secure people more likely to end up with secure partners—and, vice versa, are insecure people more likely to end up with insecure partners? The majority of the research that has been conducted to date suggests that the answer is “yes.” Frazier, Byer, Fischer, Wright, and DeBord (1996), for example, studied the attachment patterns of more than 83 heterosexual couples and found that, if the man was relatively secure, the woman was also likely to be secure. One important question is whether these findings exist because (a) secure people are more likely to be attracted to other secure people, (b) secure people are likely to create security in their partners over time, or (c) some combination of these possibilities. Existing empirical research strongly supports the first alternative. For example, when people have the opportunity to interact with individuals who vary in security in a speed-dating context, they express a greater interest in those who are higher in security than those who are more insecure (McClure, Lydon, Baccus, & Baldwin, 2010). However, there is also some evidence that people’s attachment styles mutually shape one another in close relationships. For example, in a longitudinal study, Hudson, Fraley, Vicary, and Brumbaugh (2012) found that, if one person in a relationship experienced a change in security, his or her partner was likely to experience a change in the same direction. Relationship Functioning Research has consistently demonstrated that individuals who are relatively secure are more likely than insecure individuals to have high functioning relationships—relationships that are more satisfying, more enduring, and less characterized by conflict. For example, Feeney and Noller (1992) found that insecure individuals were more likely than secure individuals to experience a breakup of their relationship. In addition, secure individuals are more likely to report satisfying relationships (e.g., Collins & Read, 1990) and are more likely to provide support to their partners when their partners are feeling distressed (Simpson, Rholes, & Nelligan, 1992). Do Early Experiences Shape Adult Attachment? The majority of research on this issue is retrospective—that is, it relies on adults’ reports of what they recall about their childhood experiences. This kind of work suggests that secure adults are more likely to describe their early childhood experiences with their parents as being supportive, loving, and kind (Hazan & Shaver, 1987).
A number of longitudinal studies are emerging that demonstrate prospective associations between early attachment experiences and adult attachment styles and/or interpersonal functioning in adulthood. For example, Fraley, Roisman, Booth-LaForce, Owen, and Holland (2013) found in a sample of more than 700 individuals studied from infancy to adulthood that maternal sensitivity across development prospectively predicted security at age 18. Simpson, Collins, Tran, and Haydon (2007) found that attachment security, assessed in infancy in the strange situation, predicted peer competence in grades 1 to 3, which, in turn, predicted the quality of friendship relationships at age 16, which, in turn, predicted the expression of positive and negative emotions in their adult romantic relationships at ages 20 to 23. It is easy to come away from such findings with the mistaken assumption that early experiences “determine” later outcomes. To be clear: Attachment theorists assume that the relationship between early experiences and subsequent outcomes is probabilistic, not deterministic. Having supportive and responsive experiences with caregivers early in life is assumed to set the stage for positive social development. But that does not mean that attachment patterns are set in stone. In short, even if an individual has far from optimal experiences in early life, attachment theory suggests that it is possible for that individual to develop well-functioning adult relationships through a number of corrective experiences—including relationships with siblings, other family members, teachers, and close friends. Security is best viewed as a culmination of a person’s attachment history rather than a reflection of his or her early experiences alone. Those early experiences are considered important not because they determine a person’s fate, but because they provide the foundation for subsequent experiences.
Outside Resources
Hazan, C., & Shaver, P. (1987). Romantic love conceptualized as an attachment process. Journal of Personality and Social Psychology, 52, 511-524. Retrieved from: http://www2.psych.ubc.ca/~schaller/P...Shaver1987.pdf
Hofer, M. A. (2006). Psychobiological roots of early attachment. Current Directions in Psychological Science, 15, 84-88. doi:10.1111/j.0963-7214.2006.00412.x http://cdp.sagepub.com/content/15/2/84.short
Strange Situation Video
Survey: Learn more about your attachment patterns via this online survey http://www.yourpersonality.net/relstructures/
Video on Harry Harlow’s Research with Rhesus Monkeys
Discussion Questions
1. What kind of relationship did you have with your parents or primary caregivers when you were young? Do you think that had any bearing on the way you related to others (e.g., friends, relationship partners) as you grew older?
2. There is variation across cultures in the extent to which people value independence. Do you think this might have implications for the development of attachment patterns?
3. As parents age, it is not uncommon for them to have to depend on their adult children. Do you think that people’s history of experiences in their relationships with their parents might shape people’s willingness to provide care for their aging parents? In other words, are secure adults more likely to provide responsive care to their aging parents?
4. Some people, despite reporting insecure relationships with their parents, report secure, well-functioning relationships with their spouses.
What kinds of experiences do you think might enable someone to develop a secure relationship with their partners despite having an insecure relationship with other central figures in their lives?
5. Most attachment research on adults focuses on attachment to peers (e.g., romantic partners). What other kinds of things may serve as attachment figures? Do you think siblings, pets, or gods can serve as attachment figures?
Vocabulary
Attachment behavioral system: A motivational system selected over the course of evolution to maintain proximity between a young child and his or her primary attachment figure.
Attachment behaviors: Behaviors and signals that attract the attention of a primary attachment figure and function to prevent separation from that individual or to reestablish proximity to that individual (e.g., crying, clinging).
Attachment figure: Someone who functions as the primary safe haven and secure base for an individual. In childhood, an individual’s attachment figure is often a parent. In adulthood, an individual’s attachment figure is often a romantic partner.
Attachment patterns (also called “attachment styles” or “attachment orientations”): Individual differences in how securely (vs. insecurely) people think, feel, and behave in attachment relationships.
Strange situation: A laboratory task that involves briefly separating and reuniting infants and their primary caregivers as a way of studying individual differences in attachment behavior.
• 7.1: Consciousness Consciousness is the ultimate mystery. What is it and why do we have it? These questions are difficult to answer, even though consciousness is so fundamental to our existence. Perhaps the natural world could exist largely as it is without human consciousness; but taking away consciousness would essentially take away our humanity.
• 7.2: The Unconscious Unconscious psychological processes have fascinated people for a very long time. Not only does logic dictate that action starts unconsciously, but research strongly suggests this too. Moreover, unconscious processes are very often highly important for human functioning.
• 7.3: States of Consciousness No matter what you’re doing--completing homework, playing a video game, simply picking out a shirt--all of your actions and decisions relate to your consciousness. But as frequently as we use it, have you ever stopped to ask yourself: What really is consciousness? In this module, we discuss the different levels of consciousness and how they can affect your behavior in a variety of situations. As well, we explore the role of consciousness in other, “altered” states like hypnosis and sleep.
• 7.4: Theory of Mind One of the most remarkable human capacities is to perceive and understand mental states. This capacity, often labeled “theory of mind,” consists of an array of psychological processes that play essential roles in human social life. We review some of these roles, examine what happens when the capacity is deficient, and explore the many processes that make up the capacity to understand minds.
• 7.5: Intelligence Intelligence is among the oldest and longest studied topics in all of psychology. The development of assessments to measure this concept is at the core of the development of psychological science itself. This module introduces key historical figures, major theories of intelligence, and common assessment strategies related to intelligence. This module will also discuss controversies related to the study of group differences in intelligence.
• 7.6: Language and Language Use Humans have the capacity to use complex language, far more than any other species on Earth. We cooperate with each other to use language for communication; language is often used to communicate about and even construct and maintain our social world. Language use and human sociality are inseparable parts of Homo sapiens as a biological species.
• 7.7: Judgment and Decision Making Humans are not perfect decision makers. Not only are we not perfect, but we depart from perfection or rationality in systematic and predictable ways. The understanding of these systematic and predictable departures is core to the field of judgment and decision making. By understanding these limitations, we can also identify strategies for making better and more effective decisions.
• 7.8: Categories and Concepts People form mental concepts of categories of objects, which permit them to respond appropriately to new objects they encounter. Most concepts cannot be strictly defined but are organized around the “best” examples or prototypes, which have the properties most common in the category. Objects fall into many categories, but there is usually a most salient one, called the basic-level category, which is at an intermediate level of specificity (e.g. chairs, rather than furniture or desk chairs).
• 7.9: Attention We use the term “attention” all the time, but what processes or abilities does that concept really refer to?
This module will focus on how attention allows us to select certain parts of our environment and ignore other parts, and what happens to the ignored information. A key concept is the idea that we are limited in how much we can do at any one time. So we will also consider what happens when someone tries to do several things at once, such as driving while using electronic devices. Chapter 7: Cognition and Language By Ken Paller and Satoru Suzuki Northwestern University Consciousness is the ultimate mystery. What is it and why do we have it? These questions are difficult to answer, even though consciousness is so fundamental to our existence. Perhaps the natural world could exist largely as it is without human consciousness; but taking away consciousness would essentially take away our humanity. Psychological science has addressed questions about consciousness in part by distinguishing neurocognitive functions allied with conscious experience from those that transpire without conscious experience. The continuing investigation of these sorts of distinctions is yielding an empirical basis for new hypotheses about the precursors of conscious experience. Richer conceptualizations are thus being built, combining first-person and third-person perspectives to provide new clues to the mystery of consciousness. learning objectives • Understand scientific approaches to comprehending consciousness. • Be familiar with evidence about human vision, memory, body awareness, and decision making relevant to the study of consciousness. • Appreciate some contemporary theories about consciousness. Conscious Experiences Contemplate the unique experience of being you at this moment! You, and only you, have direct knowledge of your own conscious experiences. At the same time, you cannot know consciousness from anyone else’s inside view. How can we begin to understand this fantastic ability to have private, conscious experiences? In a sense, everything you know is from your own vantage point, with your own consciousness at the center. Yet the scientific study of consciousness confronts the challenge of producing general understanding that goes beyond what can be known from one individual’s perspective. To delve into this topic, some terminology must first be considered. The term consciousness can denote the ability of a person to generate a series of conscious experiences one after another. Here we include experiences of feeling and understanding sensory input, of a temporal sequence of autobiographical events, of imagination, of emotions and moods, of ideas, of memories—the whole range of mental contents open to an individual. Consciousness can also refer to the state of an individual, as in a sharp or dull state of consciousness, a drug-induced state such as euphoria, or a diminished state due to drowsiness, sleep, neurological abnormality, or coma. In this module, we focus not on states of consciousness or on self-consciousness, but rather on the process that unfolds in the course of a conscious experience—a moment of awareness—the essential ingredient of consciousness. Other Minds You have probably experienced the sense of knowing exactly what a friend is thinking. Various signs can guide our inferences about consciousness in others. We can try to infer what’s going on in someone else’s mind by relying on the assumption that they feel what we imagine we would feel in the same situation. 
We might account for someone’s actions or emotional expressions through our knowledge of that individual and our careful observations of their behavior. In this way, we often display substantial insight into what they are thinking. Other times we are completely wrong. By measuring brain activity using various neuroscientific technologies, we can acquire additional information useful for deciphering another person’s state of mind. In special circumstances such inferences can be highly accurate, but limitations on mind reading remain, highlighting the difficulty of understanding exactly how conscious experiences arise. A Science of Consciousness Attempts to understand consciousness have been pervasive throughout human history, mostly dominated by philosophical analyses focused on the first-person perspective. Now we have a wider set of approaches that includes philosophy, psychology, neuroscience, cognitive science, and contemplative science (Blackmore, 2006; Koch, 2012; Zelazo, Moscovitch, & Thompson, 2007; Zeman, 2002). The challenge for this combination of approaches is to give a comprehensive explanation of consciousness. That explanation would include describing the benefits of consciousness, particularly the behavioral capabilities that conscious experiences allow and that trump automatic behaviors. Subjective experiences also need to be described in a way that logically shows how they result from precursor events in the human brain. Moreover, a full account would describe how consciousness depends on biological, environmental, social, cultural, and developmental factors. At the outset, a central question is how to conceive of consciousness relative to other things we know. Objects in our environment have a physical basis and are understood to be composed of constituents, such that they can be broken down into molecules, elements, atoms, particles, and so on. Yet we can also understand things relationally and conceptually. Sometimes a phenomenon can best be conceived as a process rather than a physical entity (e.g., digestion is a process whereby food is broken down). What, then, is the relationship between our conscious thoughts and the physical universe, and in particular, our brains? René Descartes’s position, dualism, was that mental and physical are, in essence, different substances. This view can be contrasted with reductionist views that mental phenomena can be explained via descriptions of physical phenomena. Although the dualism/reductionism debate continues, there are many ways in which mind can be shown to depend on the brain. A prominent orientation to the scientific study of consciousness is to seek understanding of these dependencies—to see how much light they can shed on consciousness. Significant advances in our knowledge about consciousness have thus been gained, as seen in the following examples. Conscious Experiences of Visual Perception Suppose you meet your friend at a crowded train station. You may notice a subtle smile on her face. At that moment you are probably unaware of many other things happening within your view. What makes you aware of some things but not others? You probably have your own intuitions about this, but experiments have proven wrong many common intuitions about what generates visual awareness. For instance, you may think that if you attentively look at a bright spot, you must be aware of it. Not so. In a phenomenon known as motion-induced blindness, bright discs completely vanish from your awareness even while you attend to them fully.
To experience this for yourself, see this module's Outside Resource section for a demonstration of motion-induced blindness. You may think that if you deeply analyze an image, decoding its meaning and making a decision about it, you must be aware of the image. Not necessarily. When a number is briefly flashed and rapidly replaced by a random pattern, you may have no awareness of it, despite the fact that your brain allows you to determine that the number is greater than 5, and then prepare your right hand for a key press if that is what you were instructed to do (Dehaene et al., 1998). Thus, neither the brightness of an image, paying full attention to it, nor deeply analyzing it guarantees that you will be aware of it. What, then, is the crucial ingredient of visual awareness? A contemporary answer is that our awareness of a visual feature depends on a certain type of reciprocal exchange of information across multiple brain areas, particularly in the cerebral cortex. In support of this idea, directly activating your visual motion area (known as V5) with an externally applied magnetic field (transcranial magnetic stimulation) will make you see moving dots. This is not surprising. What is surprising is that activating your visual motion area alone does not let you see motion. You will not see moving dots if the feedback signal from V5 to the primary visual cortex is disrupted by a further transcranial magnetic stimulation pulse (Pascual-Leone & Walsh, 2001). The reverberating reciprocal exchange of information between higher-level visual areas and primary visual cortex appears to be essential for generating visual awareness. This idea can also explain why people with certain types of brain damage lack visual awareness. Consider a patient with brain damage limited to primary visual cortex who claims not to see anything — a problem termed cortical blindness. Other areas of visual cortex may still receive visual input through projections from brain structures such as the thalamus and superior colliculus, and these networks may mediate some preserved visual abilities that take place without awareness. For example, a patient with cortical blindness might detect moving stimuli via V5 activation but still have no conscious experiences of the stimuli, because the reverberating reciprocal exchange of information cannot take place between V5 and the damaged primary visual cortex. The preserved ability to detect motion might be evident only when a guess is required (“guess whether something moved to the left or right”)—otherwise the answer would be “I didn’t see anything.” This phenomenon of blindsight refers to blindness due to a neurological cause that preserves abilities to analyze and respond to visual stimuli that are not consciously experienced (Lamme, 2001). If exchanges of information across brain areas are crucial for generating visual awareness, neural synchronization must play an important role because it promotes neural communication. A neuron’s excitability varies over time. Communication among neural populations is enhanced when their oscillatory cycles of excitability are synchronized. In this way, information transmitted from one population in its excitable phase is received by the target population when it is also in its excitable phase. Indeed, oscillatory neural synchronization in the beta- and gamma-band frequencies (identified according to the number of oscillations per second, 13–30 Hz and 30–100 Hz, respectively) appears to be closely associated with visual awareness.
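The gating role of synchronization can be made concrete with a minimal sketch in Python. Everything in it is an illustrative assumption rather than a model of real neurons: the 40 Hz gamma-band rhythm, the arbitrary excitability threshold, and the simplification that a signal "gets through" only when sender and receiver are excitable at the same moment.

import numpy as np

# Toy illustration of communication through neural synchronization.
# Two populations oscillate in excitability; a signal sent while the
# sender is excitable is received only if the receiver is excitable
# at that same moment. Parameters are illustrative, not measured.
rate_hz = 40.0                   # assumed gamma-band rhythm
t = np.arange(0.0, 1.0, 0.001)   # one second at 1-ms resolution

def excitable(phase_offset):
    # Time points at which a population is in its excitable phase.
    return np.sin(2 * np.pi * rate_hz * t + phase_offset) > 0.5

sender = excitable(0.0)
for label, offset in [("in phase", 0.0), ("anti-phase", np.pi)]:
    receiver = excitable(offset)
    # Fraction of the sender's excitable moments that coincide with
    # an excitable moment in the receiver.
    throughput = (sender & receiver).sum() / sender.sum()
    print(f"{label}: {throughput:.0%} of signals arrive in a receptive window")

Under these assumptions, the in-phase pair passes essentially every signal while the anti-phase pair passes none, which is the intuition behind linking beta- and gamma-band synchronization to the sharing of information across brain areas.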
This idea is highlighted in the Global Neuronal Workspace Theory of Consciousness (Dehaene & Changeux, 2011), in which sharing of information among prefrontal, inferior parietal, and occipital regions of the cerebral cortex is postulated to be especially important for generating awareness. A related view, the Information Integration Theory of Consciousness, is that shared information itself constitutes consciousness (Tononi, 2004). An organism would have minimal consciousness if the structure of shared information is simple, whereas it would have rich conscious experiences if the structure of shared information is complex. Roughly speaking, complexity is defined as the number of intricately interrelated informational units or ideas generated by a web of local and global sharing of information. The degree of consciousness in an organism (or a machine) would be high if numerous and diversely interrelated ideas arise, low if only a few ideas arise or if there are numerous ideas but they are random and unassociated. Computational analyses provide additional perspectives on such proposals. In particular, if every neuron is connected to every other neuron, all neurons would tend to activate together, generating few distinctive ideas. With a very low level of neuronal connectivity at the other extreme, all neurons would tend to activate independently, generating numerous but unassociated ideas. To promote a rich level of consciousness, then, a suitable mixture of short-, medium-, and long-range neural connections would be needed. The human cerebral cortex may indeed have such an optimum structure of neural connectivity. Given how consciousness is conceptualized in this theory as graded rather than all-or-none, a quantitative approach (e.g., Casali et al., 2013; Monti et al., 2013) could conceivably be used to estimate the level of consciousness in nonhuman species and artificial beings. Conscious Experiences of Memory The pinnacle of conscious human memory functions is known as episodic recollection because it allows one to reexperience the past, to virtually relive an earlier event. People who suffer from amnesia due to neurological damage to certain critical brain areas have poor memory for events and facts. Their memory deficit disrupts the type of memory termed declarative memory and makes it difficult to consciously remember. However, amnesic insults typically spare a set of memory functions that do not involve conscious remembering. These other types of memory, which include various habits, motor skills, cognitive skills, and procedures, can be demonstrated when an individual executes various actions as a function of prior learning, but in these cases a conscious experience of remembering is not necessarily included. Research on amnesia has thus supported the proposal that conscious remembering requires a specific set of brain operations that depend on networks of neurons in the cerebral cortex. Some of the other types of memory involve only subcortical brain regions, but there are also notable exceptions. In particular, perceptual priming is a type of memory that does not entail the conscious experience of remembering and that is typically preserved in amnesia. Perceptual priming is thought to reflect a fluency of processing produced by a prior experience, even when the individual cannot remember that prior experience. For example, a word or face might be perceived more efficiently if it had been viewed minutes earlier than if it hadn’t. 
Whereas a person with amnesia can demonstrate this item-specific fluency due to changes in corresponding cortical areas, they nevertheless would be impaired if asked to recognize the words or faces they previously experienced. A reasonable conclusion on the basis of this evidence is that remembering an episode is a conscious experience not merely due to the involvement of one portion of the cerebral cortex, but rather due to the specific configuration of cortical activity involved in the sharing or integration of information. Further neuroscientific studies of memory retrieval have shed additional light on the necessary steps for conscious recollection. For example, storing memories for the events we experience each day appears to depend on connections among multiple cortical regions as well as on a brain structure known as the hippocampus. Memory storage becomes more secure due to interactions between the hippocampus and cerebral cortex that can transpire over extended time periods following the initial registration of information. Conscious retrieval thus depends on the activity of elaborate sets of networks in the cortex. Memory retrieval that does not include conscious recollection depends either on restricted portions of the cortex or on brain regions separate from the cortex. The ways in which memory expressions that include the awareness of remembering differ from those that do not thus highlight the special nature of conscious memory experiences (Paller, Voss, & Westerberg, 2009; Voss, Lucas, & Paller, 2012). Indeed, memory storage in the brain can be very complex for many different types of memory, but there are specific physiological prerequisites for the type of memory that coincides with conscious recollection. Conscious Experiences of Body Awareness The brain can generate body awareness by registering coincident sensations. For example, when you rub your arm, you see your hand rubbing your arm and simultaneously feel the rubbing sensation in both your hand and your arm. This simultaneity tells you that it is your hand and your arm. Infants use the same type of coincident sensations to initially develop the self/nonself distinction that is fundamental to our construal of the world. The fact that your brain constructs body awareness in this way can be experienced via the rubber-hand illusion (see Outside Resource on this). If you see a rubber hand being rubbed and simultaneously feel the corresponding rubbing sensation on your own body out of view, you will momentarily feel a bizarre sensation—that the rubber hand is your own. The construction of our body awareness appears to be mediated by specific brain mechanisms involving a region of the cortex known as the temporoparietal junction. Damage to this brain region can generate distorted body awareness, such as feeling a substantially elongated torso. Altered neural activity in this region through artificial stimulation can also produce an out-of-body experience (see this module’s Outside Resources section), in which you feel like your body is in another location and you have a novel perspective on your body and the world, such as from the ceiling of the room. Remarkably, comparable brain mechanisms may also generate the normal awareness of the sense of self and the sensation of being inside a body. In the context of virtual reality this sensation is known as presence (the compelling experience of actually being there). 
Our normal localization of the self may be equally artificial, in that it is not a given aspect of life but is constructed through a special brain mechanism. A Social Neuroscience Theory of Consciousness (Graziano & Kastner, 2011) ascribes an important role to our ability to localize our own sense of self. The main premise of the theory is that you fare better in a social environment to the extent that you can predict what people are going to do. So, the human brain has developed mechanisms to construct models of other people’s attention and intention, and to localize those models in the corresponding people’s heads to keep track of them. The proposal is that the same brain mechanism was adapted to construct a model of one’s own attention and intention, which is then localized in one’s own head and perceived as consciousness. If so, then the primary function of consciousness is to allow us to predict our own behavior. Research is needed to test the major predictions of this new theory, such as whether changes in consciousness (e.g., due to normal fluctuations, psychiatric disease, brain damage) are closely associated with changes in the brain mechanisms that allow us to model other people’s attention and intention. Conscious Experiences of Decision Making Choosing among multiple possible actions, the sense of volition, is closely associated with our subjective feeling of consciousness. When we make a lot of decisions, we may feel especially conscious and then feel exhausted, as if our mental energy has been drained. We make decisions in two distinct ways. Sometimes we carefully analyze and weigh different factors to reach a decision, taking full advantage of the brain’s conscious mode of information processing. Other times we make a gut decision, trusting the unconscious mode of information processing (although it still depends on the brain). The unconscious mode is adept at simultaneously considering numerous factors in parallel, which can yield an overall impression of the sum total of evidence. In this case, we have no awareness of the individual considerations. In the conscious mode, in contrast, we can carefully scrutinize each factor—although the act of focusing on a specific factor can interfere with weighing other factors. One might try to optimize decision making by taking into account these two strategies. A careful conscious decision should be effective when there are only a few known factors to consider. A gut decision should be effective when a large number of factors must be considered simultaneously. Gut decisions can indeed be accurate on occasion (e.g., guessing which of many teams will win a close competition), but only if you are well versed in the relevant domain (Dane, Rockmann, & Pratt, 2012). As we learn from our experiences, some of this gradual knowledge accrual is unconscious; we don’t know we have it and we can use it without knowing it. On the other hand, consciously acquired information can be uniquely beneficial by allowing additional stages of control (de Lange, van Gaal, Lamme, & Dehaene, 2011). It is often helpful to control which new knowledge we acquire and which stored information we retrieve in accordance with our conscious goals and beliefs. Whether you choose to trust your gut or to carefully analyze the relevant factors, you feel that you freely reach your own decision. Is this feeling of free choice real? Contemporary experimental techniques fall short of answering this existential question.
However, it is likely that at least the sense of immediacy of our decisions is an illusion. In one experiment, people were asked to freely consider whether to press the right button or the left button, and to press it when they made the decision (Soon, Brass, Heinze, & Haynes, 2008). Although they indicated that they made the decision immediately before pressing the button, their brain activity, measured using functional magnetic resonance imaging, predicted their decision as much as 10 seconds before they said they freely made the decision. In the same way, each conscious experience is likely preceded by precursor brain events that on their own do not entail consciousness but that culminate in a conscious experience. In many situations, people generate a reason for an action that has nothing to do with the actual basis of the decision to act in a particular way. We all have a propensity to retrospectively produce a reasonable explanation for our behavior, yet our behavior is often the result of unconscious mental processing, not conscious volition. Why do we feel that each of our actions is immediately preceded by our own decision to act? This illusion may help us distinguish our own actions from those of other agents. For example, while walking hand-in-hand with a friend, if you felt you made a decision to turn left immediately before you both turned left, then you know that you initiated the turn; otherwise, you would know that your friend did. Even if some aspects of the decision-making process are illusory, to what extent are our decisions determined by prior conditions? It certainly seems that we can have full control of some decisions, such as when we create a conscious intention that leads to a specific action: You can decide to go left or go right. To evaluate such impressions, further research must develop a better understanding of the neurocognitive basis of volition, which is a tricky undertaking, given that decisions are conceivably influenced by unconscious processing, neural noise, and the unpredictability of a vast interactive network of neurons in the brain. Yet belief in free choice has been shown to promote moral behavior, and it is the basis of human notions of justice. The sense of free choice may be a beneficial trait that became prevalent because it helped us flourish as social beings. Understanding Consciousness Our human consciousness unavoidably colors all of our observations and our attempts to gain understanding. Nonetheless, scientific inquiries have provided useful perspectives on consciousness. The advances described above should engender optimism about the various research strategies applied to date and about the prospects for further insight into consciousness in the future. Because conscious experiences are inherently private, they have sometimes been taken to be outside the realm of scientific inquiry. This view idealizes science as an endeavor involving only observations that can be verified by multiple observers, relying entirely on the third-person perspective, or the view from nowhere (from no particular perspective). Yet conducting science is a human activity that depends, like other human activities, on individuals and their subjective experiences. A rational scientific account of the world cannot avoid the fact that people have subjective experiences. Subjectivity thus has a place in science. Conscious experiences can be subjected to systematic analysis and empirical tests to yield progressive understanding. 
Many further questions remain to be addressed by scientists of the future. Is the first-person perspective of a conscious experience basically the same for all human beings, or do individuals differ fundamentally in their introspective experiences and capabilities? Should psychological science focus only on ordinary experiences of consciousness, or are extraordinary experiences also relevant? Can training in introspection lead to a specific sort of expertise with respect to conscious experience? An individual with training, such as through extensive meditation practice, might be able to describe their experiences in a more precise manner, which could then support improved characterizations of consciousness. Such a person might be able to understand subtleties of experience that other individuals fail to notice, and thereby move our understanding of consciousness significantly forward. These and other possibilities await future scientific inquiries into consciousness.
Outside Resources
1. Video: Demonstration of motion-induced blindness - Look steadily at the blue moving pattern. One or more of the yellow spots may disappear.
2. Web: Learn more about motion-induced blindness on Michael Bach's website http://www.michaelbach.de/ot/mot-mib/index.html
3. Video: Clip showing a patient with blindsight, from the documentary "Phantoms in the Brain."
4. Video: Clip on the rubber hand illusion, from the BBC science series "Horizon."
5. Video: Clip on out-of-body experiences induced using virtual reality.
6. App: Visual illusions for the iPad. http://www.exploratorium.edu/explore...olor-uncovered
7. Web: Definitions of Consciousness http://www.consciousentities.com/definitions.htm
8. Video: The mind-body problem - An interview with Ned Block
9. Video: Imaging the Brain, Reading the Mind - A talk by Marsel Mesulam.
Video: Ted Talk - Simon Lewis: Don't take consciousness for granted http://www.ted.com/talks/simon_lewis...r_granted.html
Discussion Questions
1. Why has consciousness evolved? Presumably it provides some beneficial capabilities for an organism beyond behaviors that are based only on automatic triggers or unconscious processing. What are the likely benefits of consciousness?
2. How would you explain to a congenitally blind person the experience of seeing red? Detailed explanations of the physics of light and neurobiology of color processing in the brain would describe the mechanisms that give rise to the experience of seeing red, but would not convey the experience. What would be the best way to communicate the subjective experience itself?
3. Our visual experiences seem to be a direct readout of information from the world that comes into our eyes, and we usually believe that our mental representations give us an accurate and exact re-creation of the world. Is it possible that what we consciously perceive is not veridical, but is a limited and distorted view, in large part a function of the specific sensory and information-processing abilities that the brain affords?
4. When are you most conscious—while you’re calm, angry, happy, or moved; while absorbed in a movie, video game, or athletic activity; while engaged in a spirited conversation, making decisions, meditating, reflecting, trying to solve a difficult problem, day dreaming, or feeling creative? How do these considerations shed light on what consciousness is?
5. Consciousness may be a natural biological phenomenon and a chief function of a brain, but consider the many ways in which it is also contingent on (i) a body linked with a brain, (ii) an outside world, (iii) a social environment, and (iv) a developmental trajectory. How do these considerations enrich our understanding of consciousness?
6. Conscious experiences may not be limited to human beings. However, the difficulty of inferring consciousness in other beings highlights the limitations of our current understanding of consciousness. Many nonhuman animals may have conscious experiences; pet owners often have no doubt about what their pets are thinking. Computers with sufficient complexity might at some point be conscious—but how would we know?
Vocabulary
Awareness: A conscious experience or the capability of having conscious experiences, which is distinct from self-awareness, the conscious understanding of one’s own existence and individuality.
Conscious experience: The first-person perspective of a mental event, such as feeling some sensory input, a memory, an idea, an emotion, a mood, or a continuous temporal sequence of happenings.
Contemplative science: A research area concerned with understanding how contemplative practices such as meditation can affect individuals, including changes in their behavior, their emotional reactivity, their cognitive abilities, and their brains. Contemplative science also seeks insights into conscious experience that can be gained from first-person observations by individuals who have gained extraordinary expertise in introspection.
First-person perspective: Observations made by individuals about their own conscious experiences, also known as introspection or a subjective point of view. Phenomenology refers to the description and investigation of such observations.
Third-person perspective: Observations made by individuals in a way that can be independently confirmed by other individuals so as to lead to general, objective understanding. With respect to consciousness, third-person perspectives make use of behavioral and neural measures related to conscious experiences.
By Ap Dijksterhuis Radboud University Nijmegen Unconscious psychological processes have fascinated people for a very long time. The idea that people must have an unconscious is based on the idea that (a) there is so much going on in our brains, and the capacity of consciousness is so small, that there must be much more than just consciousness; and that (b) unless you believe consciousness is causally disconnected from other bodily and mental processes, conscious experiences must be prepared by other processes in the brain of which we are not conscious. Not only does logic dictate that action starts unconsciously, but research strongly suggests this too. Moreover, unconscious processes are very often highly important for human functioning, and many phenomena, such as attitude formation, goal pursuit, stereotyping, creativity, and decision making are impossible to fully understand without incorporating the role of unconscious processes. learning objectives • Understand the logic underlying the assumption that unconscious processes are important. • Obtain a crude understanding of some important historical thoughts about unconscious processes. • Learn about some of the important psychological experiments on the unconscious. • Appreciate the distinction between consciousness and attention. Have you ever grabbed a candy bar, chewing gum or a magazine as you purchased your groceries? These well-known “impulse buys” raise an intriguing question: what is really driving your decisions? On the one hand, you might argue that it is your conscious mind that decides what you buy, what you eat, and what you read. On the other hand, you’d probably have to admit that those celebrity magazines and salted chocolates weren't actually on your shopping list with the eggs and the bread. So where did the desire to purchase them come from? As we will see in this module, there are a number of forces that operate on your thinking and decisions that you might not even be aware of; all of them processed by the unconscious. A Little Bit of History Although the term “unconscious” was only introduced fairly recently (in the 18th century by the German philosopher Platner, the German term being “Unbewusstsein”), the relative “unconsciousness” of human nature has evoked both marvel and frustration for more than two millennia. Socrates (490–399 BC) argued that free will is limited, or at least so it seems, after he noticed that people often do things they really do not want to do. He called this akrasia, which can best be translated as “the lack of control over oneself.” A few centuries later, the Roman thinker Plotinus (AD 205–270) was presumably the first to allude to the possibility of unconscious psychological processes in writing: “The absence of a conscious perception is no proof of the absence of mental activity.” These two ideas, first verbalized by Socrates and Plotinus respectively, were—and still are—hotly debated in psychology, philosophy, and neuroscience. That is, scientists still investigate the extent to which human behavior is (and/or seems) voluntary or involuntary, and scientists still investigate the relative importance of unconscious versus conscious psychological processes, or mental activity in general. And, perhaps not surprisingly, both issues are still controversial. During the scientific revolution in Europe, our unconscious was taken away from us, so to speak, by the French philosopher Descartes (1596–1650). Descartes’s dualism entailed a strict distinction between body and mind.
According to Descartes, the mind produces psychological processes, and everything going on in our minds is by definition conscious. Some psychologists have called this idea, in which mental processes taking place outside conscious awareness were rendered impossible, the Cartesian catastrophe. It took well over two centuries for science to fully recover from the impoverishment dictated by Descartes. This is not to say that contemporaries of Descartes and later thinkers all agreed with Descartes’s dualism. In fact, many of them disagreed and kept on theorizing about unconscious psychological processes. For instance, the British philosopher John Norris (1657–1711) said: “We may have ideas of which we are not conscious. . . . There are infinitely more ideas impressed on our minds than we can possibly attend to or perceive.” Immanuel Kant (1724–1804) agreed: “The field of our sense-perceptions and sensations, of which we are not conscious . . . is immeasurable.” Norris and Kant used a logical argument that many proponents of the importance of unconscious psychological processes still like to point to today: There is so much going on in our brains, and the capacity of consciousness is so small, that there must be much more than just consciousness. The most famous advocate of the importance of unconscious processes arrived at the scene in the late 19th century: the Austrian neurologist Sigmund Freud. Most people associate Freud with psychoanalysis, with his theory on id, ego, and superego, and with his ideas on repression, hidden desires, and dreams. Such associations are fully justified, but Freud also published lesser-known general theoretical work (e.g., Freud, 1915/1963). This theoretical work sounds, in contrast to his psychoanalytic work, very fresh and contemporary. For instance, Freud already argued that human behavior never starts with a conscious process (compare this to the Libet experiment discussed below). Freud, and also Wilhelm Wundt, pointed to another logical argument for the necessity of unconscious psychological processes. Wundt put it like this: “Our mind is so fortunately equipped, that it brings us the most important bases for our thoughts without our having the least knowledge of this work of elaboration. Only the results of it become conscious. This unconscious mind is for us like an unknown being who creates and produces for us, and finally throws the ripe fruits in our lap.” In other words, we may become consciously aware of many different things—the taste of a glass of Burgundy, the beauty of the Taj Mahal, or the sharp pain in our toe after a collision with a bed—but these experiences do not hover in the air before they reach us. They are prepared, somehow and somewhere. Unless you believe consciousness is causally disconnected from other bodily and mental processes (for instance, if one assumes it is guided by the gods), conscious experiences must be prepared by other processes in the brain of which we are not conscious. The German psychologist Watt (1905), in an appealing experiment, showed that we are only consciously aware of the results of mental processes. His participants were repeatedly presented with nouns (e.g., “oak”) and had to respond with an associated word as quickly as they could. On some occasions participants were requested to name a superordinate word (“oak”-“tree”), while on other occasions they were asked to come up with a part (“oak”-“acorn”) or a subordinate (“oak”-“beam”) word.
Hence, participants’ thinking was divided into four stages: the instructions (e.g., superordinate), the presentation of the noun (e.g., “oak”), the search for an appropriate association, and the verbalization of the reply (e.g., “tree”). Participants were asked to carefully introspect on all four stages to shed light on the role of consciousness during each stage. The third stage (searching for an association) is the stage during which the actual thinking takes place, and hence it was considered the most interesting stage. However, unlike the other stages, this stage was, as psychologists call it, introspectively blank: Participants could not report anything. The thinking itself was unconscious, and participants were only conscious of the answer that surfaced. Where Action Originates The idea that we unconsciously prepare an action before we are conscious of this action was tested in one of psychology’s most famous experiments. Quite some time ago, Kornhuber and Deecke (1965) conducted experiments in which they asked their participants to perform a simple action, in this case flexing a finger. They also measured EEG to investigate when the brain starts to prepare the action. Their results showed that the first sign of unconscious preparation preceded an action by about 800 milliseconds. This is a serious amount of time, and it led Benjamin Libet to wonder whether conscious awareness of the decision to act arises equally far, or even further, in advance. Libet (1985) replicated the Kornhuber and Deecke experiments while adding another measure: conscious awareness of the decision to act. He showed that conscious decisions follow unconscious preparation and precede the actual execution of the action by only about 200 milliseconds. That is, unconscious preparation leads conscious awareness by roughly 600 milliseconds: the unconscious decides to act, we then become consciously aware of wanting to execute the action, and finally we act. The experiment by Libet caused quite a stir, and some people tried to save the day for the decisive role of consciousness by criticizing the experiment. Some of this criticism made sense, such as the notion that the action sequence in the Libet experiments does not start with the EEG signals in the brain, but before that, with the experimenter’s instruction to flex a finger; and this instruction is consciously perceived. The dust surrounding the precise meaning of this experiment has still not completely settled, and recently Soon and colleagues (Soon, Brass, Heinze, & Haynes, 2008) reported an intriguing experiment in which they circumvented an important limitation of the Libet experiment. Participants had to repeatedly make a dichotomous choice (they were to press one of two buttons), and they could freely choose which one. The experimenters measured participants’ brain activity. After the participants made their simple choice many times, the experimenters could, by looking at the difference in brain activity for the two different choices in earlier trials, predict which button a participant was going to press next up to ten seconds in advance—indeed, long before a participant had consciously “decided” what button to press next.
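The logic of the Soon et al. prediction is essentially that of a pattern classifier: learn, from earlier trials, which pattern of pre-decision brain activity goes with which choice, then predict new choices from new activity. A minimal sketch of that logic in Python, with simulated activity standing in for real fMRI data and a plain logistic regression standing in for the classifiers used in decoding studies (all numbers here are invented for illustration):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_features = 200, 50

# 0 = left button, 1 = right button, freely chosen on each trial
choices = rng.integers(0, 2, size=n_trials)

# Simulated pre-decision activity: a weak choice-related pattern buried in noise
pattern = rng.normal(size=n_features)
activity = np.outer(choices - 0.5, pattern) + rng.normal(scale=2.0, size=(n_trials, n_features))

# Cross-validated accuracy: anything reliably above 0.5 means the activity
# recorded *before* the decision already carries information about the choice.
# Soon et al. (2008) reported accuracies of roughly 60%, well above chance.
clf = LogisticRegression(max_iter=1000)
print("decoding accuracy:", cross_val_score(clf, activity, choices, cv=5).mean().round(2))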
The Unconscious in Social Psychological Processes These days, most scientific research on unconscious processes is aimed at showing that people do not need consciousness for certain psychological processes or behaviors. One such example is attitude formation. The most basic process of attitude formation is through mere exposure (Zajonc, 1968). Merely perceiving a stimulus repeatedly, such as a brand on a billboard one passes every day or a song that is played on the radio frequently, renders it more positive. Interestingly, mere exposure does not require conscious awareness of the object of an attitude. In fact, mere-exposure effects occur even when novel stimuli are presented subliminally for extremely brief durations (e.g., Kunst-Wilson & Zajonc, 1980). Intriguingly, in such subliminal mere-exposure experiments, participants indicate a preference for, or a positive attitude towards, stimuli they do not consciously remember being exposed to. Another example of modern research on unconscious processes is research on priming. In a well-known experiment by a research team led by the American psychologist John Bargh (Bargh, Chen, & Burrows, 1996), half the participants were primed with the stereotype of the elderly by doing a language task (they had to make sentences on the basis of lists of words). These lists contained words commonly associated with the elderly (e.g., “old,” “bingo,” “walking stick,” “Florida”). The remaining participants received a language task in which the critical words were replaced by words not related to the elderly. After participants had finished, they were told the experiment was over, but they were secretly monitored to see how long they took to walk to the nearest elevator. The primed participants took significantly longer. That is, after being exposed to words typically associated with being old, they behaved in line with the stereotype of old people: being slow. Such priming effects have been shown in many different domains. For example, Dijksterhuis and van Knippenberg (1998) demonstrated that priming can improve intellectual performance. They asked their participants to answer 42 general knowledge questions taken from the game Trivial Pursuit. Under normal conditions, participants answered about 50% of the questions correctly. However, participants primed with the stereotype of professors—whom most people see as intelligent—managed to answer 60% of the questions correctly. Conversely, the performance of participants primed with the “dumb” stereotype of hooligans dropped to 40%. Holland, Hendriks, and Aarts (2005) examined whether mere priming with an odor can change behavior. They exposed some of their participants to the scent of all-purpose cleaner without participants’ conscious awareness of the presence of this scent (a bucket was hidden in the laboratory). Because the scent of the cleaner was assumed to prime the concept of cleaning, the researchers hypothesized that participants exposed to the scent would spontaneously start to pay more attention to cleanliness. Participants were requested to eat a very crumbly cookie in the lab, and indeed, participants exposed to the scent put in more effort to keep their environment clean and free of crumbs. Priming techniques are also applied to change people’s behavior in the real world. Latham and Piccolo (2012) randomly assigned call center employees to view either a photograph of people making telephone calls in a call center or a photograph of a woman winning a race. Both photographs led to a significant improvement in job performance compared to employees in the control condition, who did not see a photograph. In fact, the people who saw the photograph of people making phone calls raised 85% more money than the people in the control group.
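All of these priming studies share the same minimal design: randomly assign people to a primed or a neutral condition, measure a single behavior, and compare the condition means. A sketch of that comparison in Python, using invented walking times rather than Bargh et al.'s actual data:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical seconds to walk to the elevator; 30 participants per condition
neutral = rng.normal(loc=7.3, scale=1.0, size=30)
elderly_primed = rng.normal(loc=8.3, scale=1.0, size=30)  # slower, per the stereotype

t, p = stats.ttest_ind(elderly_primed, neutral)
print(f"neutral: {neutral.mean():.2f}s, primed: {elderly_primed.mean():.2f}s")
print(f"t = {t:.2f}, p = {p:.4f}")  # a small p suggests the slowdown is unlikely to be chance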
The research on unconscious processes has also greatly improved our understanding of prejudice. People automatically categorize other people according to their race, and Patricia Devine (1989) demonstrated that this categorization unconsciously leads to the activation of associated cultural stereotypes. Importantly, Devine also showed that stereotype activation was not moderated by people’s level of explicit prejudice. The conclusion of this work was bleak: We unconsciously activate cultural stereotypes, and this is true for all of us, even for people who are not explicitly prejudiced, or, in other words, for people who do not want to stereotype. Unconscious Processing and the Role of Attention Insight into unconscious processes has also contributed to our ideas about creativity. Creativity is usually seen as the result of a three-stage process. It begins with attending to a problem consciously. You think and read about a problem and discuss matters with others. This stage allows the necessary information to be gathered and organized, but during this stage a truly creative idea is rarely produced. The second stage is unconscious; it is the incubation stage during which people think unconsciously. The problem is put aside for a while, and conscious attention is directed elsewhere. The process of unconscious thought sometimes leads to a “Eureka experience” whereby the creative product enters consciousness. This third stage is one where conscious attention again plays a role. The creative product needs to be verbalized and communicated. For example, a scientific discovery needs detailed proof before it can be communicated to others. The idea that people think unconsciously has also been applied to decision making (Dijksterhuis & Nordgren, 2006). In a recent set of experiments (Bos, Dijksterhuis, & van Baaren, 2008), participants were presented with information about various alternatives (such as cars or roommates) differing in attractiveness. Subsequently, participants engaged in a distractor task before they made a decision. That is, they consciously thought about something else; in this case, they solved anagrams. However, one group was told, prior to the distractor task, that they would later be asked questions about the decision problem. A second group was instead told that they were done with the decision problem and would not be asked anything later on. In other words, the first group had the goal to further process the information, whereas the second group had no such goal. Results showed that the first group made better decisions. Although both groups did exactly the same thing consciously—again, solving anagrams—the first group decided better because its members had continued to process the decision information unconsciously. Recently, researchers reported neuroscientific evidence for such unconscious thought processes, indeed showing that recently encoded information is further processed unconsciously when people have the goal to do so (Creswell, Bursley, & Satpute, in press). People are sometimes surprised to learn that we can do so much, and so many sophisticated things, unconsciously. However, it is important to realize that there is no one-to-one relation between attention and consciousness (see, e.g., Dijksterhuis & Aarts, 2010). Our behavior is largely guided by goals and motives, and these goals determine what we pay attention to—that is, how many resources our brain spends on something—but not necessarily what we become consciously aware of.
We can be conscious of things that we hardly pay attention to (such as fleeting daydreams), and we can pay a lot of attention to something we are temporarily unaware of (such as a problem we want to solve or a big decision we are facing). Part of the confusion arises because attention and consciousness are correlated. When one pays more attention to an incoming stimulus, the probability that one becomes consciously aware of it increases. However, attention and consciousness are distinct. And to understand why we can do so many things unconsciously, attention is the key. We need attention, but for quite a number of things, we do not need conscious awareness. These days, most researchers agree that the most sensible approach to learning about unconscious and conscious processes is to consider (higher) cognitive operations as unconscious, and test what (if anything) consciousness adds (Dijksterhuis & Aarts, 2010; van Gaal, Lamme, Fahrenfort, & Ridderinkhof, 2011; for an exception, see Newell & Shanks, in press). However, researchers still widely disagree about the relative importance or contribution of conscious and unconscious processes. Some theorists maintain that the causal role of consciousness is limited or virtually nonexistent; others still believe that consciousness plays a crucial role in almost all human behavior of any consequence. Note The historical overview of the way people thought about the unconscious is largely based on Koestler (1964). Outside Resources Book: A wonderful book about how little we know about ourselves: Wilson, T. D. (2002). Strangers to ourselves. Cambridge, MA: Harvard University Press. Book: Another wonderful book about free will—or its absence?: Wegner, D. M. (2002). The illusion of conscious will. Cambridge, MA: MIT Press. Video: An interesting video on attention: http://www.dansimons.com/videos.html Web: A good overview of priming: en.Wikipedia.org/wiki/Priming_(psychology) Discussion Questions 1. Assess both the strengths and weaknesses of the famous Libet study. 2. Assuming that attention and consciousness are orthogonal, can you name examples of conscious processes that hardly require attention or of unconscious processes that require a lot of attention? 3. Do you think some of the priming experiments can also be explained purely by conscious processes? 4. What do you think could be the main function of consciousness? 5. Some people, scientists included, have a strong aversion to the idea that human behavior is largely guided by unconscious processes. Do you know why? Vocabulary Cartesian catastrophe The idea that mental processes taking place outside conscious awareness are impossible. Conscious Having knowledge of something external or internal to oneself; being aware of and responding to one’s surroundings. Distractor task A task that is designed to make a person think about something unrelated to an impending decision. EEG (Electroencephalography) The recording of the brain’s electrical activity over a period of time by placing electrodes on the scalp. Eureka experience When a creative product enters consciousness. Mere-exposure effects The result of developing a more positive attitude towards a stimulus after repeated instances of mere exposure to it. Priming The process by which recent experiences increase a trait’s accessibility. Unconscious Not conscious; the part of the mind that affects behavior though it is inaccessible to the conscious mind.
By Robert Biswas-Diener and Jake Teeny Portland State University, The Ohio State University No matter what you’re doing—completing homework, playing a video game, simply picking out a shirt—all of your actions and decisions relate to your consciousness. But as frequently as we use it, have you ever stopped to ask yourself: What really is consciousness? In this module, we discuss the different levels of consciousness and how they can affect your behavior in a variety of situations. As well, we explore the role of consciousness in other, “altered” states like hypnosis and sleep. Learning Objectives • Define consciousness and distinguish between high and low conscious states • Explain the relationship between consciousness and bias • Understand the difference between popular portrayals of hypnosis and how it is currently used therapeutically Introduction Have you ever noticed a fellow motorist stopped beside you at a red light, singing his brains out, or picking his nose, or otherwise behaving in ways he might not normally do in public? There is something about being alone in a car that encourages people to zone out and forget that others can see them. Although these little lapses of attention are amusing for the rest of us, they are also instructive when it comes to the topic of consciousness. Consciousness is a term meant to indicate awareness. It includes awareness of the self, of bodily sensations, of thoughts, and of the environment. In English, we use the opposite word “unconscious” to indicate senselessness or a barrier to awareness, as in the case of “Theresa fell off the ladder and hit her head, knocking herself unconscious.” And yet, psychological theory and research suggest that consciousness and unconsciousness are more complicated than falling off a ladder. That is, consciousness is more than just being “on” or “off.” For instance, Sigmund Freud (1856–1939)—a psychological theorist—understood that even while we are awake, many things lie outside the realm of our conscious awareness (like being in the car and forgetting that the rest of the world can see into your windows). In response to this notion, Freud introduced the concept of the “subconscious” (Freud, 2001) and proposed that some of our memories and even our basic motivations are not always accessible to our conscious minds. Upon reflection, it is easy to see how slippery a topic consciousness is. For example, are people conscious when they are daydreaming? What about when they are drunk? In this module, we will describe several levels of consciousness and then discuss altered states of consciousness such as hypnosis and sleep. Levels of Awareness In 1957, a marketing researcher inserted the words “Eat Popcorn” onto one frame of a film being shown all across the United States. And although that frame was only projected onto the movie screen for 1/24th of a second—a speed too fast to be perceived by conscious awareness—the researcher reported an increase in popcorn sales by nearly 60%. Almost immediately, all forms of “subliminal messaging” were regulated in the US and banned in countries such as Australia and the United Kingdom. Even though it was later shown that the researcher had made up the data (he hadn’t even inserted the words into the film), this fear about influences on our subconscious persists. At its heart, this issue pits various levels of awareness against one another. On the one hand, we have the “low awareness” of subtle, even subliminal influences.
On the other hand, there is you—the conscious, thinking, feeling you, which includes all that you are currently aware of, even reading this sentence. However, when we consider these different levels of awareness separately, we can better understand how they operate. Low Awareness You are constantly receiving and evaluating sensory information. Although each moment has too many sights, smells, and sounds for them all to be consciously considered, our brains are nonetheless processing all that information. For example, have you ever been at a party, overwhelmed by all the people and conversation, when out of nowhere you hear your name called? Even though you have no idea what else the person is saying, you are somehow conscious of your name (for more on this, “the cocktail party effect,” see Noba’s Module on Attention). So, even though you may not be aware of various stimuli in your environment, your brain is paying closer attention than you think. Similar to a reflex (like jumping when startled), some cues, or significant sensory information, will automatically elicit a response from us even though we never consciously perceive them. For example, Öhman and Soares (1994) measured subtle variations in the sweating of participants with a fear of snakes. The researchers flashed pictures of different objects (e.g., mushrooms, flowers, and, most importantly, snakes) on a screen in front of them, but did so at speeds that left the participant clueless as to what he or she had actually seen. However, when snake pictures were flashed, these participants started sweating more (i.e., a sign of fear), even though they had no idea what they’d just viewed! Although our brains perceive some stimuli without our conscious awareness, do they really affect our subsequent thoughts and behaviors? In a landmark study, Bargh, Chen, and Burrows (1996) had participants solve a word puzzle where the answers pertained to words about the elderly (e.g., “old,” “grandma”) or something random (e.g., “notebook,” “tomato”). Afterward, the researchers secretly measured how fast the participants walked down the hallway exiting the experiment. And although none of the participants were aware of a theme to the answers, those who had solved a puzzle with elderly words (vs. those with other types of words) walked more slowly down the hallway! This effect, called priming (i.e., readily “activating” certain concepts and associations from one’s memory), has been found in a number of other studies. For example, priming people by having them briefly hold a warm drink (vs. a cold one) led them to behave more “warmly” toward others (Williams & Bargh, 2008). Although all of these influences occur beneath one’s conscious awareness, they still have a significant effect on one’s subsequent thoughts and behaviors. In the last two decades, researchers have made advances in studying aspects of psychology that exist beyond conscious awareness. As you can understand, it is difficult to use self-reports and surveys to ask people about motives or beliefs that they, themselves, might not even be aware of! One way of side-stepping this difficulty can be found in the Implicit Association Test, or IAT (Greenwald, McGhee, & Schwartz, 1998). This research method uses computers to assess people’s reaction times to various stimuli and is a very difficult test to fake because it records automatic reactions that occur in milliseconds. For instance, to shed light on deeply held biases, the IAT might present photographs of Caucasian faces and Asian faces while asking research participants to click buttons indicating either “good” or “bad” as quickly as possible. Even if the participant clicks “good” for every face shown, the IAT can still pick up tiny delays in responding. Delays are associated with more mental effort needed to process information. When information is processed quickly—as in the example of white faces being judged as “good”—it can be contrasted with slower processing—as in the example of Asian faces being judged as “good”—and the difference in processing speed is reflective of bias. In this regard, the IAT has been used for investigating stereotypes (Nosek, Banaji, & Greenwald, 2002) as well as self-esteem (Greenwald & Farnham, 2000). This method can help uncover non-conscious biases as well as those that we are motivated to suppress.
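In miniature, the scoring logic is simple: compare how fast responses are when a pairing fits a person's associations against when it does not, and scale that gap by the variability of the responses. A simplified, hypothetical version in Python (the published IAT scoring algorithm also trims errors and extreme latencies; this sketch shows only the core idea, with invented numbers):

import numpy as np

# Hypothetical response latencies in milliseconds for two pairing blocks
compatible = np.array([612, 655, 590, 640, 700, 628])    # e.g., "white + good"
incompatible = np.array([801, 766, 845, 790, 730, 812])  # e.g., "Asian + good"

# Mean latency gap, scaled by the pooled standard deviation of all responses
pooled_sd = np.concatenate([compatible, incompatible]).std(ddof=1)
d_score = (incompatible.mean() - compatible.mean()) / pooled_sd
print(f"D = {d_score:.2f}")  # larger positive D = slower on the "incompatible" pairing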
High Awareness Just because we may be influenced by these “invisible” factors, it doesn’t mean we are helplessly controlled by them. The other side of the awareness continuum is known as “high awareness.” This includes effortful attention and careful decision making. For example, when you listen to a funny story on a date, or consider which class schedule would be preferable, or complete a complex math problem, you are engaging a state of consciousness that allows you to be highly aware of and focused on particular details in your environment. Mindfulness is a state of higher consciousness that includes an awareness of the thoughts passing through one’s head. For example, have you ever snapped at someone in frustration, only to take a moment and reflect on why you responded so aggressively? This more effortful consideration of your thoughts could be described as an expansion of your conscious awareness as you take the time to consider the possible influences on your thoughts. Research has shown that when you engage in this more deliberate consideration, you are less persuaded by irrelevant yet biasing influences, like the presence of a celebrity in an advertisement (Petty & Cacioppo, 1986). Higher awareness is also associated with recognizing when you’re using a stereotype, rather than fairly evaluating another person (Gilbert & Hixon, 1991). Humans alternate between low and high thinking states. That is, we shift between focused attention and a less attentive default state, and we have neural networks for both (Raichle, 2015). Interestingly, the less we’re paying attention, the more likely we are to be influenced by non-conscious stimuli (Chaiken, 1980). Although these subtle influences may affect us, we can use our higher conscious awareness to protect against external influences. According to what’s known as the Flexible Correction Model (Wegener & Petty, 1997), people who are aware that their thoughts or behavior are being influenced by an undue outside source can correct their attitude against the bias. For example, you might be aware that you are influenced by the mention of specific political parties. If you were motivated to consider a government policy fairly, you could take your own biases into account and attempt to judge the policy on its own merits rather than on its association with a certain party. To help make the relationship between lower and higher consciousness clearer, imagine that consciousness is like a journey down a river. In low awareness, you simply float on a small rubber raft and let the currents push you.
It's not very difficult to just drift along, but you also don't have total control. Higher states of consciousness are more like traveling in a canoe. In this scenario, you have a paddle and can steer, but it requires more effort. This analogy applies to many states of consciousness, but not all. What about other states, such as sleeping, daydreaming, or hypnosis? How are these related to our conscious awareness? Other States of Consciousness Hypnosis If you’ve ever watched a stage hypnotist perform, you may have a misleading portrait of this state of consciousness. The hypnotized people on stage, for example, appear to be in a state similar to sleep. However, as the hypnotist continues with the show, you would recognize some profound differences between sleep and hypnosis. Namely, when you’re asleep, hearing the word “strawberry” doesn’t make you flap your arms like a chicken. In stage performances, the hypnotized participants appear to be highly suggestible, to the point that they are seemingly under the hypnotist’s control. Such performances are entertaining but have a way of sensationalizing the true nature of hypnotic states. Hypnosis is an actual, documented phenomenon—one that has been studied and debated for over 200 years (Pekala et al., 2010). Franz Mesmer (1734–1815) is often credited as among the first people to “discover” hypnosis, which he used to treat members of elite society who were experiencing psychological distress. It is from Mesmer’s name that we get the English word “mesmerize,” meaning “to entrance or transfix a person’s attention.” Mesmer attributed the effect of hypnosis to “animal magnetism,” a supposed universal force (similar to gravity) that operates through all human bodies. Even at the time, such an account of hypnosis was not scientifically supported, and Mesmer himself was frequently the center of controversy. Over the years, researchers have proposed that hypnosis is a mental state characterized by reduced peripheral awareness and increased focus on a singular stimulus, which results in an enhanced susceptibility to suggestion (Kihlstrom, 2003). For example, the hypnotist will usually induce hypnosis by getting the person to pay attention only to the hypnotist’s voice. As the individual focuses more and more on that, s/he begins to forget the context of the setting and responds to the hypnotist’s suggestions as if they were his or her own. Some people are naturally more suggestible, and therefore more “hypnotizable,” than others, and this is especially true for those who score high in empathy (Wickramasekera II & Szlyk, 2003). One common “trick” of stage hypnotists is to discard volunteers who are less suggestible than others. Dissociation is the separation of one’s awareness from everything besides what one is centrally focused on. For example, if you’ve ever been daydreaming in class, you were likely so caught up in the fantasy that you didn’t hear a word the teacher said. During hypnosis, this dissociation becomes even more extreme. That is, a person concentrates so much on the words of the hypnotist that s/he loses perspective on the rest of the world. As a consequence of dissociation, a person is less effortful, and less self-conscious, in consideration of his or her own thoughts and behaviors. Just as in low awareness states, where one often acts on the first thought that comes to mind, so, too, in hypnosis does the individual simply follow the first thought that comes to mind: the hypnotist’s suggestion.
Still, just because one is more susceptible to suggestion under hypnosis, it doesn’t mean s/he will do anything that’s ordered. To be hypnotized, you must first want to be hypnotized (i.e., you can’t be hypnotized against your will; Lynn & Kirsch, 2006), and once you are hypnotized, you won’t do anything you wouldn’t also do while in a more natural state of consciousness (Lynn, Rhue, & Weekes, 1990). Today, hypnotherapy is still used in a variety of formats, and it has evolved from Mesmer’s early tinkering with the concept. Modern hypnotherapy often uses a combination of relaxation, suggestion, motivation, and expectancies to create a desired mental or behavioral state. Although there is mixed evidence on whether hypnotherapy can help with addiction reduction (e.g., quitting smoking; Abbot et al., 1998), there is some evidence that it can be successful in treating sufferers of acute and chronic pain (Ewin, 1978; Syrjala et al., 1992). For example, one study examined the treatment of burn patients with either hypnotherapy, pseudo-hypnosis (i.e., a placebo condition), or no treatment at all. Afterward, even though people in the placebo condition experienced a 16% decrease in pain, those in the actual hypnosis condition experienced a reduction of nearly 50% (Patterson et al., 1996). Thus, even though hypnosis may be sensationalized for television and movies, its ability to dissociate a person from their environment (or their pain), in conjunction with increased suggestibility to a clinician’s recommendations (e.g., “you will feel less anxiety about your chronic pain”), is a documented practice with actual medical benefits. Now, similar to hypnotic states, trance states also involve a dissociation of the self; however, people in a trance state are said to have less voluntary control over their behaviors and actions. Trance states often occur in religious ceremonies, where the person believes he or she is “possessed” by an otherworldly being or force. While in trance, people report anecdotal accounts of a “higher consciousness” or communion with a greater power. However, the body of research investigating this phenomenon tends to reject the claim that these experiences constitute an “altered state of consciousness.” Most researchers today describe both hypnosis and trance states as “subjective” alterations of consciousness, not actually distinct or evolved forms (Kirsch & Lynn, 1995). Just as you feel different when you’re in a state of deep relaxation, so, too, are hypnotic and trance states simply shifts from the standard conscious experience. Researchers contend that even though both hypnotic and trance states appear and feel wildly different from the normal human experience, they can be explained by standard socio-cognitive factors like imagination, expectation, and the interpretation of the situation. Sleep You may have experienced the sensation, as you were falling asleep, of falling and then found yourself physically jerking forward and grabbing out as if you were really falling. Sleep is a unique state of consciousness; it lacks full awareness, but the brain is still active. People generally follow a “biological clock” that impacts when they naturally become drowsy, when they fall asleep, and the time they naturally awaken. The hormone melatonin increases at night and is associated with becoming sleepy. Your natural daily rhythm, or Circadian Rhythm, can be influenced by the amount of daylight to which you are exposed as well as your work and activity schedule.
Changing your location, such as flying from Canada to England, can disrupt your natural sleep rhythms, and we call this jet lag. You can overcome jet lag by synchronizing yourself to the local schedule, exposing yourself to daylight and forcing yourself to stay awake even though you are naturally sleepy. Interestingly, sleep itself is more than shutting off for the night (or for a nap). Instead of turning off like a light with a flick of a switch, your shift in consciousness is reflected in your brain’s electrical activity. While you are awake and alert, your brain activity is marked by beta waves. Beta waves are characterized by being high in frequency but low in intensity. In addition, they are the most inconsistent brain waves, and this reflects the wide variation in sensory input that a person processes during the day. As you begin to relax, these change to alpha waves. These waves reflect brain activity that is less frequent, more consistent, and more intense. As you slip into actual sleep, you transition through many stages. Scholars differ on how they characterize sleep stages, with some experts arguing that there are four distinct stages (Manoach et al., 2010) while others recognize five (Šušmáková & Krakovská, 2008), but all distinguish between stages that include rapid eye movement (REM) and those that are non-rapid eye movement (NREM). In addition, each stage is typically characterized by its own unique pattern of brain activity: • Stage 1 (called NREM 1, or N1) is the “falling asleep” stage and is marked by theta waves. • Stage 2 (called NREM 2, or N2) is considered a light sleep. Here, there are occasional “sleep spindles,” or very high-intensity brain waves. These are thought to be associated with the processing of memories. NREM 2 makes up about 55% of all sleep. • Stage 3 (called NREM 3, or N3) makes up 20–25% of all sleep and is marked by greater muscle relaxation and the appearance of delta waves. • Finally, REM sleep is marked by rapid eye movement (REM). Interestingly, this stage—in terms of brain activity—is similar to wakefulness. That is, the brain waves occur less intensely than in other stages of sleep. REM sleep accounts for about 20% of all sleep and is associated with dreaming. Dreams are, arguably, the most interesting aspect of sleep. Throughout history, dreams have been given special importance because of their unique, almost mystical nature. They have been thought to be predictions of the future, hints of hidden aspects of the self, important lessons about how to live life, or opportunities to engage in impossible deeds like flying. There are several competing theories of why humans dream. One is that dreaming is our nonconscious attempt to make sense of our daily experiences and learning. Another, popularized by Freud, is that dreams represent taboo or troublesome wishes or desires. Regardless of the specific reason, we know a few facts about dreams: all humans dream, we dream at every stage of sleep, but dreams during REM sleep are especially vivid. One under-explored area of dream research is the possible social functions of dreams: we often share our dreams with others and use them for entertainment value. Sleep serves many functions, one of which is to give us a period of mental and physical restoration. Children generally need more sleep than adults since they are developing. It is so vital, in fact, that a lack of sleep is associated with a wide range of problems.
People who do not receive adequate sleep are more irritable, have slower reaction times, have more difficulty sustaining attention, and make poorer decisions. Interestingly, this is an issue relevant to the lives of college students. In one highly cited study, researchers found that 1 in 5 students took more than 30 minutes to fall asleep at night, 1 in 10 occasionally took sleep medications, and more than half reported being “mostly tired” in the mornings (Buboltz et al., 2001). Psychoactive Drugs On April 16, 1943, Albert Hofmann—a Swiss chemist working in a pharmaceutical company—accidentally ingested a newly synthesized drug. The drug—lysergic acid diethylamide (LSD)—turned out to be a powerful hallucinogen. Hofmann went home and later reported the effects of the drug, describing them as seeing the world through a “warped mirror” and experiencing visions of “extraordinary shapes with intense, kaleidoscopic play of colors.” Hofmann had discovered what members of many traditional cultures around the world already knew: there are substances that, when ingested, can have a powerful effect on perception and on consciousness. Drugs operate on human physiology in a variety of ways, and researchers and medical doctors tend to classify drugs according to their effects. Here we will briefly cover three categories of drugs: hallucinogens, depressants, and stimulants. Hallucinogens It is possible that hallucinogens are the substances that have, historically, been the most widely used. Traditional societies have used plant-based hallucinogens such as peyote, ebene, and psilocybin mushrooms in a wide range of religious ceremonies. Hallucinogens are substances that alter a person’s perceptions, often by creating visions or hallucinations that are not real. There is a wide range of hallucinogens, and many are used as recreational substances in industrialized societies. Common examples include marijuana, LSD, and MDMA (also known as “ecstasy”). Marijuana is the dried flowers of the hemp plant and is often smoked to produce euphoria. The active ingredient in marijuana is called THC; it can produce distortions in the perception of time, can create a sense of rambling, unrelated thoughts, and is sometimes associated with increased hunger or excessive laughter. The use and possession of marijuana is illegal in most places, but this appears to be a trend that is changing. Uruguay, Bangladesh, and several of the United States have recently legalized marijuana. This may be due, in part, to changing public attitudes or to the fact that marijuana is increasingly used for medical purposes such as the management of nausea or the treatment of glaucoma. Depressants Depressants are substances that, as their name suggests, slow down the body’s physiology and mental processes. Alcohol is the most widely used depressant. Alcohol’s effects include the reduction of inhibition, meaning that intoxicated people are more likely to act in ways they would otherwise be reluctant to. Alcohol’s psychological effects are the result of its increasing the activity of the neurotransmitter GABA. There are also physical effects, such as loss of balance and coordination, and these stem from the way that alcohol interferes with the coordination of the visual and motor systems of the brain. Despite the fact that alcohol is so widely accepted in many cultures, it is also associated with a variety of dangers. First, alcohol is toxic, meaning that it acts like a poison because it is possible to drink more alcohol than the body can effectively remove from the bloodstream.
When a person’s blood alcohol content (BAC) reaches .3 to .4%, there is a serious risk of death. Second, the impaired judgment and physical control that come with alcohol are associated with more risk-taking and dangerous behavior, such as drunk driving. Finally, alcohol is addictive, and heavy drinkers often experience significant interference with their ability to work effectively and with their close relationships. Other common depressants include opiates (also called “narcotics”), which are substances synthesized from the poppy flower. Opiates act on the brain’s endorphin receptors, and because of this they are often used as painkillers by medical professionals. Unfortunately, because opiates such as OxyContin so reliably produce euphoria, they are increasingly used—illegally—as recreational substances. Opiates are highly addictive. Stimulants Stimulants are substances that “speed up” the body’s physiological and mental processes. Two commonly used stimulants are caffeine—the drug found in coffee and tea—and nicotine, the active drug in cigarettes and other tobacco products. These substances are both legal and relatively inexpensive, leading to their widespread use. Many people are attracted to stimulants because they feel more alert when under the influence of these drugs. As with any drug, there are health risks associated with consumption. For example, excessive consumption of these types of stimulants can result in anxiety, headaches, and insomnia. Similarly, smoking cigarettes—the most common means of ingesting nicotine—is associated with higher risks of cancer. For instance, about 90% of lung cancer cases are directly attributable to smoking (Stewart & Kleihues, 2003). Other stimulants, such as cocaine and methamphetamine (also known as “crystal meth” or “ice”), are illegal substances that are commonly used. These substances act by blocking the “re-uptake” of dopamine in the brain. This means that the brain does not naturally clear out the dopamine and that it builds up in the synapse, creating euphoria and alertness. As the effects wear off, strong cravings for more of the drug set in. Because of this, these powerful stimulants are highly addictive. Conclusion When you think about your daily life, it is easy to get lulled into the belief that there is one “setting” for your conscious thought. That is, you likely believe that you hold the same opinions, values, and memories across the day and throughout the week. But “you” are like a dimmer switch on a light that can be turned from full darkness increasingly up to full brightness. This switch is consciousness. At your brightest setting, you are fully alert and aware; at dimmer settings, you are daydreaming; and sleep or being knocked unconscious represent dimmer settings still. The degree to which you are in high, medium, or low states of conscious awareness affects how susceptible you are to persuasion, how clear your judgment is, and how much detail you can recall. Understanding levels of awareness, then, is at the heart of understanding how we learn, decide, remember, and carry out many other vital psychological processes. Outside Resources App: Visual illusions for the iPad. http://www.exploratorium.edu/explore...olor-uncovered Book: A wonderful book about how little we know about ourselves: Wilson, T. D. (2002). Strangers to ourselves. Cambridge, MA: Harvard University Press. http://www.hup.harvard.edu/catalog.p...=9780674013827 Book: Another wonderful book about free will—or its absence?: Wegner, D. M. (2002).
The illusion of conscious will. Cambridge, MA: MIT Press. https://mitpress.mit.edu/books/illus...conscious-will Information on alcoholism, alcohol abuse, and treatment: http://www.niaaa.nih.gov/alcohol-hea...port-treatment The American Psychological Association has information on getting a good night’s sleep as well as on sleep disorders: http://www.apa.org/helpcenter/sleep-disorders.aspx The LSD simulator: This simulator uses optical illusions to simulate the hallucinogenic experience of LSD. Simply follow the instructions in this two-minute video. After looking away, you may see the world around you in a warped or pulsating way similar to the effects of LSD. The effect is temporary and will disappear in about a minute. The National Sleep Foundation is a non-profit with videos on insomnia, sleep training in children, and other topics: https://sleepfoundation.org/video-library Video: An artist who periodically took LSD and drew self-portraits: http://www.openculture.com/2013/10/a...xperiment.html Video: An interesting video on attention: http://www.dansimons.com/videos.html Video: Clip on out-of-body experiences induced using virtual reality. Video: Clip on the rubber hand illusion, from the BBC science series “Horizon.” Video: Clip showing a patient with blindsight, from the documentary “Phantoms in the Brain.” Video: Demonstration of motion-induced blindness - Look steadily at the blue moving pattern. One or more of the yellow spots may disappear: Video: Howie Mandel from America’s Got Talent being hypnotized into shaking hands with people: Video: Imaging the Brain, Reading the Mind - A talk by Marsel Mesulam. http://video.at.northwestern.edu/lores/SO_marsel.m4v Video: Lucas Handwerker – a stage hypnotist discusses the therapeutic aspects of hypnosis: Video: Ted Talk - Simon Lewis: Don’t take consciousness for granted http://www.ted.com/talks/simon_lewis...r_granted.html Video: TED Talk on Dream Research: Video: The mind-body problem - An interview with Ned Block: Want a quick demonstration of how powerful priming effects can be? Check out: Web: A good overview of priming: en.Wikipedia.org/wiki/Priming_(psychology) Web: Definitions of Consciousness: http://www.consciousentities.com/definitions.htm Web: Learn more about motion-induced blindness on Michael Bach’s website: http://www.michaelbach.de/ot/mot-mib/index.html Discussion Questions 1. If someone were in a coma after an accident, and you wanted to better understand how “conscious” or aware s/he was, how might you go about it? 2. What are some of the factors in daily life that interfere with people’s ability to get adequate sleep? What interferes with your sleep? 3. How frequently do you remember your dreams? Do you have recurring images or themes in your dreams? Why do you think that is? 4. Consider times when you fantasize or let your mind wander. Describe these times: are you more likely to be alone or with others? Are there certain activities you engage in that seem particularly prone to daydreaming? 5. A number of traditional societies use consciousness-altering substances in ceremonies. Why do you think they do this? 6. Do you think attitudes toward drug use are changing over time? If so, how? Why do you think these changes occur? 7. Students in high school and college are increasingly using stimulants such as Adderall as study aids and “performance enhancers.” What is your opinion of this trend?
Vocabulary Blood Alcohol Content (BAC): a measure of the percentage of alcohol found in a person’s blood. This measure is typically the standard used to determine the extent to which a person is intoxicated, as in the case of being too impaired to drive a vehicle. Circadian Rhythm: the physiological sleep-wake cycle. It is influenced by exposure to sunlight as well as daily schedule and activity. Biologically, it includes changes in body temperature, blood pressure, and blood sugar. Consciousness: the awareness or deliberate perception of a stimulus. Cues: stimuli that have a particular significance to the perceiver (e.g., a sight or a sound that has special relevance to the person who saw or heard it). Depressants: a class of drugs that slow down the body’s physiological and mental processes. Dissociation: the heightened focus on one stimulus or thought such that many other things around you are ignored; a disconnect between one’s awareness of the environment and the one object the person is focusing on. Euphoria: an intense feeling of pleasure, excitement, or happiness. Flexible Correction Model: the ability for people to correct or change their beliefs and evaluations if they believe these judgments have been biased (e.g., if someone realizes they only thought their day was great because it was sunny, they may revise their evaluation of the day to account for this “biasing” influence of the weather). Hallucinogens: substances that, when ingested, alter a person’s perceptions, often by creating hallucinations that are not real or distorting their perceptions of time. Hypnosis: the state of consciousness whereby a person is highly responsive to the suggestions of another; this state usually involves a dissociation with one’s environment and an intense focus on a single stimulus, which is usually accompanied by a sense of relaxation. Hypnotherapy: the use of hypnotic techniques such as relaxation and suggestion to help engineer desirable change such as lower pain or quitting smoking. Implicit Association Test (IAT): a computer reaction time test that measures a person’s automatic associations with concepts. For instance, the IAT could be used to measure how quickly a person makes positive or negative evaluations of members of various ethnic groups. Jet Lag: the state of being fatigued and/or having difficulty adjusting to a new time zone after traveling a long distance (across multiple time zones). Melatonin: a hormone associated with increased drowsiness and sleep. Mindfulness: a state of heightened focus on the thoughts passing through one’s head, as well as a more controlled evaluation of those thoughts (e.g., do you reject or support the thoughts you’re having?). Priming: the activation of certain thoughts or feelings that make them easier to think of and act upon. Stimulants: a class of drugs that speed up the body’s physiological and mental processes. Trance States: a state of consciousness characterized by the experience of “out-of-body possession,” or an acute dissociation between one’s self and the current, physical environment surrounding them.
By Bertram Malle Brown University One of the most remarkable human capacities is to perceive and understand mental states. This capacity, often labeled “theory of mind,” consists of an array of psychological processes that play essential roles in human social life. We review some of these roles, examine what happens when the capacity is deficient, and explore the many processes that make up the capacity to understand minds. learning objectives • Explain what theory of mind is. • Enumerate the many domains of social life in which theory of mind is critical. • Describe some characteristics of how autistic individuals differ in their processing of others’ minds. • Describe and explain some of the many concepts and processes that comprise the human understanding of minds. • Have a basic understanding of how ordinary people explain unintentional and intentional behavior. Introduction One of the most fascinating human capacities is the ability to perceive and interpret other people’s behavior in terms of their mental states. Having an appreciation for the workings of another person’s mind is considered a prerequisite for natural language acquisition (Baldwin & Tomasello, 1998), strategic social interaction (Zhang, Hedden, & Chia, 2012), reflexive thought (Bogdan, 2000), and moral judgment (Guglielmo, Monroe, & Malle, 2009). This capacity develops from early beginnings in the first year of life to the adult’s fast and often effortless understanding of others’ thoughts, feelings, and intentions. And though we must speculate about its evolutionary origin, we do have indications that the capacity evolved sometime in the last few million years. In this module we will focus on two questions: What is the role of understanding others’ minds in human social life? And what is known about the mental processes that underlie such understanding? For simplicity, we will label this understanding “theory of mind,” even though it is not literally a “theory” that people have about the mind; rather, it is a capacity that some scholars prefer to label “mentalizing” or “mindreading.” But we will go behind all these labels by breaking down the capacity into distinct components: the specific concepts and mental processes that underlie the human understanding of minds. First, let’s get clear about the roles that this understanding plays in social life. The Role of Theory of Mind in Social Life Put yourself in this scene: You observe two people’s movements, one behind a large wooden object, the other reaching behind him and then holding a thin object in front of the other. Without a theory of mind you would neither understand what this movement stream meant nor be able to predict either person’s likely responses. With the capacity to interpret certain physical movements in terms of mental states, perceivers can parse this complex scene into intentional actions of reaching and giving (Baird & Baldwin, 2001); they can interpret the actions as instances of offering and trading; and with an appropriate cultural script, they know that all that was going on was a customer pulling out her credit card with the intention to pay the cashier behind the register. People’s theory of mind thus frames and interprets perceptions of human behavior in a particular way—as perceptions of agents who can act intentionally and who have desires, beliefs, and other mental states that guide their actions (Perner, 1991; Wellman, 1990). 
Not only would social perceivers without a theory of mind be utterly lost in a simple payment interaction; without a theory of mind, there would probably be no such things as cashiers, credit cards, and payment (Tomasello, 2003). Plain and simple, humans need to understand minds in order to engage in the kinds of complex interactions that social communities (small and large) require. And it is these complex social interactions that have given rise, in human cultural evolution, to houses, cities, and nations; to books, money, and computers; to education, law, and science. The list of social interactions that rely deeply on theory of mind is long; here are a few highlights. • Teaching another person new actions or rules by taking into account what the learner knows or doesn’t know and how one might best make him understand. • Learning the words of a language by monitoring what other people attend to and are trying to do when they use certain words. • Figuring out our social standing by trying to guess what others think and feel about us. • Sharing experiences by telling a friend how much we liked a movie or by showing her something beautiful. • Collaborating on a task by signaling to one another that we share a goal and understand and trust the other’s intention to pursue this joint goal. Autism and Theory of Mind Another way of appreciating the enormous impact that theory of mind has on social interactions is to study what happens when the capacity is severely limited, as in the case of autism (Tager-Flusberg, 2007). In a fascinating discussion in which (high-functioning) autistic individuals talk about their difficulties with other people’s minds (Blackburn, Gottschewski, George, & L—, 2000), one person reports: “I know people’s faces down to the acne scars on the left corners of their chins . . . and how the hairs of their eyebrows curl. . . . The best I can do is start picking up bits of data during my encounter with them because there’s not much else I can do. . . . I’m not sure what kind of information about them I’m attempting to process.” What seems to be missing, as another person with autism remarks, is an “automatic processing of ‘people information.’” Some autistic people report that they perceive others “in a more analytical way.” This analytical mode of processing, however, is very tiresome and slow: “Given time I may be able to analyze someone in various ways, and seem to get good results, but may not pick up on certain aspects of an interaction until I am obsessing over it hours or days later” (Blackburn et al., 2000). So what is this magical potion that allows most people to gain quick and automatic access to other people’s minds and to recognize the meaning underlying human behavior? Scientific research has accumulated a good deal of knowledge in the past few decades, and here is a synopsis of what we know. The Mental Processes Underlying Theory of Mind The first thing to note is that “theory of mind” is not a single thing. What underlies people’s capacity to recognize and understand mental states is a whole host of components—a toolbox, as it were, for many different but related tasks in the social world (Malle, 2008). Figure 7.4.1 shows some of the most important tools, organized in a way that reflects the complexity of involved processes: from simple and automatic on the bottom to complex and deliberate on the top. This organization also reflects development—from tools that infants master within the first 6–12 months to tools they need to acquire over the next 3–5 years. 
Strikingly, the organization also reflects evolution: monkeys have available the tools on the bottom; chimpanzees have available the tools at the second level; but only humans master the remaining tools above. Let’s look at a few of them in more detail. Agents, Goals, and Intentionality The agent category allows humans to identify those moving objects in the world that can act on their own. Features that even very young children take to be indicators of being an agent include being self-propelled, having eyes, and reacting systematically to the interaction partner’s behavior, such as following gaze or imitating (Johnson, 2000; Premack, 1990). The process of recognizing goals builds on this agent category, because agents are characteristically directed toward goal objects, which means they seek out, track, and often physically contact said objects. Even before the end of their first year, infants recognize that humans reach toward an object they strive for even if that object changes location or if the path to the object contains obstacles (Gergely, Nádasdy, Csibra, & Bíró, 1995; Woodward, 1998). What it means to recognize goals, therefore, is to see the systematic and predictable relationship between a particular agent pursuing a particular object across various circumstances. Through learning to recognize the many ways by which agents pursue goals, humans learn to pick out behaviors that are intentional. The concept of intentionality is more sophisticated than the goal concept. For one thing, human perceivers recognize that some behaviors can be unintentional even if they were goal-directed—such as when you unintentionally make a fool of yourself even though you had the earnest goal of impressing your date. To act intentionally you need, aside from a goal, the right kinds of beliefs about how to achieve the goal. Moreover, the adult concept of intentionality requires that an agent have the skill to perform the intentional action in question: If I am flipping a coin, trying to make it land on heads, and if I get it to land on heads on my first try, you would not judge my action of making it land on heads as intentional—you would say it was luck (Malle & Knobe, 1997). Imitation, Synchrony, and Empathy Imitation and empathy are two other basic capacities that aid the understanding of mind from childhood on (Meltzoff & Decety, 2003). Imitation is the human tendency to carefully observe others’ behaviors and do as they do—even if it is the first time the perceiver has seen this behavior. A subtle, automatic form of imitation is called mimicry, and when people mutually mimic one another they can reach a state of synchrony. Have you ever noticed when two people in conversation take on similar gestures, body positions, even tone of voice? They “synchronize” their behaviors by way of (largely) unconscious imitation. Such synchrony can happen even at very low levels, such as negative physiological arousal (Levenson & Ruef, 1992), though the famous claim of synchrony in women’s menstrual cycles is a myth (Yang & Schank, 2006). Interestingly, people who enjoy an interaction synchronize their behaviors more, and increased synchrony (even manipulated in an experiment) makes people enjoy their interaction more (Chartrand & Bargh, 1999). Some research findings suggest that synchronizing is made possible by brain mechanisms that tightly link perceptual information with motor information (when I see you move your arm, my arm-moving program is activated). 
In monkeys, highly specialized so-called mirror neurons fire both when the monkey sees a certain action and when it performs that same action (Rizzolatti, Fogassi, & Gallese, 2001). In humans, however, things are a bit more complex. In many everyday settings, people perceive uncountable behaviors and fortunately don’t copy all of them (just consider walking in a crowd—hundreds of your mirror neurons would fire in a blaze of confusion). Human imitation and mirroring are selective, triggering primarily actions that are relevant to the perceiver’s current state or aim. Automatic empathy builds on imitation and synchrony in a clever way. If Bill is sad and expresses this emotion in his face and body, and if Elena watches or interacts with Bill, then she will subtly imitate his dejected behavior and, through well-practiced associations of certain behaviors and emotions, she will feel a little sad as well (Sonnby-Borgström, Jönsson, & Svensson, 2003). Thus, she empathizes with him—whether she wants to or not. Try it yourself. Type “sad human faces” into your Internet search engine and select images from your results. Look at 20 photos and pay careful attention to what happens to your face and to your mood. Do you feel almost a “pull” of some of your facial muscles? Do you feel a tinge of melancholy? Joint Attention, Visual Perspective Taking Going beyond the automatic, humans are capable of actively engaging with other people’s mental states, such as when they enter into situations of joint attention, like Marissa and Noah, who are each looking at an object and are both aware that each of them is looking at the object. This sounds more complicated than it really is. Just point to an object when a 3-year-old is around and notice how both the child and you check in with each other, ensuring that you are really jointly engaging with the object. Such shared engagement is critical for children to learn the meaning of objects—both their value (is it safe and rewarding to approach?) and the words that refer to them (what do you call this?). When I hold up my keyboard and show it to you, we are jointly attending to it, and if I then say it’s called “Tastatur” in German, you know that I am referring to the keyboard and not to the table on which it had been resting. Another important capacity of engagement is visual perspective taking: You are sitting at a dinner table and advise another person on where the salt is—do you consider that it is to her left even though it is to your right? When we overcome our egocentric perspective this way, we imaginatively adopt the other person’s spatial viewpoint and determine how the world looks from their perspective. In fact, there is evidence that we mentally “rotate” toward the other’s spatial location, because the farther away the person sits (e.g., 60, 90, or 120 degrees away from you) the longer it takes to adopt the person’s perspective (Michelon & Zacks, 2006). Projection, Simulation (and the Specter of Egocentrism) When imagining what it might be like to be in another person’s psychological position, humans have to go beyond mental rotation. One tool to understand the other’s thoughts or feelings is simulation—using one’s own mental states as a model for others’ mental states: “What would it feel like sitting across from the stern interrogator? I would feel scared . . .”
An even simpler form of such modeling is the assumption that the other thinks, feels, wants what we do—which has been called the “like-me” assumption (Meltzoff, 2007) or the inclination toward social projection (Krueger, 2007). In a sense, this is an absence of perspective taking, because we assume that the other’s perspective equals our own. This can be an effective strategy if we share with the other person the same environment, background, knowledge, and goals, but it gets us into trouble when this presumed common ground is in reality lacking. Let’s say you know that Brianna doesn’t like Fred’s new curtains, but you hear her exclaim to Fred, “These are beautiful!” Now you have to predict whether Fred can figure out that Brianna was being sarcastic. It turns out that you will have a hard time suppressing your own knowledge in this case and you may overestimate how easy it is for Fred to spot the sarcasm (Keysar, 1994). Similarly, you will overestimate how visible that pimple is on your chin—even though it feels big and ugly to you, in reality very few people will ever notice it (Gilovich & Savitsky, 1999). So the next time you spot a magnificent bird high up in a tree and you get impatient with your friend who just can’t see what is clearly obvious, remember: it’s obvious to you. What all these examples show is that people use their own current state—of knowledge, concern, or perception—to grasp other people’s mental states. And though they often do so correctly, they also get things wrong at times. This is why couples counselors, political advisors, and Buddhists agree on at least one thing: we all need to try harder to recognize our egocentrism and actively take other people’s perspective—that is, grasp their actual mental states, even if (or especially when) they are different from our own. Explicit Mental State Inference The ability to truly take another person’s perspective requires that we separate what we want, feel, and know from what the other person is likely to want, feel, and know. To do so, humans make use of a variety of information. For one thing, they rely on stored knowledge—both general knowledge (“Everybody would be nervous when threatened by a man with a gun”) and agent-specific knowledge (“Joe was fearless because he was trained in martial arts”). For another, they critically rely on perceived facts of the concrete situation—such as what is happening to the agent, the agent’s facial expressions and behaviors, and what the person saw or didn’t see. This capacity for integrating multiple lines of information into a mental-state inference develops steadily within the first few years of life, and this process has led to a substantial body of research (Wellman, Cross, & Watson, 2001). The research began with a clever experiment by Wimmer and Perner (1983), who tested whether children can pass a false-belief test (see Figure 7.4.2). The child is shown a picture story of Sally, who puts her ball in a basket and leaves the room. While Sally is out of the room, Anne comes along and takes the ball from the basket and puts it inside a box. The child is then asked where Sally thinks the ball is located when she comes back to the room. Is she going to look first in the box or in the basket? The right answer is that she will look in the basket, because that’s where she put it and thinks it is; but we have to infer this false belief against our own better knowledge that the ball is in the box.
This is very difficult for children before the age of 4, and it usually takes some cognitive effort in adults (Epley, Morewedge, & Keysar, 2004). The challenge is clear: People are good at automatically relating to other people, using their own minds as a fitting model for others’ minds. But people need to recognize when to step out of their own perspective and truly represent the other person’s perspective—which may harbor very different thoughts, feelings, and intentions. Tools in Summary We have seen that the human understanding of other minds relies on many tools. People process such information as motion, faces, and gestures and categorize it into such concepts as agent, intentional action, or fear. They rely on relatively automatic psychological processes, such as imitation, joint attention, and projection. And they rely on more effortful processes, such as simulation and mental-state inference. These processes all link behavior that humans observe to mental states that humans infer. If we call this stunning capacity a “theory,” it is a theory of mind and behavior. Folk Explanations of Behavior Nowhere is this mind–behavior link clearer than in people’s explanations of behavior—when they try to understand why somebody acted or felt a certain way. People have a strong need to answer such “why” questions, from the trivial to the significant: why the neighbor’s teenage daughter is wearing a short skirt in the middle of winter; why the policeman is suddenly so friendly; why the murderer killed three people. The need to explain this last behavior seems puzzling, because typical benefits of explanation are absent: We do not need to predict or control the criminal’s behavior since we will never have anything to do with him. Nonetheless, we have an insatiable desire to understand, to find meaning in this person’s behavior—and in people’s behavior generally. Older theories of how people explain and understand behavior suggested that people merely identify causes of the behavior (e.g., Kelley, 1967). That is true for most unintentional behaviors—tripping, having a headache, calling someone by the wrong name. But to explain intentional behaviors, people use a more sophisticated framework of interpretation, which follows directly from their concept of intentionality and the associated mental states they infer (Malle, 2004). We have already mentioned the complexity of people’s concept of intentionality; here it is in full (Malle & Knobe, 1997): For an agent to perform a behavior intentionally, she must have a desire for an outcome (what we had called a goal), beliefs about how a particular action leads to the outcome, and an intention to perform that action; if the agent then actually performs the action with awareness and skill, people take it to be an intentional action. To explain why the agent performed the action, humans try to make the inverse inference of what desire and what beliefs the agent had that led her to so act, and these inferred desires and beliefs are the reasons for which she acted. What was her reason for wearing a short skirt in the winter? “She wanted to annoy her mother.” What was the policeman’s reason for suddenly being so nice? “He thought he was speaking with an influential politician.” What was his reason for killing three people? In fact, with such extreme actions, people are often at a loss for an answer. 
If they do offer an answer, they frequently retreat to “causal history explanations” (Malle, 1999), which step outside the agent’s own reasoning and refer instead to more general background facts—for example, that he was mentally ill or a member of an extremist group. But people clearly prefer to explain others’ actions by referring to their beliefs and desires, the specific reasons for which they acted. By relying on a theory of mind, explanations of behavior make meaningful what would otherwise be inexplicable motions—just like in our initial example of two persons passing some object between them. We recognize that the customer wanted to pay and that’s why she passed her credit card to the cashier, who in turn knew that he was given a credit card and swiped it. It all seems perfectly clear, almost trivial to us. But that is only because humans have a theory of mind and use it to retrieve the relevant knowledge, simulate the other people’s perspective, infer beliefs and desires, and explain what a given action means. Humans do this effortlessly and often accurately. Moreover, they do it within seconds. What’s so special about that? Well, it takes years for a child to develop this capacity, and it took our species a few million years to evolve it. That’s pretty special. Outside Resources Blog: On the debate about menstrual synchrony http://blogs.scientificamerican.com/...ual-synchrony/ Blog: On the debates over mirror neurons http://blogs.scientificamerican.com/...irror-neurons/ Book: First and last chapters of Zunshine, L. (2006). Why we read fiction: Theory of mind and the novel. Columbus, OH: Ohio State University Press. ohiostatepress.org/Books/Book PDFs/Zunshine Why.pdf Movie: A movie that portrays the social difficulties of a person with autism: Adam (Fox Searchlight Pictures, 2009) http://www.imdb.com/title/tt1185836/?ref_=fn_tt_tt_1 ToM and Autism TEDx Talks https://www.ted.com/playlists/153/the_autism_spectrum Video: TED talk on autism http://www.ted.com/talks/temple_gran..._of_minds.html Video: TED talk on empathy http://blog.ted.com/2011/04/18/a-rad...ds-at-ted-com/ Video: TED talk on theory of mind and moral judgment http://www.ted.com/talks/rebecca_sax...judgments.html Video: Test used by Baron-Cohen (prior to the core study) to investigate whether autistic children had a theory of mind by using a false-belief task. Video: Theory of mind development Discussion Questions 1. Recall a situation in which you tried to infer what a person was thinking or feeling but you just couldn’t figure it out, and recall another situation in which you tried the same but succeeded. Which tools were you able to use in the successful case that you didn’t or couldn’t use in the failed case? 2. Mindfulness training sharpens awareness of one’s own mental states. Look up a few such training programs (easily found online) and develop a similar training program to improve awareness of other people’s minds. 3. In the near future we will have robots that closely interact with people. Which theory of mind tools should a robot definitely have? Which ones are less important? Why? 4. Humans assume that everybody has the capacity to make choices and perform intentional actions. But in a sense, a choice is just a series of brain states, caused by previous brain states and states of the world, all governed by the physical laws of the universe. Is the concept of choice an illusion? 5. The capacity to understand others’ minds is intimately related to another unique human capacity: language.
How might these two capacities have evolved? Together? One before the other? Which one? Vocabulary Automatic empathy A social perceiver unwittingly taking on the internal state of another person, usually because of mimicking the person’s expressive behavior and thereby feeling the expressed emotion. False-belief test An experimental procedure that assesses whether a perceiver recognizes that another person has a false belief—a belief that contradicts reality. Folk explanations of behavior People’s natural explanations for why somebody did something, felt something, etc. (differing substantially for unintentional and intentional behaviors). Intention An agent’s mental state of committing to perform an action that the agent believes will bring about a desired outcome. Intentionality The quality of an agent’s performing a behavior intentionally—that is, with skill and awareness and executing an intention (which is in turn based on a desire and relevant beliefs). Joint attention Two people attending to the same object and being aware that they both are attending to it. Mimicry Copying others’ behavior, usually without awareness. Mirror neurons Neurons identified in monkey brains that fire both when the monkey performs a certain action and when it perceives another agent performing that action. Projection A social perceiver’s assumption that the other person wants, knows, or feels the same as the perceiver wants, knows, or feels. Simulation The process of representing the other person’s mental state. Synchrony Two people displaying the same behaviors or having the same internal states (typically because of mutual mimicry). Theory of mind The human capacity to understand minds, a capacity that is made up of a collection of concepts (e.g., agent, intentionality) and processes (e.g., goal detection, imitation, empathy, perspective taking). Visual perspective taking Perceiving something from another person’s spatial vantage point; the term is also used more generally for effortful mental state inference (trying to infer the other person’s thoughts, desires, emotions).
By Robert Biswas-Diener Portland State University Intelligence is among the oldest and longest studied topics in all of psychology. The development of assessments to measure this concept is at the core of the development of psychological science itself. This module introduces key historical figures, major theories of intelligence, and common assessment strategies related to intelligence. This module will also discuss controversies related to the study of group differences in intelligence. learning objectives • List at least two common strategies for measuring intelligence. • Name at least one “type” of intelligence. • Define intelligence in simple terms. • Explain the controversy relating to differences in intelligence between groups. Introduction Every year hundreds of grade school students converge on Washington, D.C., for the annual Scripps National Spelling Bee. The “bee” is an elite event in which children as young as 8 square off to spell words like “cymotrichous” and “appoggiatura.” Most people who watch the bee think of these kids as being “smart” and you likely agree with this description. What makes a person intelligent? Is it heredity (two of the 2014 contestants in the bee have siblings who have previously won) (National Spelling Bee, 2014a)? Is it interest (the most frequently listed favorite subject among spelling bee competitors is math) (NSB, 2014b)? In this module we will cover these and other fascinating aspects of intelligence. By the end of the module you should be able to define intelligence and discuss some common strategies for measuring intelligence. In addition, we will tackle the politically thorny issue of whether there are differences in intelligence between groups such as men and women. Defining and Measuring Intelligence When you think of “smart people” you likely have an intuitive sense of the qualities that make them intelligent. Maybe you think they have a good memory, or that they can think quickly, or that they simply know a whole lot of information. Indeed, people who exhibit such qualities appear very intelligent. That said, it seems that intelligence must be more than simply knowing facts and being able to remember them. One point in favor of this argument is the idea of animal intelligence. It will come as no surprise to you that a dog, which can learn commands and tricks, seems smarter than a snake that cannot. In fact, researchers and lay people generally agree with one another that primates—monkeys and apes (including humans)—are among the most intelligent animals. Apes such as chimpanzees are capable of complex problem solving and sophisticated communication (Kohler, 1924). Scientists point to the social nature of primates as one evolutionary source of their intelligence. Primates live together in troops or family groups and are, therefore, highly social creatures. As such, primates tend to have brains that are better developed for communication and long-term thinking than most other animals. For instance, the complex social environment has led primates to develop deception, altruism, numerical concepts, and “theory of mind” (a sense of the self as a unique individual separate from others in the group; Gallup, 1982; Hauser, MacNeilage & Ware, 1996). [Also see Noba module Theory of Mind noba.to/a8wpytg3] The question of what constitutes human intelligence is one of the oldest inquiries in psychology. When we talk about intelligence we typically mean intellectual ability.
This broadly encompasses the ability to learn, remember, and use new information, to solve problems, and to adapt to novel situations. An early scholar of intelligence, Charles Spearman, proposed the idea that intelligence was one thing, a “general factor” sometimes known as simply “g.” He based this conclusion on the observation that people who perform well in one intellectual area such as verbal ability also tend to perform well in other areas such as logic and reasoning (Spearman, 1904). A contemporary of Spearman’s named Francis Galton, himself a cousin of Charles Darwin, was among those who pioneered psychological measurement (Hunt, 2009). For three pence Galton would measure various physical characteristics such as grip strength but also some psychological attributes such as the ability to judge distance or discriminate between colors. This is an example of one of the earliest systematic measures of individual ability. Galton was particularly interested in intelligence, which he thought was heritable in much the same way that height and eye color are. He conceived of several rudimentary methods for assessing whether his hypothesis was true. For example, he carefully tracked the family tree of the top-scoring Cambridge students over the previous 40 years. Although he found that specific families disproportionately produced top scholars, intellectual achievement could still be the product of economic status, family culture, or other non-genetic factors. Galton was also, possibly, the first to popularize the idea that the heritability of psychological traits could be studied by looking at identical and fraternal twins. Although his methods were crude by modern standards, Galton established intelligence as a variable that could be measured (Hunt, 2009). The person best known for formally pioneering the measurement of intellectual ability is Alfred Binet. Like Galton, Binet was fascinated by individual differences in intelligence. For instance, he studied chess players who played blindfolded and saw that some of them had the ability to continue playing using only their memory to keep the many positions of the pieces in mind (Binet, 1894). Binet was particularly interested in the development of intelligence, a fascination that led him to observe children carefully in the classroom setting. Along with his colleague Theodore Simon, Binet created a test of children’s intellectual capacity. They created individual test items that should be answerable by children of given ages. For instance, a child who is three should be able to point to her mouth and eyes, a child who is nine should be able to name the months of the year in order, and a twelve-year-old ought to be able to name sixty words in three minutes. Their assessment became the first “IQ test.” “IQ” or “intelligence quotient” is a name given to the score of the Binet-Simon test. The score is derived by dividing a child’s mental age (the score from the test) by their chronological age and multiplying the result by 100 to create an overall quotient. These days, the phrase “IQ” does not apply specifically to the Binet-Simon test and is used to generally denote intelligence or a score on any intelligence test. In the early 1900s the Binet-Simon test was adapted by a Stanford professor named Lewis Terman to create what is, perhaps, the most famous intelligence test in the world, the Stanford-Binet (Terman, 1916). The major advantage of this new test was that it was standardized.
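To make the arithmetic concrete, here is a minimal sketch in Python of the two scoring schemes just described. The ratio formula follows the description above; the percentile step assumes the modern convention of norming scores to a mean of 100 with a standard deviation of 15 (the convention of tests such as the WAIS; the value 15 is an assumption, not something stated in this module).

from statistics import NormalDist

def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Classic Binet/Terman ratio IQ: (mental age / chronological age) x 100."""
    return mental_age / chronological_age * 100

# A 10-year-old who answers items typical of a 12-year-old:
print(ratio_iq(12.0, 10.0))  # 120.0

# Modern "deviation IQ" instead places a score on a normal distribution
# normed to mean 100; the standard deviation of 15 assumed here is the
# convention of tests such as the WAIS.
iq_scale = NormalDist(mu=100, sigma=15)
print(round(iq_scale.cdf(130) * 100, 1))  # a score of 130 sits near the 97.7th percentile

Norming scores this way is what allows a single number to be compared against the population at large, which is the point of the standardization described next.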
Based on a large sample of children, Terman was able to plot the scores in a normal distribution, shaped like a “bell curve” (see Fig. 7.5.1). To understand a normal distribution think about the height of people. Most people are average in height with relatively fewer being tall or short, and fewer still being extremely tall or extremely short. Terman (1916) laid out intelligence scores in exactly the same way, allowing for easy and reliable categorizations and comparisons between individuals. Looking at another modern intelligence test—the Wechsler Adult Intelligence Scale (WAIS)—can provide clues to a definition of intelligence itself. Motivated by several criticisms of the Stanford-Binet test, psychologist David Wechsler sought to create a superior measure of intelligence. He was critical of the way that the Stanford-Binet relied so heavily on verbal ability and was also suspicious of using a single score to capture all of intelligence. To address these issues Wechsler created a test that tapped a wide range of intellectual abilities. This understanding of intelligence—that it is made up of a pool of specific abilities—is a notable departure from Spearman’s concept of general intelligence. The WAIS assesses people's ability to remember, compute, understand language, reason well, and process information quickly (Wechsler, 1955). One interesting by-product of measuring intelligence for so many years is that we can chart changes over time. It might seem strange to you that intelligence can change over the decades, but that appears to have happened over the 80 or so years we have been measuring it. Here’s how we know: IQ tests have an average score of 100. When new waves of people are asked to take older tests, they tend to outperform the original sample on which the test was normed years ago. This gain is known as the “Flynn Effect,” named after James Flynn, the researcher who first identified it (Flynn, 1987). Several hypotheses have been put forth to explain the Flynn Effect including better nutrition (healthier brains!), greater familiarity with testing in general, and more exposure to visual stimuli. Today, there is no perfect agreement among psychological researchers with regard to the causes of increases in average scores on intelligence tests. Perhaps if you choose a career in psychology you will be the one to discover the answer! Types of Intelligence David Wechsler’s approach to testing intellectual ability was based on the fundamental idea that there are, in essence, many aspects to intelligence. Other scholars have echoed this idea by going so far as to suggest that there are actually even different types of intelligence. You likely have heard distinctions made between “street smarts” and “book learning.” The former refers to practical wisdom accumulated through experience while the latter indicates formal education. A person high in street smarts might have a superior ability to catch a person in a lie, to persuade others, or to think quickly under pressure. A person high in book learning, by contrast, might have a large vocabulary and be able to remember a large number of references to classic novels. Although psychologists don’t use street smarts or book smarts as professional terms they do believe that intelligence comes in different types. There are many ways to parse apart the concept of intelligence.
Many scholars believe that Carroll’s (1993) review of more than 400 data sets provides the best currently existing single source for organizing various concepts related to intelligence. Carroll divided intelligence into three levels, or strata, descending from the most abstract down to the most specific (see Fig. 7.5.2). To understand this way of categorizing simply think of a “car.” Car is a general word that denotes all types of motorized vehicles. At the more specific level under “car” might be various types of cars such as sedans, sports cars, SUVs, pick-up trucks, station wagons, and so forth. More specific still would be certain models of each such as a Honda Civic or Ferrari Enzo. In the same manner, Carroll called the highest level (stratum III) the general intelligence factor “g.” Under this were more specific stratum II categories such as fluid intelligence, visual perception, and processing speed. Each of these, in turn, can be sub-divided into very specific components such as spatial scanning, reaction time, and word fluency. Thinking of intelligence as Carroll (1993) does, as a collection of specific mental abilities, has helped researchers conceptualize this topic in new ways. For example, Horn and Cattell (1966) distinguish between “fluid” and “crystalized” intelligence, both of which show up on stratum II of Carroll’s model. Fluid intelligence is the ability to “think on your feet;” that is, to solve problems. Crystalized intelligence, on the other hand, is the ability to use language, skills and experience to address problems. The former is associated more with youth while the latter increases with age. You may have noticed the way in which younger people can adapt to new situations and use trial and error to quickly figure out solutions. By contrast, older people tend to rely on their relatively superior store of knowledge to solve problems. Harvard professor Howard Gardner is another figure in psychology who is well-known for championing the notion that there are different types of intelligence. Gardner’s theory is, appropriately, called “multiple intelligences.” Gardner’s theory is based on the idea that people process information through different “channels” and these are relatively independent of one another. He has identified 8 common intelligences including 1) logic-math, 2) visual-spatial, 3) music-rhythm, 4) verbal-linguistic, 5) bodily-kinesthetic, 6) interpersonal, 7) intrapersonal, and 8) naturalistic (Gardner, 1985). Many people are attracted to Gardner’s theory because it suggests that people each learn in unique ways. There are now many Gardner-influenced schools in the world. Another type of intelligence is emotional intelligence. Unlike traditional models of intelligence that emphasize cognition (thinking) the idea of emotional intelligence emphasizes the experience and expression of emotion. Some researchers argue that emotional intelligence is a set of skills in which an individual can accurately understand the emotions of others, can identify and label their own emotions, and can use emotions (Mayer & Salovey, 1997). Other researchers believe that emotional intelligence is a mixture of abilities, such as stress management, and personality, such as a person’s predisposition for certain moods (Bar-On, 2006). Regardless of the specific definition of emotional intelligence, studies have shown a link between this concept and job performance (Lopes, Grewal, Kadis, Gall, & Salovey, 2006).
In fact, emotional intelligence is similar to more traditional notions of cognitive intelligence with regards to workplace benefits. Schmidt and Hunter (1998), for example, reviewed research on intelligence in the workplace context and showed that intelligence is the single best predictor of doing well in job training programs and of learning on the job. They also report that general intelligence is moderately correlated with performance in all types of jobs, but especially with managerial and complex, technical jobs. There is one last point that is important to bear in mind about intelligence. It turns out that the way an individual thinks about his or her own intelligence is also important because it predicts performance. Researcher Carol Dweck has made a career out of looking at the differences between high-IQ children who perform well and those who do not, so-called “underachievers.” Among her most interesting findings is that it is not gender or social class that sets apart the high and low performers. Instead, it is their mindset. The children who believe that their abilities in general—and their intelligence specifically—are fixed traits tend to underperform. By contrast, kids who believe that intelligence is changeable and evolving tend to handle failure better and perform better (Dweck, 1986). Dweck refers to this as a person’s “mindset,” and having a growth mindset appears to be healthier. Correlates of Intelligence The research on mindset is interesting but there can also be a temptation to interpret it as suggesting that every human has an unlimited potential for intelligence and that becoming smarter is only a matter of positive thinking. There is some evidence that genetics is an important factor in the intelligence equation. For instance, a number of studies on genetics in adults have yielded the result that intelligence is largely, but not totally, inherited (Bouchard, 2004). Having a healthy attitude about the nature of smarts and working hard can both definitely help intellectual performance but it also helps to have a genetic leaning toward intelligence. Carol Dweck’s research on the mindset of children also brings one of the most interesting and controversial issues surrounding intelligence research to the fore: group differences. From the very beginning of the study of intelligence researchers have wondered about differences between groups of people such as men and women. With regards to potential differences between the sexes some people have noticed that women are under-represented in certain fields. In 1976, for example, women comprised just 1% of all faculty members in engineering (Ceci, Williams & Barnett, 2009). Even today women make up between 3% and 15% of all faculty in math-intensive fields at the 50 top universities. This phenomenon could be explained in many ways: it might be the result of inequalities in the educational system, it might be due to differences in socialization wherein young girls are encouraged to develop other interests, it might be the result of the fact that women are—on average—responsible for a larger portion of childcare obligations and therefore make different types of professional decisions, or it might be due to innate differences between these groups, to name just a few possibilities. The possibility of innate differences is the most controversial because many people see it as either the product of or the foundation for sexism.
In today’s political landscape it is easy to see that asking certain questions such as “are men smarter than women?” would be inflammatory. In a comprehensive review of research on intellectual abilities and sex, Ceci and colleagues (2009) argue against the hypothesis that biological and genetic differences account for much of the sex differences in intellectual ability. Instead, they believe that a complex web of influences ranging from societal expectations to test taking strategies to individual interests account for many of the sex differences found in math and similar intellectual abilities. A more interesting question, and perhaps a more sensitive one, might be to inquire in which ways men and women might differ in intellectual ability, if at all. That is, researchers should not seek to prove that one group or another is better but might examine the ways that they might differ and offer explanations for any differences that are found. Researchers have investigated sex differences in intellectual ability. In a review of the research literature, Halpern (1997) found that women appear, on average, superior to men on measures of fine motor skill, acquired knowledge, reading comprehension, decoding non-verbal expression, and generally have higher grades in school. Men, by contrast, appear, on average, superior to women on measures of fluid reasoning related to math and science, perceptual tasks that involve moving objects, and tasks that require transformations in working memory such as mental rotations of physical spaces. Halpern also notes that men are disproportionately represented on the low end of cognitive functioning including in mental retardation, dyslexia, and attention deficit disorders (Halpern, 1997). Other researchers have examined various explanatory hypotheses for why sex differences in intellectual ability occur. Some studies have provided mixed evidence for genetic factors while others point to evidence for social factors (Neisser et al., 1996; Nisbett et al., 2012). One interesting phenomenon that has received research scrutiny is the idea of stereotype threat. Stereotype threat is the idea that mental access to a particular stereotype can have real-world impact on a member of the stereotyped group. In one study (Spencer, Steele, & Quinn, 1999), for example, women who were told, just before taking a math test, that women tend to fare poorly on math exams actually performed worse than a control group who did not hear the stereotype. One possible antidote to stereotype threat, at least in the case of women, is to make a self-affirmation (such as listing positive personal qualities) before the threat occurs. In one study, for instance, Martens and her colleagues (2006) had women write about personal qualities that they valued before taking a math test. The affirmation largely erased the effect of stereotype threat by improving math scores for women relative to a control group, but similar affirmations had little effect for men (Martens, Johns, Greenberg, & Schimel, 2006). These types of controversies compel many lay people to wonder if there might be a problem with intelligence measures. It is natural to wonder if they are somehow biased against certain groups. Psychologists typically answer such questions by pointing out that bias in the testing sense of the word is different than how people use the word in everyday speech. Common use of bias denotes a prejudice based on group membership.
Scientific bias, on the other hand, is related to the psychometric properties of the test such as validity and reliability. Validity is the idea that an assessment measures what it claims to measure and that it can predict future behaviors or performance. In this technical sense, intelligence tests are not biased because they are fairly accurate measures and predictors. There are, however, real biases, prejudices, and inequalities in the social world that might benefit an advantaged group while hindering disadvantaged others. Conclusion Although you might not be able to spell “esquamulose” or “staphylococci” (indeed, you might not even know what they mean), you don’t need to count yourself out in the intelligence department. Now that we have examined intelligence in depth we can return to our intuitive view of those students who compete in the National Spelling Bee. Are they smart? Certainly, they seem to have high verbal intelligence. There is also the possibility that they benefit from either a genetic boost in intelligence, a supportive social environment, or both. Even watching them spell difficult words, there is much we do not know about them. We cannot tell, for instance, how emotionally intelligent they are or how they might use bodily-kinesthetic intelligence. This highlights the fact that intelligence is a complicated issue. Fortunately, psychologists continue to research this fascinating topic and their studies continue to yield new insights. Outside Resources Blog: Dr. Jonathan Wai has an excellent blog on Psychology Today discussing many of the most interesting issues related to intelligence. http://www.psychologytoday.com/blog/...-next-einstein Video: Hank Green gives a fun and interesting overview of the concept of intelligence in this installment of the Crash Course series. Discussion Questions 1. Do you think that people get smarter as they get older? In what ways might people gain or lose intellectual abilities as they age? 2. When you meet someone who strikes you as being smart what types of cues or information do you typically attend to in order to arrive at this judgment? 3. How do you think socio-economic status affects an individual taking an intellectual abilities test? 4. Should psychologists be asking about group differences in intellectual ability? What do you think? 5. Which of Howard Gardner’s 8 types of intelligence do you think describes the way you learn best? Vocabulary G Short for “general factor,” often used as a synonym for intelligence itself. Intelligence An individual’s cognitive capability. This includes the ability to acquire, process, recall and apply information. IQ Short for “intelligence quotient.” This is a score, typically obtained from a widely used measure of intelligence, that is meant to rank a person’s intellectual ability against that of others. Norm Assessments are given to a representative sample of a population to determine the range of scores for that population. These “norms” are then used to place an individual who takes that assessment on a range of scores in which he or she is compared to the population at large. Standardize Assessments that are given in the exact same manner to all people. With regard to intelligence tests, standardized scores are individual scores that are computed to be referenced against normative scores for a population (see “norm”).
Stereotype threat The phenomenon in which people are concerned that they will conform to a stereotype or that their performance does conform to that stereotype, especially in instances in which the stereotype is brought to their conscious awareness.
By Yoshihisa Kashima University of Melbourne Humans have the capacity to use complex language, far more than any other species on Earth. We cooperate with each other to use language for communication; language is often used to communicate about and even construct and maintain our social world. Language use and human sociality are inseparable parts of Homo sapiens as a biological species. learning objectives • Define basic terms used to describe language use. • Describe the process by which people can share new information by using language. • Characterize the typical content of conversation and its social implications. • Characterize psychological consequences of language use and give an example. Introduction Imagine two men in their thirties, Adam and Ben, walking down a corridor. Judging from their clothing, they are young businessmen, taking a break from work. They then have this exchange. Adam: “You know, Gary bought a ring.” Ben: “Oh yeah? For Mary, isn’t it?” (Adam nods.) If you are watching this scene and hearing their conversation, what can you guess from this? First of all, you’d guess that Gary bought a ring for Mary, whoever Gary and Mary might be. Perhaps you would infer that Gary is getting married to Mary. What else can you guess? Perhaps that Adam and Ben are fairly close colleagues, and both of them know Gary and Mary reasonably well. In other words, you can guess the social relationships surrounding the people who are engaging in the conversation and the people whom they are talking about. Language is used in our everyday lives. If psychology is a science of behavior, scientific investigation of language use must be one of the most central topics—this is because language use is ubiquitous. Every human group has a language; human infants (except those who have unfortunate disabilities) learn at least one language without being taught explicitly. Even when children who don’t have much language to begin with are brought together, they can begin to develop and use their own language. There is at least one known instance where children who had had little language were brought together and developed their own language spontaneously with minimum input from adults. In Nicaragua in the 1980s, deaf children who had been raised separately in various locations were brought together in schools for the first time. Teachers tried to teach them Spanish with little success. However, they began to notice that the children were using their hands and gestures, apparently to communicate with each other. Linguists were brought in to find out what was happening—it turned out the children had developed their own sign language by themselves. That was the birth of a new language, Nicaraguan Sign Language (Kegl, Senghas, & Coppola, 1999). Language is ubiquitous, and we humans are born to use it. How Do We Use Language? If language is so ubiquitous, how do we actually use it? To be sure, some of us use it to write diaries and poetry, but the primary form of language use is interpersonal. That’s how we learn language, and that’s how we use it. Just like Adam and Ben, we exchange words and utterances to communicate with each other. Let’s consider the simplest case of two people, Adam and Ben, talking with each other. According to Clark (1996), in order for them to carry out a conversation, they must keep track of common ground. Common ground is a set of knowledge that the speaker and listener share, and that they think, assume, or otherwise take for granted that they share.
So, when Adam says, “Gary bought a ring,” he takes for granted that Ben knows the meaning of the words he is using, who Gary is, and what buying a ring means. When Ben says, “For Mary, isn’t it?” he takes for granted that Adam knows the meaning of these words, who Mary is, and what buying a ring for someone means. All these are part of their common ground. Note that, when Adam presents the information about Gary’s purchase of a ring, Ben responds by presenting his inference about who the recipient of the ring might be, namely, Mary. In conversational terms, Ben’s utterance acts as evidence for his comprehension of Adam’s utterance—“Yes, I understood that Gary bought a ring”—and Adam’s nod acts as evidence that he now has understood what Ben has said too—“Yes, I understood that you understood that Gary has bought a ring for Mary.” This new information is now added to the initial common ground. Thus, the pair of utterances by Adam and Ben (called an adjacency pair) together with Adam’s affirmative nod jointly completes one proposition, “Gary bought a ring for Mary,” and adds this information to their common ground. This way, common ground changes as we talk, gathering new information that we agree on and have evidence that we share. It evolves as people take turns to assume the roles of speaker and listener, and actively engage in the exchange of meaning. Common ground helps people coordinate their language use. For instance, when a speaker says something to a listener, he or she takes into account their common ground, that is, what the speaker thinks the listener knows. Adam said what he did because he knew Ben would know who Gary was. He’d have said, “A friend of mine is getting married,” to another colleague who wouldn’t know Gary. This is called audience design (Fussell & Krauss, 1992); speakers design their utterances for their audiences by taking into account the audiences’ knowledge. If their audiences are seen to be knowledgeable about an object (such as Ben about Gary), they tend to use a brief label of the object (i.e., Gary); for a less knowledgeable audience, they use more descriptive words (e.g., “a friend of mine”) to help the audience understand their utterances (Box 1). So, language use is a cooperative activity, but how do we coordinate our language use in a conversational setting? To be sure, we have conversations in small groups. The number of people engaging in a conversation at a time is rarely more than four. By some counts (e.g., Dunbar, Duncan, & Nettle, 1995; James, 1953), more than 90 percent of conversations happen in a group of four individuals or fewer. Certainly, coordinating conversation among four is not as difficult as coordinating conversation among 10. But, even among only four people, if you think about it, everyday conversation is an almost miraculous achievement. We typically have a conversation by rapidly exchanging words and utterances in real time in a noisy environment. Think about your conversation at home in the morning, at a bus stop, in a shopping mall. How can we keep track of our common ground under such circumstances? Pickering and Garrod (2004) argue that we achieve our conversational coordination by virtue of our ability to interactively align each other’s actions at different levels of language use: lexicon (i.e., words and expressions), syntax (i.e., grammatical rules for arranging words and expressions together), as well as speech rate and accent.
For instance, when one person uses a certain expression to refer to an object in a conversation, others tend to use the same expression (e.g., Clark & Wilkes-Gibbs, 1986). Furthermore, if someone says “the cowboy offered a banana to the robber,” rather than “the cowboy offered the robber a banana,” others are more likely to use the same syntactic structure (e.g., “the girl gave a book to the boy” rather than “the girl gave the boy a book”) even if different words are involved (Branigan, Pickering, & Cleland, 2000). Finally, people in conversation tend to exhibit similar accents and rates of speech, and they are often associated with people’s social identity (Giles, Coupland, & Coupland, 1991). So, if you have lived in different places where people have somewhat different accents (e.g., United States and United Kingdom), you might have noticed that you speak with Americans with an American accent, but speak with Britons with a British accent. Pickering and Garrod (2004) suggest that these interpersonal alignments at different levels of language use can activate similar situation models in the minds of those who are engaged in a conversation. Situation models are representations about the topic of a conversation. So, if you are talking about Gary and Mary with your friends, you might have a situation model of Gary giving Mary a ring in your mind. Pickering and Garrod’s theory is that as you describe this situation using language, others in the conversation begin to use similar words and grammar, and many other aspects of language use converge. As you all do so, similar situation models begin to be built in everyone’s mind through the mechanism known as priming. Priming occurs when your thinking about one concept (e.g., “ring”) reminds you about other related concepts (e.g., “marriage”, “wedding ceremony”). So, if everyone in the conversation knows about Gary, Mary, and the usual course of events associated with a ring—engagement, wedding, marriage, etc.— everyone is likely to construct a shared situation model about Gary and Mary. Thus, making use of our highly developed interpersonal ability to imitate (i.e., executing the same action as another person) and cognitive ability to infer (i.e., one idea leading to other ideas), we humans coordinate our common ground, share situation models, and communicate with each other. What Do We Talk About? What are humans doing when we are talking? Surely, we can communicate about mundane things such as what to have for dinner, but also more complex and abstract things such as the meaning of life and death, liberty, equality, and fraternity, and many other philosophical thoughts. Well, when naturally occurring conversations were actually observed (Dunbar, Marriott, & Duncan, 1997), a staggering 60%–70% of everyday conversation, for both men and women, turned out to be gossip—people talk about themselves and others whom they know. Just like Adam and Ben, more often than not, people use language to communicate about their social world. Gossip may sound trivial and seem to belittle our noble ability for language—surely one of the most remarkable human abilities of all that distinguish us from other animals. Au contraire, some have argued that gossip—activities to think and communicate about our social world—is one of the most critical uses to which language has been put. Dunbar (1996) conjectured that gossiping is the human equivalent of grooming, monkeys and primates attending and tending to each other by cleaning each other’s fur. 
He argues that it is an act of socializing, signaling the importance of one’s partner. Furthermore, by gossiping, humans can communicate and share their representations about their social world—who their friends and enemies are, what the right thing to do is under what circumstances, and so on. In so doing, they can regulate their social world—making more friends and enlarging one’s own group (often called the ingroup, the group to which one belongs) against other groups (outgroups) that are more likely to be one’s enemies. Dunbar has argued that it is these social effects that have given humans an evolutionary advantage and larger brains, which, in turn, help humans to think more complex and abstract thoughts and, more important, maintain larger ingroups. Dunbar (1993) estimated an equation that predicts average group size of nonhuman primate genera from their average neocortex size (the part of the brain that supports higher order cognition). In line with his social brain hypothesis, Dunbar showed that those primate genera that have larger brains tend to live in larger groups. Furthermore, using the same equation, he was able to estimate the group size that human brains can support, which turned out to be about 150—approximately the size of modern hunter-gatherer communities. Dunbar’s argument is that language, brain, and human group living have co-evolved—language and human sociality are inseparable. Dunbar’s hypothesis is controversial. Nonetheless, whether or not he is right, our everyday language use often ends up maintaining the existing structure of intergroup relationships. Language use can have implications for how we construe our social world. For one thing, there are subtle cues that people use to convey the extent to which someone’s action is just a special case in a particular context or a pattern that occurs across many contexts and more like a character trait of the person. According to Semin and Fiedler (1988), someone’s action can be described by an action verb that describes a concrete action (e.g., he runs), a state verb that describes the actor’s psychological state (e.g., he likes running), an adjective that describes the actor’s personality (e.g., he is athletic), or a noun that describes the actor’s role (e.g., he is an athlete). Depending on whether a verb or an adjective (or noun) is used, speakers can convey the permanency and stability of an actor’s tendency to act in a certain way—verbs convey particularity, whereas adjectives convey permanency. Intriguingly, people tend to describe positive actions of their ingroup members using adjectives (e.g., he is generous) rather than verbs (e.g., he gave a blind man some change), and negative actions of outgroup members using adjectives (e.g., he is cruel) rather than verbs (e.g., he kicked a dog). Maass, Salvi, Arcuri, and Semin (1989) called this a linguistic intergroup bias, which can produce and reproduce the representation of intergroup relationships by painting a picture favoring the ingroup. That is, ingroup members are typically good, and if they do anything bad, that’s more an exception in special circumstances; in contrast, outgroup members are typically bad, and if they do anything good, that’s more an exception. In addition, when people exchange their gossip, it can spread through broader social networks. If gossip is transmitted from one person to another, the second person can transmit it to a third person, who then in turn transmits it to a fourth, and so on through a chain of communication. 
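Before following these chains of communication further, it may help to make Dunbar’s group-size equation from the preceding paragraphs concrete. The regression he fit over primate genera is commonly reported as log10(N) = 0.093 + 3.389 * log10(CR), where CR is the neocortex ratio (neocortex volume relative to the rest of the brain). Neither these coefficients nor the human ratio of about 4.1 used below appears in this module; they are widely cited values, so treat this Python sketch as illustrative rather than definitive.

import math

def predicted_group_size(neocortex_ratio: float) -> float:
    """Mean group size predicted from neocortex ratio, using regression
    coefficients commonly cited from Dunbar's work (an assumption here):
        log10(N) = 0.093 + 3.389 * log10(CR)
    """
    return 10 ** (0.093 + 3.389 * math.log10(neocortex_ratio))

# A commonly cited human neocortex ratio of about 4.1 yields the famous
# estimate of roughly 150.
print(round(predicted_group_size(4.1)))  # ~148

Note how steep the relationship is: because the fitted exponent is well above 1, modest differences in relative neocortex size translate into large differences in predicted group size.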
This often happens for emotive stories (Box 2). If gossip is repeatedly transmitted and spread, it can reach a large number of people. When stories travel through communication chains, they tend to become conventionalized (Bartlett, 1932). A Native American tale of the “War of the Ghosts” recounts a warrior’s encounter with ghosts traveling in canoes and his involvement in their ghostly battle. He is shot by an arrow but does not die, returning home to tell the tale. After his narration, however, he becomes still, a black thing comes out of his mouth, and he eventually dies. When this tale was told to a student in England in the 1920s and retold from memory to another person, who in turn retold it to another, and so on through a communication chain, the mythic tale became the story of a young warrior going to a battlefield: the canoes became boats, and the black thing that came out of his mouth became simply his spirit (Bartlett, 1932). In other words, information that is transmitted multiple times is transformed into something easily understood by many; that is, it is assimilated into the common ground shared by most people in the linguistic community. More recently, Kashima (2000) conducted a similar experiment using a story about a young couple’s interaction that included both stereotypical and counter-stereotypical actions (e.g., a man watching sports on TV on Sunday vs. a man vacuuming the house). As the story was retold, much of the counter-stereotypical information was dropped, and stereotypical information was more likely to be retained. Because stereotypes are part of the common ground shared by the community, this finding, too, suggests that conversational retellings are likely to reproduce conventional content. Psychological Consequences of Language Use What are the psychological consequences of language use? When people use language to describe an experience, their thoughts and feelings are profoundly shaped by the linguistic representation that they have produced, rather than by the original experience per se (Holtgraves & Kashima, 2008). For example, Halberstadt (2003) showed people a picture of a person displaying an ambiguous emotion and examined how they evaluated the displayed emotion. When people verbally explained why the target person was expressing a particular emotion, they tended to remember the person as feeling that emotion more intensely than when they simply labeled the emotion. Thus, constructing a linguistic representation of another person’s emotion apparently biased the speaker’s memory of that person’s emotion. Furthermore, linguistically labeling one’s own emotional experience appears to alter the speaker’s neural processes. When people linguistically labeled negative images, the amygdala—a brain structure that is critically involved in the processing of negative emotions such as fear—was activated less than when they were not given a chance to label them (Lieberman et al., 2007). Potentially because of these effects of verbalizing emotional experiences, linguistic reconstructions of negative life events can have therapeutic effects on those who have suffered traumatic experiences (Pennebaker & Seagal, 1999). Lyubomirsky, Sousa, and Dickerhoof (2006) found that writing and talking about negative past life events improved people’s psychological well-being, but just thinking about them worsened it. 
There are many other examples of effects of language use on memory and decision making (Holtgraves & Kashima, 2008). Furthermore, if a certain type of language use (a linguistic practice; Holtgraves & Kashima, 2008) is repeated by a large number of people in a community, it can potentially have a significant effect on their thoughts and actions. This notion is often called the Sapir-Whorf hypothesis (Sapir, 1921; Whorf, 1956; Box 3). For instance, if you are given a description of a man, Steven, as having greater than average experience of the world (e.g., well-traveled, varied job experience), a strong family orientation, and well-developed social skills, how would you describe Steven? Do you think you could remember Steven’s personality five days later? It would probably be difficult. But if you know Chinese and are reading about Steven in Chinese, as Hoffman, Lau, and Johnson (1986) showed, the chances are that you could remember him well. This is because English does not have a single word to describe this kind of personality, whereas Chinese does (shì gù). In this way, the language you use can influence your cognition. In its strong form, the hypothesis holds that language determines thought, but this is probably wrong. Language does not completely determine our thoughts—our thoughts are far too flexible for that—but habitual uses of language can influence our habits of thought and action. For instance, some linguistic practices seem to be associated even with cultural values and social institutions. Pronoun drop is a case in point. Pronouns such as “I” and “you” are used to represent the speaker and listener of a speech in English. In an English sentence, these pronouns cannot be dropped if they are used as the subject of a sentence. So, for instance, “I went to the movie last night” is fine, but “Went to the movie last night” is not in standard English. However, in other languages, such as Japanese, pronouns can be, and in fact often are, dropped from sentences. It turns out that people living in countries where pronoun-drop languages are spoken tend to have more collectivistic values (e.g., employees having greater loyalty toward their employers) than people living in countries where non-pronoun-drop languages such as English are spoken (Kashima & Kashima, 1998). It has been argued that explicit reference to “you” and “I” may remind speakers of the distinction between the self and other, and of the differentiation between individuals. Such a linguistic practice may act as a constant reminder of the cultural value, which, in turn, may encourage people to continue the linguistic practice. Conclusion Language and language use constitute a central ingredient of human psychology. Language is an essential tool that enables us to live the kind of life we do. Can you imagine a world in which machines are built, farms are cultivated, and goods and services are transported to our households without language? Is it possible for us to make laws and regulations, negotiate contracts, enforce agreements, and settle disputes without talking? Much of contemporary human civilization would not have been possible without the human ability to develop and use language. Like the Tower of Babel, language can divide humanity, and yet the core of humanity includes the innate capacity for language use. Whether we can use it wisely remains a task before us in this globalized world. Discussion Questions 1. In what sense is language use innate, and in what sense is it learned? 2. Is language a tool for thought or a tool for communication? 3. 
What sorts of unintended consequences can language use bring to your psychological processes? Vocabulary Audience design Constructing utterances to suit the audience’s knowledge. Common ground Information that is shared by people who engage in a conversation. Ingroup Group to which a person belongs. Lexicon Words and expressions. Linguistic intergroup bias A tendency for people to characterize positive things about their ingroup using more abstract expressions, but negative things about outgroups using more abstract expressions (the reverse cases, negative ingroup actions and positive outgroup actions, tend to be described concretely). Outgroup Group to which a person does not belong. Priming A process by which a stimulus presented to a person reminds him or her of other ideas associated with that stimulus. Sapir-Whorf hypothesis The hypothesis that the language that people use determines their thoughts. Situation model A mental representation of an event, object, or situation constructed at the time of comprehending a linguistic description. Social brain hypothesis The hypothesis that the human brain has evolved so that humans can maintain larger ingroups. Social networks Networks of social relationships among individuals through which information can travel. Syntax Rules by which words are strung together to form sentences.
By Max H. Bazerman Harvard University Humans are not perfect decision makers. Not only are we not perfect, but we depart from perfection or rationality in systematic and predictable ways. The understanding of these systematic and predictable departures is core to the field of judgment and decision making. By understanding these limitations, we can also identify strategies for making better and more effective decisions. learning objectives • Understand the systematic biases that affect our judgment and decision making. • Develop strategies for making better decisions. • Experience some of the biases through sample decisions. Introduction Every day you have the opportunity to make countless decisions: Should you eat dessert, cheat on a test, or attend a sports event with your friends? If you reflect on your own history of choices, you will realize that they vary in quality; some are rational and some are not. This module provides an overview of decision making and includes discussion of many of the common biases involved in this process. In his Nobel Prize–winning work, psychologist Herbert Simon (1957; March & Simon, 1958) argued that our decisions are bounded in their rationality. According to the bounded rationality framework, human beings try to make rational decisions (such as weighing the costs and benefits of a choice) but our cognitive limitations prevent us from being fully rational. Time and cost constraints limit the quantity and quality of the information that is available to us. Moreover, we only retain a relatively small amount of information in our usable memory. And limitations on intelligence and perceptions constrain the ability of even very bright decision makers to accurately make the best choice based on the information that is available. About 15 years after the publication of Simon’s seminal work, Tversky and Kahneman (1973, 1974; Kahneman & Tversky, 1979) produced their own Nobel Prize–winning research, which provided critical information about specific systematic and predictable biases, or mistakes, that influence judgment (Kahneman received the prize after Tversky’s death). The work of Simon, Tversky, and Kahneman paved the way to our modern understanding of judgment and decision making. And their two Nobel prizes signaled the broad acceptance of the field of behavioral decision research as a mature area of intellectual study. What Would a Rational Decision Look Like? Imagine that during your senior year in college, you apply to a number of doctoral programs, law schools, or business schools (or another set of programs in whatever field most interests you). The good news is that you receive many acceptance letters. So, how should you decide where to go? Bazerman and Moore (2013) outline the following six steps that you should take to make a rational decision: (1) define the problem (i.e., selecting the right graduate program), (2) identify the criteria necessary to judge the multiple options (location, prestige, faculty, etc.), (3) weight the criteria (rank them in terms of importance to you), (4) generate alternatives (the schools that admitted you), (5) rate each alternative on each criterion (rate each school on each criterion that you identified), and (6) compute the optimal decision. Acting rationally would require that you follow these six steps in a fully rational manner. I strongly advise people to think through important decisions such as this in a manner similar to this process. Unfortunately, we often don’t. 
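As a concrete illustration of steps (3), (5), and (6), here is a minimal Python sketch of the weighted-criteria computation. The schools, criteria, weights, and ratings below are invented for the example; a real decision would use your own criteria and honest ratings.

    # Step 3: weight the criteria (here the weights sum to 1 for readability).
    weights = {"location": 0.2, "prestige": 0.5, "faculty": 0.3}
    # Step 5: rate each alternative on each criterion (1-10 scales, invented).
    ratings = {
        "School A": {"location": 7, "prestige": 9, "faculty": 6},
        "School B": {"location": 9, "prestige": 6, "faculty": 8},
    }

    def weighted_score(school):
        # Step 6: multiply each rating by its criterion weight and sum.
        return sum(weights[c] * r for c, r in ratings[school].items())

    for school in ratings:
        print(school, round(weighted_score(school), 2))
    print("Optimal choice:", max(ratings, key=weighted_score))

Under these invented numbers, School A edges out School B (7.7 vs. 7.2). The point is not the particular numbers but that the procedure forces the trade-offs to be explicit.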
Many of us rely on our intuitions far more than we should. And when we do try to think systematically, the way we enter data into such formal decision-making processes is often biased. Fortunately, psychologists have learned a great deal about the biases that affect our thinking. This knowledge about the systematic and predictable mistakes that even the best and the brightest make can help you identify flaws in your thought processes and reach better decisions. Biases in Our Decision Process Simon’s concept of bounded rationality taught us that judgment deviates from rationality, but it did not tell us how judgment is biased. Tversky and Kahneman’s (1974) research helped to diagnose the specific systematic, directional biases that affect human judgment. These biases are created by the tendency to short-circuit a rational decision process by relying on a number of simplifying strategies, or rules of thumb, known as heuristics. Heuristics allow us to cope with the complex environment surrounding our decisions. Unfortunately, they also lead to systematic and predictable biases. To highlight some of these biases please answer the following three quiz items: Problem 1 (adapted from Alpert & Raiffa, 1969): Listed below are 10 uncertain quantities. Do not look up any information on these items. For each, write down your best estimate of the quantity. Next, put a lower and upper bound around your estimate, such that you are 98 percent confident that your range surrounds the actual quantity. Respond to each of these items even if you admit to knowing very little about these quantities. 1. The first year the Nobel Peace Prize was awarded 2. The date the French celebrate "Bastille Day" 3. The distance from the Earth to the Moon 4. The height of the Leaning Tower of Pisa 5. Number of students attending Oxford University (as of 2014) 6. Number of people who have traveled to space (as of 2013) 7. 2012-2013 annual budget for the University of Pennsylvania 8. Average life expectancy in Bangladesh (as of 2012) 9. World record for pull-ups in a 24-hour period 10. Number of colleges and universities in the Boston metropolitan area Problem 2 (adapted from Joyce & Biddle, 1981): We know that executive fraud occurs and that it has been associated with many recent financial scandals. And, we know that many cases of management fraud go undetected even when annual audits are performed. Do you think that the incidence of significant executive-level management fraud is more than 10 in 1,000 firms (that is, 1 percent) audited by Big Four accounting firms? 1. Yes, more than 10 in 1,000 Big Four clients have significant executive-level management fraud. 2. No, fewer than 10 in 1,000 Big Four clients have significant executive-level management fraud. What is your estimate of the number of Big Four clients per 1,000 that have significant executive-level management fraud? (Fill in the blank below with the appropriate number.) ________ in 1,000 Big Four clients have significant executive-level management fraud. Problem 3 (adapted from Tversky & Kahneman, 1981): Imagine that the United States is preparing for the outbreak of an unusual avian disease that is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows. 1. Program A: If Program A is adopted, 200 people will be saved. 2. 
Program B: If Program B is adopted, there is a one-third probability that 600 people will be saved and a two-thirds probability that no people will be saved. Which of the two programs would you favor? Overconfidence On the first problem, if you set your ranges so that you were justifiably 98 percent confident, you should expect that approximately 9.8, or nine to 10, of your ranges would include the actual value. So, let’s look at the correct answers: 1. 1901 2. 14th of July 3. 384,403 km (238,857 mi) 4. 56.67 m (183 ft) 5. 22,384 (as of 2014) 6. 536 people (as of 2013) 7. \$6.007 billion 8. 70.3 years (as of 2012) 9. 4,321 10. 52 Count the number of your 98% ranges that actually surrounded the true quantities. If you surrounded nine to 10, you were appropriately confident in your judgments. But most readers surround only between three (30%) and seven (70%) of the correct answers, despite claiming 98% confidence that each range would surround the true value. As this problem shows, humans tend to be overconfident in their judgments. Anchoring Regarding the second problem, people vary a great deal in their final assessment of the level of executive-level management fraud, but most think that 10 out of 1,000 is too low. When I run this exercise in class, half of the students respond to the question that I asked you to answer. The other half receive a similar problem, but instead are asked whether the correct answer is higher or lower than 200 rather than 10. Most people think that 200 is high. But, again, most people claim that this “anchor” does not affect their final estimate. Yet, on average, people who are presented with the question that focuses on the number 10 (out of 1,000) give answers that are about one-half the size of the estimates of those facing questions that use an anchor of 200. When we are making decisions, any initial anchor that we face is likely to influence our judgments, even if the anchor is arbitrary. That is, we insufficiently adjust our judgments away from the anchor. Framing Turning to Problem 3, most people choose Program A, which saves 200 lives for sure, over Program B. But, again, if I was in front of a classroom, only half of my students would receive this problem. The other half would have received the same set-up, but with the following two options: 1. Program C: If Program C is adopted, 400 people will die. 2. Program D: If Program D is adopted, there is a one-third probability that no one will die and a two-thirds probability that 600 people will die. Which of the two programs would you favor? Careful review of the two versions of this problem clarifies that they are objectively the same. Saving 200 people (Program A) means losing 400 people (Program C), and Programs B and D are also objectively identical. Yet, in one of the most famous problems in judgment and decision making, most individuals choose Program A in the first set and Program D in the second set (Tversky & Kahneman, 1981). People respond very differently to saving versus losing lives—even when the difference is based just on the “framing” of the choices. The problem that I asked you to respond to was framed in terms of saving lives, and the implied reference point was the worst outcome of 600 deaths. Most of us, when we make decisions that concern gains, are risk averse; as a consequence, we lock in the possibility of saving 200 lives for sure. In the alternative version, the problem is framed in terms of losses. 
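Incidentally, the claim that the two versions are objectively identical can be verified with a line of arithmetic. A minimal check in Python, using only the numbers stated in the problem:

    total = 600
    saved_A = 200                                    # 200 saved for sure
    saved_B = (1/3) * 600 + (2/3) * 0                # expected saved: 200.0
    saved_C = total - 400                            # "400 die" means 200 saved
    saved_D = (1/3) * total + (2/3) * (total - 600)  # expected saved: 200.0
    print(saved_A, saved_B, saved_C, saved_D)        # -> 200 200.0 200 200.0

All four programs have the same expected outcome of 200 lives saved and 400 lost; only the description changes.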
Now the implicit reference point is the best outcome of no deaths due to the avian disease. And in this case, most people are risk seeking when making decisions regarding losses. These are just three of the many biases that affect even the smartest among us. Other research shows that we are biased in favor of information that is easy for our minds to retrieve, are insensitive to the importance of base rates and sample sizes when we are making inferences, assume that random events will always look random, search for information that confirms our expectations even when disconfirming information would be more informative, claim a priori knowledge that didn’t exist due to the hindsight bias, and are subject to a host of other effects that continue to be documented in the literature (Bazerman & Moore, 2013). Contemporary Developments Bounded rationality served as the integrating concept of the field of behavioral decision research for 40 years. Then, in 2000, Thaler (2000) suggested that decision making is bounded in two ways not precisely captured by the concept of bounded rationality. First, he argued that our willpower is bounded and that, as a consequence, we give greater weight to present concerns than to future concerns. Our immediate motivations are often inconsistent with our long-term interests in a variety of ways, such as the common failure to save adequately for retirement or the difficulty many people have staying on a diet. Second, Thaler suggested that our self-interest is bounded such that we care about the outcomes of others. Sometimes we positively value the outcomes of others—giving them more of a commodity than is necessary out of a desire to be fair, for example. And, in unfortunate contexts, we sometimes are willing to forgo our own benefits out of a desire to harm others. My colleagues and I have recently added two other important bounds to the list. First, Chugh, Banaji, and Bazerman (2005) and Banaji and Bhaskar (2000) introduced the concept of bounded ethicality, which refers to the notion that our ethics are limited in ways we are not even aware of ourselves. Second, Chugh and Bazerman (2007) developed the concept of bounded awareness to refer to the broad array of focusing failures that affect our judgment, specifically the many ways in which we fail to notice obvious and important information that is available to us. A final development is the application of judgment and decision-making research to the areas of behavioral economics, behavioral finance, and behavioral marketing, among others. In each case, these fields have been transformed by applying and extending research from the judgment and decision-making literature. Fixing Our Decisions Ample evidence documents that even smart people are routinely impaired by biases. Early research demonstrated, unfortunately, that awareness of these problems does little to reduce bias (Fischhoff, 1982). The good news is that more recent research documents interventions that do help us overcome our faulty thinking (Bazerman & Moore, 2013). One critical path to fixing our biases is provided in Stanovich and West’s (2000) distinction between System 1 and System 2 decision making. System 1 processing is our intuitive system, which is typically fast, automatic, effortless, implicit, and emotional. System 2 refers to decision making that is slower, conscious, effortful, explicit, and logical. The six logical steps of decision making outlined earlier describe a System 2 process. 
Clearly, a complete System 2 process is not required for every decision we make. In most situations, our System 1 thinking is quite sufficient; it would be impractical, for example, to logically reason through every choice we make while shopping for groceries. But, preferably, System 2 logic should influence our most important decisions. Nonetheless, we use our System 1 processes for most decisions in life, relying on them even when making important decisions. The key to reducing the effects of bias and improving our decisions is to transition from trusting our intuitive System 1 thinking toward engaging more in deliberative System 2 thought. Unfortunately, the busier and more rushed people are, and the more they have on their minds, the more likely they are to rely on System 1 thinking (Chugh, 2004). The frantic pace of professional life suggests that executives often rely on System 1 thinking. Fortunately, it is possible to identify conditions where we rely on intuition at our peril and substitute more deliberative thought. One fascinating example of this substitution comes from journalist Michael Lewis’ (2003) account of Billy Beane, the general manager of the Oakland Athletics. Beane improved the outcomes of the failing baseball team after recognizing that the intuition of baseball executives was limited and systematically biased, and that these intuitions had been incorporated into important decisions in ways that created enormous mistakes. Lewis (2003) documents that baseball professionals tend to overgeneralize from their personal experiences, be overly influenced by players’ very recent performances, and overweigh what they see with their own eyes, despite the fact that players’ multiyear records provide far better data. By substituting valid predictors of future performance (System 2 thinking), the Athletics were able to outperform expectations given their very limited payroll. Another important direction for improving decisions comes from Thaler and Sunstein’s (2008) book Nudge: Improving Decisions about Health, Wealth, and Happiness. Rather than setting out to debias human judgment, Thaler and Sunstein outline a strategy for how “decision architects” can change environments in ways that account for human bias and trigger better decisions as a result. For example, Beshears, Choi, Laibson, and Madrian (2008) have shown that simple changes to defaults can dramatically improve people’s decisions. They tackle the failure of many people to save for retirement and show that a simple change can significantly influence enrollment in 401(k) programs. In most companies, when you start your job, you need to proactively sign up to join the company’s retirement savings plan. Many people take years before getting around to doing so. When, instead, companies automatically enroll their employees in 401(k) programs and give them the opportunity to “opt out,” the net enrollment rate rises significantly. By changing defaults, we can counteract the human tendency to live with the status quo. Similarly, Johnson and Goldstein’s (2003) cross-European organ donation study reveals that countries with opt-in organ donation policies, where the default is not to harvest people’s organs without their prior consent, sacrifice thousands of lives in comparison to countries with opt-out policies, where the default is to harvest organs. 
The United States and too many other countries require that citizens opt in to organ donation through a proactive effort; as a consequence, consent rates in these countries range from 4.25% to 44%. In contrast, changing the decision architecture to an opt-out policy raises consent rates to between 85.9% and 99.98%. Designing the donation system with knowledge of the power of defaults can dramatically change donation rates without changing the options available to citizens. In contrast, a more intuitive strategy, such as the one in place in the United States, inspires defaults that result in many unnecessary deaths. Concluding Thoughts Our days are filled with decisions ranging from the small (what should I wear today?) to the important (should we get married?). Many have real-world consequences for our health, finances, and relationships. Simon, Kahneman, and Tversky created a field that highlights the surprising and predictable deficiencies of the human mind when making decisions. As we understand more about our own biases and thinking shortcomings, we can begin to take them into account or to avoid them. Only now have we reached the frontier of using this knowledge to help people make better decisions. Outside Resources Book: Bazerman, M. H., & Moore, D. (2013). Judgment in managerial decision making (8th ed.). John Wiley & Sons Inc. Book: Kahneman, D. (2011). Thinking, fast and slow. New York, NY: Farrar, Straus and Giroux. Book: Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. New Haven, CT: Yale University Press. Discussion Questions 1. Are the biases in this module a problem in the real world? 2. How would you use this module to be a better decision maker? 3. Can you see any biases in today’s newspaper? Vocabulary Anchoring The bias to be affected by an initial anchor, even if the anchor is arbitrary, and to insufficiently adjust our judgments away from that anchor. Biases The systematic and predictable mistakes that influence the judgment of even very talented human beings. Bounded awareness The systematic ways in which we fail to notice obvious and important information that is available to us. Bounded ethicality The systematic ways in which our ethics are limited in ways we are not even aware of ourselves. Bounded rationality Model of human behavior that suggests that humans try to make rational decisions but are bounded due to cognitive limitations. Bounded self-interest The systematic and predictable ways in which we care about the outcomes of others. Bounded willpower The tendency to place greater weight on present concerns rather than future concerns. Framing The bias to be systematically affected by the way in which information is presented, while holding the objective information constant. Heuristics Cognitive (or thinking) strategies that simplify decision making by using mental shortcuts. Overconfidence The bias to have greater confidence in your judgment than is warranted based on a rational assessment. System 1 Our intuitive decision-making system, which is typically fast, automatic, effortless, implicit, and emotional. System 2 Our more deliberative decision-making system, which is slower, conscious, effortful, explicit, and logical.
By Gregory Murphy New York University People form mental concepts of categories of objects, which permit them to respond appropriately to new objects they encounter. Most concepts cannot be strictly defined but are organized around the “best” examples or prototypes, which have the properties most common in the category. Objects fall into many different categories, but there is usually a most salient one, called the basic-level category, which is at an intermediate level of specificity (e.g., chairs, rather than furniture or desk chairs). Concepts are closely related to our knowledge of the world, and people can more easily learn concepts that are consistent with their knowledge. Theories of concepts argue either that people learn a summary description of a whole category or else that they learn exemplars of the category. Recent research suggests that there are different ways to learn and represent concepts and that they are accomplished by different neural systems. learning objectives • Understand the problems with attempting to define categories. • Understand typicality and fuzzy category boundaries. • Learn about theories of the mental representation of concepts. • Learn how knowledge may influence concept learning. Introduction Consider the following set of objects: some dust, papers, a computer monitor, two pens, a cup, and an orange. What do these things have in common? Only that they all happen to be on my desk as I write this. This set of things can be considered a category, a set of objects that can be treated as equivalent in some way. But, most of our categories seem much more informative—they share many properties. For example, consider the following categories: trucks, wireless devices, weddings, psychopaths, and trout. Although the objects in a given category are different from one another, they have many commonalities. When you know something is a truck, you know quite a bit about it. The psychology of categories concerns how people learn, remember, and use informative categories such as trucks or psychopaths. The mental representations we form of categories are called concepts. There is a category of trucks in the world, and I also have a concept of trucks in my head. We assume that people’s concepts correspond more or less closely to the actual category, but it can be useful to distinguish the two, as when someone’s concept is not really correct. Concepts are at the core of intelligent behavior. We expect people to be able to know what to do in new situations and when confronting new objects. If you go into a new classroom and see chairs, a blackboard, a projector, and a screen, you know what these things are and how they will be used. You’ll sit on one of the chairs and expect the instructor to write on the blackboard or project something onto the screen. You do this even if you have never seen any of these particular objects before, because you have concepts of classrooms, chairs, projectors, and so forth, that tell you what they are and what you’re supposed to do with them. Furthermore, if someone tells you a new fact about the projector—for example, that it has a halogen bulb—you are likely to extend this fact to other projectors you encounter. In short, concepts allow you to extend what you have learned about a limited number of objects to a potentially infinite set of entities. You know thousands of categories, most of which you have learned without careful study or instruction. 
Although this accomplishment may seem simple, we know that it isn’t, because it is difficult to program computers to solve such intellectual tasks. If you teach a learning program that a robin, a swallow, and a duck are all birds, it may not recognize a cardinal or peacock as a bird. As we’ll shortly see, the problem is that objects in categories are often surprisingly diverse. Concepts are not limited to human adults: animals and human infants have concepts too (Mareschal, Quinn, & Lea, 2010). Squirrels may have a concept of predators, for example, that is specific to their own lives and experiences. However, animals likely have many fewer concepts and cannot understand complex concepts such as mortgages or musical instruments. Nature of Categories Traditionally, it has been assumed that categories are well-defined. This means that you can give a definition that specifies what is in and out of the category. Such a definition has two parts. First, it provides the necessary features for category membership: What must objects have in order to be in it? Second, those features must be jointly sufficient for membership: If an object has those features, then it is in the category. For example, if I defined a dog as a four-legged animal that barks, this would mean that every dog is four-legged, an animal, and barks, and also that anything that has all those properties is a dog. Unfortunately, it has not been possible to find definitions for many familiar categories. Definitions are neat and clear-cut; the world is messy and often unclear. For example, consider our definition of dogs. In reality, not all dogs have four legs; not all dogs bark. I knew a dog that lost her bark with age (this was an improvement); no one doubted that she was still a dog. It is often possible to find some necessary features (e.g., all dogs have blood and breathe), but these features are generally not sufficient to determine category membership (you also have blood and breathe but are not a dog). Even in domains where one might expect to find clear-cut definitions, such as science and law, there are often problems. For example, many people were upset when Pluto was downgraded from its status as a planet to a dwarf planet in 2006. Upset turned to outrage when they discovered that there was no hard-and-fast definition of planethood: “Aren’t these astronomers scientists? Can’t they make a simple definition?” In fact, they couldn’t. After an astronomical organization tried to make a definition for planets, a number of astronomers complained that it might not include accepted planets such as Neptune and refused to use it. If everything looked like our Earth, our moon, and our sun, it would be easy to give definitions of planets, moons, and stars, but the universe has sadly not conformed to this ideal. Fuzzy Categories Borderline Items Experiments also showed that the psychological assumptions of well-defined categories were not correct. Hampton (1979) asked subjects to judge whether a number of items were in different categories. He did not find that items were either clear members or clear nonmembers. Instead, he found many items that were just barely considered category members and others that were just barely not members, with much disagreement among subjects. Sinks were barely considered as members of the kitchen utensil category, and sponges were barely excluded. People just barely included seaweed as a vegetable and just barely excluded tomatoes and gourds. 
Hampton found that members and nonmembers formed a continuum, with no obvious break in people’s membership judgments. If categories were well defined, such examples should be very rare. Many studies since then have found such borderline members that are not clearly in or clearly out of the category. McCloskey and Glucksberg (1978) found further evidence for borderline membership by asking people to judge category membership twice, separated by two weeks. They found that when people made repeated category judgments such as “Is an olive a fruit?” or “Is a sponge a kitchen utensil?” they changed their minds about borderline items—up to 22 percent of the time. So, not only do people disagree with one another about borderline items, they disagree with themselves! As a result, researchers often say that categories are fuzzy, that is, they have unclear boundaries that can shift over time. Typicality A related finding that turns out to be most important is that even among items that clearly are in a category, some seem to be “better” members than others (Rosch, 1973). Among birds, for example, robins and sparrows are very typical. In contrast, ostriches and penguins are very atypical (meaning not typical). If someone says, “There’s a bird in my yard,” the image you have will be of a smallish passerine bird such as a robin, not an eagle or hummingbird or turkey. You can find out which category members are typical merely by asking people. Table 1 shows a list of category members in order of their rated typicality. Typicality is perhaps the most important variable in predicting how people interact with categories. The following text box is a partial list of what typicality influences. We can understand the two phenomena of borderline members and typicality as two sides of the same coin. Think of the most typical category member: This is often called the category prototype. Items that are less and less similar to the prototype become less and less typical. At some point, these less typical items become so atypical that you start to doubt whether they are in the category at all. Is a rug really an example of furniture? It’s in the home like chairs and tables, but it’s also different from most furniture in its structure and use. From day to day, you might change your mind as to whether this atypical example is in or out of the category. So, changes in typicality ultimately lead to borderline members. Source of Typicality Intuitively, it is not surprising that robins are better examples of birds than penguins are, or that a table is a more typical kind of furniture than is a rug. But given that robins and penguins are known to be birds, why should one be more typical than the other? One possible answer is the frequency with which we encounter the object: We see a lot more robins than penguins, so they must be more typical. Frequency does have some effect, but it is actually not the most important variable (Rosch, Simpson, & Miller, 1976). For example, I see both rugs and tables every single day, but one of them is much more typical as furniture than the other. The best account of what makes something typical comes from Rosch and Mervis’s (1975) family resemblance theory. They proposed that items are likely to be typical if they (a) have the features that are frequent in the category and (b) do not have features frequent in other categories. Let’s compare two extremes, robins and penguins. 
Robins are small flying birds that sing, live in nests in trees, migrate in winter, hop around on your lawn, and so on. Most of these properties are found in many other birds. In contrast, penguins do not fly, do not sing, do not live in nests or in trees, do not hop around on your lawn. Furthermore, they have properties that are common in other categories, such as swimming expertly and having wings that look and act like fins. These properties are more often found in fish than in birds. According to Rosch and Mervis, then, it is not because a robin is a very common bird that makes it typical. Rather, it is because the robin has the shape, size, body parts, and behaviors that are very common among birds—and not common among fish, mammals, bugs, and so forth. In a classic experiment, Rosch and Mervis (1975) made up two new categories, with arbitrary features. Subjects viewed example after example and had to learn which example was in which category. Rosch and Mervis constructed some items that had features that were common in the category and other items that had features less common in the category. The subjects learned the first type of item before they learned the second type. Furthermore, they then rated the items with common features as more typical. In another experiment, Rosch and Mervis constructed items that differed in how many features were shared with a different category. The more features were shared, the longer it took subjects to learn which category the item was in. These experiments, and many later studies, support both parts of the family resemblance theory. Category Hierarchies Many important categories fall into hierarchies, in which more concrete categories are nested inside larger, abstract categories. For example, consider the categories: brown bear, bear, mammal, vertebrate, animal, entity. Clearly, all brown bears are bears; all bears are mammals; all mammals are vertebrates; and so on. Any given object typically does not fall into just one category—it could be in a dozen different categories, some of which are structured in this hierarchical manner. Examples of biological categories come to mind most easily, but within the realm of human artifacts, hierarchical structures can readily be found: desk chair, chair, furniture, artifact, object. Brown (1958), a child language researcher, was perhaps the first to note that there seems to be a preference for which category we use to label things. If your office desk chair is in the way, you’ll probably say, “Move that chair,” rather than “Move that desk chair” or “piece of furniture.” Brown thought that the use of a single, consistent name probably helped children to learn the name for things. And, indeed, children’s first labels for categories tend to be exactly those names that adults prefer to use (Anglin, 1977). This preference is referred to as a preference for the basic level of categorization, and it was first studied in detail by Eleanor Rosch and her students (Rosch, Mervis, Gray, Johnson, & Boyes-Braem, 1976). The basic level represents a kind of Goldilocks effect, in which the category used for something is not too small (northern brown bear) and not too big (animal), but is just right (bear). The simplest way to identify an object’s basic-level category is to discover how it would be labeled in a neutral situation. Rosch et al. (1976) showed subjects pictures and asked them to provide the first name that came to mind. 
They found that 1,595 names were at the basic level, with 14 more specific names (subordinates) used. Only once did anyone use a more general name (superordinate). Furthermore, in printed text, basic-level labels are much more frequent than most subordinate or superordinate labels (e.g., Wisniewski & Murphy, 1989). The preference for the basic level is not merely a matter of labeling. Basic-level categories are usually easier to learn. As Brown noted, children use these categories first in language learning, and superordinates are especially difficult for children to fully acquire.[1] People are faster at identifying objects as members of basic-level categories (Rosch et al., 1976). Rosch et al. (1976) initially proposed that basic-level categories cut the world at its joints, that is, that they merely reflect the big differences between categories like chairs and tables or between cats and mice that exist in the world. However, it turns out that which level is basic is not universal. North Americans are likely to use names like tree, fish, and bird to label natural objects. But people in less industrialized societies seldom use these labels and instead use more specific words, equivalent to elm, trout, and finch (Berlin, 1992). Because Americans and many other people living in industrialized societies know so much less than our ancestors did about the natural world, our basic level has “moved up” to what would have been the superordinate level a century ago. Furthermore, experts in a domain often have a preferred level that is more specific than that of non-experts. Birdwatchers see sparrows rather than just birds, and carpenters see roofing hammers rather than just hammers (Tanaka & Taylor, 1991). All of this suggests that the preferred level is not (only) based on how different categories are in the world; people’s knowledge of and interest in the categories also have an important effect. One explanation of the basic-level preference is that basic-level categories are more differentiated: The category members are similar to one another, but they are different from members of other categories (Murphy & Brownell, 1985; Rosch et al., 1976). (The alert reader will note a similarity to the explanation of typicality I gave above. However, here we’re talking about the entire category and not individual members.) Chairs are pretty similar to one another, sharing a lot of features (legs, a seat, a back, similar size and shape); they also don’t share that many features with other furniture. Superordinate categories are not as useful because their members are not very similar to one another. What features are common to most furniture? There are very few. Subordinate categories are not as useful, because they’re very similar to other categories: Desk chairs are quite similar to dining room chairs and easy chairs. As a result, it can be difficult to decide which subordinate category an object is in (Murphy & Brownell, 1985). Experts can differ from novices in which categories are the most differentiated, because they know different things about the categories, which changes how similar the categories seem. [1] This is a controversial claim, as some say that infants learn superordinates before anything else (Mandler, 2004). However, if true, then it is very puzzling that older children have great difficulty learning the correct meaning of words for superordinates, as well as in learning artificial superordinate categories (Horton & Markman, 1980; Mervis, 1987). 
However, it seems fair to say that the answer to this question is not yet fully known. Theories of Concept Representation Now that we know these facts about the psychology of concepts, the question arises of how concepts are mentally represented. There have been two main answers. The first, somewhat confusingly called the prototype theory, suggests that people have a summary representation of the category, a mental description that is meant to apply to the category as a whole. (The significance of summary will become apparent when the next theory is described.) This description can be represented as a set of weighted features (Smith & Medin, 1981). The features are weighted by their frequency in the category. For the category of birds, having wings and feathers would have a very high weight; eating worms would have a lower weight; living in Antarctica would have a lower weight still, but not zero, as some birds do live there. The idea behind prototype theory is that when you learn a category, you learn a general description that applies to the category as a whole: Birds have wings and usually fly; some eat worms; some swim underwater to catch fish. People can state these generalizations, and sometimes we learn about categories by reading or hearing such statements (“The Komodo dragon can grow to be 10 feet long”). When you try to classify an item, you see how well it matches that weighted list of features. For example, if you saw something with wings and feathers fly onto your front lawn and eat a worm, you could (unconsciously) consult your concepts and see which ones contained the features you observed. This example possesses many of the highly weighted bird features, and so it should be easy to identify as a bird. This theory readily explains the phenomena we discussed earlier. Typical category members have more, higher-weighted features. Therefore, it is easier to match them to your conceptual representation. Less typical items have fewer or lower-weighted features (and they may have features of other concepts). Therefore, they don’t match your representation as well. This makes people less certain in classifying such items. Borderline items may have features in common with multiple categories or not be very close to any of them. For example, edible seaweed does not have many of the common features of vegetables but also is not close to any other food concept (meat, fish, fruit, etc.), making it hard to know what kind of food it is. A very different account of concept representation is the exemplar theory (exemplar being a fancy name for an example; Medin & Schaffer, 1978). This theory denies that there is a summary representation. Instead, the theory claims that your concept of vegetables is the set of remembered examples of vegetables you have seen. This could of course be hundreds or thousands of exemplars over the course of your life, though we don’t know for sure how many exemplars you actually remember. How does this theory explain classification? When you see an object, you (unconsciously) compare it to the exemplars in your memory, and you judge how similar it is to exemplars in different categories. For example, if you see some object on your plate and want to identify it, it will probably activate memories of vegetables, meats, fruit, and so on. In order to categorize this object, you calculate how similar it is to each exemplar in your memory. These similarity scores are added up for each category. 
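A minimal computational sketch may make the contrast between the two theories concrete. Everything in it, including the features, weights, exemplars, and the similarity measure, is invented for illustration; neither theory commits to these particular choices.

    # Prototype theory: a summary description, features weighted by frequency.
    bird_prototype = {"wings": 1.0, "feathers": 1.0, "flies": 0.8, "eats worms": 0.4}

    def prototype_match(observed, prototype):
        # Sum the weights of the prototype features the item actually shows.
        return sum(w for feature, w in prototype.items() if feature in observed)

    # Exemplar theory: no summary at all, just stored examples per category.
    bird_exemplars = [{"wings", "feathers", "flies", "sings"},      # robin-like
                      {"wings", "feathers", "swims", "eats fish"}]  # penguin-like
    fish_exemplars = [{"fins", "scales", "swims", "eats fish"}]

    def exemplar_score(observed, exemplars):
        # Add up the item's similarity (here, feature overlap) to every exemplar.
        return sum(len(observed & ex) / len(observed | ex) for ex in exemplars)

    seen = {"wings", "feathers", "flies", "eats worms"}
    print(prototype_match(seen, bird_prototype))   # 3.2: a strong match to "bird"
    print(exemplar_score(seen, bird_exemplars))    # about 0.93
    print(exemplar_score(seen, fish_exemplars))    # 0.0

Note that in the exemplar version no description of birds in general exists anywhere; the category is nothing more than its stored members, which is exactly what the theory claims.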
Perhaps the object is very similar to a large number of vegetable exemplars, moderately similar to a few fruit, and only minimally similar to some exemplars of meat you remember. These similarity scores are compared, and the category with the highest score is chosen.[2] Why would someone propose such a theory of concepts? One answer is that in many experiments studying concepts, people learn concepts by seeing exemplars over and over again until they learn to classify them correctly. Under such conditions, it seems likely that people eventually memorize the exemplars (Smith & Minda, 1998). There is also evidence that close similarity to well-remembered objects has a large effect on classification. Allen and Brooks (1991) taught people to classify items by following a rule. However, they also had their subjects study the items, which were richly detailed. In a later test, the experimenters gave people new items that were very similar to one of the old items but were in a different category. That is, they changed one property so that the item no longer followed the rule. They discovered that people were often fooled by such items. Rather than following the category rule they had been taught, they seemed to recognize the new item as being very similar to an old one and so put it, incorrectly, into the same category. Many experiments have been done to compare the prototype and exemplar theories. Overall, the exemplar theory seems to have won most of these comparisons. However, the experiments are somewhat limited in that they usually involve a small number of exemplars that people view over and over again. It is not so clear that exemplar theory can explain real-world classification in which people do not spend much time learning individual items (how much time do you spend studying squirrels? or chairs?). Also, given that some part of our knowledge of categories is learned through general statements we read or hear, it seems that there must be room for a summary description separate from exemplar memory. Many researchers would now acknowledge that concepts are represented through multiple cognitive systems. For example, your knowledge of dogs may be in part through general descriptions such as “dogs have four legs.” But you probably also have strong memories of some exemplars (your family dog, Lassie) that influence your categorization. Furthermore, some categories also involve rules (e.g., a strike in baseball). How these systems work together is the subject of current study. [2] Actually, the decision of which category is chosen is more complex than this, but the details are beyond this discussion. Knowledge The final topic has to do with how concepts fit with our broader knowledge of the world. We have been talking very generally about people learning the features of concepts. For example, they see a number of birds and then learn that birds generally have wings, or perhaps they remember bird exemplars. From this perspective, it makes no difference what those exemplars or features are—people just learn them. But consider two possible concepts of buildings and their features in Table 2. Imagine you had to learn these two concepts by seeing exemplars of them, each exemplar having some of the features listed for the concept (as well as some idiosyncratic features). Learning the donker concept would be pretty easy. It seems to be a kind of underwater building, perhaps for deep-sea explorers. Its features seem to go together. In contrast, the blegdav doesn’t really make sense. 
If it’s in the desert, how can you get there by submarine, and why do they have polar bears as pets? Why would farmers live in the desert or use submarines? What good would steel windows do in such a building? This concept seems peculiar. In fact, if people are asked to learn new concepts that make sense, such as donkers, they learn them quite a bit faster than concepts such as blegdavs that don’t make sense (Murphy & Allopenna, 1994). Furthermore, the features that seem connected to one another (such as being underwater and getting there by submarine) are learned better than features that don’t seem related to the others (such as being red). Such effects demonstrate that when we learn new concepts, we try to connect them to the knowledge we already have about the world. If you were to learn about a new animal that doesn’t seem to eat or reproduce, you would be very puzzled and think that you must have gotten something wrong. By themselves, the prototype and exemplar theories don’t predict this. They simply say that you learn descriptions or exemplars, and they don’t put any constraints on what those descriptions or exemplars are. However, the knowledge approach to concepts emphasizes that concepts are meant to tell us about real things in the world, and so our knowledge of the world is used in learning and thinking about concepts. We can see this effect of knowledge when we learn about new pieces of technology. For example, most people could easily learn about tablet computers (such as iPads) when they were first introduced by drawing on their knowledge of laptops, cell phones, and related technology. Of course, this reliance on past knowledge can also lead to errors, as when people don’t learn about features of their new tablet that weren’t present in their cell phone or expect the tablet to be able to do something it can’t. One important aspect of people’s knowledge about categories is called psychological essentialism (Gelman, 2003; Medin & Ortony, 1989). People tend to believe that some categories—most notably natural kinds such as animals, plants, or minerals—have an underlying property that is found only in that category and that causes its other features. Most categories don’t actually have essences, but this is sometimes a firmly held belief. For example, many people will state that there is something about dogs, perhaps some specific gene or set of genes, that all dogs have and that makes them bark, have fur, and look the way they do. Therefore, decisions about whether something is a dog do not depend only on features that you can easily see but also on the assumed presence of this cause. Belief in an essence can be revealed through experiments describing fictional objects. Keil (1989) described to adults and children a fiendish operation in which someone took a raccoon, dyed its hair black with a white stripe down the middle, and implanted a “sac of super-smelly yucky stuff” under its tail. The subjects were shown a picture of a skunk and told that this is now what the animal looks like. What is it? Adults and children over the age of 4 all agreed that the animal is still a raccoon. It may look and even act like a skunk, but a raccoon cannot change its stripes (or whatever!)—it will always be a raccoon. Importantly, the same effect was not found when Keil described a coffeepot that was operated on to look like and function as a bird feeder. Subjects agreed that it was now a bird feeder. Artifacts don’t have an essence. 
Signs of essentialism include (a) the belief that objects are either in or out of the category, with no in-between; (b) resistance to change of category membership or of properties connected to the essence; and (c) the belief that, for living things, the essence is passed on to progeny. Essentialism is probably helpful in dealing with much of the natural world, but it may be less helpful when it is applied to humans. Considerable evidence suggests that people think of gender, racial, and ethnic groups as having essences, which serves to emphasize the difference between groups and even justify discrimination (Hirschfeld, 1996). Historically, group differences were described in terms of inheriting the blood of one’s family or group. “Bad blood” was not just an expression but a belief that negative properties were inherited and could not be changed. After all, if it is in the nature of “those people” to be dishonest (or clannish or athletic ...), then that could hardly be changed, any more than a raccoon can change into a skunk. Research on categories of people is an exciting ongoing enterprise, and we still do not know as much as we would like to about how concepts of different kinds of people are learned in childhood and how they may (or may not) change in adulthood. Essentialism doesn’t apply only to person categories, but it is one important factor in how we think of groups. Conclusion Concepts are central to our everyday thought. When we are planning for the future or thinking about our past, we think about specific events and objects in terms of their categories. If you’re visiting a friend with a new baby, you have some expectations about what the baby will do, what gifts would be appropriate, how you should behave toward it, and so on. Knowing about the category of babies helps you to effectively plan and behave when you encounter this child you’ve never seen before. Learning about those categories is a complex process that involves seeing exemplars (babies), hearing or reading general descriptions (“Babies like black-and-white pictures”), acquiring general knowledge (babies have kidneys), and learning the occasional rule (all babies have a rooting reflex). Current research is focusing on how these different processes take place in the brain. It seems likely that these different aspects of concepts are accomplished by different neural structures (Maddox & Ashby, 2004). Another interesting topic is how concepts differ across cultures. As different cultures have different interests and different kinds of interactions with the world, it seems clear that their concepts will somehow reflect those differences. On the other hand, the structure of categories in the world also imposes a strong constraint on what kinds of categories are actually useful. Some researchers have suggested that differences between Eastern and Western modes of thought have led to qualitatively different kinds of concepts (e.g., Norenzayan, Smith, Kim, & Nisbett, 2002). Although such differences are intriguing, we should also remember that different cultures seem to share common categories such as chairs, dogs, parties, and jars, so the differences may not be as great as suggested by experiments designed to detect cultural effects. The interplay of culture, the environment, and basic cognitive processes in establishing concepts has yet to be fully investigated. Outside Resources Debate: The debate about Pluto and the definition of planet is an interesting one, as it illustrates the difficulty of arriving at definitions even in science. 
The Planetary Science Institute’s website has a series of press releases about the Pluto debate, including reactions from astronomers as the debate unfolded. www.psi.edu Image Search: It can be interesting to get a pictorial summary of how much diversity there is among category members. If you do an image search for familiar categories such as houses, dogs, weddings, telephones, fruit, or whatever, you can get a visual display on a single page of the category structure. Of course, the results are probably biased, as people do not just randomly upload pictures of dogs or fruit, but it nonetheless will likely reveal the typicality structure, as most of the pictures will be of typical exemplars, and the atypical ones will stand out. (This activity will also demonstrate the phenomenon of ambiguity in language, as a search for “house” will yield some pictures of the TV character House, M.D. However, that is a lesson for a different module.) https://www.google.com/ Self-test: If you would like to run your own category-learning experiment, you can do so by following the link below. It works either in-browser or by download. When downloaded, users can put in their own stimuli to categorize. http://cognitrn.psych.indiana.edu/Co...ion/index.html Software: Self-test Categorization Applet - This software allows you to conduct your own categorization experiment. http://cognitrn.psych.indiana.edu/Co...ion/index.html Web: A Compendium of Category and Concept Activities and Worksheets - This website contains all types of printable worksheets and activities on how to categorize concepts. It includes word searches, picture sorts, and more. https://freelanguagestuff.com/category/ Web: An interesting article at Space.com argues (I believe correctly) that the term planet will not and should not be defined. http://www.space.com/3142-planets-defined.html Web: Most familiar categories have simple labels such as planet or dog. However, more complex categories can be made up for a particular purpose. Barsalou (1983) studied categories such as things to carry out of a burning house or ways to avoid being killed by the Mob. Interestingly, someone has published a book consisting of people’s photographs of things they would carry out of a burning house, and there is also a website showing such collections. Try to analyze what is common to the category members. What is the category’s prototype? http://theburninghouse.com/ Discussion Questions 1. Pick a couple of familiar categories and try to come up with definitions for them. When you evaluate each proposal, ask (a) is it in fact accurate as a definition, and (b) is it a definition that people might actually use in identifying category members? 2. For the same categories, can you identify members that seem to be “better” and “worse” members? What about these items makes them typical and atypical? 3. Going around the room, point to some common objects (including things people are wearing or brought with them) and identify what the basic-level category is for that item. What are superordinate and subordinate categories for the same items? 4. List some features of a common category such as tables. The knowledge view suggests that you know reasons for why these particular features occur together. Can you articulate some of those reasons? Do the same thing for an animal category. 5. Choose three common categories: a natural kind, a human artifact, and a social event. 
Discuss with class members from other countries or cultures whether the corresponding categories in their cultures differ. Can you make a hypothesis about when such categories are likely to differ and when they are not? Vocabulary Basic-level category The neutral, preferred category for a given object, at an intermediate level of specificity. Category A set of entities that are equivalent in some way. Usually the items are similar to one another. Concept The mental representation of a category. Exemplar An example in memory that is labeled as being in a particular category. Psychological essentialism The belief that members of a category have an unseen property that causes them to be in the category and to have the properties associated with it. Typicality The difference in “goodness” of category members, ranging from the most typical (the prototype) to borderline members.
By Frances Friedrich University of Utah We use the term “attention” all the time, but what processes or abilities does that concept really refer to? This module will focus on how attention allows us to select certain parts of our environment and ignore other parts, and what happens to the ignored information. A key concept is the idea that we are limited in how much we can do at any one time. So we will also consider what happens when someone tries to do several things at once, such as driving while using electronic devices. learning objectives • Understand why selective attention is important and how it can be studied. • Learn about different models of when and how selection can occur. • Understand how divided attention or multitasking is studied, and implications of multitasking in situations such as distracted driving. What is Attention? Before we begin exploring attention in its various forms, take a moment to consider how you think about the concept. How would you define attention, or how do you use the term? We certainly use the word very frequently in our everyday language: “ATTENTION! USE ONLY AS DIRECTED!” warns the label on the medicine bottle, meaning be alert to possible danger. “Pay attention!” pleads the weary seventh-grade teacher, not warning about danger (with possible exceptions, depending on the teacher) but urging the students to focus on the task at hand. We may refer to a child who is easily distracted as having an attention disorder, although we also are told that Americans have an attention span of about 8 seconds, down from 12 seconds in 2000, suggesting that we all have trouble sustaining concentration for any amount of time (from www.Statisticbrain.com). How that number was determined is not clear from the Web site, nor is it clear how attention span in the goldfish—9 seconds!—was measured, but the fact that our average span reportedly is less than that of a goldfish is intriguing, to say the least. William James wrote extensively about attention in the late 1800s. An often quoted passage (James, 1890/1983) beautifully captures how intuitively obvious the concept of attention is, while it remains very difficult to define in measurable, concrete terms: Everyone knows what attention is. It is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalization, concentration of consciousness are of its essence. It implies withdrawal from some things in order to deal effectively with others. (pp. 381–382) Notice that this description touches on the conscious nature of attention, as well as the notion that what is in consciousness is often controlled voluntarily but can also be determined by events that capture our attention. Implied in this description is the idea that we seem to have a limited capacity for information processing, and that we can only attend to or be consciously aware of a small amount of information at any given time. Many aspects of attention have been studied in the field of psychology. In some respects, we define different types of attention by the nature of the task used to study it. For example, a crucial issue in World War II was how long an individual could remain highly alert and accurate while watching a radar screen for enemy planes, and this problem led psychologists to study how attention works under such conditions. When watching for a rare event, it is easy to allow concentration to lag. 
(This continues to be a challenge today for TSA agents, charged with looking at images of the contents of your carry-on items in search of knives, guns, or shampoo bottles larger than 3 oz.) Attention in the context of this type of search task refers to the level of sustained attention or vigilance one can maintain. In contrast, divided attention tasks allow us to determine how well individuals can attend to many sources of information at once. Spatial attention refers specifically to how we focus on one part of our environment and how we move attention to other locations in the environment. These are all examples of different aspects of attention, but an implied element of most of these ideas is the concept of selective attention; some information is attended to while other information is intentionally blocked out. This module will focus on important issues in selective and divided attention, addressing these questions: • Can we pay attention to several sources of information at once, or do we have a limited capacity for information? • How do we select what to pay attention to? • What happens to information that we try to ignore? • Can we learn to divide attention between multiple tasks? Selective Attention The Cocktail Party Selective attention is the ability to select certain stimuli in the environment to process, while ignoring distracting information. One way to get an intuitive sense of how attention works is to consider situations in which attention is used. A party provides an excellent example for our purposes. Many people may be milling around; there is a dazzling variety of colors, sounds, and smells; the buzz of many conversations is striking. There are so many conversations going on; how is it possible to select just one and follow it? You don’t have to be looking at the person talking; you may be listening with great interest to some gossip while pretending not to hear. However, once you are engaged in conversation with someone, you quickly become aware that you cannot also listen to other conversations at the same time. You also are probably not aware of how tight your shoes feel or of the smell of a nearby flower arrangement. On the other hand, if someone behind you mentions your name, you typically notice it immediately and may start attending to that (much more interesting) conversation. This situation highlights an interesting set of observations. We have an amazing ability to select and track one voice, visual object, etc., even when a million things are competing for our attention, but at the same time, we seem to be limited in how much we can attend to at one time, which in turn suggests that attention is crucial in selecting what is important. How does it all work? Dichotic Listening Studies This cocktail party scenario is the quintessential example of selective attention, and it is essentially what some early researchers tried to replicate under controlled laboratory conditions as a starting point for understanding the role of attention in perception (e.g., Cherry, 1953; Moray, 1959). In particular, they used dichotic listening and shadowing tasks to evaluate the selection process. Dichotic listening simply refers to the situation when two messages are presented simultaneously to an individual, with one message in each ear. In order to control which message the person attends to, the individual is asked to repeat back or “shadow” one of the messages as he hears it. 
For example, let’s say that a story about a camping trip is presented to John’s left ear, and a story about Abe Lincoln is presented to his right ear. The typical dichotic listening task would have John repeat the story presented to one ear as he hears it. Can he do that without being distracted by the information in the other ear? People can become pretty good at the shadowing task, and they can easily report the content of the message that they attend to. But what happens to the ignored message? Typically, people can tell you if the ignored message was a man’s or a woman’s voice, or other physical characteristics of the speech, but they cannot tell you what the message was about. In fact, many studies have shown that people in a shadowing task were not aware of a change in the language of the message (e.g., from English to German; Cherry, 1953), and they didn't even notice when the same word was repeated in the unattended ear more than 35 times (Moray, 1959)! Only the basic physical characteristics, such as the pitch of the unattended message, could be reported. On the basis of these types of experiments, it seems that we can answer the first question about how much information we can attend to very easily: not very much. We clearly have a limited capacity for processing information for meaning, making the selection process all the more important. The question becomes: How does this selection process work? Models of Selective Attention Broadbent’s Filter Model. Many researchers have investigated how selection occurs and what happens to ignored information. Donald Broadbent was one of the first to try to characterize the selection process. His Filter Model was based on the dichotic listening tasks described above as well as other types of experiments (Broadbent, 1958). He found that people select information on the basis of physical features: the sensory channel (or ear) through which a message was coming in, the pitch of the voice, the color or font of a visual message. People seemed vaguely aware of the physical features of the unattended information, but had no knowledge of the meaning. As a result, Broadbent argued that selection occurs very early, with no additional processing for the unselected information. In a flowchart of the model, all incoming messages pass through an early filter that selects one message on the basis of its physical characteristics, and only the selected message moves on to be analyzed for meaning. Treisman’s Attenuation Model Broadbent’s model makes sense, but if you think about it you already know that it cannot account for all aspects of the Cocktail Party Effect. What doesn’t fit? The fact is that you tend to hear your own name when it is spoken by someone, even if you are deeply engaged in a conversation. We mentioned earlier that people in a shadowing experiment were unaware of a word in the unattended ear that was repeated many times—and yet many people noticed their own name in the unattended ear even if it occurred only once. Anne Treisman (1960) carried out a number of dichotic listening experiments in which she presented two different stories to the two ears. As usual, she asked people to shadow the message in one ear. As the stories progressed, however, she switched the stories to the opposite ears. Treisman found that individuals spontaneously followed the story, or the content of the message, when it shifted from the left ear to the right ear. Then they realized they were shadowing the wrong ear and switched back. 
Results like this, and the fact that you tend to hear meaningful information even when you aren’t paying attention to it, suggest that we do monitor the unattended information to some degree on the basis of its meaning. Therefore, the filter theory can’t be right in suggesting that unattended information is completely blocked at the sensory analysis level. Instead, Treisman suggested that selection starts at the physical or perceptual level, but that the unattended information is not blocked completely; it is just weakened or attenuated. In this model, information comes in through both ears, and no filter completely blocks the nonselected input. Instead, selection of the left-ear information strengthens that material, while the nonselected information in the right ear is weakened. However, if the preliminary analysis shows that the nonselected information is especially pertinent or meaningful (such as your own name), then the attenuation control will instead strengthen the more meaningful information. Late Selection Models Other selective attention models have been proposed as well. A late selection or response selection model proposed by Deutsch and Deutsch (1963) suggests that all information in the unattended ear is processed on the basis of meaning, not just the selected or highly pertinent information. However, only the information that is relevant for the task response gets into conscious awareness. This model is consistent with ideas of subliminal perception; in other words, that you don’t have to be aware of or attending to a message for it to be fully processed for meaning. The late selection model looks a lot like the early selection model—only the location of the selective filter has changed, with the assumption that analysis of meaning occurs before selection, but only the selected information becomes conscious. Multimode Model Why did researchers keep coming up with different models? Because no model really seemed to account for all the data, some of which indicates that nonselected information is blocked completely, whereas other studies suggest that it can be processed for meaning. The multimode model addresses this apparent inconsistency, suggesting that the stage at which selection occurs can change depending on the task. Johnston and Heinz (1978) demonstrated that under some conditions, we can select what to attend to at a very early stage and we do not process the content of the unattended message very much at all. Analyzing physical information, such as attending to information based on whether it is a male or female voice, is relatively easy; it occurs automatically, rapidly, and doesn’t take much effort. Under the right conditions, we can select what to attend to on the basis of the meaning of the messages. However, the late selection option—processing the content of all messages before selection—is more difficult and requires more effort. The benefit, though, is that we have the flexibility to change how we deploy our attention depending upon what we are trying to accomplish, which is one of the greatest strengths of our cognitive system. This discussion of selective attention has focused on experiments using auditory material, but the same principles hold for other perceptual systems as well. 
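Before we turn to visual examples, the contrast among these models can be made concrete with a toy simulation. The sketch below is only an illustrative cartoon of Treisman’s attenuation idea, not an implementation of any published model; the attenuation factor, awareness threshold, and pertinence values are all invented for demonstration.

```python
# A toy sketch of Treisman's attenuation model (illustrative values only).
# Unattended input is weakened rather than blocked, and highly pertinent
# items (such as your own name) need less signal strength to reach awareness.

ATTENUATION = 0.2         # unattended channels keep 20% of their strength (made up)
AWARENESS_THRESHOLD = 0.5

# Lower effective thresholds for highly pertinent words (values invented).
PERTINENCE_BONUS = {"your_name": 0.4, "fire": 0.3}

def reaches_awareness(word, strength, attended):
    """Return True if a word gets through for full semantic processing."""
    effective = strength if attended else strength * ATTENUATION
    threshold = AWARENESS_THRESHOLD - PERTINENCE_BONUS.get(word, 0.0)
    return effective >= threshold

print(reaches_awareness("camping", 1.0, attended=True))     # True: shadowed ear
print(reaches_awareness("camping", 1.0, attended=False))    # False: attenuated away
print(reaches_awareness("your_name", 1.0, attended=False))  # True: breaks through
```

In this toy version, a Broadbent-style filter corresponds to an attenuation factor of 0.0 (unattended input is blocked entirely), and a late selection model corresponds to a factor of 1.0 (every message is processed for meaning before selection).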
Neisser (1979) investigated some of the same questions with visual materials by superimposing two semi-transparent video clips and asking viewers to attend to just one series of actions. As with the auditory materials, viewers often were unaware of what went on in the other clearly visible video. Twenty years later, Simons and Chabris (1999) explored and expanded these findings using similar techniques, and triggered a flood of new work in an area referred to as inattentional blindness. We touch on those ideas below, and you can also refer to another Noba Module, Failures of Awareness: The Case of Inattentional Blindness for a more complete discussion. Focus Topic 1: Subliminal Perception The idea of subliminal perception—that stimuli presented below the threshold for awareness can influence thoughts, feelings, or actions—is a fascinating and kind of creepy one. Can messages you are unaware of, embedded in movies or ads or the music playing in the grocery store, really influence what you buy? Many such claims of the power of subliminal perception have been made. One of the most famous came from a market researcher who claimed that the message “Eat Popcorn” briefly flashed throughout a movie increased popcorn sales by more than 50%, although he later admitted that the study was made up (Merikle, 2000). Psychologists have worked hard to investigate whether this is a valid phenomenon. Studying subliminal perception is more difficult than it might seem, because of the difficulty of establishing what the threshold for consciousness is or of even determining what type of threshold is important; for example, Cheesman and Merikle (1984, 1986) make an important distinction between objective and subjective thresholds. The bottom line is that there is some evidence that individuals can be influenced by stimuli they are not aware of, but how complex the stimuli can be or the extent to which unconscious material can affect behavior is not settled (e.g., Bargh & Morsella, 2008; Greenwald, 1992; Merikle, 2000). Divided Attention and Multitasking In spite of the evidence of our limited capacity, we all like to think that we can do several things at once. Some people claim to be able to multitask without any problem: reading a textbook while watching television and talking with friends; talking on the phone while playing computer games; texting while driving. The fact is that we sometimes can seem to juggle several things at once, but the question remains whether dividing attention in this way impairs performance. Is it possible to overcome the limited capacity that we experience when engaging in cognitive tasks? We know that with extensive practice, we can acquire skills that do not appear to require conscious attention. As we walk down the street, we don’t need to think consciously about what muscle to contract in order to take the next step. Indeed, paying attention to automated skills can lead to a breakdown in performance, or “choking” (e.g., Beilock & Carr, 2001). But what about higher level, more mentally demanding tasks: Is it possible to learn to perform two complex tasks at the same time? Divided Attention Tasks In a classic study that examined this type of divided attention task, two participants were trained to take dictation for spoken words while reading unrelated material for comprehension (Spelke, Hirst, & Neisser, 1976). 
In divided attention tasks such as these, each task is evaluated separately, in order to determine baseline performance when the individual can allocate as many cognitive resources as necessary to one task at a time. Then performance is evaluated when the two tasks are performed simultaneously. A decrease in performance for either task would suggest that even if attention can be divided or switched between the tasks, the cognitive demands are too great to avoid disruption of performance. (We should note here that divided attention tasks are designed, in principle, to see if two tasks can be carried out simultaneously. A related research area looks at task switching and how well we can switch back and forth among different tasks [e.g., Monsell, 2003]. It turns out that switching itself is cognitively demanding and can impair performance.) The focus of the Spelke et al. (1976) study was whether individuals could learn to perform two relatively complex tasks concurrently, without impairing performance. The participants received plenty of practice—the study lasted 17 weeks and they had a 1-hour session each day, 5 days a week. These participants were able to learn to take dictation for lists of words and read for comprehension without affecting performance in either task, and the authors suggested that perhaps there are not fixed limits on our attentional capacity. However, changing the tasks somewhat, such as reading aloud rather than silently, impaired performance initially, so this multitasking ability may be specific to these well-learned tasks. Indeed, not everyone could learn to perform two complex tasks without performance costs (Hirst, Neisser, & Spelke, 1978), although the fact that some can is impressive. Distracted Driving More relevant to our current lifestyles are questions about multitasking while texting or having cell phone conversations. Research designed to investigate, under controlled conditions, multitasking while driving has revealed some surprising results. Certainly there are many possible types of distractions that could impair driving performance, such as applying makeup using the rearview mirror, attempting (usually in vain) to stop the kids in the backseat from fighting, fiddling with the CD player, trying to negotiate a handheld cell phone, a cigarette, and a soda all at once, eating a bowl of cereal while driving (!). But we tend to have a strong sense that we CAN multitask while driving, and cars are being built with more and more technological capabilities that encourage multitasking. How good are we at dividing attention in these cases? Most people acknowledge the distraction caused by texting while driving, and the reason seems obvious: your eyes are off the road, and at least one hand (often both) is engaged while texting. However, the problem is not simply one of occupied hands or eyes, but rather that the cognitive demands on our limited capacity systems can seriously impair driving performance (Strayer, Watson, & Drews, 2011). The effect of a cell phone conversation on performance (such as not noticing someone’s brake lights or responding more slowly to them) is just as significant when the individual is having a conversation with a hands-free device as with a handheld phone; the same impairments do not occur when listening to the radio or a book on tape (Strayer & Johnston, 2001). 
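Findings like these are often summarized as a dual-task cost: the change in performance when a second task is added, relative to the single-task baseline described at the start of this section. The sketch below shows the arithmetic with invented reaction times; the numbers are placeholders for illustration, not data from the studies cited.

```python
# Dual-task cost: percent change from single-task baseline (illustrative numbers).

def dual_task_cost(baseline, dual):
    """Percent increase in response time when a second task is added."""
    return 100.0 * (dual - baseline) / baseline

baseline_brake_rt = 0.85  # seconds to hit the brakes, driving only (made up)
phone_brake_rt = 1.05     # seconds while also conversing on a phone (made up)

print(f"Dual-task cost: {dual_task_cost(baseline_brake_rt, phone_brake_rt):.0f}%")
# Dual-task cost: 24%
```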
Studies using eye-tracking devices have also shown that drivers are less likely to later recognize objects that they did look at when using a cell phone while driving (Strayer & Drews, 2007). These findings demonstrate that cognitive distractions such as cell phone conversations can produce inattentional blindness, or a lack of awareness of what is right before your eyes (see also Simons & Chabris, 1999). Sadly, although we all like to think that we can multitask while driving, in fact the percentage of people who can truly perform cognitive tasks without impairing their driving performance is estimated to be about 2% (Watson & Strayer, 2010). Summary It may be useful to think of attention as a mental resource, one that is needed to focus on and fully process important information, especially when there is a lot of distracting “noise” threatening to obscure the message. Our selective attention system allows us to find or track an object or conversation in the midst of distractions. Whether the selection process occurs early or late in the analysis of those events has been the focus of considerable research, and in fact how selection occurs may very well depend on the specific conditions. With respect to divided attention, in general we can only perform one cognitively demanding task at a time, and we may not even be aware of unattended events even though they might seem too obvious to miss (check out some examples in the Outside Resources below). This type of inattentional blindness can occur even in well-learned tasks, such as driving while talking on a cell phone. Understanding how attention works is clearly important, even for our everyday lives. Outside Resources Video: Here's a wild example of how much we fail to notice when our attention is captured by one element of a scene. Video: Try this test to see how well you can focus on a task in the face of a lot of distraction. Discussion Questions 1. Discuss the implications of the different models of selective attention for everyday life. For instance, what advantages and disadvantages would be associated with being able to filter out all unwanted information at a very early stage in processing? What are the implications of processing all ignored information fully, even if you aren't consciously aware of that information? 2. Think of examples of when you feel you can successfully multitask and when you can’t. Discuss what aspects of the tasks or the situation seem to influence divided attention performance. How accurate do you think you are in judging your own multitasking ability? 3. What are the public policy implications of current evidence of inattentional blindness as a result of distracted driving? Should this evidence influence traffic safety laws? What additional studies of distracted driving would you propose? Vocabulary Dichotic listening An experimental task in which two messages are presented to different ears. Divided attention The ability to flexibly allocate attentional resources between two or more concurrent tasks. Inattentional blindness The failure to notice a fully visible object when attention is devoted to something else. Limited capacity The notion that humans have limited mental resources that can be used at a given time. Selective attention The ability to select certain stimuli in the environment to process, while ignoring distracting information. Shadowing A task in which the individual is asked to repeat an auditory message as it is presented. 
Subliminal perception The ability to process information for meaning when the individual is not consciously aware of that information.
• 8.10: Multi-Modal Perception Most of the time, we perceive the world as a unified bundle of sensations from multiple sensory modalities. In other words, our perception is multimodal. This module provides an overview of multimodal perception, including information about its neurobiology and its psychological effects. • 8.1: Sensation and Perception The topics of sensation and perception are among the oldest and most important in all of psychology. People are equipped with senses such as sight, hearing and taste that help us to take in the world around us. In this module, you will learn about the biological processes of sensation and how these can be combined to create perceptions. • 8.2: Vision Vision is the sensory modality that transforms light into a psychological experience of the world around you, with minimal bodily effort. This module provides an overview of the most significant steps in this transformation and strategies that your brain uses to achieve this visual understanding of the environment. • 8.3: Taste and Smell Humans are omnivores (able to survive on many different foods). The omnivore’s dilemma is to identify foods that are healthy and avoid poisons. Taste and smell cooperate to solve this dilemma. Stimuli for both taste and smell are chemicals. Smell results from a biological system that essentially permits the brain to store rough sketches of the chemical structures of odor stimuli in the environment. • 8.4: Hearing Hearing allows us to perceive the world of acoustic vibrations all around us, and provides us with our most important channels of communication. This module reviews the basic mechanisms of hearing, beginning with the anatomy and physiology of the ear and a brief review of the auditory pathways up to the auditory cortex. • 8.5: Touch and Pain The sensory systems of touch and pain provide us with information about our environment and our bodies that is often crucial for survival and well-being. Moreover, touch is a source of pleasure. In this module, we review how information about our environment and our bodies is coded in the periphery & interpreted by the brain as touch and pain sensations. We discuss how these experiences are often dramatically shaped by top-down factors like motivation, expectation, mood, fear, stress, & context. • 8.6: The Vestibular System The vestibular system functions to detect head motion and position relative to gravity and is primarily involved in the fine control of visual gaze, posture, orthostasis, spatial orientation, and navigation. Vestibular signals are highly processed in many regions of the brain and are involved in many essential functions. In this module, we provide an overview of how the vestibular system works and how vestibular signals are used to guide behavior. • 8.7: Time and Culture There are profound cultural differences in how people think about, measure, and use their time. This module describes some major dimensions of time that are most prone to cultural variation. • 8.8: Failures of Awareness - The Case of Inattentional Blindness The failure to notice unexpected objects or events when attention is focused elsewhere is now known as inattentional blindness. This module describes the history and status of research on inattentional blindness, discusses the reasons why we find these results to be counterintuitive, and the implications of failures of awareness for how we see and act in our world. 
• 8.9: Eyewitness Testimony and Memory Biases Eyewitnesses can provide very compelling legal testimony, but rather than recording experiences flawlessly, their memories are susceptible to a variety of errors and biases. They (like the rest of us) can make errors in remembering specific details and can even remember whole events that did not actually happen. In this module, we discuss several of the common types of errors, and what they can tell us about human memory and its interactions with the legal system. Chapter 8: Sensation and Perception By Adam John Privitera Chemeketa Community College The topics of sensation and perception are among the oldest and most important in all of psychology. People are equipped with senses such as sight, hearing and taste that help us to take in the world around us. Amazingly, our senses have the ability to convert real-world information into electrical information that can be processed by the brain. The way we interpret this information-- our perceptions-- is what leads to our experiences of the world. In this module, you will learn about the biological processes of sensation and how these can be combined to create perceptions. learning objectives • Differentiate the processes of sensation and perception. • Explain the basic principles of sensation and perception. • Describe the function of each of our senses. • Outline the anatomy of the sense organs and their projections to the nervous system. • Apply knowledge of sensation and perception to real world examples. • Explain the consequences of multimodal perception. Introduction "Once I was hiking at Cape Lookout State Park in Tillamook, Oregon. After passing through a vibrantly colored, pleasantly scented, temperate rainforest, I arrived at a cliff overlooking the Pacific Ocean. I grabbed the cold metal railing near the edge and looked out at the sea. Below me, I could see a pod of sea lions swimming in the deep blue water. All around me I could smell the salt from the sea and the scent of wet, fallen leaves." This description of a single memory highlights the way a person’s senses are so important to our experience of the world around us. Before discussing each of our extraordinary senses individually, it is necessary to cover some basic concepts that apply to all of them. It is probably best to start with one very important distinction that can often be confusing: the difference between sensation and perception. The physical process during which our sensory organs—those involved with hearing and taste, for example—respond to external stimuli is called sensation. Sensation happens when you eat noodles or feel the wind on your face or hear a car horn honking in the distance. During sensation, our sense organs are engaging in transduction, the conversion of one form of energy into another. Physical energy such as light or a sound wave is converted into a form of energy the brain can understand: electrical stimulation. After our brain receives the electrical signals, we make sense of all this stimulation and begin to appreciate the complex world around us. This psychological process—making sense of the stimuli—is called perception. It is during this process that you are able to identify a gas leak in your home or a song that reminds you of a specific afternoon spent with friends. Regardless of whether we are talking about sight or taste or any of the individual senses, there are a number of basic principles that influence the way our sense organs work. 
The first of these influences is our ability to detect an external stimulus. Each sense organ—our eyes or tongue, for instance—requires a minimal amount of stimulation in order to detect a stimulus. This absolute threshold explains why you don’t smell the perfume someone is wearing in a classroom unless they are somewhat close to you. The way we measure absolute thresholds is by using a method called signal detection. This process involves presenting stimuli of varying intensities to a research participant in order to determine the level at which he or she can reliably detect stimulation in a given sense. During one type of hearing test, for example, a person listens to increasingly louder tones (starting from silence) in an effort to determine the threshold at which he or she begins to hear (see Additional Resources for a video demonstration of a high-frequency ringtone that can only be heard by young people). Correctly indicating that a sound was heard is called a hit; failing to do so is called a miss. Additionally, indicating that a sound was heard when one wasn’t played is called a false alarm, and correctly identifying when a sound wasn’t played is a correct rejection. Through these and other studies, we have been able to gain an understanding of just how remarkable our senses are. For example, the human eye is capable of detecting candlelight from 30 miles away in the dark. We are also capable of hearing the ticking of a watch in a quiet environment from 20 feet away. If you think that’s amazing, I encourage you to read more about the extreme sensory capabilities of nonhuman animals; many animals possess what we would consider super-human abilities. A similar principle to the absolute threshold discussed above underlies our ability to detect the difference between two stimuli of different intensities. The differential threshold, or just noticeable difference (JND), for each sense has been studied using similar methods to signal detection. To illustrate, find a friend and a few objects of known weight (you’ll need objects that weigh 1, 2, 10 and 11 lbs.—or in metric terms: 1, 2, 5 and 5.5 kg). Have your friend hold the lightest object (1 lb. or 1 kg). Then, replace this object with the next heaviest and ask him or her to tell you which one weighs more. Reliably, your friend will say the second object every single time. It’s extremely easy to tell the difference when something weighs double what another weighs! However, it is not so easy when the difference is a smaller percentage of the overall weight. It will be much harder for your friend to reliably tell the difference between 10 and 11 lbs. (or 5 versus 5.5 kg) than it is for 1 and 2 lbs. This phenomenon is called Weber’s Law, and it is the idea that bigger stimuli require larger differences to be noticed. Crossing into the world of perception, it is clear that our experience influences how our brain processes things. You have tasted food that you like and food that you don’t like. There are some bands you enjoy and others you can’t stand. However, the first time you eat something or hear a band, you process those stimuli using bottom-up processing. This is when we build up to perception from the individual pieces. Sometimes, though, stimuli we’ve experienced in our past will influence how we process new ones. This is called top-down processing. 
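The detection and discrimination ideas above are easy to make concrete in a few lines of code. The sketch below is a minimal illustration with invented numbers; in particular, the 0.1 Weber fraction is a placeholder chosen for demonstration, not a measured constant for lifted weights.

```python
# Scoring signal detection trials and applying Weber's Law (illustrative).

def score_trial(stimulus_present, said_yes):
    """Classify one detection trial as hit / miss / false alarm / correct rejection."""
    if stimulus_present:
        return "hit" if said_yes else "miss"
    return "false alarm" if said_yes else "correct rejection"

print(score_trial(True, True))    # hit
print(score_trial(False, True))   # false alarm

def noticeably_different(weight_a, weight_b, weber_fraction=0.1):
    """Weber's Law: the JND is a constant proportion of the baseline stimulus."""
    jnd = weber_fraction * min(weight_a, weight_b)
    return abs(weight_a - weight_b) >= jnd

print(noticeably_different(1, 2))      # True: a 1-lb difference dwarfs the 0.1-lb JND
print(noticeably_different(10, 11))    # True, but only barely: the JND is a full pound
print(noticeably_different(10, 10.5))  # False: below the 1-lb JND
```

Notice how the same 1-lb difference that is trivial to detect against a 1-lb baseline sits right at the detection boundary against a 10-lb baseline, which is the point of the weight demonstration above.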
The best way to illustrate bottom-up and top-down processing is with our ability to read. Read the following quote out loud: [Figure: a short phrase printed inside a triangle, with the word “the” repeated across a line break.] Notice anything odd while you were reading the text in the triangle? Did you notice the second “the”? If not, it’s likely because you were reading this from a top-down approach. Having a second “the” doesn’t make sense. We know this. Our brain knows this and doesn’t expect there to be a second one, so we have a tendency to skip right over it. In other words, your past experience has changed the way you perceive the writing in the triangle! A beginning reader—one who is using a bottom-up approach by carefully attending to each piece—would be less likely to make this error. Finally, it should be noted that when we experience a sensory stimulus that doesn’t change, we stop paying attention to it. This is why we don’t feel the weight of our clothing, hear the hum of a projector in a lecture hall, or see all the tiny scratches on the lenses of our glasses. When a stimulus is constant and unchanging, we experience sensory adaptation. During this process we become less sensitive to that stimulus. A great example of this occurs when we leave the radio on in our car after we park it at home for the night. When we listen to the radio on the way home from work the volume seems reasonable. However, the next morning when we start the car, we might be startled by how loud the radio is. We don’t remember it being that loud last night. What happened? What happened is that we adapted to the constant stimulus of the radio volume over the course of the previous day. This required us to continue to turn up the volume of the radio to combat the constantly decreasing sensitivity. However, after a number of hours away from that constant stimulus, the volume that was once reasonable is entirely too loud. We are no longer adapted to that stimulus! Now that we have introduced some basic sensory principles, let us take on each one of our fascinating senses individually. Vision How vision works Vision is a tricky matter. When we see a pizza, a feather, or a hammer, we are actually seeing light bounce off that object and into our eye. Light enters the eye through the pupil, a tiny opening behind the cornea. The pupil regulates the amount of light entering the eye by contracting (getting smaller) in bright light and dilating (getting larger) in dimmer light. Once past the pupil, light passes through the lens, which focuses an image on a thin layer of cells in the back of the eye, called the retina. Because we have two eyes in different locations, the image focused on each retina is from a slightly different angle (binocular disparity), providing us with our perception of 3D space (binocular vision). You can appreciate this by holding a pen in your hand, extending your arm in front of your face, and looking at the pen while closing each eye in turn. Pay attention to the apparent position of the pen relative to objects in the background. Depending on which eye is open, the pen appears to jump back and forth! This is how some video game displays create the perception of 3D without special glasses: two slightly different images are presented, and the screen directs a different one to each eye. It is in the retina that light is transduced, or converted into electrical signals, by specialized cells called photoreceptors. The retina contains two main kinds of photoreceptors: rods and cones. Rods are primarily responsible for our ability to see in dim light conditions, such as during the night. Cones, on the other hand, provide us with the ability to see color and fine detail when the light is brighter. 
Rods and cones differ in their distribution across the retina, with the highest concentration of cones found in the fovea (the central region of focus), and rods dominating the periphery (see Figure 8.1.2). The difference in distribution can explain why looking directly at a dim star in the sky makes it seem to disappear; there aren’t enough rods to process the dim light! Next, the electrical signal is sent through a layer of cells in the retina, eventually traveling down the optic nerve. After passing through the thalamus, this signal makes it to the primary visual cortex, where information about light orientation and movement begins to come together (Hubel & Wiesel, 1962). Information is then sent to a variety of different areas of the cortex for more complex processing. Some of these cortical regions are fairly specialized—for example, for processing faces (fusiform face area) and body parts (extrastriate body area). Damage to these areas of the cortex can potentially result in a specific kind of agnosia, whereby a person loses the ability to perceive visual stimuli. A great example of this is illustrated in the writing of famous neurologist Dr. Oliver Sacks; he experienced prosopagnosia, the inability to recognize faces. These specialized regions for visual recognition comprise the ventral pathway (also called the “what” pathway). Other areas involved in processing location and movement make up the dorsal pathway (also called the “where” pathway). Together, these pathways process a large amount of information about visual stimuli (Goodale & Milner, 1992). Phenomena we often refer to as optical illusions provide misleading information to these “higher” areas of visual processing (see Additional Resources for websites containing amazing optical illusions). Dark and light adaptation Humans have the ability to adapt to changes in light conditions. As mentioned before, rods are primarily involved in our ability to see in dim light. They are the photoreceptors responsible for allowing us to see in a dark room. You might notice that this night vision ability takes around 10 minutes to turn on, a process called dark adaptation. This is because our rods become bleached in normal light conditions and require time to recover. We experience the opposite effect when we leave a dark movie theatre and head out into the afternoon sun. During light adaptation, a large number of rods and cones are bleached at once, causing us to be blinded for a few seconds. Light adaptation happens almost instantly compared with dark adaptation. Interestingly, some people think pirates wore a patch over one eye in order to keep it adapted to the dark while the other was adapted to the light. If you want to turn on a light without losing your night vision, don’t worry about wearing an eye patch, just use a red light; this wavelength doesn’t bleach your rods. Color vision Our cones allow us to see details in normal light conditions, as well as color. We have cones that respond preferentially, though not exclusively, to red, green and blue (Svaetichin, 1955). This trichromatic theory is not new; it dates back to the early 19th century (Young, 1802; Von Helmholtz, 1867). This theory, however, does not explain the odd effect that occurs when we look at a white wall after staring at a picture for around 30 seconds. Try this: stare at the image of the flag in Figure 8.1.3 for 30 seconds and then immediately look at a sheet of white paper or a wall. 
According to the trichromatic theory of color vision, you should see white when you do that. Is that what you experienced? As you can see, the trichromatic theory doesn’t explain the afterimage you just witnessed. This is where the opponent-process theory comes in (Hering, 1920). This theory states that our cones send information to retinal ganglion cells that respond to pairs of colors (red-green, blue-yellow, black-white). These specialized cells take information from the cones and compute the difference between the two colors—a process that explains why we cannot see reddish-green or bluish-yellow, as well as why we see afterimages. Color blindness can result from issues with the cones or retinal ganglion cells involved in color vision. Hearing (Audition) Some of the most well-known celebrities and top earners in the world are musicians. Our worship of musicians may seem silly when you consider that all they are doing is vibrating the air a certain way to create sound waves, the physical stimulus for audition. People are capable of getting a large amount of information from the basic qualities of sound waves. The amplitude (or intensity) of a sound wave codes for the loudness of a stimulus; higher amplitude sound waves result in louder sounds. The pitch of a stimulus is coded in the frequency of a sound wave; higher frequency sounds are higher pitched. We can also gauge the quality, or timbre, of a sound by the complexity of the sound wave. This allows us to tell the difference between bright and dull sounds as well as natural and synthesized instruments (Välimäki & Takala, 1996). In order for us to sense sound waves from our environment they must reach our inner ear. Lucky for us, we have evolved tools that allow those waves to be funneled and amplified during this journey. Initially, sound waves are funneled by your pinna (the external part of your ear that you can actually see) into your auditory canal (the hole you stick Q-tips into despite the box advising against it). During their journey, sound waves eventually reach a thin, stretched membrane called the tympanic membrane (eardrum), which vibrates against the three smallest bones in the body—the malleus (hammer), the incus (anvil), and the stapes (stirrup)—collectively called the ossicles. Both the tympanic membrane and the ossicles amplify the sound waves before they enter the fluid-filled cochlea, a snail-shell-like bone structure containing auditory hair cells arranged on the basilar membrane (see Figure 8.1.4) according to the frequency they respond to (called tonotopic organization). Depending on age, humans can normally detect sounds between 20 Hz and 20 kHz. It is inside the cochlea that sound waves are converted into an electrical message. Because we have an ear on each side of our head, we are capable of localizing sound in 3D space pretty well (in the same way that having two eyes produces 3D vision). Have you ever dropped something on the floor without seeing where it went? Did you notice that you were somewhat capable of locating this object based on the sound it made when it hit the ground? We can reliably locate something based on which ear receives the sound first. What about the height of a sound? If both ears receive a sound at the same time, how are we capable of localizing sound vertically? Research in cats (Populin & Yin, 1998) and humans (Middlebrooks & Green, 1991) has pointed to differences in the quality of sound waves depending on vertical positioning. 
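The which-ear-first cue can be put on a back-of-the-envelope footing. The sketch below assumes a deliberately simplified geometry (ears roughly 0.2 m apart, sound traveling at 343 m/s, and the extra path to the far ear approximated as the ear separation times the sine of the source angle); real heads, and the real auditory system, are messier.

```python
import math

# Interaural time difference (ITD): a simplified model of the which-ear-first cue.
EAR_SEPARATION_M = 0.2     # approximate distance between the two ears
SPEED_OF_SOUND_MS = 343.0  # meters per second in air

def itd_seconds(azimuth_degrees):
    """Extra travel time to the far ear for a source at the given angle.
    0 degrees = straight ahead; 90 degrees = directly to one side."""
    path_difference = EAR_SEPARATION_M * math.sin(math.radians(azimuth_degrees))
    return path_difference / SPEED_OF_SOUND_MS

for angle in (0, 30, 90):
    print(f"{angle:>2} degrees: {itd_seconds(angle) * 1e6:.0f} microseconds")
# 0 degrees:   0 microseconds (no left/right cue at all)
# 30 degrees: 292 microseconds
# 90 degrees: 583 microseconds (the largest possible difference)
```

Notice that a source directly ahead, directly behind, or directly overhead produces the same zero time difference, which is one reason vertical localization must rely on other cues, as the studies above suggest.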
After being processed by auditory hair cells, electrical signals are sent through the cochlear nerve (a division of the vestibulocochlear nerve) to the thalamus, and then the primary auditory cortex of the temporal lobe. Interestingly, the tonotopic organization of the cochlea is maintained in this area of the cortex (Merzenich, Knight, & Roth, 1975; Romani, Williamson, & Kaufman, 1982). However, the role of the primary auditory cortex in processing the wide range of features of sound is still being explored (Walker, Bizley, & Schnupp, 2011). Balance and the vestibular system The inner ear isn’t only involved in hearing; it’s also associated with our ability to balance and detect where we are in space. The vestibular system is composed of three semicircular canals—fluid-filled bone structures containing cells that respond to changes in the head’s orientation in space. Information from the vestibular system is sent through the vestibular nerve (the other division of the vestibulocochlear nerve) to muscles involved in the movement of our eyes, neck, and other parts of our body. This information allows us to maintain our gaze on an object while we are in motion. Disturbances in the vestibular system can result in issues with balance, including vertigo. Touch Who doesn’t love the softness of an old t-shirt or the smoothness of a clean shave? Who actually enjoys having sand in their swimsuit? Our skin, the body’s largest organ, provides us with all sorts of information, such as whether something is smooth or bumpy, hot or cold, or even if it’s painful. Somatosensation—which includes our ability to sense touch, temperature and pain—transduces physical stimuli, such as fuzzy velvet or scalding water, into electrical potentials that can be processed by the brain. Tactile sensation Tactile stimuli—those that are associated with texture—are transduced by special receptors in the skin called mechanoreceptors. Just like photoreceptors in the eye and auditory hair cells in the ear, these allow for the conversion of one kind of energy into a form the brain can understand. After tactile stimuli are converted by mechanoreceptors, information is sent through the thalamus to the primary somatosensory cortex for further processing. This region of the cortex is organized in a somatotopic map where different regions are sized based on the sensitivity of specific parts on the opposite side of the body (Penfield & Rasmussen, 1950). Put simply, various areas of the skin, such as lips and fingertips, are more sensitive than others, such as shoulders or ankles. This sensitivity can be represented with the distorted proportions of the human body shown in Figure 8.1.5. Pain Most people, if asked, would love to get rid of pain (nociception), because the sensation is very unpleasant and doesn’t appear to have obvious value. But the perception of pain is our body’s way of sending us a signal that something is wrong and needs our attention. Without pain, how would we know when we are accidentally touching a hot stove, or that we should rest a strained arm after a hard workout? Phantom limbs Records of people experiencing phantom limbs after amputations have been around for centuries (Mitchell, 1871). As the name suggests, people with a phantom limb have sensations, such as itching, that seem to come from the missing limb. A phantom limb can also involve phantom limb pain, sometimes described as the muscles of the missing limb uncomfortably clenching. 
While the mechanisms underlying these phenomena are not fully understood, there is evidence that the damaged nerves from the amputation site are still sending information to the brain (Weinstein, 1998) and that the brain is reacting to this information (Ramachandran & Rogers-Ramachandran, 2000). There is an interesting treatment for the alleviation of phantom limb pain that works by tricking the brain, using a special mirror box to create a visual representation of the missing limb. The technique allows the patient to manipulate this representation into a more comfortable position (Ramachandran & Rogers-Ramachandran, 1996). Smell and Taste: The Chemical Senses The two most underappreciated senses can be lumped into the broad category of chemical senses. Both olfaction (smell) and gustation (taste) require the transduction of chemical stimuli into electrical potentials. I say these senses are underappreciated because most people would give up either one of these if they were forced to give up a sense. While this may not shock a lot of readers, take into consideration how much money people spend on the perfume industry annually (\$29 billion). Many of us pay a lot more for a favorite brand of food because we prefer the taste. Clearly, we humans care about our chemical senses. Olfaction (smell) Unlike any of the other senses discussed so far, the receptors involved in our perception of both smell and taste bind directly with the stimuli they transduce. Odorants in our environment, very often mixtures of them, bind with olfactory receptors found in the olfactory epithelium. The binding of odorants to receptors is thought to be similar to how a lock and key operates, with different odorants binding to different specialized receptors based on their shape. However, the shape theory of olfaction isn’t universally accepted and alternative theories exist, including one that argues that the vibrations of odorant molecules correspond to their subjective smells (Turin, 1996). Regardless of how odorants bind with receptors, the result is a pattern of neural activity. It is thought that our memories of these patterns of activity underlie our subjective experience of smell (Shepherd, 2005). Interestingly, because olfactory receptors send projections to the brain through the cribriform plate of the skull, head trauma has the potential to cause anosmia, due to the severing of these connections. If you are in a line of work where you constantly experience head trauma (e.g., professional boxing) and you develop anosmia, don’t worry—your sense of smell will probably come back (Sumner, 1964). Gustation (taste) Taste works in a similar fashion to smell, only with receptors found in the taste buds of the tongue, called taste receptor cells. To clarify a common misconception, taste buds are not the bumps on your tongue (papillae), but are located in small divots around these bumps. These receptors also respond to chemicals from the outside environment, except these chemicals, called tastants, are contained in the foods we eat. The binding of these chemicals with taste receptor cells results in our perception of the five basic tastes: sweet, sour, bitter, salty and umami (savory)—although some scientists argue that there are more (Stewart et al., 2010). 
Researchers used to think these tastes formed the basis for a map-like organization of the tongue; there was even a clever rationale for the concept, about how the back of the tongue sensed bitter so we would know to spit out poisons, and the front of the tongue sensed sweet so we could identify high-energy foods. However, we now know that all areas of the tongue with taste receptor cells are capable of responding to every taste (Chandrashekar, Hoon, Ryba, & Zuker, 2006). During the process of eating we are not limited to our sense of taste alone. While we are chewing, food odorants are forced back up to areas that contain olfactory receptors. This combination of taste and smell gives us the perception of flavor. If you have doubts about the interaction between these two senses, I encourage you to think about how the flavors of your favorite foods are impacted when you have a cold; everything is pretty bland and boring, right? Putting it all Together: Multimodal Perception Though we have spent the majority of this module covering the senses individually, our real-world experience is most often multimodal, involving combinations of our senses into one perceptual experience. This should be clear after reading the description of walking through the forest at the beginning of the module; it was the combination of senses that allowed for that experience. It shouldn’t shock you to find out that at some point information from each of our senses becomes integrated. Information from one sense has the potential to influence how we perceive information from another, a process called multimodal perception. Interestingly, we actually respond more strongly to multimodal stimuli than to the sum of the responses to each single modality alone, an effect called the superadditive effect of multisensory integration. This can explain how you’re still able to understand what friends are saying to you at a loud concert, as long as you are able to get visual cues from watching them speak. If you were having a quiet conversation at a café, you likely wouldn’t need these additional cues. In fact, the principle of inverse effectiveness states that you are less likely to benefit from additional cues from other modalities if the initial unimodal stimulus is strong enough (Stein & Meredith, 1993). Because we are able to process multimodal sensory stimuli, and the results of those processes are qualitatively different from those of unimodal stimuli, it’s a fair assumption that the brain is doing something qualitatively different when they’re being processed. There has been a growing body of evidence since the mid-1990s on the neural correlates of multimodal perception. For example, neurons that respond to both visual and auditory stimuli have been identified in the superior temporal sulcus (Calvert, Hansen, Iversen, & Brammer, 2001). Additionally, multimodal “what” and “where” pathways have been proposed for auditory and tactile stimuli (Renier et al., 2009). We aren’t limited to reading about these regions of the brain and what they do; we can experience them with a few interesting examples (see Additional Resources for the “McGurk Effect,” the “Double Flash Illusion,” and the “Rubber Hand Illusion”). Conclusion Our impressive sensory abilities allow us to experience the most enjoyable and most miserable experiences, as well as everything in between. Our eyes, ears, nose, tongue and skin provide an interface for the brain to interact with the world around us. 
While there is simplicity in covering each sensory modality independently, we are organisms that have evolved the ability to process multiple modalities as a unified experience. Outside Resources Audio: Auditory Demonstrations from Richard Warren’s lab at the University of Wisconsin, Milwaukee www4.uwm.edu/APL/demonstrations.html Audio: Auditory Demonstrations. CD published by the Acoustical Society of America (ASA). You can listen to the demonstrations here www.feilding.net/sfuad/musi30...1/demos/audio/ Book: Ackerman, D. (1990). A natural history of the senses. Vintage. http://www.dianeackerman.com/a-natur...diane-ackerman Book: Sacks, O. (1998). The man who mistook his wife for a hat: And other clinical tales. Simon and Schuster. http://www.oliversacks.com/books-by-...took-wife-hat/ Video: Acquired knowledge and its impact on our three-dimensional interpretation of the world - 3D Street Art Video: Acquired knowledge and its impact on our three-dimensional interpretation of the world - Anamorphic Illusions Video: Cybersenses Video: Seeing Sound, Tasting Color Video: The Phantom Limb Phenomenon Web: A regularly updated website covering some of the amazing sensory capabilities of non-human animals. phenomena.nationalgeographic....animal-senses/ Web: A special ringtone that is only audible to younger people. Web: Amazing library with visual phenomena and optical illusions, explained http://michaelbach.de/ot/index.html Web: An article on the discoveries in echolocation: the use of sound in locating people and things http://www.psychologicalscience.org/...et-around.html Web: An optical illusion demonstration of the opponent-process theory of color vision. Web: Anatomy of the eye http://www.eyecareamerica.org/eyecare/anatomy/ Web: Animation showing tonotopic organization of the basilar membrane. Web: Best Illusion of the Year Contest website http://illusionoftheyear.com/ Web: Demonstration of contrast gain adaptation http://www.michaelbach.de/ot/lum_contrast-adapt/ Web: Demonstration of illusory contours and lateral inhibition. Mach bands http://michaelbach.de/ot/lum-MachBands/index.html Web: Demonstration of illusory contrast and lateral inhibition. The Hermann grid http://michaelbach.de/ot/lum_herGrid/ Web: Demonstrations and illustrations of cochlear mechanics can be found here http://lab.rockefeller.edu/hudspeth/...calSimulations Web: Double Flash Illusion Web: Further information regarding what and where/how pathways http://www.scholarpedia.org/article/...where_pathways Web: Great website with a large collection of optical illusions http://www.michaelbach.de/ot/ Web: McGurk Effect Video Web: More demonstrations and illustrations of cochlear mechanics www.neurophys.wisc.edu/animations/ Web: Scientific American Frontiers: Cybersenses www.pbs.org/saf/1509/ Web: The Genetics of Taste http://www.smithsonianmag.com/arts-c...797110/?no-ist Web: The Monell Chemical Sense Center website http://www.monell.org/ Web: The Rubber Hand Illusion Web: The Tongue Map: Tasteless Myth Debunked http://www.livescience.com/7113-tong...-debunked.html Discussion Questions 1. What physical features would an organism need in order to be really good at localizing sound in 3D space? Are there any organisms that currently excel in localizing sound? What features allow them to do this? 2. What issues would exist with visual recognition of an object if a research participant had his/her corpus callosum severed? What would you need to do in order to observe these deficits? 3.
There are a number of myths that exist about the sensory capabilities of infants. How would you design a study to determine what the true sensory capabilities of infants are? 4. A well-documented phenomenon experienced by millennials is the phantom vibration of a cell phone when no actual text message has been received. How can we use signal detection theory to explain this? Vocabulary Absolute threshold The smallest amount of stimulation needed for detection by a sense. Agnosia Loss of the ability to recognize stimuli despite otherwise intact sensory processing. Anosmia Loss of the ability to smell. Audition Ability to process auditory stimuli. Also called hearing. Auditory canal Tube running from the outer ear to the middle ear. Auditory hair cells Receptors in the cochlea that transduce sound into electrical potentials. Binocular disparity Difference in images processed by the left and right eyes. Binocular vision Our ability to perceive 3D and depth because of the difference between the images on each of our retinas. Bottom-up processing Building up to perceptual experience from individual pieces. Chemical senses Our ability to process the environmental stimuli of smell and taste. Cochlea Spiral bone structure in the inner ear containing auditory hair cells. Cones Photoreceptors of the retina sensitive to color. Located primarily in the fovea. Dark adaptation Adjustment of eye to low levels of light. Differential threshold The smallest difference needed in order to differentiate two stimuli. (See Just Noticeable Difference (JND)) Dorsal pathway Pathway of visual processing. The “where” pathway. Flavor The combination of smell and taste. Gustation Ability to process gustatory stimuli. Also called taste. Just noticeable difference (JND) The smallest difference needed in order to differentiate two stimuli. (see Differential Threshold) Light adaptation Adjustment of eye to high levels of light. Mechanoreceptors Mechanical sensory receptors in the skin that respond to tactile stimulation. Multimodal perception The effects that concurrent stimulation in more than one sensory modality has on the perception of events and objects in the world. Nociception Our ability to sense pain. Odorants Chemicals transduced by olfactory receptors. Olfaction Ability to process olfactory stimuli. Also called smell. Olfactory epithelium Organ containing olfactory receptors. Opponent-process theory Theory proposing color vision as influenced by cells responsive to pairs of colors. Ossicles A collection of three small bones in the middle ear that vibrate against the tympanic membrane. Perception The psychological process of interpreting sensory information. Phantom limb The perception that a missing limb still exists. Phantom limb pain Pain in a limb that no longer exists. Pinna Outermost portion of the ear. Primary auditory cortex Area of the cortex involved in processing auditory stimuli. Primary somatosensory cortex Area of the cortex involved in processing somatosensory stimuli. Primary visual cortex Area of the cortex involved in processing visual stimuli. Principle of inverse effectiveness The finding that, in general, for a multimodal stimulus, if the response to each unimodal component (on its own) is weak, then the opportunity for multisensory enhancement is very large. However, if one component—by itself—is sufficient to evoke a strong response, then the effect on the response gained by simultaneously processing the other components of the stimulus will be relatively small. Retina Cell layer in the back of the eye containing photoreceptors.
Rods Photoreceptors of the retina sensitive to low levels of light. Located around the fovea. Sensation The physical processing of environmental stimuli by the sense organs. Sensory adaptation Decrease in sensitivity of a receptor to a stimulus after constant stimulation. Shape theory of olfaction Theory proposing that odorants of different size and shape correspond to different smells. Signal detection Method for studying the ability to correctly identify sensory stimuli. Somatosensation Ability to sense touch, pain and temperature. Somatotopic map Organization of the primary somatosensory cortex maintaining a representation of the arrangement of the body. Sound waves Changes in air pressure. The physical stimulus for audition. Superadditive effect of multisensory integration The finding that responses to multimodal stimuli are typically greater than the sum of the independent responses to each unimodal component if it were presented on its own. Tastants Chemicals transduced by taste receptor cells. Taste receptor cells Receptors that transduce gustatory information. Top-down processing Experience influencing the perception of stimuli. Transduction The conversion of one form of energy into another. Trichromatic theory Theory proposing color vision as influenced by three different cones responding preferentially to red, green and blue. Tympanic membrane Thin, stretched membrane in the middle ear that vibrates in response to sound. Also called the eardrum. Ventral pathway Pathway of visual processing. The “what” pathway. Vestibular system Parts of the inner ear involved in balance. Weber’s law States that just noticeable difference is proportional to the magnitude of the initial stimulus.
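Discussion question 4 above lends itself to a small worked example. The sketch below is purely illustrative: the hit and false-alarm rates are invented numbers, not data from any study, and a fuller treatment would also estimate response bias. It uses Python's standard-library NormalDist to compute the classic sensitivity statistic d′ as the difference between the z-transformed hit and false-alarm rates.

```python
# Hypothetical signal-detection sketch for the phantom-vibration question.
# "Signal" trials: the phone actually vibrated; "noise" trials: it did not.
# All rates below are made up for illustration.
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Sensitivity: d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# Someone who often "feels" vibrations that never happened:
print(d_prime(hit_rate=0.90, false_alarm_rate=0.40))  # ~1.53
# Someone who rarely false-alarms shows much higher sensitivity:
print(d_prime(hit_rate=0.90, false_alarm_rate=0.05))  # ~2.93
```

On this toy account, a person who frequently feels phantom vibrations has a high false-alarm rate, so their measured sensitivity to real vibrations is low even if they rarely miss an actual message.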
textbooks/socialsci/Psychology/Introductory_Psychology/Psychology_(Noba)/Chapter_8%3A_Sensation_and_Perception/8.1%3A_Sensation_and_Perception.txt
By Lorin Lachs California State University, Fresno Most of the time, we perceive the world as a unified bundle of sensations from multiple sensory modalities. In other words, our perception is multimodal. This module provides an overview of multimodal perception, including information about its neurobiology and its psychological effects. learning objectives • Define the basic terminology and basic principles of multimodal perception. • Describe the neuroanatomy of multisensory integration and name some of the regions of the cortex and midbrain that have been implicated in multisensory processing. • Explain the difference between multimodal phenomena and crossmodal phenomena. • Give examples of multimodal and crossmodal behavioral effects. Perception: Unified Although it has been traditional to study the various senses independently, most of the time, perception operates in the context of information supplied by multiple sensory modalities at the same time. For example, imagine if you witnessed a car collision. You could describe the stimulus generated by this event by considering each of the senses independently; that is, as a set of unimodal stimuli. Your eyes would be stimulated with patterns of light energy bouncing off the cars involved. Your ears would be stimulated with patterns of acoustic energy emanating from the collision. Your nose might even be stimulated by the smell of burning rubber or gasoline. However, all of this information would be relevant to the same thing: your perception of the car collision. Indeed, unless someone were to explicitly ask you to describe your perception in unimodal terms, you would most likely experience the event as a unified bundle of sensations from multiple senses. In other words, your perception would be multimodal. The question is whether the various sources of information involved in this multimodal stimulus are processed separately by the perceptual system or not. For the last few decades, perceptual research has pointed to the importance of multimodal perception: the effects on the perception of events and objects in the world that are observed when there is information from more than one sensory modality. Most of this research indicates that, at some point in perceptual processing, information from the various sensory modalities is integrated. In other words, the information is combined and treated as a unitary representation of the world. Questions About Multimodal Perception Several theoretical problems are raised by multimodal perception. After all, the world is a “blooming, buzzing confusion” that constantly bombards our perceptual system with light, sound, heat, pressure, and so forth. To make matters more complicated, these stimuli come from multiple events spread out over both space and time. To return to our example: Let’s say the car crash you observed happened on Main Street in your town. Your perception during the car crash might include a lot of stimulation that was not relevant to the car crash. For example, you might also overhear the conversation of a nearby couple, see a bird flying into a tree, or smell the delicious scent of freshly baked bread from a nearby bakery (or all three!). However, you would most likely not make the mistake of associating any of these stimuli with the car crash. In fact, we rarely combine the auditory stimuli associated with one event with the visual stimuli associated with another (although, under some unique circumstances—such as ventriloquism—we do).
How is the brain able to take the information from separate sensory modalities and match it appropriately, so that stimuli that belong together stay together, while stimuli that do not belong together get treated separately? In other words, how does the perceptual system determine which unimodal stimuli must be integrated, and which must not? Once unimodal stimuli have been appropriately integrated, we can further ask about the consequences of this integration: What are the effects of multimodal perception that would not be present if perceptual processing were only unimodal? Perhaps the most robust finding in the study of multimodal perception concerns this last question. No matter whether you are looking at the actions of neurons or the behavior of individuals, it has been found that responses to multimodal stimuli are typically greater than the sum of the responses to each modality presented independently. In other words, if you presented the stimulus in one modality at a time and measured the response to each of these unimodal stimuli, you would find that adding them together would still not equal the response to the multimodal stimulus. This superadditive effect of multisensory integration indicates that there are consequences resulting from the integrated processing of multimodal stimuli. The extent of the superadditive effect (sometimes referred to as multisensory enhancement) is determined by the strength of the response to the single stimulus modality with the biggest effect. To understand this concept, imagine someone speaking to you in a noisy environment (such as a crowded party). When discussing this type of multimodal stimulus, it is often useful to describe it in terms of its unimodal components: In this case, there is an auditory component (the sounds generated by the speech of the person speaking to you) and a visual component (the visual form of the face movements as the person speaks to you). In the crowded party, the auditory component of the person’s speech might be difficult to process (because of the surrounding party noise). The potential for visual information about speech—lipreading—to help in understanding the speaker’s message is, in this situation, quite large. However, if you were listening to that same person speak in a quiet library, the auditory portion would probably be sufficient for receiving the message, and the visual portion would help very little, if at all (Sumby & Pollack, 1954). In general, for a stimulus with multimodal components, if the response to each component (on its own) is weak, then the opportunity for multisensory enhancement is very large. However, if one component—by itself—is sufficient to evoke a strong response, then the opportunity for multisensory enhancement is relatively small. This finding is called the Principle of Inverse Effectiveness (Stein & Meredith, 1993) because the effectiveness of multisensory enhancement is inversely related to the unimodal response with the greatest effect. Another important theoretical question about multimodal perception concerns the neurobiology that supports it. After all, at some point, the information from each sensory modality is definitely separated (e.g., light comes in through the eyes, and sound comes in through the ears). How does the brain take information from different neural systems (optic, auditory, etc.) and combine it?
If our experience of the world is multimodal, then it must be the case that at some point during perceptual processing, the unimodal information coming from separate sensory organs—such as the eyes, ears, skin—is combined. A related question asks where in the brain this integration takes place. We turn to these questions in the next section. Biological Bases of Multimodal Perception Multisensory Neurons and Neural Convergence A surprisingly large number of brain regions in the midbrain and cerebral cortex are related to multimodal perception. These regions contain neurons that respond to stimuli from not just one, but multiple sensory modalities. For example, a region called the superior temporal sulcus contains single neurons that respond to both the visual and auditory components of speech (Calvert, 2001; Calvert, Hansen, Iversen, & Brammer, 2001). These multisensory convergence zones are interesting, because they are a kind of neural intersection of information coming from the different senses. That is, neurons that are devoted to the processing of one sense at a time—say vision or touch—send their information to the convergence zones, where it is processed together. One of the most closely studied multisensory convergence zones is the superior colliculus (Stein & Meredith, 1993), which receives inputs from many different areas of the brain, including regions involved in the unimodal processing of visual and auditory stimuli (Edwards, Ginsburgh, Henkel, & Stein, 1979). Interestingly, the superior colliculus is involved in the “orienting response,” which is the behavior associated with moving one’s eye gaze toward the location of a seen or heard stimulus. Given this function for the superior colliculus, it is hardly surprising that there are multisensory neurons found there (Stein & Stanford, 2008). Crossmodal Receptive Fields The details of the anatomy and function of multisensory neurons help to answer the question of how the brain integrates stimuli appropriately. In order to understand the details, we need to discuss a neuron’s receptive field. All over the brain, neurons can be found that respond only to stimuli presented in a very specific region of the space immediately surrounding the perceiver. That region is called the neuron’s receptive field. If a stimulus is presented in a neuron’s receptive field, then that neuron responds by increasing or decreasing its firing rate. If a stimulus is presented outside of a neuron’s receptive field, then there is no effect on the neuron’s firing rate. Importantly, when two neurons send their information to a third neuron, the third neuron’s receptive field is the combination of the receptive fields of the two input neurons. This is called neural convergence, because the information from multiple neurons converges on a single neuron. In the case of multisensory neurons, the convergence arrives from different sensory modalities. Thus, the receptive fields of multisensory neurons are the combination of the receptive fields of neurons located in different sensory pathways. Now, it could be the case that the neural convergence that results in multisensory neurons is set up in a way that ignores the locations of the input neurons’ receptive fields. Amazingly, however, these crossmodal receptive fields overlap. For example, a multisensory neuron in the superior colliculus might receive input from two unimodal neurons: one with a visual receptive field and one with an auditory receptive field. 
It has been found that the unimodal receptive fields refer to the same locations in space—that is, the two unimodal neurons respond to stimuli in the same region of space. Crucially, the overlap in the crossmodal receptive fields plays a vital role in the integration of crossmodal stimuli. When the information from the separate modalities is coming from within these overlapping receptive fields, then it is treated as having come from the same location—and the neuron responds with a superadditive (enhanced) response. So, part of the information that is used by the brain to combine multimodal inputs is the location in space from which the stimuli came. This pattern is common across many multisensory neurons in multiple regions of the brain. Because of this, researchers have defined the spatial principle of multisensory integration: Multisensory enhancement is observed when the sources of stimulation are spatially related to one another. A related phenomenon concerns the timing of crossmodal stimuli. Enhancement effects are observed in multisensory neurons only when the inputs from different senses arrive within a short time of one another (e.g., Recanzone, 2003). Multimodal Processing in Unimodal Cortex Multisensory neurons have also been observed outside of multisensory convergence zones, in areas of the brain that were once thought to be dedicated to the processing of a single modality (unimodal cortex). For example, the primary visual cortex was long thought to be devoted to the processing of exclusively visual information. The primary visual cortex is the first stop in the cortex for information arriving from the eyes, so it processes very low-level information like edges. Interestingly, neurons have been found in the primary visual cortex that receive information from the primary auditory cortex (where sound information from the auditory pathway is processed) and from the superior temporal sulcus (a multisensory convergence zone mentioned above). This is remarkable because it indicates that the processing of visual information is, from a very early stage, influenced by auditory information. There may be two ways for these multimodal interactions to occur. First, it could be that the processing of auditory information in relatively late stages of processing feeds back to influence low-level processing of visual information in unimodal cortex (McDonald, Teder-Sälejärvi, Russo, & Hillyard, 2003). Alternatively, it may be that areas of unimodal cortex contact each other directly (Driver & Noesselt, 2008; Macaluso & Driver, 2005), such that multimodal integration is a fundamental component of all sensory processing. In fact, the large numbers of multisensory neurons distributed all around the cortex—in multisensory convergence areas and in primary cortices—have led some researchers to propose that a drastic reconceptualization of the brain is necessary (Ghazanfar & Schroeder, 2006). They argue that the cortex should not be considered as being divided into isolated regions that process only one kind of sensory information. Rather, they propose that these areas only prefer to process information from specific modalities but engage in low-level multisensory processing whenever it is beneficial to the perceiver (Vasconcelos et al., 2011).
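A toy model can make the spatial principle and superadditivity just described concrete. This is an illustrative sketch only: the receptive-field width, the firing rates, and the fixed 1.5 enhancement factor are invented numbers, not measurements from real superior colliculus neurons.

```python
# Toy multisensory neuron (all numbers invented for illustration).

def unimodal_response(stimulus_pos, rf_center, rf_width, strength):
    """Fire only if the stimulus falls inside a simple 1-D receptive
    field (positions in arbitrary degrees of space)."""
    inside = abs(stimulus_pos - rf_center) <= rf_width / 2
    return strength if inside else 0.0

def multisensory_response(visual_pos, auditory_pos):
    # Two unimodal inputs with overlapping receptive fields centered at 0.
    v = unimodal_response(visual_pos, rf_center=0.0, rf_width=20, strength=10)
    a = unimodal_response(auditory_pos, rf_center=0.0, rf_width=20, strength=8)
    if v > 0 and a > 0:
        # Spatially coincident inputs: superadditive (enhanced) response,
        # i.e., more than the sum of the two unimodal responses.
        return 1.5 * (v + a)
    return v + a  # no enhancement when the sources are far apart

print(multisensory_response(visual_pos=2, auditory_pos=3))   # 27.0 > 10 + 8
print(multisensory_response(visual_pos=2, auditory_pos=40))  # 10.0 (visual only)
```

Under the Principle of Inverse Effectiveness, the enhancement factor would not be a constant 1.5; it would grow as the unimodal responses shrink. A temporal version of the same model would additionally require the two inputs to arrive within a short time window.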
Behavioral Effects of Multimodal Perception Although neuroscientists tend to study very simple interactions between neurons, the fact that they’ve found so many crossmodal areas of the cortex seems to hint that the way we experience the world is fundamentally multimodal. As discussed above, our intuitions about perception are consistent with this; it does not seem as though our perception of events is constrained to the perception of each sensory modality independently. Rather, we perceive a unified world, regardless of the sensory modality through which we perceive it. It will probably require many more years of research before neuroscientists uncover all the details of the neural machinery involved in this unified experience. In the meantime, experimental psychologists have contributed to our understanding of multimodal perception through investigations of the behavioral effects associated with it. These effects fall into two broad classes. The first class—multimodal phenomena—concerns the binding of inputs from multiple sensory modalities and the effects of this binding on perception. The second class—crossmodal phenomena—concerns the influence of one sensory modality on the perception of another (Spence, Senkowski, & Roder, 2009). Multimodal Phenomena Audiovisual Speech Multimodal phenomena concern stimuli that generate simultaneous (or nearly simultaneous) information in more than one sensory modality. As discussed above, speech is a classic example of this kind of stimulus. When an individual speaks, she generates sound waves that carry meaningful information. If the perceiver is also looking at the speaker, then that perceiver also has access to visual patterns that carry meaningful information. Of course, as anyone who has ever tried to lipread knows, there are limits on how informative visual speech information is. Even so, the visual speech pattern alone is sufficient for very robust speech perception. Most people assume that deaf individuals are much better at lipreading than individuals with normal hearing. It may come as a surprise to learn, however, that some individuals with normal hearing are also remarkably good at lipreading (sometimes called “speechreading”). In fact, there is a wide range of speechreading ability in both normal hearing and deaf populations (Andersson, Lyxell, Rönnberg, & Spens, 2001). However, the reasons for this wide range of performance are not well understood (Auer & Bernstein, 2007; Bernstein, 2006; Bernstein, Auer, & Tucker, 2001; Mohammed et al., 2005). How does visual information about speech interact with auditory information about speech? One of the earliest investigations of this question examined the accuracy of recognizing spoken words presented in a noisy context, much like in the example above about talking at a crowded party. To study this phenomenon experimentally, some irrelevant noise (“white noise”—which sounds like a radio tuned between stations) was presented to participants. Embedded in the white noise were spoken words, and the participants’ task was to identify the words. There were two conditions: one in which only the auditory component of the words was presented (the “auditory-alone” condition), and one in which both the auditory and visual components were presented (the “audiovisual” condition). The noise levels were also varied, so that on some trials, the noise was very loud relative to the loudness of the words, and on other trials, the noise was very soft relative to the words.
Sumby and Pollack (1954) found that the accuracy of identifying the spoken words was much higher in the audiovisual condition than in the auditory-alone condition. In addition, the pattern of results was consistent with the Principle of Inverse Effectiveness: The advantage gained by audiovisual presentation was highest when the auditory-alone condition performance was lowest (i.e., when the noise was loudest). At these noise levels, the audiovisual advantage was considerable: It was estimated that allowing the participant to see the speaker was equivalent to turning the volume of the noise down by over half. Clearly, the audiovisual advantage can have dramatic effects on behavior. Another phenomenon using audiovisual speech is a very famous illusion called the “McGurk effect” (named after one of its discoverers). In the classic formulation of the illusion, a movie is recorded of a speaker saying the syllables “gaga.” Another movie is made of the same speaker saying the syllables “baba.” Then, the auditory portion of the “baba” movie is dubbed onto the visual portion of the “gaga” movie. This combined stimulus is presented to participants, who are asked to report what the speaker in the movie said. McGurk and MacDonald (1976) found that 98 percent of their participants reported hearing the syllable “dada”—which was in neither the visual nor the auditory components of the stimulus. These results indicate that when visual and auditory information about speech is integrated, it can have profound effects on perception. Tactile/Visual Interactions in Body Ownership Not all multisensory integration phenomena concern speech, however. One particularly compelling multisensory illusion involves the integration of tactile and visual information in the perception of body ownership. In the “rubber hand illusion” (Botvinick & Cohen, 1998), an observer is situated so that one of his hands is not visible. A fake rubber hand is placed near the obscured hand, but in a visible location. The experimenter then uses a light paintbrush to simultaneously stroke the obscured hand and the rubber hand in the same locations. For example, if the middle finger of the obscured hand is being brushed, then the middle finger of the rubber hand will also be brushed. This sets up a correspondence between the tactile sensations (coming from the obscured hand) and the visual sensations (of the rubber hand). After a short time (around 10 minutes), participants report feeling as though the rubber hand “belongs” to them; that is, that the rubber hand is a part of their body. This feeling can be so strong that surprising the participant by hitting the rubber hand with a hammer often leads to a reflexive withdrawing of the obscured hand—even though it is in no danger at all. It appears, then, that our awareness of our own bodies may be the result of multisensory integration. Crossmodal Phenomena Crossmodal phenomena are distinguished from multimodal phenomena in that they concern the influence one sensory modality has on the perception of another. Visual Influence on Auditory Localization A famous (and commonly experienced) crossmodal illusion is referred to as “the ventriloquism effect.” When a ventriloquist appears to make a puppet speak, she fools the listener into thinking that the location of the origin of the speech sounds is at the puppet’s mouth.
In other words, instead of localizing the auditory signal (coming from the mouth of a ventriloquist) to the correct place, our perceptual system localizes it incorrectly (to the mouth of the puppet). Why might this happen? Consider the information available to the observer about the location of the two components of the stimulus: the sounds from the ventriloquist’s mouth and the visual movement of the puppet’s mouth. Whereas it is very obvious where the visual stimulus is coming from (because you can see it), it is much more difficult to pinpoint the location of the sounds. In other words, the very precise visual location of mouth movement apparently overrides the less well-specified location of the auditory information. More generally, it has been found that the location of a wide variety of auditory stimuli can be affected by the simultaneous presentation of a visual stimulus (Vroomen & De Gelder, 2004). In addition, the ventriloquism effect has been demonstrated for objects in motion: The motion of a visual object can influence the perceived direction of motion of a moving sound source (Soto-Faraco, Kingstone, & Spence, 2003). Auditory Influence on Visual Perception A related illusion demonstrates the opposite direction of influence: sounds affecting visual perception. In the double flash illusion, a participant is asked to stare at a central point on a computer monitor. On the extreme edge of the participant’s vision, a white circle is briefly flashed one time. There is also a simultaneous auditory event: either one beep or two beeps in rapid succession. Remarkably, participants report seeing two visual flashes when the flash is accompanied by two beeps; the same stimulus is seen as a single flash in the context of a single beep or no beep (Shams, Kamitani, & Shimojo, 2000). In other words, the number of heard beeps influences the number of seen flashes! Another illusion involves the perception of collisions between two circles (called “balls”) moving toward each other and continuing through each other. Such stimuli can be perceived as either two balls moving through each other or as a collision between the two balls that then bounce off each other in opposite directions. Sekuler, Sekuler, and Lau (1997) showed that the presentation of an auditory stimulus at the time of contact between the two balls strongly influenced the perception of a collision event. In this case, the perceived sound influences the interpretation of the ambiguous visual stimulus. Crossmodal Speech Several crossmodal phenomena have also been discovered for speech stimuli. These crossmodal speech effects usually show altered perceptual processing of unimodal stimuli (e.g., acoustic patterns) by virtue of prior experience with the alternate unimodal stimulus (e.g., optical patterns). For example, Rosenblum, Miller, and Sanchez (2007) conducted an experiment examining the ability to become familiar with a person’s voice. Their first interesting finding was unimodal: Much like what happens when someone repeatedly hears a person speak, perceivers can become familiar with the “visual voice” of a speaker. That is, they can become familiar with the person’s speaking style simply by seeing that person speak. Even more astounding was their crossmodal finding: Familiarity with this visual information also led to increased recognition of the speaker’s auditory speech, to which participants had never had exposure.
Similarly, it has been shown that when perceivers see a speaking face, they can identify the (auditory-alone) voice of that speaker, and vice versa (Kamachi, Hill, Lander, & Vatikiotis-Bateson, 2003; Lachs & Pisoni, 2004a, 2004b, 2004c; Rosenblum, Smith, Nichols, Lee, & Hale, 2006). In other words, the visual form of a speaker engaged in the act of speaking appears to contain information about what that speaker should sound like. Perhaps more surprisingly, the auditory form of speech seems to contain information about what the speaker should look like. Conclusion In this module, we have reviewed some of the main evidence and findings concerning the role of multimodal perception in our experience of the world. It appears that our nervous system (and the cortex in particular) contains considerable architecture for the processing of information arriving from multiple senses. Given this neurobiological setup, and the diversity of behavioral phenomena associated with multimodal stimuli, it is likely that the investigation of multimodal perception will continue to be a topic of interest in the field of experimental perception for many years to come. Outside Resources Article: A review of the neuroanatomy and methods associated with multimodal perception: http://dx.doi.org/10.1016/j.neubiorev.2011.04.015 Journal: Experimental Brain Research Special issue: Crossmodal processing www.springerlink.com/content/0014-4819/198/2-3 TED Talk: Optical Illusions http://www.ted.com/talks/beau_lotto_...how_how_we_see Video: McGurk demo Video: The Rubber Hand Illusion Web: Double-flash illusion demo http://www.cns.atr.jp/~kmtn/soundInd...llusoryFlash2/ Discussion Questions 1. The extensive network of multisensory areas and neurons in the cortex implies that much perceptual processing occurs in the context of multiple inputs. Could the processing of unimodal information ever be useful? Why or why not? 2. Some researchers have argued that the Principle of Inverse Effectiveness (PoIE) results from ceiling effects: Multisensory enhancement cannot take place when one modality is sufficient for processing because in such cases it is not possible for processing to be enhanced (because performance is already at the “ceiling”). On the other hand, other researchers claim that the PoIE stems from the perceptual system’s ability to assess the relative value of stimulus cues, and to use the most reliable sources of information to construct a representation of the outside world. What do you think? Could these two possibilities ever be teased apart? What kinds of experiments might one conduct to try to get at this issue? 3. In the late 17th century, a scientist named William Molyneux asked the famous philosopher John Locke a question relevant to modern studies of multisensory processing. The question was this: Imagine a person who has been blind since birth, and who is able, by virtue of the sense of touch, to identify three dimensional shapes such as spheres or pyramids. Now imagine that this person suddenly receives the ability to see. Would the person, without using the sense of touch, be able to identify those same shapes visually? Can modern research in multimodal perception help answer this question? Why or why not? How do the studies about crossmodal phenomena inform us about the answer to this question? Vocabulary Bouncing balls illusion The tendency to perceive two circles as bouncing off each other if the moment of their contact is accompanied by an auditory stimulus. 
Crossmodal phenomena Effects that concern the influence of the perception of one sensory modality on the perception of another. Crossmodal receptive field A receptive field that can be stimulated by a stimulus from more than one sensory modality. Crossmodal stimulus A stimulus with components in multiple sensory modalities that interact with each other. Double flash illusion The false perception of two visual flashes when a single flash is accompanied by two auditory beeps. Integrated The process by which the perceptual system combines information arising from more than one modality. McGurk effect An effect in which conflicting visual and auditory components of a speech stimulus result in an illusory percept. Multimodal Of or pertaining to multiple sensory modalities. Multimodal perception The effects that concurrent stimulation in more than one sensory modality has on the perception of events and objects in the world. Multimodal phenomena Effects that concern the binding of inputs from multiple sensory modalities. Multisensory convergence zones Regions in the brain that receive input from multiple unimodal areas processing different sensory modalities. Multisensory enhancement See “superadditive effect of multisensory integration.” Primary auditory cortex A region of the cortex devoted to the processing of simple auditory information. Primary visual cortex A region of the cortex devoted to the processing of simple visual information. Principle of Inverse Effectiveness The finding that, in general, for a multimodal stimulus, if the response to each unimodal component (on its own) is weak, then the opportunity for multisensory enhancement is very large. However, if one component—by itself—is sufficient to evoke a strong response, then the effect on the response gained by simultaneously processing the other components of the stimulus will be relatively small. Receptive field The portion of the world to which a neuron will respond if an appropriate stimulus is present there. Rubber hand illusion The false perception of a fake hand as belonging to a perceiver, due to multimodal sensory information. Sensory modalities A type of sense; for example, vision or audition. Spatial principle of multisensory integration The finding that the superadditive effects of multisensory integration are observed when the sources of stimulation are spatially related to one another. Superadditive effect of multisensory integration The finding that responses to multimodal stimuli are typically greater than the sum of the independent responses to each unimodal component if it were presented on its own. Unimodal Of or pertaining to a single sensory modality. Unimodal components The parts of a stimulus relevant to one sensory modality at a time. Unimodal cortex A region of the brain devoted to the processing of information from a single sensory modality.
textbooks/socialsci/Psychology/Introductory_Psychology/Psychology_(Noba)/Chapter_8%3A_Sensation_and_Perception/8.10%3A_Multi-Modal_Perception.txt
By Simona Buetti and Alejandro Lleras University of Illinois at Urbana-Champaign Vision is the sensory modality that transforms light into a psychological experience of the world around you, with minimal bodily effort. This module provides an overview of the most significant steps in this transformation and strategies that your brain uses to achieve this visual understanding of the environment. learning objectives • Describe how the eye transforms light information into neural energy. • Understand what sorts of information the brain is interested in extracting from the environment and why it is useful. • Describe how the visual system has adapted to deal with different lighting conditions. • Understand the value of having two eyes. • Understand why we have color vision. • Understand the interdependence between vision and other brain functions. What Is Vision? Think about the spectacle of a starry night. You look up at the sky, and thousands of photons from distant stars come crashing into your retina, a light-sensitive structure at the back of your eyeball. These photons may be thousands of years old and have survived a trip across the universe, only to run into one of your photoreceptors. Tough luck: in one thousandth of a second, this little bit of light energy becomes the fuel for a photochemical reaction known as photoactivation. The light energy becomes neural energy and triggers a cascade of neural activity that, a few hundredths of a second later, will result in your becoming aware of that distant star. You and the universe united by photons. That is the amazing power of vision. Light brings the world to you. Without moving, you know what’s out there. You can recognize friends coming to meet you before you are able to hear them, and tell ripe fruits from green ones on the tree without having to taste them or even reach out to grab them. You can also tell how quickly a ball is moving in your direction (Will it hit you? Can you hit it?). How does all of that happen? First, light enters the eyeball through a tiny hole known as the pupil and, thanks to the refractive properties of your cornea and lens, this light signal gets projected sharply onto the retina (see Outside Resources for links to a more detailed description of the eye structure). There, light is transduced into neural energy by about 200 million photoreceptor cells. This is where the information carried by the light about distant objects and colors starts being encoded by our brain. There are two different types of photoreceptors: rods and cones. The human eye contains more rods than cones. Rods give us sensitivity under dim lighting conditions and allow us to see at night. Cones allow us to see fine details in bright light and give us the sensation of color. Cones are tightly packed around the fovea (the central region of the retina behind your pupil) and more sparsely elsewhere. Rods populate the periphery (the region surrounding the fovea) and are almost absent from the fovea. But vision is far more complex than just catching photons. The information encoded by the photoreceptors undergoes a rapid and continuous set of ever more complex analyses so that, eventually, you can make sense of what’s out there. At the fovea, visual information is encoded separately from tiny portions of the world (each about half the width of a human hair viewed at arm’s length) so that eventually the brain can reconstruct in great detail fine visual differences from locations at which you are directly looking.
This fine level of encoding requires lots of light and it is slow going (neurally speaking). In contrast, in the periphery, there is a different encoding strategy: detail is sacrificed in exchange for sensitivity. Information is summed across larger sections of the world. This aggregation occurs quickly and allows you to detect dim signals under very low levels of light, as well as detect sudden movements in your peripheral vision. The Importance of Contrast What happens next? Well, you might think that the eye would do something like record the amount of light at each location in the world and then send this information to the visual-processing areas of the brain (an astounding 30% of the cortex is influenced by visual signals!). But, in fact, that is not what eyes do. As soon as photoreceptors capture light, the nervous system gets busy analyzing differences in light, and it is these differences that get transmitted to the brain. The brain, it turns out, cares little about the overall amount of light coming from a specific part of the world, or in the scene overall. Rather, it wants to know: does the light coming from this one point differ from the light coming from the point next to it? Place your hand on the table in front of you. The contour of your hand is actually determined by the difference in light—the contrast—between the light coming from the skin in your hand and the light coming from the table underneath. To find the contour of your hand, we simply need to find the regions in the image where the difference in light between two adjacent points is maximal. Two points on your skin will reflect similar levels of light back to you, as will two points on the table. On the other hand, two points that fall on either side of the boundary contour between your hand and the table will reflect very different light. The fact that the brain is interested in coding contrast in the world reveals something deeply important about the forces that drove the evolution of our brain: encoding the absolute amount of light in the world tells us little about what is out there. But if your brain can detect the sudden appearance of a difference in light somewhere in front of you, then it must be that something new is there. That contrast signal is information. That information may represent something that you like (food, a friend) or something dangerous approaching (a tiger, a cliff). The rest of your visual system will work hard to determine what that thing is, but as quickly as 10ms after light enters your eyes, ganglion cells in your retinae have already encoded all the differences in light from the world in front of you. Contrast is so important that your neurons go out of their way not only to encode differences in light but to exaggerate those differences for you, lest you miss them. Neurons achieve this via a process known as lateral inhibition. When a neuron is firing in response to light, it produces two signals: an output signal to pass on to the next level in vision, and a lateral signal to inhibit all neurons that are next to it. This makes sense on the assumption that nearby neurons are likely responding to the same light coming from nearby locations, so this information is somewhat redundant. The magnitude of the lateral inhibitory signal a neuron produces is proportional to the excitatory input that neuron receives: the more a neuron fires, the stronger the inhibition it produces. Figure 8.2.1 illustrates how lateral inhibition amplifies contrast signals at the edges of surfaces. 
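The edge-exaggerating effect of lateral inhibition is easy to demonstrate numerically. In this minimal sketch, each unit subtracts a fixed 10% of each neighbor's input from its own response; the inhibition weight and the input values are arbitrary illustrative choices, not physiological constants.

```python
# Illustrative 1-D lateral inhibition across a bright-to-dark edge.
light = [100, 100, 100, 100, 20, 20, 20, 20]  # light level at each location

def lateral_inhibition(inputs, weight=0.1):
    out = []
    for i, x in enumerate(inputs):
        # Missing neighbors at the array edges are treated as equal to x.
        left = inputs[i - 1] if i > 0 else x
        right = inputs[i + 1] if i < len(inputs) - 1 else x
        out.append(x - weight * (left + right))
    return out

print(lateral_inhibition(light))
# [80.0, 80.0, 80.0, 88.0, 8.0, 16.0, 16.0, 16.0]
# Within each uniform region the response is flat (80 or 16), but at the
# boundary the bright side is pushed up (88) and the dark side pushed
# down (8): the difference in light is exaggerated, as in Mach bands.
```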
Sensitivity to Different Light Conditions Let’s think for a moment about the range of conditions in which your visual system must operate day in and day out. When you take a walk outdoors on a sunny day, as many as billions of photons enter your eyeballs every second. In contrast, when you wake up in the middle of the night in a dark room, there might be as few as a few hundred photons per second entering your eyes. To deal with these extremes, the visual system relies on the different properties of the two types of photoreceptors. Rods are mostly responsible for processing light when photons are scarce (just a single photon can make a rod fire!), but it takes time to replenish the visual pigment that rods require for photoactivation. So, under bright conditions, rods are quickly bleached (Stuart & Birge, 1996) and cannot keep up with the constant barrage of photons hitting them. That’s when the cones become useful. Cones require more photons to fire and, more critically, their photopigments replenish much faster than rods’ photopigments, allowing them to keep up when photons are abundant. What happens when you abruptly change lighting conditions? Under bright light, your rods are bleached. When you move into a dark environment, it will take time (up to 30 minutes) before they chemically recover (Hurley, 2002). Once they do, you will begin to see things around you that initially you could not. This phenomenon is called dark adaptation. When you go from dark to bright light (as you exit a tunnel on a highway, for instance), your rods will be bleached in a blaze and you will be blinded by the sudden light for about 1 second. However, your cones are ready to fire! Their firing will take over and you will quickly begin to see at this higher level of light. A similar, but more subtle, adjustment occurs when the change in lighting is not so drastic. Think about your experience of reading a book at night in your bed compared to reading outdoors: the room may feel to you fairly well illuminated (enough so you can read) but the light bulbs in your room are not producing the billions of photons that you encounter outside. In both cases, you feel that your experience is that of a well-lit environment. You don’t feel one experience as millions of times brighter than the other. This is because vision (like much of perception) is not proportional: seeing twice as many photons does not produce a sensation of seeing twice as bright a light. The visual system tunes into the current experience by favoring a range of contrast values that is most informative in that environment (Gardner et al., 2005). This is the concept of contrast gain: the visual system determines the mean contrast in a scene and represents values around that mean contrast best, while ignoring smaller contrast differences. (See the Outside Resources section for a demonstration.) The Reconstruction Process What happens once information leaves your eyes and enters the brain? Neurons project first into the thalamus, in a section known as the lateral geniculate nucleus. The information then splits and projects towards two different parts of the brain. Most of the computations regarding reflexive eye movements are carried out in subcortical regions, the evolutionarily older part of the brain. Reflexive eye movements allow you to quickly orient your eyes towards areas of interest and to track objects as they move.
The more complex computations, those that eventually allow you to have a visual experience of the world, all happen in the cortex, the evolutionarily newer region of the brain. The first stop in the cortex is at the primary visual cortex (also known as V1). Here, the “reconstruction” process begins in earnest: based on the contrast information arriving from the eyes, neurons will start computing information about color and simple lines, detecting various orientations and thicknesses. Small-scale motion signals are also computed (Hubel & Wiesel, 1962). As information begins to flow towards other “higher” areas of the system, more complex computations are performed. For example, edges are assigned to the object to which they belong, backgrounds are separated from foregrounds, colors are assigned to surfaces, and the global motion of objects is computed. Many of these computations occur in specialized brain areas. For instance, an area called MT processes global-motion information; the parahippocampal place area identifies locations and scenes; the fusiform face area specializes in identifying objects for which fine discriminations are required, like faces. There is even a brain region specialized in letter and word processing. These visual-recognition areas are located along the ventral pathway of the brain (also known as the What pathway). Other brain regions along the dorsal pathway (or Where-and-How pathway) will compute information about self- and object-motion, allowing you to interact with objects, navigate the environment, and avoid obstacles (Goodale & Milner, 1992). Now that you have a basic understanding of how your visual system works, you can ask yourself the question: why do you have two eyes? Everything that we discussed so far could be computed with information coming from a single eye. So why two? Looking at the animal kingdom gives us a clue. Animals that tend to be prey have eyes located on opposite sides of their skull. This allows them to detect predators whenever one appears anywhere around them. Humans, like most predators, have two eyes pointing in the same direction, encoding almost exactly the same scene twice. This redundancy gives us a binocular advantage: having two eyes not only provides you with two chances at catching a signal in front of you, but the minute difference in perspective that you get from each eye is used by your brain to reconstruct the sense of three-dimensional space. You can get an estimate of how far distant objects are from you, their size, and their volume. This is no easy feat: the signal in each eye is a two-dimensional projection of the world, like two separate pictures drawn upon your retinae. Yet, your brain effortlessly provides you with a sense of depth by combining those two signals. This 3-D reconstruction process also relies heavily on knowledge about spatial relations that you acquired through experience. For instance, your visual system learns to interpret how the volume, distance, and size of objects change as they move closer or farther from you. (See the Outside Resources section for demonstrations.) The Experience of Color Perhaps one of the most beautiful aspects of vision is the richness of the color experience that it provides us. One of the challenges that we have as scientists is to understand why the human color experience is what it is. Perhaps you have heard that dogs only have 2 types of color photoreceptors, whereas humans have 3, chickens have 4, and mantis shrimp have 16. Why is there such variation across species?
Scientists believe each species has evolved with different needs and uses color perception to signal information about food, reproduction, and health that are unique to their species. For example, humans have a specific sensitivity that allows us to detect slight changes in skin tone. You can tell when someone is embarrassed, aroused, or ill. Detecting these subtle signals is adaptive in a social species like ours. How is color coded in the brain? The two leading theories of color perception were proposed in the 19th century, about 100 years before physiological evidence was found to corroborate them both (Svaetichin, 1956). Trichromacy theory, proposed by Young (1802) and Helmholtz (1867), holds that the eye has three different types of color-sensitive cells, an idea based on the observation that any one color can be reproduced by combining lights from three lamps of different hue. If you can adjust separately the intensity of each light, at some point you will find the right combination of the three lights to match any color in the world. This principle is used today on TVs, computer screens, and any colored display. If you look closely enough at a pixel, you will find that it is composed of a blue, a red, and a green light, of varying intensities. In the retina, humans indeed have three types of cones: S-cones, M-cones, and L-cones (also known as blue, green, and red cones, respectively) that are sensitive to three different wavelengths of light. Around the same time, Hering made a puzzling discovery: some colors are impossible to create. Whereas you can make yellowish greens, bluish reds, greenish blues, and reddish yellows by combining two colors, you can never make a reddish green or a bluish yellow. This observation led Hering (1892) to propose the Opponent Process theory of color: color is coded via three opponent channels (red-green, blue-yellow, and black-white). Within each channel, a comparison is constantly computed between the two elements in the pair. In other words, colors are encoded as differences between two hues and not as simple combinations of hues. Again, what matters to the brain is contrast. When one element is stronger than the other, the stronger color is perceived and the weaker one is suppressed. You can experience this phenomenon by following the link below. nobaproject.com/assets/modules/module-visio... When both colors in a pair are present to equal extents, the color perception is canceled and we perceive a level of grey. This is why you cannot see a reddish green or a bluish yellow: they cancel each other out. By the way, if you are wondering where the yellow signal comes from, it turns out that it is computed by averaging the M- and L-cone signals. Are these colors uniquely human colors? Some think that they are: the red-green contrast, for example, is finely tuned to detect changes in human skin tone so you can tell when someone blushes or becomes pale. So, the next time you go out for a walk with your dog, look at the sunset and ask yourself, what color does my dog see? Probably none of the orange hues you do! So now, you can ask yourself the question: do all humans experience color in the same way? Color-blind people, as you can imagine, do not see all the colors that the rest of us see, and this is because they lack one (or more) of the cone types in their retina.
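The opponent-channel arithmetic described above (yellow as the average of the M- and L-cone signals, each channel coding a difference between hues) can be sketched in a few lines. This is a cartoon of the textbook description with made-up cone values, not a physiological model of retinal circuitry.

```python
# Illustrative opponent-process arithmetic on cone signals (0-1 scale).
# Weights are teaching simplifications, not measured retinal weights.

def opponent_channels(s_cone, m_cone, l_cone):
    yellow = (m_cone + l_cone) / 2     # yellow derived from M and L cones
    return {
        "red_green": l_cone - m_cone,  # positive = reddish, negative = greenish
        "blue_yellow": s_cone - yellow,
        "light_dark": (s_cone + m_cone + l_cone) / 3,
    }

# A long-wavelength ("reddish") light: a strong red-green signal (~0.6)
# and a yellowish blue-yellow signal (~-0.5).
print(opponent_channels(s_cone=0.1, m_cone=0.3, l_cone=0.9))

# Equal M and L signals cancel: the red-green channel reads 0.0, which
# is the arithmetic counterpart of never experiencing a "reddish green."
print(opponent_channels(s_cone=0.2, m_cone=0.6, l_cone=0.6)["red_green"])
```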
Incidentally, there are a few women who actually have four different sets of cones in their eyes, and recent research suggests that their experience of color can be (but not always is) richer than that of three-coned people. A slightly different question, though, is whether all three-coned people have the same internal experiences of colors: is the red inside your head the same red inside your mom’s head? That question, debated by philosophers for millennia, is almost impossible to answer, yet recent data suggest that there may in fact be cultural differences in the way we perceive color. As it turns out, not all cultures categorize colors in the same way, and some groups “see” shades of what we in the Western world would call the “same” color as categorically different colors. The Berinmo tribe in New Guinea, for instance, appears to experience the green shades of living leaves as belonging to an entirely different color category than the green shades of dying leaves. Russians, too, appear to experience light and dark shades of blue as different categories of colors, in a way that most Westerners do not. Further, current brain imaging research suggests that people’s brains change (increase in white-matter volume) when they learn new color categories! These intriguing and suggestive findings seem to indicate that our cultural environment may in fact have a small but definite impact on how people use and experience colors across the globe. Integration with Other Modalities Vision is not an encapsulated system. It interacts with and depends on other sensory modalities. For example, when you move your head in one direction, your eyes reflexively move in the opposite direction to compensate, allowing you to maintain your gaze on the object that you are looking at. This reflex is called the vestibulo-ocular reflex. It is achieved by integrating information from both the visual and the vestibular system (which knows about body motion and position). You can experience this compensation quite simply. First, while you keep your head still and your gaze looking straight ahead, wave your finger in front of you from side to side. Notice how the image of the finger appears blurry. Now, keep your finger steady and look at it while you move your head from side to side. Notice how your eyes reflexively move to compensate for the movement of your head and how the image of the finger stays sharp and stable. Vision also interacts with your proprioceptive system, to help you find where all your body parts are, and with your auditory system, to help you understand the sounds people make when they speak. You can learn more about this in the Noba module about multimodal perception (http://noba.to/cezw4qyn). Finally, vision is also often implicated in a blending-of-sensations phenomenon known as synesthesia. Synesthesia occurs when one sensory signal gives rise to two or more sensations. The most common type is grapheme-color synesthesia. About 1 in 200 individuals experience a sensation of color associated with specific letters, numbers, or words: the number 1 might always be seen as red, the number 2 as orange, etc. But the more fascinating forms of synesthesia blend sensations from entirely different sensory modalities, like taste and color or music and color: the taste of chicken might elicit a sensation of green, for example, and the timbre of a violin a deep purple.
Concluding Remarks We are at an exciting moment in our scientific understanding of vision. We have just begun to get a functional understanding of the visual system. That understanding is not yet sufficient for us to recreate artificial visual systems (i.e., we still cannot make robots that “see” and understand light signals as we do), but we are getting there. Just recently, major breakthroughs in vision science have allowed researchers to significantly improve retinal prosthetics: photosensitive circuits that can be implanted at the back of the eyeball of blind people, connect to visual areas of the brain, and can partially restore a “visual experience” to these patients (Nirenberg & Pandarinath, 2012). And using functional magnetic resonance imaging, we can now “decode” from your brain activity the images that you saw in your dreams while you were asleep (Horikawa, Tamaki, Miyawaki, & Kamitani, 2013)! Yet, there is still so much more to understand. Consider this: if vision is a construction process that takes time, whatever we see now is no longer what is in front of us. Yet, humans can do amazing time-sensitive feats like hitting a 90-mph fastball in a baseball game. It appears then that a fundamental function of vision is not just to know what is happening around you now, but actually to make an accurate inference about what you are about to see next (Enns & Lleras, 2008), so that you can keep up with the world. Understanding how this future-oriented, predictive function of vision is achieved in the brain is probably the next big challenge in this fascinating realm of research. Outside Resources Video: Acquired knowledge and its impact on our three-dimensional interpretation of the world - 3D Street Art Video: Acquired knowledge and its impact on our three-dimensional interpretation of the world - Anamorphic Illusions Video: Acquired knowledge and its impact on our three-dimensional interpretation of the world - Optical Illusion Web: Amazing library with visual phenomena and optical illusions, explained http://michaelbach.de/ot/index.html Web: Anatomy of the eye http://www.eyecareamerica.org/eyecare/anatomy/ Web: Demonstration of contrast gain adaptation http://www.michaelbach.de/ot/lum_contrast-adapt/ Web: Demonstration of illusory contours and lateral inhibition. Mach bands http://michaelbach.de/ot/lum-MachBands/index.html Web: Demonstration of illusory contrast and lateral inhibition. The Hermann grid http://michaelbach.de/ot/lum_herGrid/ Web: Further information regarding what and where/how pathways http://www.scholarpedia.org/article/...where_pathways Discussion Questions 1. When running in the dark, it is recommended that you never look straight at the ground. Why? What would be a better strategy to avoid obstacles? 2. The majority of ganglion cells in the eye specialize in detecting drops in the amount of light coming from a given location. That is, they increase their firing rate when they detect less light coming from a specific location. Why might the absence of light be more important than the presence of light? Why would it be evolutionarily advantageous to code this type of information? 3. There is a hole in each one of your eyeballs called the optic disk. This is where veins enter the eyeball and where neurons (the axons of the ganglion cells) exit the eyeball. Why do you not see two holes in the world all the time? Close one eye now. Why do you not see a hole in the world now?
To “experience” a blind spot, follow the instructions on this website: http://michaelbach.de/ot/cog_blindSpot/index.html 4. Imagine you were given the task of testing the color-perception abilities of a newly discovered species of monkeys in the South Pacific. How would you go about it? 5. An important aspect of emotions is that we sense them in ourselves much in the same way as we sense other perceptions like vision. Can you think of an example where the concept of contrast gain can be used to understand people’s responses to emotional events? Vocabulary Binocular advantage Benefits from having two eyes as opposed to a single eye. Cones Photoreceptors that operate in lighted environments and can encode fine visual details. There are three different kinds (S or blue, M or green, and L or red) that are each sensitive to slightly different types of light. Combined, these three types of cones allow you to have color vision. Contrast Relative difference in the amount and type of light coming from two nearby locations. Contrast gain Process where the sensitivity of your visual system can be tuned to be most sensitive to the levels of contrast that are most prevalent in the environment. Dark adaptation Process that allows you to become sensitive to very small levels of light, so that you can actually see in the near-absence of light. Lateral inhibition A signal produced by a neuron aimed at suppressing the response of nearby neurons. Opponent Process Theory Theory of color vision that assumes there are four different basic colors, organized into two pairs (red/green and blue/yellow) and proposes that colors in the world are encoded in terms of the opponency (or difference) between the colors in each pair. There is an additional black/white pair responsible for coding light contrast. Photoactivation A photochemical reaction that occurs when light hits photoreceptors, producing a neural signal. Primary visual cortex (V1) Brain region located in the occipital cortex (toward the back of the head) responsible for processing basic visual information like the detection, thickness, and orientation of simple lines, color, and small-scale motion. Rods Photoreceptors that are very sensitive to light and are mostly responsible for night vision. Synesthesia The blending of two or more sensory experiences, or the automatic activation of a secondary (indirect) sensory experience due to certain aspects of the primary (direct) sensory stimulation. Trichromacy theory Theory that proposes that all of your color perception is fundamentally based on the combination of three (not two, not four) different color signals. Vestibulo-ocular reflex Coordination of motion information with visual information that allows you to maintain your gaze on an object while you move. What pathway Pathway of neural processing in the brain that is responsible for your ability to recognize what is around you. Where-and-How pathway Pathway of neural processing in the brain that is responsible for knowing where things are in the world and how to interact with them.
By Linda Bartoshuk and Derek Snyder University of Florida Humans are omnivores (able to survive on many different foods). The omnivore’s dilemma is to identify foods that are healthy and avoid poisons. Taste and smell cooperate to solve this dilemma. Stimuli for both taste and smell are chemicals. Smell results from a biological system that essentially permits the brain to store rough sketches of the chemical structures of odor stimuli in the environment. Thus, people in very different parts of the world can learn to like odors (paired with calories) or dislike odors (paired with nausea) that they encounter in their worlds. Taste information is preselected (by the nature of the receptors) to be relevant to nutrition. No learning is required; we are born loving sweet and hating bitter. Taste inhibits a variety of other systems in the brain. Taste damage releases that inhibition, thus intensifying sensations like those evoked by fats in foods. Ear infections and tonsillectomies both can damage taste. Adults who have experienced these conditions experience intensified sensations from fats and enhanced palatability of high-fat foods. This may explain why individuals who have had ear infections or tonsillectomies tend to gain weight. learning objectives • Explain the salient properties of taste and smell that help solve the omnivore’s dilemma. • Distinguish between the way pleasure/displeasure is produced by smells and tastes. • Explain how taste damage can have extensive unexpected consequences. The Omnivore's Dilemma Humans are omnivores. We can survive on a wide range of foods, unlike species, such as koalas, that have a highly specialized diet (for koalas, eucalyptus leaves). With our amazing dietary range comes a problem: the omnivore’s dilemma (Pollan, 2006; Rozin & Rozin, 1981). To survive, we must identify healthy food and avoid poisons. The senses of taste and smell cooperate to give us this ability. Smell also has other important functions in lower animals (e.g., avoid predators, identify sexual partners), but these functions are less important in humans. This module will focus on the way taste and smell interact in humans to solve the omnivore’s dilemma. Taste and Smell Anatomy Taste (gustation) and smell (olfaction) are both chemical senses; that is, the stimuli for these senses are chemicals. The more complex sense is olfaction. Olfactory receptors are complex proteins called G protein-coupled receptors (GPCRs). These structures are proteins that weave back and forth across the membranes of olfactory cells seven times, forming structures outside the cell that sense odorant molecules and structures inside the cell that activate the neural message ultimately conveyed to the brain by olfactory neurons. The structures that sense odorants can be thought of as tiny binding pockets with sites that respond to active parts of molecules (e.g., carbon chains). There are about 350 functional olfactory genes in humans; each gene expresses a particular kind of olfactory receptor. All olfactory receptors of a given kind project to structures called glomeruli (paired clusters of cells found on both sides of the brain). For a single molecule, the pattern of activation across the glomeruli paints a picture of the chemical structure of the molecule. Thus, the olfactory system can identify a vast array of chemicals present in the environment. Most of the odors we encounter are actually mixtures of chemicals (e.g., bacon odor). 
The olfactory system creates an image for the mixture and stores it in memory just as it does for the odor of a single molecule (Shepherd, 2005). Taste is simpler than olfaction. Bitter and sweet utilize GPCRs, just as olfaction does, but the number of different receptors is much smaller. For bitter, 25 receptors are tuned to different chemical structures (Meyerhof et al., 2010). Such a system allows us to sense many different poisons. Sweet is even simpler. The primary sweet receptor is composed of two different G protein-coupled receptors; each of these two proteins ends in large structures reminiscent of Venus flytraps. This complex receptor has multiple sites that can bind different structures. The Venus flytrap endings open so that even some very large molecules can fit inside and stimulate the receptor. Bitter is inclusive (i.e., multiple receptors tuned to very different chemical structures feed into common neurons). Sweet is exclusive. There are many sugars with similar structures, but only three of these are particularly important to humans (sucrose, glucose, and fructose). Thus, our sweet receptor tunes out most sugars, leaving only the most important to stimulate the sweet receptor. However, the ability of the sweet receptor to respond to some non-sugars presents us with one of the great mysteries of taste. Several non-sugar molecules can stimulate the primary sweet receptor (e.g., saccharin, aspartame, cyclamate). These have given rise to the artificial sweetener industry, but their biological significance is unknown. What biological purpose is served by allowing these non-sugar molecules to stimulate the primary sweet receptor? Some would have us believe that artificial sweeteners are a boon to those who want to lose weight. It seems like a no-brainer. Sugars have calories; saccharin does not. Theoretically, if we replace sugar with saccharin in our diets, we will lose weight. In fact, recent work showed that rats actually gained weight when saccharin was substituted for glucose (Swithers & Davidson, 2008). It turns out that substituting saccharin for sugar can increase appetite so more is eaten later. In addition, eating artificial sweeteners appears to alter metabolism, thus making losing weight even harder. So why did nature give us artificial sweeteners? We don’t know. One more mystery about sweet deserves comment. The discovery of the sweet receptor was met with great excitement because many investigators had searched for it for years. The fact that this complex receptor had multiple sites to which different molecules could bind explained why many different molecules taste sweet. However, this is actually a serious problem. No matter what molecule stimulates this receptor, the neural output from that receptor is the same. This would mean that the sweetness of all sweet substances would have to be the same. Yet artificial sweeteners do not taste exactly like sugar. The answer may lie in the fact that one of the two proteins that make up the receptor can act alone, but only high concentrations of sugar stimulate this isolated protein receptor. This permits the brain to distinguish between the sweetness of sugar and the sweetness of non-sugar molecules. Salty and sour are the simplest tastes; these stimuli ionize (break into positively and negatively charged particles). The first event in the transduction series is the movement of the positively charged particle through channels in the taste cell membrane (Chaudhari & Roper, 2010).
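Returning for a moment to the olfactory side of the anatomy above: the idea that the pattern of activation across glomeruli serves as a stored sketch of an odor can be illustrated with a toy pattern-matching model. Everything here is invented for illustration, including the use of only 8 receptor types instead of roughly 350 and the cosine-similarity matching rule; it captures the coding principle, not real olfactory circuitry.

import numpy as np

# Toy model: an odor = an activation pattern across N receptor types
# (humans have ~350 functional types; we use 8 here for readability).
rng = np.random.default_rng(0)
known_odors = {"lemon": rng.random(8), "bacon": rng.random(8), "rose": rng.random(8)}

def identify(pattern, library):
    """Return the stored odor whose glomerular pattern is most similar
    (cosine similarity), mimicking recognition of a remembered smell."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(library, key=lambda name: cos(pattern, library[name]))

# A slightly noisy re-encounter with lemon odor is still matched to "lemon".
noisy = known_odors["lemon"] + 0.1 * rng.random(8)
print(identify(noisy, known_odors))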
Solving the omnivore’s dilemma: Taste affect is hard-wired The pleasure associated with sweet and salty and the displeasure associated with sour and bitter are hard-wired in the brain. Newborns love sweet (taste of mother’s milk) and hate bitter (poisons) immediately. The receptors mediating salty taste are not mature at birth in humans, but when they are mature a few weeks after birth, the baby likes dilute salt (although more concentrated salt will evoke stinging sensations that will be avoided). Sour is generally disliked (protecting against tissue damage from acid?), but to the amazement of many parents, some young children appear to actually like the sour candies available today; this may be related to the breadth of their experience with fruits (Liem & Mennella, 2003). This hard-wired affect is the most salient characteristic of taste and this is why we classify only those taste qualities with hard-wired affect as “basic tastes.” Another contribution to the omnivore’s dilemma: Olfactory affect is learned The biological functions of olfaction depend on how odors enter our noses. Sniffing brings odorants through our nostrils. The odorants hit the turbinate bones and a puff of the odorized air rises to the top of the nasal cavity, where it goes through a narrow opening (the olfactory cleft) and arrives at the olfactory mucosa (the tissue that houses the olfactory receptors). Technically, this is called “orthonasal olfaction.” Orthonasal olfaction tells us about the world external to our bodies. When we chew and swallow food, the odorants emitted by the food are forced up behind the palate (roof of the mouth) and enter our noses from the back; this is called “retronasal olfaction.” Ortho and retronasal olfaction involve the same odor molecules and the same olfactory receptors; however, the brain can tell the difference between the two and does not send the input to the same areas. Retronasal olfaction and taste project to some common areas where they are presumably integrated into flavor. Flavors tell us about the food we are eating. If retronasal olfaction is paired with nausea, the food evoking the retronasal olfactory sensation becomes disliked. If retronasal olfaction is paired with situations the brain deems valuable (calories, sweet taste, pleasure from other sources, etc.), the food evoking that sensation becomes liked. These are called conditioned aversions and preferences (Rozin & Vollmecke, 1986). Those who have experienced a conditioned aversion may have found that the dislike (even disgust) evoked when a flavor is paired with nausea can generalize to the smell of the food alone (orthonasal olfaction). Some years ago, Jeremy Wolfe and Linda Bartoshuk surveyed conditioned aversions among college students and staff that had resulted from consuming foods/beverages associated with nausea (Bartoshuk & Wolfe, 1990). In 29% of the aversions, subjects reported that even the smell of the food/beverage had become aversive. Other properties of food objects can become aversive as well. In one unusual case, an aversion to cheese crackers generalized to vanilla wafers apparently because the containers were similar. Conditioned aversions function to protect us from ingesting a food that our brains associate with illness. Conditioned preferences are harder to form, but they help us learn what is safe to eat. Is the affect associated with olfaction ever hard-wired? Pheromones are said to be olfactory molecules that evoke specific behaviors. 
Googling “human pheromone” will take you to websites selling various sprays that are supposed to make one more sexually appealing. However, careful research does not support such claims in humans or any other mammals (Doty, 2010). For example, amniotic fluid was at one time believed to contain a pheromone that attracted rat pups to their mother’s nipples so they could suckle. Early interest in identifying the molecule that acted as that pheromone gave way to understanding that the behavior was learned when a novel odorant, citral (which smells like lemons), was easily substituted for amniotic fluid (Pedersen, Williams, & Blass, 1982). Central interactions: Key to understanding taste damage The integration of retronasal olfaction and taste into flavor is not the only central interaction among the sensations evoked by foods. These integrations in most cases serve important biological functions, but occasionally they go awry and lead to clinical pathologies. Taste is mediated by three cranial nerves; these are bilateral nerves, each of which innervates one side of the mouth. Since they do not connect in the peripheral nervous system, interactions across the midline must occur in the brain. Incidentally, studying interactions across the midline is a classic way to draw inferences about central interactions. Insights from studies of this type were very important to understanding central processes long before we had direct imaging of brain function. Taste on the anterior two thirds of the tongue (the part you can stick out) is mediated by the chorda tympani nerve; taste on the posterior one third (the part that stays attached) is mediated by the glossopharyngeal nerve. Taste buds are tiny clusters of cells (like the segments of an orange) that are buried in the tissue of some papillae, the structures that give the tongue its bumpy appearance. Filiform papillae are the smallest and are distributed all over the tongue; they have no taste buds. In species like the cat, the filiform papillae are shaped like small spoons and help the cat hold liquids on the tongue while lapping (try lapping from a dish and you will see how hard it is without those special filiform papillae). Fungiform papillae (given this name because they resemble small button mushrooms) are larger circular structures on the anterior tongue (innervated by the chorda tympani). They contain about six taste buds each. Fungiform papillae can be seen with the naked eye, but swabbing blue food coloring on the tongue helps. The fungiform papillae do not stain as well as the rest of the tongue so they look like pink circles against a blue background. On some tongues, the spacing of fungiform papillae is like polka dots. Other tongues can have 10 times as many fungiform papillae, spaced so closely that there is little space between them. There is a connection between the density of fungiform papillae and the perception of taste. Those who experience the most intense taste sensations (we call them supertasters) tend to have the most fungiform papillae. Incidentally, this is a rare example in sensory processes of visible anatomical variation that correlates with function. We can look at the tongues of a variety of individuals and predict which of them will experience the most intense taste sensations. The structures that house taste buds innervated by the glossopharyngeal nerve are called circumvallate papillae. They are relatively large structures arrayed in an inverted V shape across the back of the tongue. 
Each of them looks like a small island surrounded by a moat. Taste nerves project to the brain, where they send inhibitory signals to one another. One of the biological consequences of this inhibition is taste constancy. Damage to one nerve reduces taste input but also reduces inhibition on the other nerves (Bartoshuk et al., 2005). That release of inhibition intensifies the central neural signals from the undamaged nerves, thereby maintaining whole mouth function. Interestingly, this release of inhibition can be so powerful that it actually increases whole mouth taste. The surprisingly small effect of limited taste damage was noted in one of the earliest clinical observations on record. In 1825, Brillat-Savarin described in his book The Physiology of Taste an interview with an ex-prisoner who had suffered a horrible punishment: amputation of his tongue. “This man, whom I met in Amsterdam, where he made his living by running errands, had had some education, and it was easy to communicate with him by writing. After I had observed that the forepart of his tongue has been cut off clear to the ligament, I asked him if he still found any flavor in what he ate, and if his sense of taste had survived the cruelty to which he had been subjected. He replied that … he still possessed the ability to taste fairly well” (Brillat-Savarin, 1971, p. 35). This injury damaged the chorda tympani but spared the glossopharyngeal nerve. We now know that taste nerves not only inhibit one another but also inhibit other oral sensations. Thus, taste damage can intensify oral touch (fats) and oral burn (chilis). In fact, taste damage appears to be linked to pain in general. Consider an animal injured in the wild. If pain reduced eating, its chance of survival would be diminished. However, nature appears to have wired the brain such that taste input inhibits pain. Eating is reinforced and the animal’s chances of survival increase. Taste damage and weight gain The effects of taste damage depend on the extent of damage. If only one taste nerve is damaged, then release of inhibition occurs. If the damage is extensive enough, function is lost with one possible exception. Preliminary data suggest that the more extensive the damage to taste, the greater the intensification of pain; this is obviously of clinical interest. Damage to a single taste nerve can intensify oral touch (e.g., the creamy, viscous sensations evoked by fats). Perhaps most surprising, damage to a single taste nerve can intensify retronasal olfaction; this may occur as a secondary result from the intensification of whole mouth taste. These sensory changes can alter the palatability of foods; in particular, high-fat foods can be rendered more palatable. Thus, one of the first areas we examined was the possibility that mild taste damage could lead to increases in body mass index. Middle ear infections (otitis media) can damage the chorda tympani nerve; a tonsillectomy can damage the glossopharyngeal nerve. Head trauma damages both nerves, although it tends to take its greatest toll on the chorda tympani nerve. All of these clinical conditions increase body mass index in some individuals. More work is needed, but we suspect a link between the intensification of fat sensations, enhancement of palatability of high-fat foods, and weight gain. Outside Resources Video: Inside the Psychologists Studio with Linda Bartoshuk Video: Linda Bartoshuk at Nobel Conference 46 Video: Test your tongue: the science of taste Discussion Questions 1.
In this module, we have defined “basic tastes” in terms of whether or not a sensation produces hard-wired affect. Can you think of any other definitions of basic tastes? 2. Do you think omnivores, herbivores, or carnivores have a better chance at survival? 3. Olfaction is mediated by one cranial nerve. Taste is mediated by three cranial nerves. Why do you think evolution gave more nerves to taste than to smell? What are the consequences of this? Vocabulary Conditioned aversions and preferences Likes and dislikes developed through associations with pleasurable or unpleasurable sensations. Gustation The action of tasting; the ability to taste. Olfaction The sense of smell; the action of smelling; the ability to smell. Omnivore A person or animal that is able to survive by eating a wide range of foods from plant or animal origin. Orthonasal olfaction Perceiving scents/smells introduced via the nostrils. Retronasal olfaction Perceiving scents/smells introduced via the mouth/palate.
By Andrew J. Oxenham University of Minnesota Hearing allows us to perceive the world of acoustic vibrations all around us, and provides us with our most important channels of communication. This module reviews the basic mechanisms of hearing, beginning with the anatomy and physiology of the ear and a brief review of the auditory pathways up to the auditory cortex. An outline of the basic perceptual attributes of sound, including loudness, pitch, and timbre, is followed by a review of the principles of tonotopic organization, established in the cochlea. An overview of masking and frequency selectivity is followed by a review of the perception and neural mechanisms underlying spatial hearing. Finally, an overview is provided of auditory scene analysis, which tackles the important question of how the auditory system is able to make sense of the complex mixtures of sounds that are encountered in everyday acoustic environments. learning objectives • Describe the basic auditory attributes of sound. • Describe the structure and general function of the auditory pathways from the outer ear to the auditory cortex. • Discuss ways in which we are able to locate sounds in space. • Describe various acoustic cues that contribute to our ability to perceptually segregate simultaneously arriving sounds. Hearing forms a crucial part of our everyday life. Most of our communication with others, via speech or music, reaches us through the ears. Indeed, a saying, often attributed to Helen Keller, is that blindness separates us from things, but deafness separates us from people. The ears respond to acoustic information, or sound—tiny and rapid variations in air pressure. Sound waves travel from the source and produce pressure variations in the listener’s ear canals, causing the eardrums (or tympanic membranes) to vibrate. This module provides an overview of the events that follow, which convert these simple mechanical vibrations into our rich experience known as hearing, or auditory perception. Perceptual Attributes of Sound There are many ways to describe a sound, but the perceptual attributes of a sound can typically be divided into three main categories—namely, loudness, pitch, and timbre. Although all three refer to perception, and not to the physical sounds themselves, they are strongly related to various physical variables. Loudness The most direct physical correlate of loudness is sound intensity (or sound pressure) measured close to the eardrum. However, many other factors also influence the loudness of a sound, including its frequency content, its duration, and the context in which it is presented. Some of the earliest psychophysical studies of auditory perception, going back more than a century, were aimed at examining the relationships between perceived loudness, the physical sound intensity, and the just-noticeable differences in loudness (Fechner, 1860; Stevens, 1957). A great deal of time and effort has been spent refining various measurement methods. These methods involve techniques such as magnitude estimation, where a series of sounds (often sinusoids, or pure tones of single frequency) are presented sequentially at different sound levels, and subjects are asked to assign numbers to each tone, corresponding to the perceived loudness. Other studies have examined how loudness changes as a function of the frequency of a tone, resulting in the international standard equal-loudness-level contours (ISO, 2003), which are used in many areas of industry to assess noise and annoyance issues.
Such studies have led to the development of computational models that are designed to predict the loudness of arbitrary sounds (e.g., Moore, Glasberg, & Baer, 1997). Pitch Pitch plays a crucial role in acoustic communication. Pitch variations over time provide the basis of melody for most types of music; pitch contours in speech provide us with important prosodic information in non-tone languages, such as English, and help define the meaning of words in tone languages, such as Mandarin Chinese. Pitch is essentially the perceptual correlate of waveform periodicity, or repetition rate: The faster a waveform repeats over time, the higher its perceived pitch. The most common pitch-evoking sounds are known as harmonic complex tones. They are complex because they consist of more than one frequency, and they are harmonic because the frequencies are all integer multiples of a common fundamental frequency (F0). For instance, a harmonic complex tone with an F0 of 100 Hz would also contain energy at frequencies of 200, 300, 400 Hz, and so on. These higher frequencies are known as harmonics or overtones, and they also play an important role in determining the pitch of a sound. In fact, even if the energy at the F0 is absent or masked, we generally still perceive the remaining sound to have a pitch corresponding to the F0. This phenomenon is known as the “pitch of the missing fundamental,” and it has played an important role in the formation of theories and models about pitch (de Cheveigné, 2005). We hear pitch with sufficient accuracy to perceive melodies over a range of F0s from about 30 Hz (Pressnitzer, Patterson, & Krumbholz, 2001) up to about 4–5 kHz (Attneave & Olson, 1971; Oxenham, Micheyl, Keebler, Loper, & Santurette, 2011). This range also corresponds quite well to the range covered by musical instruments; for instance, the modern grand piano has notes that extend from 27.5 Hz to 4,186 Hz. We are able to discriminate changes in frequency above 5,000 Hz, but we are no longer very accurate in recognizing melodies or judging musical intervals. Timbre Timbre refers to the quality of sound, and is often described using words such as bright, dull, harsh, and hollow. Technically, timbre includes anything that allows us to distinguish two sounds that have the same loudness, pitch, and duration. For instance, a violin and a piano playing the same note sound very different, based on their sound quality or timbre. An important aspect of timbre is the spectral content of a sound. Sounds with more high-frequency energy tend to sound brighter, tinnier, or harsher than sounds with more low-frequency content, which might be described as deep, rich, or dull. Other important aspects of timbre include the temporal envelope (or outline) of the sound, especially how it begins and ends. For instance, a piano has a rapid onset, or attack, produced by the hammer striking the string, whereas the attack of a clarinet note can be much more gradual. Artificially changing the onset of a piano note by, for instance, playing a recording backwards, can dramatically alter its character so that it is no longer recognizable as a piano note. In general, the overall spectral content and the temporal envelope can provide a good first approximation to any sound, but it turns out that subtle changes in the spectrum over time (or spectro-temporal variations) are crucial in creating plausible imitations of natural musical instruments (Risset & Wessel, 1999).
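The missing fundamental is easy to demonstrate by synthesis. The sketch below uses NumPy to build a harmonic complex tone from harmonics 2 through 5 of a 100-Hz F0, deliberately leaving out the 100-Hz component itself; the duration and the choice of harmonics are arbitrary example values. The waveform still repeats 100 times per second, and listeners typically hear a pitch at 100 Hz.

import numpy as np

fs = 44100                       # sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)    # 1 second of time samples
f0 = 100.0                       # fundamental frequency (Hz)

# Sum harmonics 2..5 (200, 300, 400, 500 Hz) but omit the 100-Hz
# fundamental: the waveform nonetheless repeats every 1/f0 seconds,
# so the perceived pitch corresponds to the missing 100 Hz.
tone = sum(np.sin(2 * np.pi * k * f0 * t) for k in range(2, 6))
tone /= np.max(np.abs(tone))     # normalize to avoid clipping

# To listen, write the signal to a WAV file (e.g., with SciPy):
# from scipy.io import wavfile
# wavfile.write("missing_fundamental.wav", fs, (tone * 32767).astype(np.int16))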
An Overview of the Auditory System Our auditory perception depends on how sound is processed through the ear. The ear can be divided into three main parts—the outer, middle, and inner ear (see Figure 8.4.1). The outer ear consists of the pinna (the visible part of the ear, with all its unique folds and bumps), the ear canal (or auditory meatus), and the tympanic membrane. Of course, most of us have two functioning ears, which turn out to be particularly useful when we are trying to figure out where a sound is coming from. As discussed below in the section on spatial hearing, our brain can compare the subtle differences in the signals at the two ears to localize sounds in space. However, this trick does not always help: for instance, a sound directly in front or directly behind you will not produce a difference between the ears. In these cases, the filtering produced by the pinnae helps us localize sounds and resolve potential front-back and up-down confusions. More generally, the folds and bumps of the pinna produce distinct peaks and dips in the frequency response that depend on the location of the sound source. The brain then learns to associate certain patterns of spectral peaks and dips with certain spatial locations. Interestingly, this learned association remains malleable, or plastic, even in adulthood. For instance, a study that altered the pinnae using molds found that people could learn to use their “new” ears accurately within a matter of a few weeks (Hofman, Van Riswick, & Van Opstal, 1998). Because of the small size of the pinna, these kinds of acoustic cues are only found at high frequencies, above about 2 kHz. At lower frequencies, the sound is basically unchanged whether it comes from above, in front, or below. The ear canal itself is a tube that helps to amplify sound in the region from about 1 to 4 kHz—a region particularly important for speech communication. The middle ear consists of an air-filled cavity, which contains the middle-ear bones, known as the incus, malleus, and stapes, or anvil, hammer, and stirrup, because of their respective shapes. They have the distinction of being the smallest bones in the body. Their primary function is to transmit the vibrations from the tympanic membrane to the oval window of the cochlea and, via a form of lever action, to better match the impedance of the air surrounding the tympanic membrane with that of the fluid within the cochlea. The inner ear includes the cochlea, encased in the temporal bone of the skull, in which the mechanical vibrations of sound are transduced into neural signals that are processed by the brain. The cochlea is a spiral-shaped structure that is filled with fluid. Along the length of the spiral runs the basilar membrane, which vibrates in response to the pressure differences produced by vibrations of the oval window. Sitting on the basilar membrane is the organ of Corti, which runs the entire length of the basilar membrane from the base (by the oval window) to the apex (the “tip” of the spiral). The organ of Corti includes three rows of outer hair cells and one row of inner hair cells. The hair cells sense the vibrations by way of their tiny hairs, or stereocilia. The outer hair cells seem to function to mechanically amplify the sound-induced vibrations, whereas the inner hair cells form synapses with the auditory nerve and transduce those vibrations into action potentials, or neural spikes, which are transmitted along the auditory nerve to higher centers of the auditory pathways.
One of the most important principles of hearing—frequency analysis—is established in the cochlea. In a way, the action of the cochlea can be likened to that of a prism: the many frequencies that make up a complex sound are broken down into their constituent frequencies, with low frequencies creating maximal basilar-membrane vibrations near the apex of the cochlea and high frequencies creating maximal basilar-membrane vibrations nearer the base of the cochlea. This decomposition of sound into its constituent frequencies, and the frequency-to-place mapping, or “tonotopic” representation, is a major organizational principle of the auditory system, and is maintained in the neural representation of sounds all the way from the cochlea to the primary auditory cortex. The decomposition of sound into its constituent frequency components is part of what allows us to hear more than one sound at a time. In addition to representing frequency by place of excitation within the cochlea, frequencies are also represented by the timing of spikes within the auditory nerve. This property, known as “phase locking,” is crucial in comparing time-of-arrival differences of waveforms between the two ears (see the section on spatial hearing, below). Unlike vision, where the primary visual cortex (or V1) is considered an early stage of processing, auditory signals go through many stages of processing before they reach the primary auditory cortex, located in the temporal lobe. Although we have a fairly good understanding of the electromechanical properties of the cochlea and its various structures, our understanding of the processing accomplished by higher stages of the auditory pathways remains somewhat sketchy. With the possible exception of spatial localization and neurons tuned to certain locations in space (Harper & McAlpine, 2004; Knudsen & Konishi, 1978), there is very little consensus on the how, what, and where of auditory feature extraction and representation. There is evidence for a “pitch center” in the auditory cortex from both human neuroimaging studies (e.g., Griffiths, Buchel, Frackowiak, & Patterson, 1998; Penagos, Melcher, & Oxenham, 2004) and single-unit physiology studies (Bendor & Wang, 2005), but even here there remain some questions regarding whether a single area of cortex is responsible for coding single features, such as pitch, or whether the code is more distributed (Walker, Bizley, King, & Schnupp, 2011). Audibility, Masking, and Frequency Selectivity Overall, the human cochlea provides us with hearing over a very wide range of frequencies. Young people with normal hearing are able to perceive sounds with frequencies ranging from about 20 Hz all the way up to 20 kHz. The range of intensities we can perceive is also impressive: the quietest sounds we can hear in the medium-frequency range (between about 1 and 4 kHz) have an intensity about a factor of 1,000,000,000,000 lower than that of the loudest sound we can listen to without incurring rapid and permanent hearing loss. In part because of this enormous dynamic range, we tend to use a logarithmic scale, known as decibels (dB), to describe sound pressure or intensity. On this scale, 0 dB sound pressure level (SPL) is defined as 20 micropascals (μPa), which corresponds roughly to the quietest perceptible sound level, and 120 dB SPL is considered dangerously loud.
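The decibel arithmetic here is compact enough to check directly. The sketch below simply applies the standard definitions in Python; the example pressure values are round numbers chosen for illustration, and the speech-level comment is only approximate.

import math

P_REF = 20e-6  # reference pressure: 20 micropascals, defined as 0 dB SPL

def db_spl(pressure_pa):
    """Sound pressure level in dB re 20 uPa."""
    return 20 * math.log10(pressure_pa / P_REF)

print(db_spl(20e-6))   # 0.0   -> roughly the quietest audible sound
print(db_spl(0.02))    # 60.0  -> conversational speech, give or take
print(db_spl(20.0))    # 120.0 -> dangerously loud

# The factor-of-10^12 range in intensity quoted above maps to
# 10 * log10(1e12) = 120 dB, since intensity scales as pressure squared.
print(10 * math.log10(1e12))  # 120.0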
Masking is the process by which the presence of one sound makes another sound more difficult to hear. We all encounter masking in our everyday lives, when we fail to hear the phone ring while we are taking a shower, or when we struggle to follow a conversation in a noisy restaurant. In general, a more intense sound will mask a less intense sound, provided certain conditions are met. The most important condition is that the frequency content of the sounds overlap, such that the activity in the cochlea produced by a masking sound “swamps” that produced by the target sound. Another type of masking, known as “suppression,” occurs when the response to the masker reduces the neural (and in some cases, the mechanical) response to the target sound. Because of the way that filtering in the cochlea functions, low-frequency sounds are more likely to mask high frequencies than vice versa, particularly at high sound intensities. This asymmetric aspect of masking is known as the “upward spread of masking.” The loss of sharp cochlear tuning that often accompanies cochlear damage leads to broader filtering and more masking—a physiological phenomenon that is likely to contribute to the difficulties experienced by people with hearing loss in noisy environments (Moore, 2007). Although much masking can be explained in terms of interactions within the cochlea, there are other forms that cannot be accounted for so easily, and that can occur even when interactions within the cochlea are unlikely. These more central forms of masking come in different forms, but have often been categorized together under the term “informational masking” (Durlach et al., 2003; Watson & Kelly, 1978). Relatively little is known about the causes of informational masking, although most forms can be ascribed to a perceptual “fusion” of the masker and target sounds, or at least a failure to segregate the target from the masking sounds. Also relatively little is known about the physiological locus of informational masking, except that at least some forms seem to originate in the auditory cortex and not before (Gutschalk, Micheyl, & Oxenham, 2008). Spatial Hearing In contrast to vision, we have a 360° field of hearing. Our auditory acuity is, however, at least an order of magnitude poorer than vision in locating an object in space. Consequently, our auditory localization abilities are most useful in alerting us and allowing us to orient towards sources, with our visual sense generally providing the finer-grained analysis. Of course, there are differences between species, and some, such as barn owls and echolocating bats, have developed highly specialized sound localization systems. Our ability to locate sound sources in space is an impressive feat of neural computation. The two main sources of information both come from a comparison of the sounds at the two ears. The first is based on interaural time differences (ITD) and relies on the fact that a sound source on the left will generate sound that will reach the left ear slightly before it reaches the right ear. Although sound is much slower than light, its speed still means that the time-of-arrival difference between the two ears is a fraction of a millisecond. The largest ITDs we encounter in the real world (when sounds are directly to the left or right of us) are only a little over half a millisecond. With some practice, humans can learn to detect an ITD of between 10 and 20 μs (i.e., between 10 and 20 millionths of a second) (Klump & Eady, 1956).
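A back-of-the-envelope calculation reproduces these numbers. The sketch below uses a simplified straight-line path-difference model with an assumed 0.20-m ear separation; a real head adds extra path length around the skull, so treat the values as rough approximations.

import math

SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 degrees C
EAR_SEPARATION = 0.20    # m, assumed distance between the ears

def itd_seconds(azimuth_deg):
    """Interaural time difference for a distant source at the given
    azimuth (0 = straight ahead, 90 = directly to one side), using a
    simple straight-line path-difference approximation."""
    return EAR_SEPARATION * math.sin(math.radians(azimuth_deg)) / SPEED_OF_SOUND

for az in (0, 10, 45, 90):
    print(f"{az:3d} deg -> ITD = {itd_seconds(az) * 1e6:6.1f} microseconds")
# 90 deg gives ~583 microseconds: "a little over half a millisecond".
# Near the midline this model gives ~10 microseconds per degree, so a
# trained listener's 10-20 microsecond threshold corresponds to roughly
# 1-2 degrees of azimuth.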
The second source of information is based on interaural level differences (ILDs). At higher frequencies (higher than about 1 kHz), the head casts an acoustic “shadow,” so that when a sound is presented from the left, the sound level at the left ear is somewhat higher than the sound level at the right ear. At very high frequencies, the ILD can be as much as 20 dB, and we are sensitive to differences as small as 1 dB. As mentioned briefly in the discussion of the outer ear, information regarding the elevation of a sound source, or whether it comes from in front or behind, is contained in high-frequency spectral details that result from the filtering effects of the pinnae. In general, we are most sensitive to ITDs at low frequencies (below about 1.5 kHz). At higher frequencies we can still perceive changes in timing based on the slowly varying temporal envelope of the sound but not the temporal fine structure (Bernstein & Trahiotis, 2002; Smith, Delgutte, & Oxenham, 2002), perhaps because of a loss of neural phase-locking to the temporal fine structure at high frequencies. In contrast, ILDs are most useful at high frequencies, where the head shadow is greatest. This use of different acoustic cues in different frequency regions led to the classic and very early “duplex theory” of sound localization (Rayleigh, 1907). For everyday sounds with a broad frequency spectrum, it seems that our perception of spatial location is dominated by interaural time differences in the low-frequency temporal fine structure (Macpherson & Middlebrooks, 2002). As with vision, our perception of distance depends to a large degree on context. If we hear someone shouting at a very low sound level, we infer that the shouter must be far away, based on our knowledge of the sound properties of shouting. In rooms and other enclosed locations, the reverberation can also provide information about distance: As a speaker moves further away, the direct sound level decreases but the sound level of the reverberation remains about the same; therefore, the ratio of direct-to-reverberant energy decreases (Zahorik & Wightman, 2001). Auditory Scene Analysis There is usually more than one sound source in the environment at any one time—imagine talking with a friend at a café, with some background music playing, the rattling of coffee mugs behind the counter, traffic outside, and a conversation going on at the table next to yours. All these sources produce sound waves that combine to form a single complex waveform at the eardrum, the shape of which may bear very little relationship to any of the waves produced by the individual sound sources. Somehow the auditory system is able to break down, or decompose, these complex waveforms and allow us to make sense of our acoustic environment by forming separate auditory “objects” or “streams,” which we can follow as the sounds unfold over time (Bregman, 1990). A number of heuristic principles have been formulated to describe how sound elements are grouped to form a single object or segregated to form multiple objects. Many of these originate from the early ideas proposed in vision by the so-called Gestalt psychologists, such as Max Wertheimer. According to these rules of thumb, sounds that are in close proximity, in time or frequency, tend to be grouped together. Also, sounds that begin and end at the same time tend to form a single auditory object. Interestingly, spatial location is not always a strong or reliable grouping cue, perhaps because the location information from individual frequency components is often ambiguous due to the effects of reverberation.
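One of these grouping heuristics, common onset, is simple enough to capture in a toy sketch. The component list, the 30-ms tolerance, and the greedy clustering below are all invented for illustration; real auditory grouping combines many cues at once.

# Toy onset-grouping sketch: components that start at (nearly) the same
# time are bound into one candidate auditory object, following the
# Gestalt-style heuristics described above. Values are invented.
components = [  # (onset in seconds, frequency in Hz)
    (0.00, 200), (0.00, 400), (0.01, 600),   # a voice-like harmonic set
    (0.50, 350), (0.51, 700),                # a later, separate source
]

ONSET_TOLERANCE = 0.03  # 30-ms window, an assumed value

objects = []  # each object is a list of components
for onset, freq in sorted(components):
    for obj in objects:
        if abs(obj[0][0] - onset) <= ONSET_TOLERANCE:
            obj.append((onset, freq))
            break
    else:
        objects.append([(onset, freq)])

for i, obj in enumerate(objects, 1):
    print(f"object {i}: {[f for _, f in obj]} Hz")
# -> object 1 groups 200/400/600 Hz; object 2 groups 350/700 Hz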
Several studies have looked into the relative importance of different cues by “trading off” one cue against another. In some cases, this has led to the discovery of interesting auditory illusions, where melodies that are not present in the sounds presented to either ear emerge in the perception (Deutsch, 1979), or where a sound element is perceptually “lost” in competing perceptual organizations (Shinn-Cunningham, Lee, & Oxenham, 2007). More recent attempts have used computational and neurally based approaches to uncover the mechanisms of auditory scene analysis (e.g., Elhilali, Ma, Micheyl, Oxenham, & Shamma, 2009), and the field of computational auditory scene analysis (CASA) has emerged in part as an effort to move towards more principled, and less heuristic, approaches to understanding the parsing and perception of complex auditory scenes (e.g., Wang & Brown, 2006). Solving this problem will not only provide us with a better understanding of human auditory perception, but may provide new approaches to “smart” hearing aids and cochlear implants, as well as automatic speech recognition systems that are more robust to background noise. Conclusion Hearing provides us with our most important connection to the people around us. The intricate physiology of the auditory system transforms the tiny variations in air pressure that reach our ear into the vast array of auditory experiences that we perceive as speech, music, and sounds from the environment around us. We are only beginning to understand the basic principles of neural coding in higher stages of the auditory system, and how they relate to perception. However, even our rudimentary understanding has improved the lives of hundreds of thousands through devices such as cochlear implants, which re-create some of the ear’s functions for people with profound hearing loss. Outside Resources Audio: Auditory Demonstrations from Richard Warren’s lab at the University of Wisconsin, Milwaukee www4.uwm.edu/APL/demonstrations.html Audio: Auditory Demonstrations. CD published by the Acoustical Society of America (ASA). You can listen to the demonstrations here www.feilding.net/sfuad/musi30...1/demos/audio/ Web: Demonstrations and illustrations of cochlear mechanics can be found here http://lab.rockefeller.edu/hudspeth/...calSimulations Web: More demonstrations and illustrations of cochlear mechanics www.neurophys.wisc.edu/animations/ Discussion Questions 1. Based on the available acoustic cues, how good do you think we are at judging whether a low-frequency sound is coming from in front of us or behind us? How might we solve this problem in the real world? 2. Outer hair cells contribute not only to amplification but also to the frequency tuning in the cochlea. What are some of the difficulties that might arise for people with cochlear hearing loss, due to these two factors? Why do hearing aids not solve all these problems? 3. Why do you think the auditory system has so many stages of processing before the signals reach the auditory cortex, compared to the visual system? Is there a difference in the speed of processing required? Vocabulary Cochlea Snail-shell-shaped organ that transduces mechanical vibrations into neural signals. Interaural differences Differences (usually in time or intensity) between the two ears. Pinna Visible part of the outer ear. Tympanic membrane Ear drum, which separates the outer ear from the middle ear.
By Guro E. Løseth, Dan-Mikael Ellingson, and Siri Leknes University of Oslo, University of Gothenburg The sensory systems of touch and pain provide us with information about our environment and our bodies that is often crucial for survival and well-being. Moreover, touch is a source of pleasure. In this module, we review how information about our environment and our bodies is coded in the periphery and interpreted by the brain as touch and pain sensations. We discuss how these experiences are often dramatically shaped by top-down factors like motivation, expectation, mood, fear, stress, and context. When well-functioning, these circuits promote survival and prepare us to make adaptive decisions. Pathological loss of touch can result in perceived disconnection from the body, and insensitivity to pain can be very dangerous, leading to maladaptive hazardous behavior. On the other hand, chronic pain conditions, in which these systems start signaling pain in response to innocuous touch or even in the absence of any observable sensory stimuli, have tremendous negative impact on the lives of the affected. Understanding how our sensory-processing mechanisms can be modulated psychologically and physiologically promises to help researchers and clinicians find new ways to alleviate the suffering of chronic-pain patients. learning objectives • Describe the transduction of somatosensory signals: The properties of the receptor types as well as the difference in the properties of C-afferents and A-afferents and what functions these are thought to have. • Describe the social touch hypothesis and the role of affective touch in development and bonding. • Explain the motivation–decision model and descending modulation of pain, and give examples on how this circuitry can promote survival. • Explain how expectations and context affect pain and touch experiences. • Describe the concept of chronic pain and why treatment is so difficult. Introduction Imagine a life free of pain. How would it be—calm, fearless, serene? Would you feel invulnerable, invincible? Getting rid of pain is a popular quest—a quick search for “pain-free life” on Google returns well over 4 million hits—including links to various bestselling self-help guides promising a pain-free life in only 7 steps, 6 weeks, or 3 minutes. Pain management is a billion-dollar market, and involves much more than just pharmaceuticals. Surely a life with no pain would be a better one? Well, consider one of the “lucky few”: 12-year-old “Thomas” has never felt deep pain. Not even when a fracture made him walk around with one leg shorter than the other, so that the bones of his healthy leg were slowly crushed to destruction underneath the knee joint (see Figure 8.5.1 ). For Thomas and other members of a large Swedish family, life without pain is a harsh reality because of a mutated gene that affects the growth of the nerves conducting deep pain. Most of those affected suffer from joint damage and frequent fractures to bones in their feet and hands; some end up in wheelchairs even before they reach puberty (Minde et al., 2004). It turns out pain—generally—serves us well. Living without a sense of touch sounds less attractive than being free of pain—touch is a source of pleasure and essential to how we feel. Losing the sense of touch has severe implications—something patient G. L. experienced when an antibiotics treatment damaged the type of nerves that signal touch from her skin and the position of her joints and muscles. 
She reported feeling like she’d lost her physical self from her nose down, making her “disembodied”—like she no longer had any connection to the body attached to her head. If she didn’t look at her arms and legs they could just “wander off” without her knowing—initially she was unable to walk, and even after she relearned this skill she was so dependent on her visual attention that closing her eyes would cause her to land in a hopeless heap on the floor. Only light caresses like those from her children’s hands can make her feel she has a body, but even these sensations remain vague and elusive (Olausson et al., 2002; Sacks, 1985). Sensation Cutaneous Senses of the Skin Connect the Brain to the Body and the Outside World Touch and pain are aspects of the somatosensory system, which provides our brain with information about our own body (interoception) and properties of the immediate external world (exteroception) (Craig, 2002). We have somatosensory receptors located all over the body, from the surface of our skin to the depth of our joints. The information they send to the central nervous system is generally divided into four modalities: cutaneous senses (senses of the skin), proprioception (body position), kinesthesis (body movement), and nociception (pain, discomfort). We are going to focus on the cutaneous senses, which respond to tactile, thermal, and pruritic (itchy) stimuli, and events that cause tissue damage (and hence pain). In addition, there is growing evidence for a fifth modality specifically channeling pleasant touch (McGlone & Reilly, 2010). Different Receptor Types Are Sensitive to Specific Stimuli The skin can convey many sensations, such as the biting cold of a wind, the comfortable pressure of a hand holding yours, or the irritating itch from a woolen scarf. The different types of information activate specific receptors that convert the stimulation of the skin to electrical nerve impulses, a process called transduction. There are three main groups of receptors in our skin: mechanoreceptors, responding to mechanical stimuli, such as stroking, stretching, or vibration of the skin; thermoreceptors, responding to cold or hot temperatures; and chemoreceptors, responding to certain types of chemicals either applied externally or released within the skin (such as histamine from an inflammation). For an overview of the different receptor types and their properties, see Box 1. The experience of pain usually starts with activation of nociceptors—receptors that fire specifically to potentially tissue-damaging stimuli. Most of the nociceptors are subtypes of either chemoreceptors or mechanoreceptors. When tissue is damaged or inflamed, certain chemical substances are released from the cells, and these substances activate the chemosensitive nociceptors. Mechanoreceptive nociceptors have a high threshold for activation—they respond to mechanical stimulation that is so intense it might damage the tissue. Action Potentials in the Receptor Cells Travel as Nerve Impulses with Different Speeds When you step on a pin, this activates a host of mechanoreceptors, many of which are nociceptors. You may have noticed that the sensation changes over time. First you feel a sharp stab that propels you to remove your foot, and only then you feel a wave of more aching pain. The sharp stab is signaled via fast-conducting A-fibers, which project to the somatosensory cortex.
This part of the cortex is somatotopically organized—that is, the sensory signals are represented according to where in the body they stem from (see illustrations, Figure 8.5.2). The unpleasant ache you feel after the sharp pin stab is a separate, simultaneous signal sent from the nociceptors in your foot via thin C-pain or Aδ-fibers to the insular cortex and other brain regions involved in processing of emotion and interoception (see Figure 8.5.3 for a schematic representation of this pathway). The experience of stepping on a pin is, in other words, composed by two separate signals: one discriminatory signal allowing us to localize the touch stimulus and distinguish whether it’s a blunt or a sharp stab; and one affective signal that lets us know that stepping on the pin is bad. It is common to divide pain into sensory–discriminatory and affective–motivational aspects (Auvray, Myin, & Spence, 2010). This distinction corresponds, at least partly, to how this information travels from the peripheral to the central nervous system and how it is processed in the brain (Price, 2000). Affective Aspects of Touch Are Important for Development and Relationships Touch senses are not just there for discrimination or detection of potentially painful events, as Harlow and Suomi (1970) demonstrated in a series of heartbreaking experiments where baby monkeys were taken from their mothers. The infant monkeys could choose between two artificial surrogate mothers—one “warm” mother without food but with a furry, soft cover; and one cold, steel mother with food. The monkey babies spent most of their time clinging to the soft mother, and only briefly moved over to the hard, steel mother to feed, indicating that touch is of “overpowering importance” to the infant (Harlow & Suomi, 1970, p. 161). Gentle touch is central for creating and maintaining social relationships in primates; they groom each other by stroking the fur and removing parasites—an activity important not only for their individual well-being but also for group cohesion (Dunbar, 2010; Keverne, Martensz, & Tuite, 1989). Although people don’t groom each other in the same way, gentle touch is important for us, too. The sense of touch is the first to develop while one is in the womb, and human infants crave touch from the moment they’re born. From studies of human orphans, we know that touch is also crucial for human development. In Romanian orphanages where the babies were fed but not given regular attention or physical contact, the children suffered cognitive and neurodevelopmental delay (Simons & Land, 1987). Physical contact helps a crying baby calm down, and the soothing touch a mother gives to her child is thought to reduce the levels of stress hormones such as cortisol. High levels of cortisol have negative effects on neural development, and they can even lead to cell loss (Feldman, Singer, & Zagoory, 2010; Fleming, O'Day, & Kraemer, 1999; Pechtel & Pizzagalli, 2011). Thus, stress reduction through hugs and caresses might be important not only for children’s well-being, but also for the development of the infant brain. The skin senses are similar across species, likely reflecting the evolutionary advantage of being able to tell what is touching you, where it’s happening, and whether or not it’s likely to cause tissue damage. An intriguing line of touch research suggests that humans, cats, and other animals have a special, evolutionarily preserved system that promotes gentle touch because it carries social and emotional significance. 
On a peripheral level, this system consists of a subtype of C-fibers that responds not to painful stimuli, but rather to gentle stroking touch—called C-tactile fibers. The firing rate of the C-tactile fibers correlates closely with how pleasant the stroking feels—suggesting they are coding specifically for the gentle caresses typical of social affiliative touch (Löken, Wessberg, Morrison, McGlone, & Olausson, 2009). This finding has led to the social touch hypothesis, which proposes that C-tactile fibers form a system for touch perception that supports social bonding (Morrison, Löken, & Olausson, 2010; Olausson, Wessberg, Morrison, McGlone, & Vallbo, 2010). The discovery of the C-tactile system suggests that touch is organized in a similar way to pain; fast-conducting A-fibers contribute to sensory–discriminatory aspects, while thin C-fibers contribute to affective–motivational aspects (Löken, Wessberg, Morrison, McGlone, & Olausson, 2009). However, while these “hard-wired” afferent systems often provide us with accurate information about our environment and our bodies, how we experience touch or pain depends very much on top-down sources like motivation, expectation, mood, fear, and stress. Modulation Pain Is Necessary for Survival, but Our Brain Can Stop It if It Needs To In April 2003, the climber Aron Ralston found himself at the floor of Blue John Canyon in Utah, forced to make an appalling choice: face a slow but certain death—or amputate his right arm. Five days earlier he had fallen into the canyon—since then he had been stuck with his right arm trapped between an 800-lb boulder and the steep sandstone wall. Weak from lack of food and water and close to giving up, he realized like an epiphany that if he broke the two bones in his forearm he could manage to cut off the rest with his pocket knife. The thought of freeing himself and surviving made him so excited he spent the next 40 minutes completely engrossed in the task: first snapping his bones using his body as a lever, then sticking his fingers into the arm, pinching bundles of muscle fibers and severing them one by one, before cutting the blue arteries and the pale “noodle-like” nerves. The pain was unimportant. Only cutting through the thick white main nerve made him stop for a minute—the flood of pain, he describes, was like thrusting his entire arm “into a cauldron of magma.” Finally free, he rappelled down a cliff and walked another 7 miles until he was rescued by some hikers (Ralston, 2010). How is it possible to do something so excruciatingly painful to yourself, and still manage to walk, talk, and think rationally afterwards? The answer lies within the brain, where signals from the body are interpreted. When we perceive somatosensory and nociceptive signals from the body, the experience is highly subjective and shaped by motivation, attention, emotion, and context. The Motivation–Decision Model and Descending Modulation of Pain According to the motivation–decision model, the brain automatically and continuously evaluates the pros and cons of any situation—weighing impending threats and available rewards (Fields, 2004, 2006). Anything more important for survival than avoiding the pain activates the brain’s descending pain modulatory system—a top-down system involving several parts of the brain and brainstem, which inhibits nociceptive signaling so that the more important actions can be attended to.
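The gating logic of this model can be caricatured in a few lines of code. The sketch below is ours, not an algorithm from Fields (2004, 2006): the function, its parameter names, and every number in it are invented purely for illustration.

```python
# Toy illustration of the gating idea in the motivation-decision model
# (after Fields, 2004). The real model is a conceptual account of brain
# function, not an algorithm; all names and numbers here are invented.

def perceived_pain(nociceptive_input, competing_action_value, pain_avoidance_value):
    """Return a crude 'perceived pain' level after descending modulation."""
    if competing_action_value > pain_avoidance_value:
        # Something matters more than avoiding pain: the descending system
        # inhibits nociceptive signaling so the action can be carried out.
        inhibition = 0.8  # fraction of the signal suppressed (arbitrary)
        return nociceptive_input * (1 - inhibition)
    # Nothing more urgent than the pain: the signal passes through
    # (the same system can even facilitate it, e.g., to promote healing).
    return nociceptive_input

# Escaping a trap is worth far more than avoiding the pain of escaping:
print(perceived_pain(10.0, competing_action_value=100.0, pain_avoidance_value=10.0))
# -> 2.0 (strongly dampened), versus 10.0 when nothing else is at stake
```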
In Aron’s extreme case, his actions were likely based on such an unconscious decision process—taking into account his homeostatic state (his hunger, thirst, the inflammation and decay of his crushed hand slowly affecting the rest of his body), the sensory input available (the sweet smell of his dissolving skin, the silence around him indicating his solitude), and his knowledge about the threats facing him (death, or excruciating pain that would not kill him) versus the potential rewards (survival, seeing his family again). Aron’s story illustrates the evolutionary advantage of being able to shut off pain: The descending pain modulatory system allows us to go through with potentially life-saving actions. However, when one has reached safety or obtained the reward, healing is more important. The very same descending system can then “crank up” nociception from the body to promote healing and motivate us to avoid potentially painful actions. To facilitate or inhibit nociceptive signals from the body, the descending pain modulatory system uses a set of ON- or OFF-cells in the brainstem, which regulate how much of the nociceptive signal reaches the brain. The descending system is dependent on opioid signaling, and analgesics like morphine relieve pain via this circuit (Petrovic, Kalso, Petersson, & Ingvar, 2002). The Analgesic Power of Reward Thinking about the good things, like his loved ones and the life ahead of him, was probably pivotal to Aron’s survival. The promise of a reward can be enough to relieve pain. Expecting pain relief from a medical treatment (for someone in pain, less pain is often the best possible outcome, i.e., a reward) contributes to the placebo effect—where pain relief is due at least partly to your brain’s descending modulation circuit, and such relief depends on the brain’s own opioid system (Eippert et al., 2009; Eippert, Finsterbusch, Bingel, & Buchel, 2009; Levine, Gordon, & Fields, 1978). Eating tasty food, listening to good music, or feeling pleasant touch on your skin also decreases pain in both animals and humans, presumably through the same mechanism in the brain (Leknes & Tracey, 2008). In a now classic experiment, Dum and Herz (1984) either fed rats normal rat food or let them feast on highly rewarding chocolate-covered candy (rats love sweets) while standing on a metal plate until they learned exactly what to expect when placed there. When the plate was heated up to a noxious/painful level, the rats that expected candy endured the temperature for twice as long as the rats expecting normal chow. Moreover, this effect was completely abolished when the rats’ opioid (endorphin) system was blocked with a drug, indicating that the analgesic effect of reward anticipation was caused by endorphin release. For Aron the climber, both the stress of knowing that death was impending and the anticipated reward of survival probably flooded his brain with endorphins, contributing to the wave of excitement and euphoria he experienced while he carried out the amputation “like a five-year-old unleashed on his Christmas presents” (Ralston, 2010). This altered his experience of the pain from the extreme tissue damage he was causing and enabled him to focus on freeing himself. Our brain, it turns out, can modulate the perception of how unpleasant pain is, while still retaining the ability to experience the intensity of the sensation (Rainville, Duncan, Price, Carrier, & Bushnell, 1997; Rainville, Feine, Bushnell, & Duncan, 1992).
Social rewards, like holding the hand of your boyfriend or girlfriend, have pain-reducing effects. Even looking at a picture of him/her can have similar effects—in fact, seeing a picture of a person we feel close to not only reduces subjective pain ratings, but also the activity in pain-related brain areas (Eisenberger et al., 2011). The most common things to do when wanting to help someone through a painful experience—being present and holding the person’s hand—thus seem to have a measurably positive effect. When Touch Becomes Painful or Pain Becomes Chronic Chances are you’ve been sunburned a few times in your life and have experienced how even the lightest pat on the back or the softest clothes can feel painful on your over-sensitive skin. This condition, where innocuous touch gives a burning, tender sensation, is similar to a chronic condition called allodynia—where neuronal disease or injury makes touch that is normally pleasant feel unpleasantly painful. In allodynia, neuronal injury in the spinal dorsal horn causes Aβ-afferents, which are activated by non-nociceptive touch, to access nociceptive pathways (Liljencrantz et al., 2013). The result is that even gentle touch is interpreted by the brain as painful. While an acute pain response to noxious stimuli has a vital protective function, allodynia and other chronic pain conditions constitute a tremendous source of unnecessary suffering that affects millions of people. Approximately 100 million Americans suffer from chronic pain, and the associated annual economic cost is estimated at \$560–\$635 billion (Committee on Advancing Pain Research, Care, and Education, Institute of Medicine, 2011). Chronic pain conditions are highly diverse, and they can involve changes on peripheral, spinal, central, and psychological levels. The mechanisms are far from fully understood, and developing appropriate treatment remains a huge challenge for pain researchers. Chronic pain conditions often begin with an injury to a peripheral nerve or the tissue surrounding it, releasing hormones and inflammatory molecules that sensitize nociceptors. This makes the nerve and neighboring afferents more excitable, so that even uninjured nerves become hyperexcitable and contribute to the persistence of pain. An injury might also make neurons fire nonstop regardless of external stimuli, providing near-constant input to the pain system. Sensitization can also happen in the brain and in the descending modulatory system of the brainstem (Zambreanu, Wise, Brooks, Iannetti, & Tracey, 2005). Exactly at which levels pain perception is altered in chronic pain patients can be extremely difficult to pinpoint, making treatment an often exhausting process of trial and error. Suffering from chronic pain has dramatic impacts on the lives of the afflicted. Being in pain over a longer time can lead to depression, anxiety (fear or anticipation of future pain), and immobilization, all of which may in turn exacerbate pain (Wiech & Tracey, 2009). Negative emotion and attention to pain can increase sensitization to pain, possibly by keeping the descending pain modulatory system in facilitation mode. Distraction is therefore a commonly used technique in hospitals where patients have to undergo painful treatments like changing bandages on large burns. For chronic pain patients, however, diverting attention is not a long-term solution. Positive factors like social support can reduce the risk of chronic pain after an injury and can help people adjust to the bodily changes an injury brings.
We have already talked about how having a hand to hold might alleviate suffering. Chronic pain treatment should target these emotional and social factors as well as the physiological ones. The Power of the Mind The context of pain and touch has a great impact on how we interpret it. Just imagine how different it would have felt to Aron if someone had amputated his hand against his will and for no discernible reason. Prolonged pain from injuries can be easier to bear if the incident causing them provides a positive context—like a war wound that testifies to a soldier’s courage and commitment—or phantom pain from a hand that was cut off to enable life to carry on. The relative meaning of pain is illustrated by a recent experiment, where the same moderately painful heat was administered to participants in two different contexts—one control context where the alternative was nonpainful heat, and another where the alternative was intensely painful heat. In the control context, where the moderate heat was the least preferable outcome, it was (unsurprisingly) rated as painful. In the other context it was the best possible outcome, and here the exact same moderately painful heat was actually rated as pleasant—because it meant the intensely painful heat had been avoided. This somewhat surprising change in perception—where pain becomes pleasant because it represents relief from something worse—highlights the importance of the meaning individuals ascribe to their pain, which can have decisive effects in pain treatment (Leknes et al., 2013). In the case of touch, knowing who or what is stroking your skin can make all the difference—try thinking about slugs the next time someone strokes your skin if you want an illustration of this point. In a recent study, a group of heterosexual males were told that they were about to receive sensual caresses on the leg by either a male experimenter or by an attractive female experimenter (Gazzola et al., 2012). The study participants could not see who was touching them. Although it was always the female experimenter who performed the caress, the heterosexual males rated the otherwise pleasant sensual caresses as clearly unpleasant when they believed the male experimenter did it. Moreover, brain responses to the “male touch” in somatosensory cortex were reduced, exemplifying how top-down regulation of touch resembles top-down pain inhibition. Pain and pleasure not only share modulatory systems—another common attribute is that we don’t need to be on the receiving end ourselves in order to experience them. How did you feel when you read about Aron cutting through his own tissue, or “Thomas” destroying his own bones unknowingly? Did you cringe? It’s quite likely that some of your brain areas processing affective aspects of pain were active even though the nociceptors in your skin and deep tissue were not firing. Pain can be experienced vicariously, as can itch, pleasurable touch, and other sensations. Tania Singer and her colleagues found in an fMRI study that some of the same brain areas that were active when participants felt pain on their own skin (anterior cingulate and insula) were also active when they were given a signal that a loved one was feeling the pain. Those who were most “empathetic” also showed the largest brain responses (Singer et al., 2004).
A similar effect has been found for pleasurable touch: The posterior insula of participants watching videos of someone else’s arm being gently stroked shows the same activation as if they were receiving the touch themselves (Morrison, Bjornsdotter, & Olausson, 2011). Summary Sensory experiences connect us to the people around us, to the rest of the world, and to our own bodies. Pleasant or unpleasant, they’re part of being human. In this module, we have seen how being able to inhibit pain responses is central to our survival—and in cases like that of climber Aron Ralston, that ability can allow us to do extreme things. We have also seen how important the ability to feel pain is to our health—illustrated by young “Thomas,” who keeps injuring himself because he simply doesn’t notice pain. While “Thomas” has to learn to avoid harmful activities without the sensory input that normally guides us, G. L. has had to learn how to move about in a world she can hardly feel at all, with a body that is practically disconnected from her awareness. Too little sensation or too much of it does us no good, no matter how pleasant or unpleasant the sensation usually feels. As long as we have nervous systems that function normally, we are able to adjust the volume of the sensory signals and our behavioral reactions according to the context we’re in. When it comes to sensory signals like touch and pain, we are interpreters, not measuring instruments. The quest for understanding how our sensory–processing mechanisms can be modulated, psychologically and physiologically, promises to help researchers and clinicians find new ways to alleviate distress from chronic pain. Outside Resources Book: Butler, D. S., Moseley, G. L., & Sunyata. (2003). Explain pain (p. 19). Australia: Noigroup. Book: Kringelbach, M. L., & Berridge, K. C. (Eds.). (2010). Pleasures of the brain (p. 343). Oxford, UK: Oxford University Press. Book: Ralston, A. (2004). Between a rock and a hard place: The basis of the motion picture 127 Hours. New York, NY: Atria. Book: Sacks, O. (1998). The man who mistook his wife for a hat: And other clinical tales. New York, NY: Simon & Schuster. Video: BBC Documentary series “Human Senses,” Episode 3: Touch and Vision watchdocumentary.org/watch/hu...f3e33c14a.html Video: BBC Documentary “Pleasure and Pain with Michael Mosley” http://www.bbc.co.uk/programmes/b00y377q Video: TEDxAdelaide - Lorimer Moseley – “Why Things Hurt” Video: Trailer for the film 127 Hours, directed by Danny Boyle and released in 2010 Web: Homepage for the International Association for the Study of Pain http://www.iasp-pain.org Web: Proceedings of the National Academy of Sciences Colloquium "The Neurobiology of Pain" http://www.pnas.org/content/96/14.toc#COLLOQUIUM Web: Stanford School of Medicine Pain Management Center http://paincenter.stanford.edu/ Website resource aiming to communicate “advances and issues in the clinical sciences as they relate to the role of the brain and mind in chronic pain disorders,” led by Dr. Lorimer Moseley www.bodyinmind.org/ Discussion Questions 1. Your friend has had an accident and there is a chance the injury might cause pain over a prolonged period. How would you support your friend? What would you say and do to ease the pain, and why do you think it would work? 2. We have learned that touch and pain sensations in many respects do not “objectively” reflect the outside world or the state of the body.
Rather, these experiences are shaped by various top-down influences, and they can even occur without any peripheral activation. This is similar to the way other sensory systems work, e.g., the visual or auditory systems, and seems to reflect a general way the brain processes sensory events. Why do you think the brain interprets the incoming sensory information instead of giving a one-to-one readout the way a thermometer and other measuring instruments would? Imagine you instead had “direct unbiased access” between stimuli and sensation. What would be the advantages and disadvantages of this? 3. Feelings of pain or touch are subjective—they have a particular quality that you perceive subjectively. How can we know whether the pain you feel is similar to the pain I feel? Is it possible that modern scientists can objectively measure such subjective feelings? Vocabulary A-fibers Fast-conducting sensory nerves with myelinated axons. Larger diameters and thicker myelin sheaths increase conduction speed. Aβ-fibers conduct touch signals from low-threshold mechanoreceptors with a velocity of 80 m/s and a diameter of 10 μm; Aδ-fibers have a diameter of 2.5 μm and conduct cold, noxious, and thermal signals at 12 m/s. The third and fastest conducting A-fiber is the Aα, which conducts proprioceptive information with a velocity of 120 m/s and a diameter of 20 μm. Allodynia Pain due to a stimulus that does not normally provoke pain, e.g., when a light, stroking touch feels painful. Analgesia Pain relief. C-fibers Slow-conducting unmyelinated thin sensory afferents with a diameter of 1 μm and a conduction velocity of approximately 1 m/s. C-pain fibers convey noxious, thermal, and heat signals; C-tactile fibers convey gentle touch, light stroking. Chronic pain Persistent or recurrent pain, beyond the usual course of acute illness or injury; sometimes present without observable tissue damage or clear cause. C-pain or Aδ-fibers C-pain fibers convey noxious, thermal, and heat signals. C-tactile fibers C-tactile fibers convey gentle touch and light stroking. Cutaneous senses The senses of the skin: tactile, thermal, pruritic (itchy), painful, and pleasant. Descending pain modulatory system A top-down pain-modulating system able to inhibit or facilitate pain. The pathway produces analgesia by the release of endogenous opioids. Several brain structures and nuclei are part of this circuit, such as the frontal lobe areas of the anterior cingulate cortex, orbitofrontal cortex, and insular cortex; and nuclei in the amygdala and the hypothalamus, which all project to a structure in the midbrain called the periaqueductal grey (PAG). The PAG then controls ascending pain transmission from the afferent pain system indirectly through the rostral ventromedial medulla (RVM) in the brainstem, which uses ON- and OFF-cells to inhibit or facilitate nociceptive signals at the spinal dorsal horn. Endorphin An endogenous morphine-like peptide that binds to the opioid receptors in the brain and body; synthesized in the body’s nervous system. Exteroception The sense of the external world, of all stimulation originating from outside our own bodies. Interoception The sense of the physiological state of the body. Hunger, thirst, temperature, pain, and other sensations relevant to homeostasis. Visceral input such as heart rate, blood pressure, and digestive activity give rise to an experience of the body’s internal states and physiological reactions to external stimulation.
This experience has been described as a representation of “the material me,” and it is hypothesized to be the foundation of subjective feelings, emotion, and self-awareness. Nociception The neural process of encoding noxious stimuli, the sensory input from nociceptors. Not necessarily painful, and crucially not necessary for the experience of pain. Nociceptors High-threshold sensory receptors of the peripheral somatosensory nervous system that are capable of transducing and encoding noxious stimuli. Nociceptors send information about actual or impending tissue damage to the brain. These signals can often lead to pain, but nociception and pain are not the same. Noxious stimulus A stimulus that is damaging or threatens damage to normal tissues. Pain Defined as “an unpleasant sensory and emotional experience associated with actual or potential tissue damage, or described in terms of such damage,” according to the International Association for the Study of Pain. Phantom pain Pain that appears to originate in an amputated limb. Placebo effect Effects from a treatment that are not caused by the physical properties of a treatment but by the meaning ascribed to it. These effects reflect the brain’s own activation of modulatory systems, which is triggered by positive expectation or desire for a successful treatment. Placebo analgesia is the most well-studied placebo effect and has been shown to depend, to a large degree, on opioid mechanisms. Placebo analgesia can be reversed by the pharmacological blocking of opioid receptors. The word “placebo” is probably derived from the Latin placebo (“I shall please”). Sensitization Increased responsiveness of nociceptive neurons to their normal input and/or recruitment of a response to normally subthreshold inputs. Clinically, sensitization may only be inferred indirectly from phenomena such as hyperalgesia or allodynia. Sensitization can occur in the central nervous system (central sensitization) or in the periphery (peripheral sensitization). Social touch hypothesis Proposes that social touch is a distinct domain of touch. C-tactile afferents form a special pathway that distinguishes social touch from other types of touch by selectively firing in response to touch of social-affective relevance; thus sending affective information parallel to the discriminatory information from the Aβ-fibers. In this way, the socially relevant touch stands out from the rest as having special positive emotional value and is processed further in affect-related brain areas such as the insula. Somatosensory cortex Consists of primary sensory cortex (S1) in the postcentral gyrus in the parietal lobes and secondary somatosensory cortex (S2), which is defined functionally and found in the upper bank of the lateral sulcus, called the parietal operculum. Somatosensory cortex also includes parts of the insular cortex.
Somatotopically organized When the parts of the body that are represented in a particular brain region are organized topographically according to their physical location in the body (see Figure 8.5.2 illustration). Spinothalamic tract Runs through the spinal cord’s lateral column up to the thalamus. C-fibers enter the dorsal horn of the spinal cord and form a synapse with a neuron that then crosses over to the lateral column and becomes part of the spinothalamic tract. Transduction The mechanisms that convert stimuli into electrical signals that can be transmitted and processed by the nervous system. Physical or chemical stimulation creates action potentials in a receptor cell in the peripheral nervous system, which are then conducted along the axon to the central nervous system.
By Dora Angelaki and J. David Dickman Baylor College of Medicine The vestibular system functions to detect head motion and position relative to gravity and is primarily involved in the fine control of visual gaze, posture, orthostasis, spatial orientation, and navigation. Vestibular signals are highly processed in many regions of the brain and are involved in many essential functions. In this module, we provide an overview of how the vestibular system works and how vestibular signals are used to guide behavior. learning objectives • Define the basic structures of the vestibular receptor system. • Describe the neuroanatomy of the vestibuloocular, vestibulospinal, and vestibulo-thalamo-cortical pathways. • Describe the vestibular commissural system. • Describe the different multisensory cortical areas for motion perception. Introduction Remember the dizzy feeling you got as a child after you jumped off the merry-go-round or spun around like a top? These feelings result from activation of the vestibular system, which detects our movements through space but is not a conscious sense like vision or hearing. In fact, most vestibular functions are imperceptible, but vestibular-related sensations such as motion sickness can pop up rapidly when riding on a roller coaster, having a bumpy plane ride, or sailing a boat in rough seas. However, these sensations are really side effects, and the vestibular system is actually extremely important for everyday activities, with vestibular signals being involved in much of the brain’s information processing that controls such fundamental functions as balance, posture, gaze stabilization, spatial orientation, and navigation, to name a few. In many regions of the brain, vestibular information is combined with signals from the other senses as well as with motor information to give rise to motion perception, body awareness, and behavioral control. Here, we will explore the workings of the vestibular system and consider some of the integrated computations the brain performs using vestibular signals to guide our common behavior. Structure of the vestibular receptors The vestibular receptors lie in the inner ear next to the auditory cochlea. They detect rotational motion (head turns), linear motion (translations), and tilts of the head relative to gravity and transduce these motions into neural signals that can be sent to the brain. There are five vestibular receptors in each ear (Hearing module, Figure 8.6.1; http://noba.to/jry3cu78), including three semicircular canals (horizontal, anterior, and posterior) that transduce rotational angular accelerations and two otolith receptors (utricle and saccule) that transduce linear accelerations (Lindeman, 1969). Together, the semicircular canals and otolith organs can respond to head motion and to static head position relative to gravity in all directions in 3D space. These receptors are contained in a series of interconnected fluid-filled tubes that are protected by a dense, overlying bone (Iurato, 1967). Each of the three semicircular canals lies in a plane that is orthogonal to the other two. The horizontal semicircular canal lies in a roughly horizontal head plane, whereas the anterior and posterior semicircular canals lie vertically in the head (Blanks, Curthoys, Bennett, & Markham, 1985).
The semicircular canal receptor cells, termed hair cells, are located only in the middle of the circular tubes in a special epithelium, covered by a gelatinous membrane that stretches across the tube to form a fluid-tight seal like the skin of a drum (Figures 1A and 1B). Hair cells are so named due to an array of nearly 100 staggered-height stereocilia (like a church pipe organ) that protrude from the top of the cell into the overlying gelatin membrane (Wersäll, 1956). The shortest stereocilia are at one end of the cell and the tallest at the other (Lindeman, 1969). When the head is rotated, the fluid in the semicircular canals lags behind the head motion and pushes on the gelatin membrane, which bends the stereocilia. As shown in Figure 8.6.2, when the head moves toward the receptor hair cells (e.g., left head turns for the left horizontal semicircular canal), the stereocilia are bent toward the tallest end and special mechanically gated ion channels in the tips of the cilia open, which excites (depolarizes) the cell (Shotwell, Jacobs, & Hudspeth, 1981). Head motion in the opposite direction causes bending toward the smallest stereocilia, which closes the channels and inhibits (hyperpolarizes) the cell. The left and right ear semicircular canals have opposite polarity, so for example, when you turn your head to the left, the receptors in the left horizontal semicircular canal will be excited while right ear horizontal canal receptors will be inhibited (Figure 8.6.3). The same relationship is true for the vertical semicircular canals. Vestibular afferent nerve fibers innervate the base of the hair cell and increase or decrease their neural firing rate as the receptor cell is excited or inhibited, respectively (Dickman & Correia, 1989), and then carry these signals regarding head rotational motion to the brain as part of the vestibulocochlear nerve (cranial nerve VIII). They enter the brainstem and terminate in the ipsilateral vestibular nuclei, cerebellum, and reticular formation (Carleton & Carpenter, 1984; Dickman & Fang, 1996). The primary vestibular hair cell and afferent neurotransmitters are glutamate and aspartate. Due to the mechanical properties of the vestibular receptor system, rotational accelerations of the head are integrated into velocity signals (Van Egmond, Groen, & Jongkees, 1949) that are then encoded by semicircular canal afferents (Fernandez & Goldberg, 1971). Detection thresholds for rotational motion have shown that afferents can discriminate differences in head velocity on the order of 2 deg/sec, but also are sensitive to a broad range of natural head movements up to high head speeds in the hundreds of deg/sec (as you might experience when you make a fast head turn toward a loud sound, or are performing gymnastics; Sadeghi, Chacron, Taylor, & Cullen, 2007; Yu, Dickman, & Angelaki, 2012). Otolith receptors are sensitive to linear accelerations and tilts of the head relative to gravity (Fernandez & Goldberg, 1976a). The utricle otolith receptor lies parallel to the horizontal semicircular canal and the saccule receptor lies vertical in the head (Hearing module, Figure 8.6.1; http://noba.to/jry3cu78). As shown in Figure 8.6.4, a special otolith epithelium contains receptor hair cells whose stereocilia extend into a gelatin membrane that is covered by a layer of calcium carbonate crystals, termed otoconia, like rocks piled up to form a jetty (Lindeman, 1969).
Otoconia are not affected by fluid movements but instead are displaced by linear accelerations, including translations (e.g., forward/backward or upward/downward motions) or changes in head position relative to gravity. These linear accelerations produce displacements of the otoconia (due to their high mass), much like rocks rolling down a hill or your coffee cup falling off the car dashboard when you push the gas pedal. Movements of the otoconia bend the hair cell stereocilia and open/close channels in a similar way to that described for the semicircular canals. However, otolith hair cells are polarized such that the tallest stereocilia are pointing toward the center of the utricle and away from the center in the saccule, which effectively splits the receptors into two opposing groups (Flock, 1964; Lindeman, 1969). In this way, some hair cells are excited and some inhibited for each linear motion force or head tilt experienced, with the population of receptors and their innervating afferents being directionally tuned to all motions or head tilts in 3D space (Fernandez & Goldberg, 1976b). All vestibular hair cells and afferents receive connections from vestibular efferents, which are fibers projecting from the brain out to the vestibular receptor organs, whose function is not well understood. It is thought that efferents control the sensitivity of the receptor (Boyle, Carey, & Highstein, 1991). The primary efferent neurotransmitter is acetylcholine (Anniko & Arnold, 1991). The vestibular nuclei The vestibular nuclei comprise a large set of neural elements in the brainstem that receive motion and other multisensory signals, then regulate movement responses and sensory experience. Many vestibular nuclei neurons have reciprocal connections with the cerebellum that form important regulatory mechanisms for the control of eye movements, head movements, and posture. There are four major vestibular nuclei that lie in the rostral medulla and caudal pons of the brainstem; all receive direct input from vestibular afferents (Brodal, 1984; Precht & Shimazu, 1965). Many of these nuclei neurons receive convergent motion information from the opposite ear through an inhibitory commissural pathway that uses gamma-aminobutyric acid (GABA) as a neurotransmitter (Kasahara & Uchino, 1974; Shimazu & Precht, 1966). The commissural pathway is highly organized such that cells receiving horizontal excitatory canal signals from the ipsilateral ear will also receive contralateral inhibitory horizontal canal signals from the opposite ear. This gives rise to a “push-pull” vestibular function, whereby directional sensitivity to head movement is coded by opposing receptor signals. Because vestibular nuclei neurons receive information from bilateral inner ear receptors and because they maintain a high spontaneous firing rate (nearly 100 impulses/sec), they are thought to act to “compare” the relative discharge rates of left vs. right canal afferent firing activity. For example, during a leftward head turn, left brainstem nuclei neurons receive high firing-rate information from the left horizontal canal and low firing-rate information from the right horizontal canal. The comparison of activity is interpreted as a left head turn. Similar nuclei neuron responses exist when the head is pitched or rolled, with the vertical semicircular canals being stimulated by the rotational motion in their sensitivity planes.
However, the opposing push-pull response from the vertical canals occurs with the anterior semicircular canal in one ear and the co-planar posterior semicircular canal of the opposite ear. Damage or disease that interrupts inner ear signal information from one side of the head can change the normal resting activity in the VIIIth nerve afferent fibers and will be interpreted by the brain as a head rotation, even though the head is stationary. These effects often lead to illusions of spinning or rotating that can be quite upsetting and may produce nausea or vomiting. However, over time the commissural fibers provide for vestibular compensation, a process by which the loss of unilateral vestibular receptor function is partially restored centrally and behavioral responses, such as the vestibuloocular reflex (VOR) and postural responses, mostly recover (Beraneck et al., 2003; Fetter & Zee, 1988; Newlands, Hesse, Haque, & Angelaki, 2001; Newlands & Perachio, 1990). In addition to the commissural pathway, many vestibular nuclei neurons receive proprioceptive signals from the spinal cord regarding muscle movement and position, visual signals regarding spatial motion, other multisensory (e.g., trigeminal) signals, and higher order signals from the cortex. It is thought that the cortical inputs regulate fine gaze and postural control, as well as suppress the normal compensatory reflexes during motion in order to elicit volitional movements. Of special significance are convergent signals from the semicircular canal and otolith afferents that allow central vestibular neurons to compute specific properties of head motion (Dickman & Angelaki, 2002). For example, Einstein (1907) showed that linear accelerations are equivalent whether they arise from translational motion or from tilts of the head relative to gravity. The otolith receptors cannot discriminate between the two, so how can we tell the difference between translating forward and tilting backward, when the linear acceleration signaled by the otolith afferents is the same? Vestibular nuclei and cerebellar neurons use convergent signals from both the semicircular canals and the otolith receptors to discriminate between tilt and translation, and as a result, some cells encode head tilt (Zhou, 2006) while other cells encode translational motion (Angelaki, Shaikh, Green, & Dickman, 2004). Vestibuloocular system The vestibular system is responsible for controlling gaze stability during motion (Crane & Demer, 1997). For example, if we want to read the sign in a store window while walking by, we must maintain foveal fixation on the words while compensating for the combined rotational and translational head movements incurred during our stride. The vestibular system regulates compensatory eye, neck, spinal, and limb movements in order to maintain gaze (Keshner & Peterson, 1995). One of the major components contributing to gaze stability is the VOR, which produces reflexive eye movements that are equal in magnitude and opposite in direction to the perceived head motion in 3D space (Wilson et al., 1995). The VOR is so accurate and fast that it allows people to maintain visual fixation on objects of interest while experiencing demanding motion conditions, such as running, skiing, playing tennis, and driving. In fact, gaze stabilization in humans has been shown to be completely compensatory (essentially perfect) for most natural behaviors.
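This compensatory relationship can be summarized in one line. As a minimal sketch (the notation is ours, not taken from the cited studies): writing $\dot{h}(t)$ for head angular velocity and $\dot{e}(t)$ for eye angular velocity, an ideal rotational VOR satisfies

$$\dot{e}(t) = -g\,\dot{h}(t), \qquad g \approx 1,$$

where the gain $g$ is the ratio of eye speed to head speed. “Essentially perfect” gaze stabilization corresponds to a gain close to 1 across the frequencies of natural head movement; a gain below 1 would let the visual image slip across the retina during every head turn.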
To produce the VOR, vestibular neurons must control each of the six pairs of eye muscles in unison through a specific set of connections to the oculomotor nuclei (Ezure & Graf, 1984). The anterior and posterior semicircular canals along with the saccule control vertical and torsional (turning of the eye around the line of sight) eye movements, while the horizontal canals and the utricle control horizontal eye movements. To understand how the VOR works, let’s take the example of the compensatory response for a leftward head turn while reading the words on a computer screen. The basic pathway consists of horizontal semicircular canal afferents that project to specific neurons in the vestibular nuclei. These nuclei cells, in turn, send an excitatory signal to the contralateral abducens nucleus, which projects through the sixth cranial nerve to innervate the lateral rectus muscle (Figure 8.6.5). Some abducens neurons send an excitatory projection back across the midline to a subdivision of cells in the contralateral oculomotor nucleus, which, in turn, projects through the third cranial nerve to innervate the medial rectus muscle of the eye on that side. When a leftward head turn is made, the left horizontal canal vestibular afferents will increase their firing rate and consequently increase the activity of vestibular nuclei neurons projecting to the opposite (contralateral) right abducens nucleus. The abducens neurons produce contraction of the right lateral rectus and, through a separate cell projection to the left oculomotor nucleus, excite the left medial rectus muscle. In addition, matching bilateral inhibitory connections relax the left lateral rectus and right medial rectus eye muscles. The resulting rightward eye movement for both eyes stabilizes the object of interest upon the retina for greatest visual acuity. During linear translations, a different type of VOR also occurs (Paige & Tomko, 1991). For example, sideways motion to the left results in a horizontal rightward eye movement to maintain visual stability on an object of interest. In a similar manner, vertical up–down head movements (such as occur while walking or running) elicit oppositely directed vertical eye movements (Angelaki, McHenry, & Hess, 2000). For these reflexes, the amplitude of the translational VOR depends on viewing distance. This is due to the fact that the vergence angle (i.e., the angle between the lines of sight for each eye) varies as a function of the inverse of the distance to the viewed visual object (Schwarz, Busettini, & Miles, 1989). Visual objects that are far away (2 meters or more) require almost no vergence angle, but as the visual objects get closer (e.g., when holding your finger close to your nose), a large vergence angle is needed. During translational motion, the eyes will change their vergence angle as the visual object moves from close to farther away (or vice versa). These responses are a result of activation of the otolith receptors, with connections to the oculomotor nuclei similar to those described above for the rotational vestibuloocular reflex. With tilts of the head, the resulting eye movement is termed torsion, and consists of a rotational eye movement around the line of sight that is in the direction opposite to the head tilt. As mentioned above, there are major reciprocal connections between the vestibular nuclei and the cerebellum. It has been well established that these connections are crucial for adaptive motor learning in the vestibuloocular reflex (Lisberger, Pavelko, & Broussard, 1994).
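The distance dependence described above follows from simple viewing geometry. As a back-of-envelope sketch (the interocular distance is an assumed typical value, not a figure from the cited study): with interocular distance $I \approx 6.5$ cm and a target straight ahead at distance $d$, the vergence angle is

$$\alpha = 2\arctan\!\left(\frac{I}{2d}\right) \approx \frac{I}{d} \quad \text{(radians, for } d \gg I\text{)}.$$

At $d = 2$ m this gives roughly $0.03$ rad (under $2^\circ$), which is why far targets require essentially no vergence, while at $d = 10$ cm it grows to about $36^\circ$. The same $1/d$ scaling governs the translational VOR itself: a sideways translation at speed $v$ sweeps a target at distance $d$ across the gaze direction at an angular velocity of roughly $v/d$, so the compensatory eye movement must be larger for near targets than for far ones.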
Vestibulo-spinal network There are two vestibular descending pathways that regulate body muscle responses to motion and gravity, consisting of the lateral vestibulo-spinal tract (LVST) and the medial vestibulo-spinal tract (MVST). Reflexive control of head and neck muscles arises through the neurons of the MVST. These neurons comprise the rapid vestibulocollic reflex (VCR) that serves to stabilize the head in space and participates in gaze control (Peterson, Goldberg, Bilotto, & Fuller, 1985). The MVST neurons receive input from vestibular receptors and the cerebellum, and somatosensory information from the spinal cord. MVST neurons carry both excitatory and inhibitory signals to innervate neck flexor and extensor motor neurons in the spinal cord. For example, if one trips over a crack in the pavement while walking, MVST neurons will receive downward and forward linear acceleration signals from the otolith receptors and forward rotation acceleration signals from the vertical semicircular canals. The VCR will compensate by providing excitatory signals to the dorsal neck flexor muscles and inhibitory signals to the ventral neck extensor muscles, which moves the head upward and opposite to the falling motion to protect it from impact. The LVST comprises a topographic organization of vestibular nuclei cells that receive substantial input from the cerebellum, proprioceptive inputs from the spinal cord, and convergent afferent signals from vestibular receptors. LVST fibers project ipsilaterally to many levels of motor neurons in the cord to provide coordination of different muscle groups for postural control (Shinoda, Sugiuchi, Futami, Ando, & Kawasaki, 1994). LVST neurons contain either acetylcholine or glutamate as a neurotransmitter and exert an excitatory influence upon extensor muscle motor neurons. For example, LVST fibers produce extension of the contralateral axial and limb musculature when the body is tilted sideways. These actions serve to stabilize the body’s center of gravity in order to preserve upright posture. Vestibulo-autonomic control Some vestibular nucleus neurons send projections to the reticular formation, dorsal pontine nuclei, and nucleus of the solitary tract. These connections regulate breathing and circulation through compensatory vestibular autonomic responses that stabilize respiration and blood pressure during body motion and changes relative to gravity. They may also be important for induction of motion sickness and emesis. Vestibular signals in the thalamus and cortex The cognitive perception of motion, spatial orientation, and navigation through space arises through multisensory information from vestibular, visual, and somatosensory signals in the thalamus and cortex (Figure 8.6.6). Vestibular nuclei neurons project bilaterally to several thalamic regions. Neurons in the ventral posterior group respond to either vestibular signals alone, or to vestibular plus somatosensory signals, and project to primary somatosensory cortex (areas 3a and 2v), somatosensory association cortex, posterior parietal cortex (areas 5 and 7), and the insula of the temporal cortex (Marlinski & McCrea, 2008; Meng, May, Dickman, & Angelaki, 2007). The posterior nuclear group (PO), near the medial geniculate body, receives both vestibular and auditory signals as well as inputs from the superior colliculus and spinal cord, indicating an integration of multiple sensory signals.
Some anterior pulvinar neurons also respond to motion stimuli and project to cortical area 3a, the posterior insula, and the temporo-parietal cortex (PIVC). In humans, electrical stimulation of the thalamic areas produces sensations of movement and sometimes dizziness. Area 2v cells respond to motion, and electrical stimulation of this area in humans produces sensations of moving, spinning, or dizziness. Area 3a lies at the base of the central sulcus adjacent to the motor cortex and is thought to be involved in integrative motor control of the head and body (Guldin, Akbarian, & Grusser, 1992). Neurons in the PIVC are multisensory, responding to body motion, somatosensory, proprioceptive, and visual motion stimuli (Chen, DeAngelis, & Angelaki, 2011; Grusser, Pause, & Schreiter, 1982). PIVC and areas 3a and 2v are heavily interconnected. Vestibular neurons also have been observed in the posterior parietal cortex: in area 7, in the ventral intraparietal area (VIP), the medial intraparietal area (MIP), and the medial superior temporal area (MST). VIP contains multimodal neurons involved in spatial coding. MIP and MST neurons respond to body motion through space by multisensory integration of visual motion and vestibular signals (Gu, DeAngelis, & Angelaki, 2007), and many MST cells are directly involved in heading perception (Gu, Watkins, Angelaki, & DeAngelis, 2006). Lesions of the parietal cortical areas can result in confusions in spatial awareness. Finally, areas involved with the control of saccades and pursuit eye movements, including area 6, area 8, and the superior frontal gyrus, receive vestibular signals (Fukushima, Sato, Fukushima, Shinmei, & Kaneko, 2000). How these different cortical regions contribute to our perception of motion and spatial orientation is still not well understood. Spatial orientation and navigation Our ability to know where we are and to navigate different spatial locations is essential for survival. It is believed that a cognitive map of our environment is created through exploration and then used for spatial orientation and navigation, such as when driving to the store or walking through a dark house (McNaughton, Battaglia, Jensen, Moser, & Moser, 2006). Cells in the limbic system and the hippocampus that contribute to these functions have been identified, including place cells, grid cells, and head direction cells (Figure 8.6.6B). Place cells in the hippocampus encode specific locations in the environment (O’Keefe, 1976). Grid cells in the entorhinal cortex encode spatial maps in a tessellated pattern (Hafting, Fyhn, Molden, Moser, & Moser, 2005). Head direction cells in the anterior-dorsal thalamus encode heading direction, independent of spatial location (Taube, 1995). It is thought that these cell types work together to provide for spatial orientation, spatial memory, and our ability to navigate. Both place cells and head direction cells depend upon a functioning vestibular system to maintain their directional and orientation information (Stackman, Clark, & Taube, 2002). The pathway by which vestibular signals reach the navigation network is not well understood; however, damage to the vestibular system, hippocampus, and dorsal thalamus regions often disrupts our ability to orient in familiar environments, navigate from place to place, or even to find our way home.
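How vestibular signals could support such a heading estimate can be illustrated with a toy path-integration sketch. This is our illustration, not a model from the cited papers: it assumes an angular-velocity readout of the kind the push-pull comparison of left vs. right canal firing rates (described under “The vestibular nuclei” above) provides, and simply integrates that signal over time; the gain and the firing rates are invented numbers.

```python
# Toy path integration of heading from a canal-like angular-velocity signal.
# Invented for illustration; real head direction cells are thought to involve
# attractor networks and multisensory correction, not a bare integral.

GAIN = 0.5  # assumed deg/sec of head velocity per impulse/sec of rate difference

def angular_velocity(left_rate, right_rate):
    """Push-pull readout: compare left vs. right horizontal canal firing rates.

    Rates straddle the ~100 impulses/sec spontaneous baseline described above;
    a positive difference signals a leftward turn.
    """
    return GAIN * (left_rate - right_rate)

def integrate_heading(rate_pairs, dt=0.01, heading0=0.0):
    """Accumulate a heading estimate (degrees) from (left, right) rate samples."""
    heading = heading0
    for left, right in rate_pairs:
        heading = (heading + angular_velocity(left, right) * dt) % 360.0
    return heading

# One second of a steady leftward turn: left canal excited, right inhibited.
samples = [(120.0, 80.0)] * 100  # 100 samples at dt = 0.01 s
print(integrate_heading(samples))  # -> 20.0 degrees of accumulated leftward turn
```

In a sketch like this, any bias in the velocity readout accumulates without bound, which is one intuition for why head direction cells also need visual landmarks and other cues to stay calibrated.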
Motion sickness Although a number of conditions can produce motion sickness, it is generally thought that it is evoked by a mismatch in sensory cues between vestibular, visual, and proprioceptive signals (Yates, Miller, & Lucot, 1998). For example, reading a book in a car on a winding road can produce motion sickness, whereby the accelerations experienced by the vestibular system do not match the visual input. However, if one looks out the window at the scenery going by during the same trip, no sickness occurs because the visual and vestibular cues are in alignment. Sea sickness, a form of motion sickness, appears to be a special case and arises from unusual vertical oscillatory and roll motion. Human studies have found that low-frequency oscillations of about 0.2 Hz with large amplitudes (such as those found in heavy seas during a storm) are most likely to cause motion sickness, with higher frequencies posing few problems. Summary Here, we have seen that the vestibular system transduces and encodes signals about head motion and position with respect to gravity, information that is then used by the brain for many essential functions and behaviors. We actually understand a great deal regarding vestibular contributions to fundamental reflexes, such as compensatory eye movements and balance during motion. More recent progress has been made toward understanding how vestibular signals combine with other sensory cues, such as vision, in the thalamus and cortex to give rise to motion perception. However, there are many complex cognitive abilities that we know require vestibular information to function, such as spatial orientation and navigation behaviors, but these systems are only just beginning to be investigated. Future research on vestibular system function will likely focus on how the brain copes with vestibular signal loss. In fact, according to the National Institutes of Health, nearly 35% of Americans over the age of 40 (69 million people) have reported chronic vestibular-related problems. It is therefore of significant importance to human health to better understand how vestibular cues contribute to common brain functions and how better treatment options for vestibular dysfunction can be realized. Outside Resources Animated Video of the Vestibular System http://sites.sinauer.com/neuroscienc...ions14.01.html Discussion Questions 1. If a person sustains loss of the vestibular receptors in one ear due to disease or trauma, what symptoms would the person suffer? Would the symptoms be permanent? 2. Often motion sickness is relieved when a person looks at far-distance objects, such as things located on the far horizon. Why does far-distance viewing help with motion sickness while close-distance viewing (like reading a map or book) makes it worse? 3. Vestibular signals combine with visual signals in certain areas of cortex and assist in motion perception. What types of cues does the visual system provide for self-motion through space? What types of vestibular signals would be consistent with rotational versus translational motion? Vocabulary Abducens nucleus A group of excitatory motor neurons in the medial brainstem that send projections through the VIth cranial nerve to control the ipsilateral lateral rectus muscle.
In addition, abducens interneurons send an excitatory projection across the midline to a subdivision of cells in the contralateral oculomotor nucleus, which project through the IIIrd cranial nerve to innervate the medial rectus muscle on that side. Acetylcholine An organic compound neurotransmitter consisting of acetic acid and choline. Depending upon the receptor type, acetylcholine can have excitatory, inhibitory, or modulatory effects. Afferent nerve fibers Single neurons that innervate the receptor hair cells and carry vestibular signals to the brain as part of the vestibulocochlear nerve (cranial nerve VIII). Aspartate An excitatory amino acid neurotransmitter that is widely used by vestibular receptors, afferents, and many neurons in the brain. Compensatory reflexes A stabilizing motor reflex that occurs in response to a perceived movement, such as the vestibuloocular reflex, or the postural responses that occur during running or skiing. Depolarized When a receptor hair cell’s mechanically gated channels open, the cell increases its membrane voltage, which produces a release of neurotransmitter to excite the innervating nerve fiber. Detection thresholds The smallest amount of head motion that can be reliably reported by an observer. Directional tuning The preferred direction of motion that hair cells and afferents exhibit where a peak excitatory response occurs and the least preferred direction where no response occurs. Cells are said to be “tuned” for a best and worst direction of motion, with in-between motion directions eliciting a lesser but observable response. Gamma-aminobutyric acid A major inhibitory neurotransmitter in the vestibular commissural system. Gaze stability A combination of eye, neck, and head responses that are all coordinated to maintain visual fixation (on the fovea) upon a point of interest. Glutamate An excitatory amino acid neurotransmitter that is widely used by vestibular receptors, afferents, and many neurons in the brain. Hair cells The receptor cells of the vestibular system. They are termed hair cells due to the many hairlike cilia that extend from the apical surface of the cell into the gelatin membrane. Mechanically gated ion channels in the tips of the cilia open and close as the cilia bend to cause membrane voltage changes in the hair cell that are proportional to the intensity and direction of motion. Hyperpolarizes When a receptor hair cell’s mechanically gated channels close, the cell decreases its membrane voltage, which produces less release of neurotransmitter to inhibit the innervating nerve fiber. Lateral rectus muscle An eye muscle that turns the eye outward in the horizontal plane. Lateral vestibulo-spinal tract Vestibular neurons that project to all levels of the spinal cord on the ipsilateral side to control posture and balance movements. Mechanically gated ion channels Ion channels located in the tips of the stereocilia on the receptor cells that open/close as the cilia bend toward the tallest/smallest cilia, respectively. These channels are permeable to potassium ions, which are abundant in the fluid bathing the top of the hair cells. Medial vestibulo-spinal tract Vestibular nucleus neurons project bilaterally to cervical spinal motor neurons for head and neck movement control. The tract principally functions in gaze direction and stability during motion. Neurotransmitters A chemical compound used to send signals from a receptor cell to a neuron, or from one neuron to another.
Neurotransmitters can be excitatory, inhibitory, or modulatory and are packaged in small vesicles that are released from the end terminals of cells. Oculomotor nuclei Includes three neuronal groups in the brainstem, the abducens nucleus, the oculomotor nucleus, and the trochlear nucleus, whose cells send motor commands to the six pairs of eye muscles. Oculomotor nucleus A group of cells in the midbrain that contain subgroups of neurons that project to the medial rectus, inferior oblique, inferior rectus, and superior rectus muscles of the eyes through the IIIrd cranial nerve. Otoconia Small calcium carbonate particles that are packed in a layer on top of the gelatin membrane that covers the otolith receptor hair cell stereocilia. Otolith receptors Two inner ear vestibular receptors (utricle and saccule) that transduce linear accelerations and head tilt relative to gravity into neural signals that are then transferred to the brain. Proprioceptive Sensory information regarding muscle position and movement arising from receptors in the muscles, tendons, and joints. Semicircular canals A set of three inner ear vestibular receptors (horizontal, anterior, posterior) that transduce head rotational accelerations into head rotational velocity signals that are then transferred to the brain. There are three semicircular canals in each ear, with the major planes of each canal being orthogonal to each other. Stereocilia Hairlike projections from the top of the receptor hair cells. The stereocilia are arranged in ascending height and when displaced toward the tallest cilia, the mechanically gated channels open and the cell is excited (depolarized). When the stereocilia are displaced toward the smallest cilia, the channels close and the cell is inhibited (hyperpolarized). Torsion A rotational eye movement around the line of sight, in a clockwise or counterclockwise direction. Vergence angle The angle between the line of sight for the two eyes. Low vergence angles indicate viewing of far objects, whereas large angles indicate viewing of near objects. Vestibular compensation Following injury to one side of vestibular receptors or the vestibulocochlear nerve, the central vestibular nuclei neurons gradually recover much of their function through plasticity mechanisms. The recovery is never complete, however, and extreme motion environments can lead to dizziness, nausea, and problems with balance and spatial memory. Vestibular efferents Nerve fibers originating from a nucleus in the brainstem that project from the brain to innervate the vestibular receptor hair cells and afferent nerve terminals. Efferents have a modulatory role on their targets, which is not well understood. Vestibular system Consists of a set of motion and gravity detection receptors in the inner ear, a set of primary nuclei in the brainstem, and a network of pathways carrying motion and gravity signals to many regions of the brain. Vestibulocochlear nerve The VIIIth cranial nerve that carries fibers innervating the vestibular receptors and the cochlea. Vestibuloocular reflex Eye movements produced by the vestibular brainstem that are equal in magnitude and opposite in direction to head motion. The VOR functions to maintain visual stability on a point of interest and is nearly perfect for all natural head movements.
By Robert V. Levine California State University, Fresno There are profound cultural differences in how people think about, measure, and use their time. This module describes some major dimensions of time that are most prone to cultural variation. learning objectives • Understand how cultures differ in their views of time and the importance of these differences for social behavior. • Explore major components of social time. • Use these concepts to better understand the hidden dimensions of culture. Introduction It is said that “time is money” in industrialized economies. Workers are paid by the hour, lawyers charge by the minute, and advertising is sold by the second (US\$3.3 million for a 30-second commercial, or a little over \$110,000 per second, for the 2012 Super Bowl). Remarkably, the civilized mind has reduced time—the most obscure and abstract of all intangibles—to the most objective of all quantities: money. With time and things on the same value scale, we can establish how many of our working hours equal the price of a product in a store. This way of thinking about time is not universal, however. Beliefs about time remain profoundly different from culture to culture. Research shows that cultural differences in time can be as vast as those between languages. In one particularly telling study of the roots of culture shock, Spradley and Phillips asked a group of returning Peace Corps volunteers to rank 33 items concerning the amount of cultural adjustment each had required of them. The list included a wide range of items familiar to fearful travelers, such as “the type of food eaten,” the “personal cleanliness of most people,” “the number of people of your own race,” and “the general standard of living.” But aside from mastering the foreign language, the two greatest difficulties for the Peace Corps volunteers concerned social time: “the general pace of life,” followed by one of its most significant components, “how punctual most people are” (Spradley & Phillips, 1972). Half a century ago, anthropologist Edward Hall described cultural rules of social time as the “silent language” (Hall, 1983). These informal patterns of time “are seldom, if ever, made explicit. They exist in the air around us. They are either familiar and comfortable or unfamiliar and wrong.” The world over, children simply pick up their society’s conceptions of early and late, of waiting and rushing, of the past, the present, and the future, as they mature. No dictionary clearly defines these rules of time for them or for strangers who stumble over the maddening incongruities between the time sense they bring with them and the one they face in a new land. Cultures may differ on many aspects of social time—its value, meaning, how it should be divided, allocated, and measured. The following dimensions are particularly prone to different cultural, as well as individual, interpretations: Work Versus Leisure There are cultural differences in the value placed on work, on leisure, and upon the balance between the two. Although some balance is universal, the preferred formulas differ both across cultures and between individuals in each culture. The differences are marked even within highly industrialized countries. The United States and Japan, for example, are famous for long work hours, as exemplified by the terms “workaholic” and “karoshi” (“death by overwork”) (Levine, 1997).
European nations tend to also emphasize work, with many differences among countries, but generally put greater emphasis on preserving nonwork time than do people in the United States and Japan (Levine, 2012). Time spent within the workplace also varies across cultures. People tend to spend more of their work time on-task in some cultures and more of that time socializing—informal chatting, having tea or coffee with others, etc.—in other cultures. Studies have found wide cultural variation in answers to the question: “In the companies for which you have worked, what percent of time do people typically spend on tasks that are part of their job description?” For example, people working in companies in large cities in the United States tend to report in the range of “80 percent task time, 20 percent social time.” On the other hand, people working in companies in India, Nepal, Indonesia, Malaysia, and some Latin American countries tend to give answers closer to “50 percent task time, 50 percent social time” (Brislin and Kim, 2003). Sequence Each culture sets rules concerning the appropriate sequence of tasks and activities. Is it work before play, or vice versa? Do people take all of their sleep at night, or is there a siesta in the midafternoon? Is one expected to have coffee or tea and socialize, and for how long, before getting down to serious business? There are also customs about sequences over the long run. For example, how long is the socially accepted period of childhood, if it exists at all, and when is it time to assume the responsibilities of an adult? Clock and Event Time The most fundamental difference in timekeeping throughout history has been between people operating by the clock and those who measure time by social events (Lauer, 1981). This profound difference in thinking about time continues to divide cultures today. Under clock time, the hour on the timepiece governs the beginning and ending of activities. Under event time, scheduling is determined by the flow of the activity. Events begin and end when, by mutual consensus, participants “feel” the time is right (Levine, 1997). In event-time societies, modes of time-reckoning tend to express social experience. Sometimes activities occur in finely coordinated sequences, but without observing the clock. For example, anthropologists have described how participants at an Indian wake move from gathering time to prayer time, singing time, intermission, and mealtime. They move by consensual feeling—when “the time feels right”—but with no apparent concern for the time on the clock. Many countries extol event time as a philosophy of life. In East Africa, there is a popular adage that “Even the time takes its time.” In Trinidad, it is commonly said that “Any time is Trinidad time” (Birth, 1999). In the United States and much of Europe, by contrast, the right way to measure time is assumed to be by the clock. This is especially true when it comes to work hours. Time is money, and any time not focused on-task is seen as wasted time. Even the language of time may be more or less event-oriented. The Kachin people of North Burma, for example, have no single word equivalent of “time.” They use the word ahkying to refer to the “time” of the clock, na to a long “time,” tawng to a short “time,” ta to springtime, and asak to the “time” of a person’s life. Whereas clock-time cultures treat time as an objective entity—it is a noun in English—the Kachin words for time are treated more like adverbs (Levine, 1997).
These different ways of time-keeping can often lead to cultural misunderstandings. Individuals operating on clock time are careful to be punctual and expect the same of others. Those on event time are more spontaneous in beginning and ending events and, as a result, tend to be less punctual and more understanding when others are less punctual. There are also differences within cultures—on both the individual and situational levels. To take just one example, some workers may prosper under clearly defined schedules while others may prefer to complete their work on their own schedules. Similarly, some jobs (for example, financial traders) demand clock-time precision while others (for example, some creative arts) thrive on the spontaneity of event-time scheduling. Levine (2012) argues for fluency in both approaches and for recognizing when either is more beneficial. Calendars Many cultures use social activities to define their calendars rather than the other way around. The calendars of the Nuer people from the Upper Nile in the Sudan, for example, are based on the seasonal changes in their environment. They know that the month of kur is occurring because they are building their fishing dams and cattle camps. When they break camp and return to their villages, they know it must now be the month of dwat. Most societies have some type of week, but it is not always seven days long. The Muysca of Colombia had a three-day week. The Incas of Peru had a 10-day week. Often the length of the week reflects cycles of activities, rather than the other way around. For many, the market is the main activity requiring group coordination. The Khasi people hold their markets every eighth day. Consequently, they have made their week eight days long and named the days of the week after the places where the main markets occur (Levine, 2005). Polychronic and Monochronic Time Industrial/organizational psychologists emphasize the significance of monochronic versus polychronic work patterns (Bluedorn, 2002). People and organizations in clock-time cultures are more likely to emphasize monochronic (M-time) approaches, meaning they like to focus on one activity at a time. People in event time cultures, on the other hand, tend to emphasize polychronic (P-time) approaches, meaning they prefer to do several things at once. These labels were originally developed by Hall (1983). M-time people like to work from start to finish in linear sequence: The first task is begun and completed before turning to another, which is then begun and completed. In polychronic time, however, one project goes on until there is an inclination or inspiration to turn to another, which may lead to an idea for another, then back to the first, with intermittent and unpredictable pauses and resumptions of one task or another. Progress on P-time occurs a little at a time on each task. P-time cultures are characterized by a strong involvement with people. They emphasize the completion of human transactions rather than keeping to schedules. For example, two P-time individuals who are deep in conversation will typically choose to arrive late for their next appointment rather than cut into the flow of their discussion. Both would be insulted, in fact, if their partner were to abruptly terminate the conversation before it came to a spontaneous conclusion. Levine (2012) argues for the value of shifting between each approach depending on the characteristics of the individuals and the situations involved.
In a corporation, for example, some positions may require tight scheduling of time (e.g., accountants during tax time). On the other hand, employees in research and development may be most productive when less tightly controlled. Silence and “Doing Nothing” In some cultures, notably the United States and Western Europe, silence makes people uncomfortable. It may denote that nothing is happening or that something is going wrong. The usual response is to say something, to fill the silence or to keep the meeting or conversation going. People in other cultures, including many Asian and Pacific Island nations, are quite comfortable with silence. It is seen as an opportunity to focus inward and gather one’s thoughts before speaking. The Japanese emphasize “ma,” which roughly translates as the “space” between things, or the “pause.” It implies that what happens between things, or what doesn’t seem to be happening, is as or more important than what is visibly happening. As an extreme example, consider a question people in Brunei often begin their day by asking: “What isn’t going to happen today?” Brislin (2000) has described how cultural misunderstandings and counterproductive decisions often arise from these differences. For example, “Americans will sometimes misinterpret long periods of silence as a signal that they should make a concession. Their negotiating counterparts in Asia know this and will sometimes prolong their silence in the expectation that a concession will be made.” A related temporal difference concerns what people perceive as “wasted time.” People, cultures, and economies that emphasize the rule that “time is money” may see any time not devoted to tangible production as wasted time. People in other cultures, however, believe that overemphasis on this rule is a waste of one’s time in a larger sense, that it is a wasteful way to spend one’s life. If something more worthy of one’s attention—be it social- or work-related—challenges a planned schedule, it is seen as wasteful not to deviate from the plan. In fact, the term “wasted time” may make little sense. A typical comment may be, “There is no such thing as wasted time. If you are not doing one thing, you are doing something else” (Levine, 1997). Norms Concerning Waiting Cultures differ in their norms for waiting, not only in how long it is appropriate to keep a person waiting but in how the rules change depending on the situation and the people involved. Levine (1997) describes a number of “rules” of waiting and how these rules differ across cultures. Some useful questions: Are the rules based on the principle that time is money? Who is expected to wait for whom, under what circumstances, and for how long? Are some individuals—by virtue of their status, power, and/or wealth—exempt from waiting? What is the protocol for waiting in line? Is it an orderly procedure, as in the United Kingdom, or do people just nudge their way through the crowd, pushing the people ahead of them, until they somehow make their way to the front, as in India? Is there a procedure for buying oneself a place at the front of the line, or off the line completely? What social message is being sent when the accepted rules are broken? Temporal Orientation There are individual and cultural differences in people’s orientation toward the past, present, and future. Zimbardo and Boyd (2008) have developed a scale that distinguishes between six types of temporal frames:
1. Past negative—a pessimistic, negative, or aversive orientation toward the past.
2. Past positive—a warm, sentimental, nostalgic, and positive construction of the past.
3. Present hedonistic—a hedonistic, risk-taking attitude toward time and life.
4. Present fatalistic—a fatalistic, helpless, and hopeless attitude toward the future and life.
5. Future—planning for, and achievement of, future goals, characterizing a general future orientation.
6. Future transcendental—an orientation to the future beyond one’s own death.
Zimbardo and Boyd have found large individual and cultural differences on both the individual subscales and the patterns of the subscales taken together. They describe a wide range of consequences of these differences. Time perspective affects political, economic, personal, social, environmental, and other domains of life and society. One of the paradoxes, they report, is that each particular temporal perspective is associated with numerous personal and social benefits but that, in excess, it is associated with even greater costs. There are both positive and negative processes associated with each perspective. Individuals who focus on the past, for example, are often described as happy, grateful, and patriotic, with high self-esteem and strong personal values; on the other hand, a past time perspective can be associated with depression, guilt, anger, vengefulness, and resistance to change. Similarly, a focus on the present may be associated with strong social affiliations, joy, sensuality, sexuality, energy, and improvisation; but it may also be associated with violence, anger, over-fatalism, risk-taking, and addictive behavior. A focus on the future may be associated with achievement, self-efficacy, healthy behaviors, and hope for change; but also with anxiety, social isolation, competitiveness, and unhealthy physical consequences ranging from coronary artery disease to sexual impotence. The authors argue for the importance of a healthy balance in one’s temporal orientation. The Pace of Life There are profound differences in the pace of life on many levels—individual temperament, cultural norms, between places, at different times, during different activities. Levine and Norenzayan (1999) conducted a series of field experiments measuring walking speed, work speed, and concern with clock time in countries around the world. They found that the characteristic pace of life of a place has consequences—both positive and negative—for the physical, social, economic, and psychological well-being of the people who live there. The optimal pace, they argue, requires flexibility and sensitivity in matching individual preferences to the requirements of the situation. Conclusion Understanding the values and assumptions a culture places on these temporal dimensions is essential to creating policies that enhance the quality of people’s lives. The historian Lewis Mumford once observed how “each culture believes that every other space and time is an approximation to or perversion of the real space and time in which it lives.” The truth, however, is that there is no single correct way to think about time. There are different ways of thinking, each with its pluses and minuses, and all may be of value in given situations. Outside Resources
Video: Dealing with Time
Video: RSA Animate—The Secret Powers of Time
Discussion Questions
1. Can you give an example of Edward Hall’s notion of time as a “silent language”?
2. Can you give an example of clock time in your own life? Can you give an example of event time?
3. Are there activities where you might benefit from another culture’s approach to time rather than your usual approach? Give an example.
4. What do you think are the consequences, both positive and negative, of a faster pace of life?
5. Is it fair to conclude that some cultural time practices are more advanced than others? That some are healthier than others? Explain.
Vocabulary
Clock time
Scheduling activities according to the time on the clock.
Ma
A Japanese way of thinking that emphasizes attention to the spaces between things rather than the things themselves.
Monochronic (M-time)
Monochronic thinking focuses on doing one activity at a time, from beginning to completion.
Pace of life
The frequency of events per unit of time; also referred to as speed or tempo.
Polychronic (P-time)
Polychronic thinking switches back and forth among multiple activities as the situation demands.
Silent language
Cultural norms of time and time use as they pertain to social communication and interaction.
Social time
Scheduling by the flow of the activity: events begin and end when, by mutual consensus, participants “feel” the time is right.
Temporal perspective
The extent to which we are oriented toward the past, present, and future.
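Levine and Norenzayan’s pace-of-life comparisons (cited above) rest on combining several behavioral indicators, such as walking speed, work speed, and clock accuracy, into a single index. The sketch below illustrates the general logic only; the city names and numbers are hypothetical, and this is not the authors’ data or code. It assumes the standard approach of converting each indicator to z-scores across places and averaging them, with measurements coded so that lower composite scores mean a faster pace.

    from statistics import mean, stdev

    # Hypothetical measurements per city: seconds to walk 60 feet,
    # seconds for a standard work transaction, minutes of public-clock
    # error. Lower values mean faster walking, faster work, and more
    # concern with clock time, respectively.
    cities = {
        "A": (11.5, 20.0, 1.0),
        "B": (13.0, 25.0, 2.5),
        "C": (15.5, 32.0, 4.0),
    }

    def z_scores(values):
        m, s = mean(values), stdev(values)
        return [(v - m) / s for v in values]

    # Standardize each indicator across cities, then average per city.
    columns = list(zip(*cities.values()))
    standardized = zip(*(z_scores(col) for col in columns))
    pace_index = {city: mean(zs) for city, zs in zip(cities, standardized)}

    for city, index in sorted(pace_index.items(), key=lambda kv: kv[1]):
        print(city, round(index, 2))  # lower index = faster pace of life

The design choice worth noting is the standardization step: because the indicators are on different scales (seconds, seconds, minutes), averaging raw values would let one measure dominate, so z-scoring puts them on a common footing first.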
By Daniel Simons University of Illinois at Urbana-Champaign We think important objects and events in our world will automatically grab our attention, but they often don’t, particularly when our attention is focused on something else. The failure to notice unexpected objects or events when attention is focused elsewhere is now known as inattentional blindness. The study of such failures of awareness has a long history, but their practical importance has received increasing attention over the past decade. This module describes the history and status of research on inattentional blindness, discusses why we find these results counterintuitive, and considers the implications of failures of awareness for how we see and act in our world. learning objectives • Learn about inattentional blindness and why it occurs. • Identify ways in which failures of awareness are counterintuitive. • Better understand the link between focused attention and failures of awareness. Do you regularly spot editing errors in movies? Can you multitask effectively, texting while talking with your friends or watching television? Are you fully aware of your surroundings? If you answered yes to any of those questions, you’re not alone. And, you’re most likely wrong. More than 50 years ago, experimental psychologists began documenting the many ways that our perception of the world is limited, not by our eyes and ears, but by our minds. We appear able to process only one stream of information at a time, effectively filtering other information from awareness. To a large extent, we perceive only that which receives the focus of our cognitive efforts: our attention. Imagine the following task, known as dichotic listening (e.g., Cherry, 1953; Moray, 1959; Treisman, 1960): You put on a set of headphones that play two completely different speech streams, one to your left ear and one to your right ear. Your task is to repeat each syllable spoken into your left ear as quickly and accurately as possible, mimicking each sound as you hear it. When performing this attention-demanding task, you won’t notice if the speaker in your right ear switches to a different language or is replaced by a different speaker with a similar voice. You won’t notice if the content of their speech becomes nonsensical. In effect, you are deaf to the substance of the ignored speech. But, that is not because of the limits of your auditory senses. It is a form of cognitive deafness, due to the nature of focused, selective attention. Even if the speaker in your right ear says your name, you will notice it only about one-third of the time (Conway, Cowan, & Bunting, 2001). And, at least by some accounts, you only notice it that often because you still devote some of your limited attention to the ignored speech stream (Holender, 1986). In this task, you will tend to notice only large physical changes (e.g., a switch from a male to a female speaker), but not substantive ones, except in rare cases. This selective listening task highlights the power of attention to filter extraneous information from awareness while letting in only those elements of our world that we want to hear. Focused attention is crucial to our powers of observation, making it possible for us to zero in on what we want to see or hear while filtering out irrelevant distractions. But, it has consequences as well: We can miss what would otherwise be obvious and important signals. The same pattern holds for vision.
In a groundbreaking series of studies in the 1970s and early 1980s, Neisser and his colleagues devised a visual analogue of the dichotic listening task (Neisser & Becklen, 1975). Their subjects viewed a video of two distinct, but partially transparent and overlapping, events. For example, one event might involve two people playing a hand-clapping game and the other might show people passing a ball. Because the two events were partially transparent and overlapping, both produced sensory signals on the retina regardless of which event received the participant’s attention. When participants were asked to monitor one of the events by counting the number of times the actors performed an action (e.g., hand clapping or completed passes), they often failed to notice unexpected events in the ignored video stream (e.g., the hand-clapping players stopping their game and shaking hands). As with dichotic listening, the participants were unaware of events happening outside the focus of their attention, even when looking right at them. They could tell that other “stuff” was happening on the screen, but many were unaware of the meaning or substance of that stuff. To test the power of selective attention to induce failures of awareness, Neisser and colleagues (Neisser, 1979) designed a variant of this task in which participants watched a video of two teams of players, one wearing white shirts and one wearing black shirts. Subjects were asked to press a key whenever the players in white successfully passed a ball, but to ignore the players in black. As with the other videos, the teams were filmed separately and then superimposed so that they literally occupied the same space (they were partially transparent). Partway through the video, a person wearing a raincoat and carrying an umbrella strolled through the scene. People were so intently focused on spotting passes that they often missed the “umbrella woman.” (Pro tip: If you look closely at the video, you’ll see that Ulric Neisser plays on both the black and white teams.) These surprising findings were well known in the field, but for decades, researchers dismissed their implications because the displays had such an odd, ghostly appearance. Of course, the reasoning went, we would notice if the displays were fully opaque and vivid rather than partly transparent and grainy. Surprisingly, no studies were built on Neisser’s method for nearly 20 years. Inspired by these counterintuitive findings and after discussing them with Neisser himself, Christopher Chabris and I revisited them in the late 1990s (Simons & Chabris, 1999). We replicated Neisser’s work, again finding that many people missed the umbrella woman when all of the actors in the video were partially transparent and occupying the same space. But, we added another wrinkle: a version of the video in which all of the actions of both teams of players were choreographed and filmed with a single camera. The players moved in and around each other and were fully visible. In the most dramatic version, we had a woman in a gorilla suit walk into the scene, stop to face the camera, thump her chest, and then walk off the other side after nine seconds on screen. Fully half the observers missed the gorilla when counting passes by the team in white. This phenomenon is now known as inattentional blindness, the surprising failure to notice an unexpected object or event when attention is focused on something else (Mack & Rock, 1998).
The past 15 years have seen a surge of interest in such failures of awareness, and we now have a better handle on the factors that cause people to miss unexpected events as well as the range of situations in which inattentional blindness occurs. People are much more likely to notice unexpected objects that share features with the attended items in a display (Most et al., 2001). For example, if you count passes by the players wearing black, you are more likely to notice the gorilla than if you count passes by the players wearing white because the color of the gorilla more closely matches that of the black-shirted players (Simons & Chabris, 1999). However, even unique items can go unnoticed. In one task, people monitored black shapes and ignored white shapes that moved around a computer window (Most et al., 2001). Approximately 30 percent of them failed to detect the bright red cross traversing the display, even though it was the only colored item and was visible for five seconds. Another crucial influence on noticing is the effort you put into the attention-demanding task. If you have to keep separate counts of bounce passes and aerial passes, you are less likely to notice the gorilla (Simons & Chabris, 1999), and if you are tracking faster-moving objects, you are less likely to notice (Simons & Jensen, 2009). You can even miss unexpected visual objects when you devote your limited cognitive resources to a memory task (Fougnie & Marois, 2007), so the limits are not purely visual. Instead, they appear to reflect limits on the capacity of attention. Without attention to the unexpected event, you are unlikely to become aware of it (Mack & Rock, 1998; Most, Scholl, Clifford, & Simons, 2005). Inattentional blindness is not just a laboratory curiosity—it also occurs in the real world and under more natural conditions. In a recent study (Chabris, Weinberger, Fontaine, & Simons, 2011), Chabris and colleagues simulated a famous police misconduct case in which a Boston police officer was convicted of lying because he claimed not to have seen a brutal beating (Lehr, 2009). At the time, he had been chasing a murder suspect and ran right past the scene of the assault. In Chabris’ simulation, subjects jogged behind an experimenter who ran right past a simulated fight scene. At night, 65 percent of subjects missed the fight scene. Even during broad daylight, 44 percent of observers jogged right past it without noticing, lending some plausibility to the officer’s claim that he never saw the beating. Perhaps more importantly, auditory distractions can induce real-world failures to see. Although people believe they can multitask, few can. And, talking on a phone while driving or walking decreases situation awareness and increases the chances that people will miss something important (Strayer & Johnston, 2001). In a dramatic illustration of cell phone–induced inattentional blindness, Ira Hyman observed that people talking on a cell phone as they walked across a college campus were less likely than other pedestrians to notice a unicycling clown who rode across their path (Hyman, Boss, Wise, McKenzie, & Caggiano, 2011). Recently, the study of this sort of awareness failure has returned to its roots in studies of listening, with experiments documenting inattentional deafness: When listening to a set of spatially localized conversations over headphones, people often fail to notice the voice of a person walking through the scene repeatedly stating “I am a gorilla” (Dalton & Fraenkel, 2012).
Under conditions of focused attention, we see and hear far less of the unattended information than we might expect (Macdonald & Lavie, 2011; Wayand, Levin, & Varakin, 2005). We now have a good understanding of the ways in which focused attention affects the detection of unexpected objects falling outside that focus. The greater the demands on attention, the less likely people are to notice objects falling outside their attention (Macdonald & Lavie, 2011; Simons & Chabris, 1999; Simons & Jensen, 2009). The more an unexpected object resembles the ignored elements of a scene, the less likely people are to notice it. And, the more distracted we are, the less likely we are to be aware of our surroundings. Under conditions of distraction, we effectively develop tunnel vision. Despite this growing understanding of the limits of attention and the factors that lead to more or less noticing, we have relatively less understanding of individual differences in noticing (Simons & Jensen, 2009). Do some people consistently notice the unexpected while others are obliviously unaware of their surroundings? Or, are we all subject to inattentional blindness due to structural limits on the nature of attention? The question remains controversial. A few studies suggest that those people who have a greater working memory capacity are more likely to notice unexpected objects (Hannon & Richards, 2010; Richards, Hannon, & Derakshan, 2010). In effect, those who have more resources available when focusing attention are more likely to spot other aspects of their world. However, other studies find no such relationship: Those with greater working memory capacity are not any more likely to spot an unexpected object or event (Seegmiller, Watson, & Strayer, 2011; Bredemeier & Simons, 2012). There are theoretical reasons to predict each pattern. With more resources available, people should be more likely to notice (see Macdonald & Lavie, 2011). However, people with greater working memory capacity also tend to be better able to maintain their focus on their prescribed task, meaning that they should be less likely to notice. At least one study suggests that the ability to perform a task does not predict the likelihood of noticing (Simons & Jensen, 2009; for a replication, see Bredemeier & Simons, 2012). In a study I conducted with Melinda Jensen, we measured how well people could track moving objects around a display, gradually increasing the speed until people reached a level of 75% accuracy (a threshold of the sort typically estimated with an adaptive procedure; a sketch of one such procedure appears at the end of this module). Tracking ability varied greatly: Some people could track objects at more than twice the speed others could. Yet, the ability to track objects more easily was unrelated to the odds of noticing an unexpected event. Apparently, as long as people try to perform the tracking task, they are relatively unlikely to notice unexpected events. What makes these findings interesting and important is that they run counter to our intuitions. Most people are confident they would notice the chest-thumping gorilla. In fact, nearly 90% believe they would spot the gorilla (Levin & Angelone, 2008), and in a national survey, 78% agreed with the statement, “People generally notice when something unexpected enters their field of view, even when they’re paying attention to something else” (Simons & Chabris, 2010). Similarly, people are convinced that they would spot errors in movies or changes to a conversation partner (Levin & Angelone, 2008). We think we see and remember far more of our surroundings than we actually do. But why do we have such mistaken intuitions?
One explanation for this mistaken intuition is that our experiences themselves mislead us (Simons & Chabris, 2010). We rarely experience a situation like the gorilla experiment, in which we are forced to confront something obvious that we just missed. That partly explains why demonstrations such as that one are so powerful: We expect that we would notice the gorilla, and we cannot readily explain away our failure to notice it. Most of the time, we are happily unaware of what we have missed, but we are fully aware of those elements of a scene that we have noticed. Consequently, if we assume our experiences are representative of the state of the world, we will conclude that we notice unexpected events. We don’t easily think about what we’re missing. Given the limits on attention coupled with our mistaken impression that important events will capture our attention, how has our species survived? Why weren’t our ancestors eaten by unexpected predators? One reason is that our ability to focus attention intently might have been more evolutionarily useful than the ability to notice unexpected events. After all, for an event to be unexpected, it must occur relatively infrequently. Moreover, most events don’t require our immediate attention, so if inattentional blindness delays our noticing of such events, the consequences could well be minimal. In a social context, others might notice that event and call attention to it. Although inattentional blindness might have had minimal consequences over the course of our evolutionary history, it does have consequences now. At pedestrian speeds and with minimal distraction, inattentional blindness might not matter for survival. But in modern society, we face greater distractions and move at greater speeds, and even a minor delay in noticing something unexpected can mean the difference between a fender-bender and a lethal collision. If talking on a phone increases your odds of missing a unicycling clown, it likely also increases your odds of missing the child who runs into the street or the car that runs a red light. Why, then, do people continue to talk on the phone when driving? The reason might well be the same mistaken intuition that makes inattentional blindness surprising: Drivers simply do not notice how distracted they are when they are talking on a phone, so they believe they can drive just as well when talking on a phone even though they can’t (Strayer & Johnston, 2001). So, what can you do about inattentional blindness? The short answer appears to be, “not much.” There is no magical elixir that will overcome the limits on attention, allowing you to notice everything (and that would not be a good outcome anyway). But, there is something you can do to mitigate the consequences of such limits. Now that you know about inattentional blindness, you can take steps to limit its impact by recognizing how your intuitions will lead you astray. First, maximize the attention you do have available by avoiding distractions, especially under conditions for which an unexpected event might be catastrophic. The ring of a new call or the ding of a new text is hard to resist, so make it impossible to succumb to the temptation by turning your phone off or putting it somewhere out of reach when you are driving. If you know that you will be tempted and you know that using your phone will increase inattentional blindness, you must be proactive. Second, pay attention to what others might not notice.
If you are a bicyclist, don’t assume that the driver sees you, even if they appear to make eye contact. Looking is not the same as seeing. Only by understanding the limits of attention and by recognizing our mistaken beliefs about what we “know” to be true can we avoid the modern-day consequences of those limits. Outside Resources
Article: Scholarpedia article on inattentional blindness http://www.scholarpedia.org/article/...onal_blindness
Video: The original gorilla video
Video: The sequel to the gorilla video
Web: Website for Chabris & Simons book, The Invisible Gorilla. Includes links to videos and descriptions of the research on inattentional blindness http://www.theinvisiblegorilla.com
Discussion Questions
1. Many people, upon learning about inattentional blindness, try to think of ways to eliminate it, allowing themselves complete situation awareness. Why might we be far worse off if we were not subject to inattentional blindness?
2. If inattentional blindness cannot be eliminated, what steps might you take to avoid its consequences?
3. Can you think of situations in which inattentional blindness is highly likely to be a problem? Can you think of cases in which inattentional blindness would not have much of an impact?
Vocabulary
Dichotic listening
A task in which different audio streams are presented to each ear. Typically, people are asked to monitor one stream while ignoring the other.
Inattentional blindness
The failure to notice a fully visible, but unexpected, object or event when attention is devoted to something else.
Inattentional deafness
The auditory analog of inattentional blindness. People fail to notice an unexpected sound or voice when attention is devoted to other aspects of a scene.
Selective listening
A method for studying selective attention in which people focus attention on one auditory stream of information while deliberately ignoring other auditory information.
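The 75% tracking threshold from the Simons and Jensen (2009) study described earlier in this module is the kind of value psychophysicists typically estimate with an adaptive staircase. The sketch below is a generic illustration of one such procedure, the weighted up-down rule (Kaernbach, 1991), not the method actually reported in that study; the simulated observer and all parameter values are hypothetical.

    import math
    import random

    def simulated_trial(speed, threshold=10.0, slope=0.6):
        # Hypothetical observer: accuracy falls from near 1.0 toward
        # chance (0.5) as speed increases, passing through exactly
        # 0.75 at `threshold`.
        p_correct = 0.5 + 0.5 / (1.0 + math.exp(slope * (speed - threshold)))
        return random.random() < p_correct

    # Weighted up-down rule: one step harder after each correct trial,
    # three steps easier after each error. The staircase settles where
    # p(correct) = 3 / (3 + 1) = 0.75.
    speed, step = 5.0, 0.2
    visited = []
    for _ in range(500):
        if simulated_trial(speed):
            speed += step       # correct: increase speed (harder)
        else:
            speed -= 3 * step   # error: decrease speed (easier)
        visited.append(speed)

    # Average the later trials to estimate the 75%-accuracy speed.
    estimate = sum(visited[200:]) / len(visited[200:])
    print(round(estimate, 2))

With these hypothetical settings the estimate converges near the simulated observer’s true threshold of 10, which is the logic behind titrating each participant’s tracking speed to a common 75% accuracy level before introducing an unexpected event.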
By Cara Laney and Elizabeth F. Loftus Reed College, University of California, Irvine Eyewitnesses can provide very compelling legal testimony, but rather than recording experiences flawlessly, their memories are susceptible to a variety of errors and biases. They (like the rest of us) can make errors in remembering specific details and can even remember whole events that did not actually happen. In this module, we discuss several of the common types of errors, and what they can tell us about human memory and its interactions with the legal system. learning objectives • Describe the kinds of mistakes that eyewitnesses commonly make and some of the ways that this can impede justice. • Explain some of the errors that are common in human memory. • Describe some of the important research that has demonstrated human memory errors and their consequences. What Is Eyewitness Testimony? Eyewitness testimony is what happens when a person witnesses a crime (or accident, or other legally important event) and later gets up on the stand and recalls for the court all the details of the witnessed event. It involves a more complicated process than might initially be presumed. It includes what happens during the actual crime to facilitate or hamper witnessing, as well as everything that happens from the time the event is over to the later courtroom appearance. The eyewitness may be interviewed by the police and numerous lawyers, describe the perpetrator to several different people, and make an identification of the perpetrator, among other things. Why Is Eyewitness Testimony an Important Area of Psychological Research? When an eyewitness stands up in front of the court and describes what happened from her own perspective, this testimony can be extremely compelling—it is hard for those hearing this testimony to take it “with a grain of salt,” or otherwise adjust its power. But to what extent is such skepticism necessary? There is now a wealth of evidence, from research conducted over several decades, suggesting that eyewitness testimony is probably the most persuasive form of evidence presented in court, but in many cases, its accuracy is dubious. There is also evidence that mistaken eyewitness evidence can lead to wrongful conviction—sending people to prison for years or decades, even to death row, for crimes they did not commit. Faulty eyewitness testimony has been implicated in at least 75% of DNA exoneration cases—more than any other cause (Garrett, 2011). In a particularly famous case, a man named Ronald Cotton was identified by a rape victim, Jennifer Thompson, as her rapist, and was found guilty and sentenced to life in prison. After more than 10 years, he was exonerated (and the real rapist identified) based on DNA evidence. For details on this case and other (relatively) lucky individuals whose false convictions were subsequently overturned with DNA evidence, see the Innocence Project website (http://www.innocenceproject.org/). There is also hope, though, that many of the errors may be avoidable if proper precautions are taken during the investigative and judicial processes. Psychological science has taught us what some of those precautions might involve, and we discuss some of that science now. Misinformation In an early study of eyewitness memory, undergraduate subjects first watched a slideshow depicting a small red car driving and then hitting a pedestrian (Loftus, Miller, & Burns, 1978). Some subjects were then asked leading questions about what had happened in the slides.
For example, subjects were asked, “How fast was the car traveling when it passed the yield sign?” But this question was actually designed to be misleading, because the original slide included a stop sign rather than a yield sign. Later, subjects were shown pairs of slides. One of the pair was the original slide containing the stop sign; the other was a replacement slide containing a yield sign. Subjects were asked which of the pair they had previously seen. Subjects who had been asked about the yield sign were likely to pick the slide showing the yield sign, even though they had originally seen the slide with the stop sign. In other words, the misinformation in the leading question led to inaccurate memory. This phenomenon is called the misinformation effect, because the misinformation that subjects were exposed to after the event (here in the form of a misleading question) apparently contaminates subjects’ memories of what they witnessed. Hundreds of subsequent studies have demonstrated that memory can be contaminated by erroneous information that people are exposed to after they witness an event (see Frenda, Nichols, & Loftus, 2011; Loftus, 2005). The misinformation in these studies has led people to incorrectly remember everything from small but crucial details of a perpetrator’s appearance to objects as large as a barn that wasn’t there at all. These studies have demonstrated that young adults (the typical research subjects in psychology) are often susceptible to misinformation, but that children and older adults can be even more susceptible (Bartlett & Memon, 2007; Ceci & Bruck, 1995). In addition, misinformation effects can occur easily, and without any intention to deceive (Allan & Gabbert, 2008). Even slight differences in the wording of a question can lead to misinformation effects. Subjects in one study were more likely to say yes when asked “Did you see the broken headlight?” than when asked “Did you see a broken headlight?” (Loftus, 1975). Other studies have shown that misinformation can corrupt memory even more easily when it is encountered in social situations (Gabbert, Memon, Allan, & Wright, 2004). This is a problem particularly in cases where more than one person witnesses a crime. In these cases, witnesses tend to talk to one another in the immediate aftermath of the crime, including as they wait for police to arrive. But because different witnesses are different people with different perspectives, they are likely to see or notice different things, and thus remember different things, even when they witness the same event. So when they communicate about the crime later, they not only reinforce common memories for the event, they also contaminate each other’s memories for the event (Gabbert, Memon, & Allan, 2003; Paterson & Kemp, 2006; Takarangi, Parker, & Garry, 2006). The misinformation effect has been modeled in the laboratory. Researchers had subjects watch a video in pairs. Both subjects sat in front of the same screen, but because they wore differently polarized glasses, they saw two different versions of a video, projected onto a screen. So, although they were both watching the same screen, and believed (quite reasonably) that they were watching the same video, they were actually watching two different versions of the video (Garry, French, Kinzett, & Mori, 2008). In the video, Eric the electrician is seen wandering through an unoccupied house and helping himself to the contents thereof. A total of eight details were different between the two videos. 
After watching the videos, the “co-witnesses” worked together on 12 memory test questions. Four of these questions dealt with details that were different in the two versions of the video, so subjects had the chance to influence one another. Then subjects worked individually on 20 additional memory test questions. Eight of these were for details that were different in the two videos. Subjects’ accuracy was highly dependent on whether they had discussed the details previously. Their accuracy for items they had not previously discussed with their co-witness was 79%. But for items that they had discussed, their accuracy dropped markedly, to 34%. That is, subjects allowed their co-witnesses to corrupt their memories for what they had seen. Identifying Perpetrators In addition to correctly remembering many details of the crimes they witness, eyewitnesses often need to remember the faces and other identifying features of the perpetrators of those crimes. Eyewitnesses are often asked to describe that perpetrator to law enforcement and later to make identifications from books of mug shots or lineups. Here, too, there is a substantial body of research demonstrating that eyewitnesses can make serious, but often understandable and even predictable, errors (Caputo & Dunning, 2007; Cutler & Penrod, 1995). In most jurisdictions in the United States, lineups are typically conducted with pictures, called photo spreads, rather than with actual people standing behind one-way glass (Wells, Memon, & Penrod, 2006). The eyewitness is given a set of small pictures of perhaps six or eight individuals who are dressed similarly and photographed in similar circumstances. One of these individuals is the police suspect, and the remainder are “foils” or “fillers” (people known to be innocent of the particular crime under investigation). If the eyewitness identifies the suspect, then the investigation of that suspect is likely to progress. If a witness identifies a foil or no one, then the police may choose to move their investigation in another direction. This process is modeled in laboratory studies of eyewitness identifications. In these studies, research subjects witness a mock crime (often as a short video) and then are asked to make an identification from a photo or a live lineup. Sometimes the lineups are target present, meaning that the perpetrator from the mock crime is actually in the lineup, and sometimes they are target absent, meaning that the lineup is made up entirely of foils. The subjects, or mock witnesses, are given some instructions and asked to pick the perpetrator out of the lineup. The particular details of the witnessing experience, the instructions, and the lineup members can all influence the extent to which the mock witness is likely to pick the perpetrator out of the lineup, or indeed to make any selection at all. Mock witnesses (and indeed real witnesses) can make errors in two different ways. They can fail to pick the perpetrator out of a target present lineup (by picking a foil or by neglecting to make a selection), or they can pick a foil in a target absent lineup (wherein the only correct choice is to not make a selection). Some factors have been shown to make eyewitness identification errors particularly likely. 
These include poor vision or viewing conditions during the crime, particularly stressful witnessing experiences, too little time to view the perpetrator or perpetrators, too much delay between witnessing and identifying, and being asked to identify a perpetrator from a race other than one’s own (Bornstein, Deffenbacher, Penrod, & McGorty, 2012; Brigham, Bennett, Meissner, & Mitchell, 2007; Burton, Wilson, Cowan, & Bruce, 1999; Deffenbacher, Bornstein, Penrod, & McGorty, 2004). It is hard for the legal system to do much about most of these problems. But there are some things that the justice system can do to help lineup identifications “go right.” For example, investigators can put together high-quality, fair lineups. A fair lineup is one in which the suspect and each of the foils is equally likely to be chosen by someone who has read an eyewitness description of the perpetrator but who did not actually witness the crime (Brigham, Ready, & Spier, 1990). This means that no one in the lineup should “stick out,” and that everyone should match the description given by the eyewitness. Other important recommendations that have come out of this research concern better ways to conduct lineups: “double-blind” administration, unbiased instructions for witnesses, and sequential presentation of lineup members (see Technical Working Group for Eyewitness Evidence, 1999; Wells et al., 1998; Wells & Olson, 2003). Kinds of Memory Biases Memory is also susceptible to a wide variety of other biases and errors. People can forget events that happened to them and people they once knew. They can mix up details across time and place. They can even remember whole complex events that never happened at all. Importantly, these errors, once made, can be very hard to unmake. A memory is no less “memorable” just because it is wrong. Some small memory errors are commonplace, and you have no doubt experienced many of them. You set down your keys without paying attention, and then cannot find them later when you go to look for them. You try to come up with a person’s name but cannot find it, even though you have the sense that it is right at the tip of your tongue (psychologists actually call this the tip-of-the-tongue effect, or TOT) (Brown, 1991). Other sorts of memory biases are more complicated and longer lasting. For example, it turns out that our expectations and beliefs about how the world works can have huge influences on our memories. Because many aspects of our everyday lives are full of redundancies, our memory systems take advantage of the recurring patterns by forming and using schemata, or memory templates (Alba & Hasher, 1983; Brewer & Treyens, 1981). Thus, we know to expect that a library will have shelves and tables and librarians, and so we don’t have to spend energy noticing these at the time. The result of this lack of attention, however, is that one is likely to remember schema-consistent items (such as tables), and to remember them in a rather generic way, whether or not they were actually present. False Memory Some memory errors are so “large” that they almost belong in a class of their own: false memories. Back in the early 1990s, a pattern emerged whereby people would go into therapy for depression and other everyday problems, but over the course of the therapy develop memories for violent and horrible victimhood (Loftus & Ketcham, 1994). These patients’ therapists claimed that the patients were recovering genuine memories of real childhood abuse, buried deep in their minds for years or even decades.
But some experimental psychologists believed that the memories were instead likely to be false—created in therapy. These researchers then set out to see whether it would indeed be possible for wholly false memories to be created by procedures similar to those used in these patients’ therapy. In early false memory studies, undergraduate subjects’ family members were recruited to provide events from the students’ lives. The student subjects were told that the researchers had talked to their family members and learned about four different events from their childhoods. The researchers asked whether the students remembered each of these four events, introduced via short hints. The subjects were asked to write about each of the four events in a booklet and then were interviewed two separate times. The trick was that one of the events came from the researchers rather than the family (and the family had actually assured the researchers that this event had not happened to the subject). In the first such study, this researcher-introduced event was a story about being lost in a shopping mall and rescued by an older adult. In this study, after merely being asked on three separate occasions whether they remembered these events, a quarter of subjects came to believe that they had indeed been lost in the mall (Loftus & Pickrell, 1995). In subsequent studies, similar procedures were used to get subjects to believe that they nearly drowned and had been rescued by a lifeguard, or that they had spilled punch on the bride’s parents at a family wedding, or that they had been attacked by a vicious animal as a child, among other events (Heaps & Nash, 1999; Hyman, Husband, & Billings, 1995; Porter, Yuille, & Lehman, 1999). More recent false memory studies have used a variety of different manipulations to produce false memories in substantial minorities and even occasional majorities of manipulated subjects (Braun, Ellis, & Loftus, 2002; Lindsay, Hagen, Read, Wade, & Garry, 2004; Mazzoni, Loftus, Seitz, & Lynn, 1999; Seamon, Philbin, & Harrison, 2006; Wade, Garry, Read, & Lindsay, 2002). For example, one group of researchers used a mock-advertising study, wherein subjects were asked to review (fake) advertisements for Disney vacations, to convince subjects that they had once met the character Bugs Bunny at Disneyland—an impossible false memory because Bugs is a Warner Brothers character (Braun et al., 2002). Another group of researchers photoshopped childhood photographs of their subjects into a hot air balloon picture and then asked the subjects to try to remember and describe their hot air balloon experience (Wade et al., 2002). Other researchers gave subjects unmanipulated class photographs from their childhoods along with a fake story about a class prank, and thus enhanced the likelihood that subjects would falsely remember the prank (Lindsay et al., 2004). Using a false feedback manipulation, we have been able to persuade subjects to falsely remember having a variety of childhood experiences. In these studies, subjects are told (falsely) that a powerful computer system has analyzed questionnaires that they completed previously and has concluded that they had a particular experience years earlier. Subjects apparently believe what the computer says about them and adjust their memories to match this new information. A variety of different false memories have been implanted in this way.
In some studies, subjects are told they once got sick on a particular food (Bernstein, Laney, Morris, & Loftus, 2005). These memories can then spill out into other aspects of subjects’ lives, such that they often become less interested in eating that food in the future (Bernstein & Loftus, 2009b). Other false memories implanted with this methodology include having an unpleasant experience with the character Pluto at Disneyland and witnessing physical violence between one’s parents (Berkowitz, Laney, Morris, Garry, & Loftus, 2008; Laney & Loftus, 2008). Importantly, once these false memories are implanted—whether through complex methods or simple ones—it is extremely difficult to tell them apart from true memories (Bernstein & Loftus, 2009a; Laney & Loftus, 2008). Conclusion To conclude, eyewitness testimony is very powerful and convincing to jurors, even though it is not particularly reliable. Identification errors occur, and these errors can lead to people being falsely accused and even convicted. Likewise, eyewitness memory can be corrupted by leading questions, misinterpretations of events, conversations with co-witnesses, and their own expectations for what should have happened. People can even come to remember whole events that never occurred. The problems with memory in the legal system are real. But what can we do to start to fix them? A number of specific recommendations have already been made, and many of these are in the process of being implemented (e.g., Steblay & Loftus, 2012; Technical Working Group for Eyewitness Evidence, 1999; Wells et al., 1998). Some of these recommendations are aimed at specific legal procedures, including when and how witnesses should be interviewed, and how lineups should be constructed and conducted. Other recommendations call for appropriate education (often in the form of expert witness testimony) to be provided to jury members and others tasked with assessing eyewitness memory. Eyewitness testimony can be of great value to the legal system, but decades of research now argues that this testimony is often given far more weight than its accuracy justifies. Outside Resources Video 1: Eureka Foong's - The Misinformation Effect. This is a student-made video illustrating this phenomenon of altered memory. It was one of the winning entries in the 2014 Noba Student Video Award. Video 2: Ang Rui Xia & Ong Jun Hao's - The Misinformation Effect. Another student-made video exploring the misinformation effect. Also an award winner from 2014. Discussion Questions 1. Imagine that you are a juror in a murder case where an eyewitness testifies. In what ways might your knowledge of memory errors affect your use of this testimony? 2. How true to life do you think television shows such as CSI or Law & Order are in their portrayals of eyewitnesses? 3. Many jurisdictions in the United States use “show-ups,” where an eyewitness is brought to a suspect (who may be standing on the street or in handcuffs in the back of a police car) and asked, “Is this the perpetrator?” Is this a good or bad idea, from a psychological perspective? Why? Vocabulary False memories Memory for an event that never actually occurred, implanted by experimental manipulation or other means. Foils Any member of a lineup (whether live or photograph) other than the suspect. Misinformation effect A memory error caused by exposure to incorrect information between the original event (e.g., a crime) and later memory test (e.g., an interview, lineup, or day in court). 
Mock witnesses
Research subjects who play the part of witnesses in a study.
Photo spreads
A selection of normally small photographs of faces given to a witness for the purpose of identifying a perpetrator.
Schema (plural: schemata)
A memory template, created through repeated exposure to a particular class of objects or events.
• 9.1: History of Mental Illness
This module is divided into three parts. The first is a brief introduction to various criteria we use to define or distinguish between normality and abnormality. The second, largest part is a history of mental illness from the Stone Age to the 20th century, with a special emphasis on the recurrence of three causal explanations for mental illness: supernatural, somatogenic, and psychogenic factors. The third part concludes with a brief description of the issue of diagnosis.
• 9.2: Therapeutic Orientations
This module outlines some of the best-known therapeutic approaches and explains the history, techniques, advantages, and disadvantages associated with each. The most effective modern approach is cognitive behavioral therapy (CBT). We also discuss psychoanalytic therapy, person-centered therapy, and mindfulness-based approaches. Drug therapy and emerging new treatment strategies will also be briefly explored.
• 9.3: ADHD and Behavior Disorders in Children
First, we will review how ADHD is diagnosed in children, with a focus on how mental health professionals distinguish between ADHD and normal behavior problems in childhood. Second, we will describe what is known about the causes of ADHD. Third, we will describe the treatments that are used to help children with ADHD and their families. The module will conclude with a brief discussion of how we expect that the diagnosis and treatment of ADHD will change over the coming decades.
• 9.4: Anxiety and Related Disorders
Anxiety disorders develop out of a blend of biological (genetic) and psychological factors that, when combined with stress, may lead to the development of ailments. Primary anxiety-related diagnoses include generalized anxiety disorder, panic disorder, specific phobia, and social anxiety disorder (social phobia). In this module, we summarize the main clinical features of each of these disorders and discuss their similarities and differences with everyday experiences of anxiety.
• 9.5: Social Anxiety
Social anxiety occurs when we are overly concerned about being humiliated, embarrassed, evaluated, or rejected by others in social situations. Everyone experiences social anxiety some of the time, but for a minority of people, the frequency and intensity of social anxiety are great enough to interfere with meaningful activities (e.g., relationships, academics, career aspirations).
• 9.6: Dissociative Disorders
In psychopathology, dissociation happens when thoughts, feelings, and experiences of our consciousness and memory do not collaborate well with each other. This module provides an overview of dissociative disorders, including the definitions of dissociation, its origins and competing theories, and their relation to traumatic experiences and sleep problems.
• 9.7: Mood Disorders
Mood disorders are extended periods of depressed, euphoric, or irritable moods that in combination with other symptoms cause the person significant distress and interfere with his or her daily life, often resulting in social and occupational difficulties. In this module, we describe major mood disorders, including their symptom presentations, general prevalence rates, and how and why the rates of these disorders tend to vary by age, gender, and race.
• 9.8: Personality Disorders
The purpose of this module is to define what is meant by a personality disorder, identify the five domains of general personality (i.e., neuroticism, extraversion, openness, agreeableness, and conscientiousness), and identify the six personality disorders proposed for retention in the 5th edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5).
• 9.9: Psychopathy
Psychopathy (or “psychopathic personality”) is a topic that has long fascinated the public at large as well as scientists and clinical practitioners. However, it has also been subject to considerable confusion and scholarly debate over the years. This module reviews alternative conceptions of psychopathy that have been proposed historically, and surveys major instruments currently in use for the assessment of psychopathic tendencies in clinical and nonclinical samples.
• 9.10: Schizophrenia Spectrum Disorders
In this module, we summarize the primary clinical features of schizophrenia spectrum disorders, describe the known cognitive and neurobiological changes associated with schizophrenia, describe potential risk factors and/or causes for the development of schizophrenia, and describe currently available treatments for schizophrenia.
• 9.11: Autism: Insights from the Study of the Social Brain
People with autism spectrum disorder (ASD) suffer from a profound social disability. Social neuroscience is the study of the parts of the brain that support social interactions or the “social brain.” This module provides an overview of ASD and focuses on understanding how social brain dysfunction leads to ASD.
• 9.12: Psychopharmacology
Psychopharmacology is the study of how drugs affect behavior. If a drug changes your perception, or the way you feel or think, the drug exerts effects on your brain and nervous system. In this module, we will provide an overview of some of these topics as well as discuss some current controversial areas in the field of psychopharmacology.
Chapter 9: Psychological Disorders and Treatments
By Ingrid G. Farreras
Hood College
This module is divided into three parts. The first is a brief introduction to various criteria we use to define or distinguish between normality and abnormality. The second, largest part is a history of mental illness from the Stone Age to the 20th century, with a special emphasis on the recurrence of three causal explanations for mental illness: supernatural, somatogenic, and psychogenic factors. This part briefly touches upon trephination, the Greek theory of hysteria within the context of the four bodily humors, witch hunts, asylums, moral treatment, mesmerism, catharsis, the mental hygiene movement, deinstitutionalization, community mental health services, and managed care. The third part concludes with a brief description of the issue of diagnosis.
learning objectives
• Identify the criteria used to distinguish normality from abnormality.
• Understand the differences among the three main etiological theories of mental illness.
• Describe specific beliefs or events in history that exemplify each of these etiological theories (e.g., hysteria, humorism, witch hunts, asylums, moral treatments).
• Explain the differences in treatment facilities for the mentally ill (e.g., mental hospitals, asylums, community mental health centers).
• Describe the features of the “moral treatment” approach used by Chiarughi, Pinel, and Tuke.
• Describe the reform efforts of Dix and Beers and the outcomes of their work.
• Describe Kräpelin’s classification of mental illness and the current DSM system.
History of Mental Illness
References to mental illness can be found throughout history. The evolution of thinking about mental illness, however, has not been linear or progressive but rather cyclical. Whether a behavior is considered normal or abnormal depends on the context surrounding the behavior and thus changes as a function of a particular time and culture. In the past, uncommon behavior or behavior that deviated from the sociocultural norms and expectations of a specific culture and period has been used as a way to silence or control certain individuals or groups. As a result, a less cultural relativist view of abnormal behavior has focused instead on whether behavior poses a threat to oneself or others or causes so much pain and suffering that it interferes with one’s work responsibilities or with one’s relationships with family and friends. Throughout history there have been three general theories of the etiology of mental illness: supernatural, somatogenic, and psychogenic. Supernatural theories attribute mental illness to possession by evil or demonic spirits, displeasure of gods, eclipses, planetary gravitation, curses, and sin. Somatogenic theories identify disturbances in physical functioning resulting from either illness, genetic inheritance, or brain damage or imbalance. Psychogenic theories focus on traumatic or stressful experiences, maladaptive learned associations and cognitions, or distorted perceptions. Etiological theories of mental illness determine the care and treatment mentally ill individuals receive. As we will see below, an individual believed to be possessed by the devil will be viewed and treated differently from an individual believed to be suffering from an excess of yellow bile. Their treatments will also differ, from exorcism to blood-letting. The theories, however, remain the same: they coexist as well as recycle over time.
Trephination is an example of the earliest supernatural explanation for mental illness. Examination of prehistoric skulls and cave art from as early as 6500 BC has identified surgical drilling of holes in skulls to treat head injuries and epilepsy as well as to allow evil spirits trapped within the skull to be released (Restak, 2000). Around 2700 BC, Chinese medicine’s concept of complementary positive and negative bodily forces (“yin and yang”) attributed mental (and physical) illness to an imbalance between these forces. As such, a harmonious life that allowed for the proper balance of yin and yang and movement of vital air was essential (Tseng, 1973). Mesopotamian and Egyptian papyri from 1900 BC describe women suffering from mental illness resulting from a wandering uterus (later named hysteria by the Greeks): The uterus could become dislodged and attached to parts of the body like the liver or chest cavity, preventing their proper functioning or producing varied and sometimes painful symptoms. As a result, the Egyptians, and later the Greeks, also employed a somatogenic treatment of strong-smelling substances to guide the uterus back to its proper location (pleasant odors to lure and unpleasant ones to dispel). Throughout classical antiquity we see a return to supernatural theories of demonic possession or godly displeasure to account for abnormal behavior that was beyond the person’s control. Temple attendance with religious healing ceremonies and incantations to the gods were employed to assist in the healing process.
Hebrews saw madness as punishment from God, so treatment consisted of confessing sins and repenting. Physicians were also believed to be able to comfort and cure madness, however. Greek physicians rejected supernatural explanations of mental disorders. It was around 400 BC that Hippocrates (460–370 BC) attempted to separate superstition and religion from medicine by systematizing the belief that a deficiency in or especially an excess of one of the four essential bodily fluids (i.e., humors)—blood, yellow bile, black bile, and phlegm—was responsible for physical and mental illness. For example, someone who was too temperamental suffered from too much blood, and thus blood-letting would be the necessary treatment. Hippocrates classified mental illness into one of four categories—epilepsy, mania, melancholia, and brain fever—and like other prominent physicians and philosophers of his time, he did not believe mental illness was shameful or that mentally ill individuals should be held accountable for their behavior. Mentally ill individuals were cared for at home by family members, and the state shared no responsibility for their care. Humorism remained a recurrent somatogenic theory up until the 19th century.
While Greek physician Galen (AD 130–201) rejected the notion of a uterus having an animistic soul, he agreed with the notion that an imbalance of the four bodily fluids could cause mental illness. He also opened the door for psychogenic explanations for mental illness, however, by allowing for the experience of psychological stress as a potential cause of abnormality. Galen’s psychogenic theories were ignored for centuries, however, as physicians attributed mental illness to physical causes throughout most of the millennium.
By the late Middle Ages, economic and political turmoil threatened the power of the Roman Catholic church. Between the 11th and 15th centuries, supernatural theories of mental disorders again dominated Europe, fueled by natural disasters like plagues and famines that laypeople interpreted as brought about by the devil. Superstition, astrology, and alchemy took hold, and common treatments included prayer rites, relic touching, confessions, and atonement. Beginning in the 13th century, the mentally ill, especially women, began to be persecuted as witches who were possessed. Two Dominican monks wrote the Malleus Maleficarum (1486) as the ultimate manual to guide witch hunts, which reached their height during the 15th through 17th centuries as the Protestant Reformation plunged Europe into religious strife. Johann Weyer and Reginald Scot tried to convince people in the mid- to late-16th century that accused witches were actually women with mental illnesses and that mental illness was not due to demonic possession but to faulty metabolism and disease; the Church’s Inquisition, however, banned both of their writings. Witch-hunting did not decline until the 17th and 18th centuries, after more than 100,000 presumed witches had been burned at the stake (Schoeneman, 1977; Zilboorg & Henry, 1941).
Modern treatments of mental illness are most associated with the establishment of hospitals and asylums beginning in the 16th century. Such institutions’ mission was to house and confine the mentally ill, the poor, the homeless, the unemployed, and the criminal. War and economic depression produced vast numbers of undesirables, and these were separated from society and sent to these institutions.
Two of the most well-known institutions, St. Mary of Bethlehem in London, known as Bedlam, and the Hôpital Général of Paris—which included La Salpêtrière, La Pitié, and La Bicêtre—began housing mentally ill patients in the mid-16th and 17th centuries. As confinement laws focused on protecting the public from the mentally ill, governments became responsible for housing and feeding undesirables in exchange for their personal liberty. Most inmates were institutionalized against their will, lived in filth and chained to walls, and were commonly exhibited to the public for a fee. Mental illness was nonetheless viewed somatogenically, so treatments were similar to those for physical illnesses: purges, bleedings, and emetics. While inhumane by today’s standards, the view of insanity at the time likened the mentally ill to animals (i.e., animalism) who did not have the capacity to reason, could not control themselves, were capable of violence without provocation, did not have the same physical sensitivity to pain or temperature, and could live in miserable conditions without complaint. As such, instilling fear was believed to be the best way to restore a disordered mind to reason.
By the 18th century, protests rose over the conditions under which the mentally ill lived, and the 18th and 19th centuries saw the growth of a more humanitarian view of mental illness. In 1785 Italian physician Vincenzo Chiarughi (1759–1820) removed the chains of patients at his St. Boniface hospital in Florence, Italy, and encouraged good hygiene and recreational and occupational training. More well known, French physician Philippe Pinel (1745–1826) and former patient Jean-Baptiste Pussin created a “traitement moral” at La Bicêtre and the Salpêtrière in 1793 and 1795 that also included unshackling patients, moving them to well-aired, well-lit rooms, and encouraging purposeful activity and freedom to move about the grounds (Micale, 1985). In England, humanitarian reforms rose from religious concerns. William Tuke (1732–1822) urged the Yorkshire Society of (Quaker) Friends to establish the York Retreat in 1796, where patients were guests, not prisoners, and where the standard of care depended on dignity and courtesy as well as the therapeutic and moral value of physical work (Bell, 1980).
While America had asylums for the mentally ill—such as the Pennsylvania Hospital in Philadelphia and the Williamsburg Hospital, established in 1756 and 1773—the somatogenic theory of mental illness of the time—promoted especially by the father of American psychiatry, Benjamin Rush (1745–1813)—had led to treatments such as blood-letting, gyrators, and tranquilizer chairs. When Tuke’s York Retreat became the model for half of the new private asylums established in the United States, however, psychogenic treatments such as compassionate care and physical labor became the hallmarks of the new American asylums, such as the Friends Asylum in Frankford, Pennsylvania, and the Bloomingdale Asylum in New York City, established in 1817 and 1821 (Grob, 1994).
Moral treatment had to be abandoned in America in the second half of the 19th century, however, when these asylums became overcrowded and custodial in nature and could no longer provide the space or attention necessary. When retired schoolteacher Dorothea Dix discovered the negligence that resulted from such conditions, she advocated for the establishment of state hospitals. Between 1840 and 1880, she helped establish over 30 mental institutions in the United States and Canada (Viney & Zorich, 1982).
By the late 19th century, moral treatment had given way to the mental hygiene movement, founded by former patient Clifford Beers with the publication of his 1908 memoir A Mind That Found Itself. Riding on Pasteur’s breakthrough germ theory of the 1860s and 1870s and especially on the early 20th-century discoveries of vaccines for cholera, syphilis, and typhus, the mental hygiene movement reverted to a somatogenic theory of mental illness.
European psychiatry in the late 18th century and throughout the 19th century, however, struggled between somatogenic and psychogenic explanations of mental illness, particularly hysteria, which caused physical symptoms such as blindness or paralysis with no apparent physiological explanation. Franz Anton Mesmer (1734–1815), influenced by contemporary discoveries in electricity, attributed hysterical symptoms to imbalances in a universal magnetic fluid found in individuals, rather than to a wandering uterus (Forrest, 1999). James Braid (1795–1860) shifted this belief in mesmerism to one in hypnosis, thereby proposing a psychogenic treatment for the removal of symptoms. At the time, famed Salpêtrière Hospital neurologist Jean-Martin Charcot (1825–1893), and Ambroise-Auguste Liébeault (1823–1904) and Hippolyte Bernheim (1840–1919) of the Nancy School in France, were engaged in a bitter etiological battle over hysteria, with Charcot maintaining that the hypnotic suggestibility underlying hysteria was a neurological condition while Liébeault and Bernheim believed it to be a general trait that varied in the population. Josef Breuer (1842–1925) and Sigmund Freud (1856–1939) would resolve this dispute in favor of a psychogenic explanation for mental illness by treating hysteria through hypnosis, which eventually led to the cathartic method that became the precursor for psychoanalysis during the first half of the 20th century.
Psychoanalysis was the dominant psychogenic treatment for mental illness during the first half of the 20th century, providing the launching pad for the more than 400 different schools of psychotherapy found today (Magnavita, 2006). Most of these schools cluster around broader behavioral, cognitive, cognitive-behavioral, psychodynamic, and client-centered approaches to psychotherapy applied in individual, marital, family, or group formats. Negligible differences have been found among all these approaches, however; their efficacy in treating mental illness is due to factors shared among all of the approaches (not particular elements specific to each approach): the therapist-patient alliance, the therapist’s allegiance to the therapy, therapist competence, and placebo effects (Luborsky et al., 2002; Messer & Wampold, 2002).
In contrast, the leading somatogenic treatment for mental illness can be found in the establishment of the first psychotropic medications in the mid-20th century. Restraints, electro-convulsive shock therapy, and lobotomies continued to be employed in American state institutions until the 1970s, but they quickly made way for a burgeoning pharmaceutical industry that has viewed and treated mental illness as a chemical imbalance in the brain. Both etiological theories coexist today in what the psychological discipline holds as the biopsychosocial model of explaining human behavior. While individuals may be born with a genetic predisposition for a certain psychological disorder, certain psychological stressors need to be present for them to develop the disorder.
Sociocultural factors such as sociopolitical or economic unrest, poor living conditions, or problematic interpersonal relationships are also viewed as contributing factors. However much we want to believe that we are above the treatments described above, or that the present is always the most enlightened time, let us not forget that our thinking today continues to reflect the same underlying somatogenic and psychogenic theories of mental illness discussed throughout this cursory 9,000-year history.
Diagnosis of Mental Illness
Progress in the treatment of mental illness necessarily implies improvements in the diagnosis of mental illness. A standardized diagnostic classification system with agreed-upon definitions of psychological disorders creates a shared language among mental-health providers and aids in clinical research. While diagnoses were recognized as far back as the Greeks, it was not until 1883 that German psychiatrist Emil Kräpelin (1856–1926) published a comprehensive system of psychological disorders that centered around a pattern of symptoms (i.e., syndrome) suggestive of an underlying physiological cause. Other clinicians also suggested popular classification systems, but the need for a single, shared system paved the way for the American Psychiatric Association’s 1952 publication of the first Diagnostic and Statistical Manual (DSM). The DSM has undergone various revisions (in 1968, 1980, 1987, 1994, 2000, 2013), and it is the 1980 DSM-III version that introduced a multiaxial classification system that took into account the entire individual rather than just the specific problem behavior. Axes I and II contain the clinical diagnoses, including mental retardation and personality disorders. Axes III and IV list any relevant medical conditions or psychosocial or environmental stressors, respectively. Axis V provides a global assessment of the individual’s level of functioning. The most recent version, the DSM-5, has combined the first three axes and removed the last two. These revisions reflect an attempt to help clinicians streamline diagnosis and work better with other diagnostic systems such as health diagnoses outlined by the World Health Organization.
While the DSM has provided a necessary shared language for clinicians, aided in clinical research, and allowed clinicians to be reimbursed by insurance companies for their services, it is not without criticism. The DSM is based on clinical and research findings from Western culture, primarily the United States. It is also a medicalized categorical classification system that assumes disordered behavior does not differ in degree but in kind, as opposed to a dimensional classification system that would plot disordered behavior along a continuum. Finally, the number of diagnosable disorders has tripled since the DSM was first published in 1952, so that almost half of Americans will have a diagnosable disorder in their lifetime, contributing to continuing concerns about labeling and stigmatizing mentally ill individuals. These concerns appear to be relevant even in the DSM-5 version that came out in May of 2013.
Outside Resources
Video: An introduction to and overview of psychology, from its origins in the nineteenth century to current study of the brain's biochemistry. www.learner.org/series/discoveringpsychology/01/e01expand.html
Video: The BBC provides an overview of ancient Greek approaches to health and medicine.
www.tes.com/teaching-resource/ancient-greek-approaches-to-health-and-medicine-6176019
Web: Images from the History of Medicine. Search "mental illness" http://ihm.nlm.nih.gov/luna/servlet/view/all
Web: Science Museum Brought to Life www.sciencemuseum.org.uk/brou...ndillness.aspx
Web: The Social Psychology Network provides a number of links and resources. https://www.socialpsychology.org/history.htm
Web: The UCL Center for the History of Medicine www.ucl.ac.uk/histmed/
Web: The Wellcome Library. Search "mental illness". http://wellcomelibrary.org/
Web: US National Library of Medicine http://vsearch.nlm.nih.gov/vivisimo/cgi-bin/query-meta?query=mental+illness&v:project=nlm-main-website
Discussion Questions
1. What does it mean to say that someone is mentally ill? What criteria are usually considered to determine whether someone is mentally ill?
2. Describe the difference between supernatural, somatogenic, and psychogenic theories of mental illness and how subscribing to a particular etiological theory determines the type of treatment used.
3. How did the Greeks describe hysteria and what treatment did they prescribe?
4. Describe humorism and how it explained mental illness.
5. Describe how the witch hunts came about and their relationship to mental illness.
6. Describe the development of treatment facilities for the mentally ill, from asylums to community mental health centers.
7. Describe the humane treatment of the mentally ill brought about by Chiarughi, Pinel, and Tuke in the late 18th and early 19th centuries and how it differed from the care provided in the centuries preceding it.
8. Describe William Tuke’s treatment of the mentally ill at the York Retreat within the context of the Quaker Society of Friends. What influence did Tuke’s treatment have in other parts of the world?
9. What are the 20th-century treatments resulting from the psychogenic and somatogenic theories of mental illness?
10. Describe why a classification system is important and how the leading classification system used in the United States works. Describe some concerns with regard to this system.
Vocabulary
Animism
The belief that everyone and everything had a “soul” and that mental illness was due to animistic causes, for example, evil spirits controlling an individual and his/her behavior.
Asylum
A place of refuge or safety established to confine and care for the mentally ill; forerunners of the mental hospital or psychiatric facility.
Biopsychosocial model
A model in which the interaction of biological, psychological, and sociocultural factors is seen as influencing the development of the individual.
Cathartic method
A therapeutic procedure introduced by Breuer and developed further by Freud in the late 19th century whereby a patient gains insight and emotional relief from recalling and reliving traumatic events.
Cultural relativism
The idea that cultural norms and values of a society can only be understood on their own terms or in their own context.
Etiology
The causal description of all of the factors that contribute to the development of a disorder or illness.
Humorism (or humoralism)
A belief held by ancient Greek and Roman physicians (and until the 19th century) that an excess or deficiency in any of the four bodily fluids, or humors—blood, black bile, yellow bile, and phlegm—directly affected a person’s health and temperament.
Hysteria
Term used by the ancient Greeks and Egyptians to describe a disorder believed to be caused by a woman’s uterus wandering throughout the body and interfering with other organs (today referred to as conversion disorder, in which psychological problems are expressed in physical form).
Maladaptive
Term referring to behaviors that cause people who have them physical or emotional harm, prevent them from functioning in daily life, and/or indicate that they have lost touch with reality and/or cannot control their thoughts and behavior (also called dysfunctional).
Mesmerism
Derived from Franz Anton Mesmer in the late 18th century, an early version of hypnotism in which Mesmer claimed that hysterical symptoms could be treated through animal magnetism emanating from Mesmer’s body and permeating the universe (and later through magnets); later explained in terms of high suggestibility in individuals.
Psychogenesis
Developing from psychological origins.
Somatogenesis
Developing from physical/bodily origins.
Supernatural
Developing from origins beyond the visible observable universe.
Syndrome
Involving a particular group of signs and symptoms.
“Traitement moral” (moral treatment)
A therapeutic regimen of improved nutrition, living conditions, and rewards for productive behavior that has been attributed to Philippe Pinel during the French Revolution, when he released mentally ill patients from their restraints and treated them with compassion and dignity rather than with contempt and denigration.
Trephination
The drilling of a hole in the skull, presumably as a way of treating psychological disorders.
By Hannah Boettcher, Stefan G. Hofmann, and Q. Jade Wu
Boston University
In the past century, a number of psychotherapeutic orientations have gained popularity for treating mental illnesses. This module outlines some of the best-known therapeutic approaches and explains the history, techniques, advantages, and disadvantages associated with each. The most effective modern approach is cognitive behavioral therapy (CBT). We also discuss psychoanalytic therapy, person-centered therapy, and mindfulness-based approaches. Drug therapy and emerging new treatment strategies will also be briefly explored.
Learning Objectives
• Become familiar with the most widely practiced approaches to psychotherapy.
• For each therapeutic approach, consider: history, goals, key techniques, and empirical support.
• Consider the impact of emerging treatment strategies in mental health.
Introduction
The history of mental illness can be traced as far back as 1500 BCE, when the ancient Egyptians noted cases of “distorted concentration” and “emotional distress in the heart or mind” (Nasser, 1987). Today, nearly half of all Americans will experience mental illness at some point in their lives, and mental health problems affect more than one-quarter of the population in any given year (Kessler et al., 2005). Fortunately, a range of psychotherapies exist to treat mental illnesses. This module provides an overview of some of the best-known schools of thought in psychotherapy. Currently, the most effective approach is called Cognitive Behavioral Therapy (CBT); however, other approaches, such as psychoanalytic therapy, person-centered therapy, and mindfulness-based therapies, are also used—though the effectiveness of these treatments isn’t as clearly established as it is for CBT. Throughout this module, note the advantages and disadvantages of each approach, paying special attention to their support by empirical research.
Psychoanalysis and Psychodynamic Therapy
The earliest organized therapy for mental disorders was psychoanalysis. Made famous in the early 20th century by one of the best-known clinicians of all time, Sigmund Freud, this approach stresses that mental health problems are rooted in unconscious conflicts and desires. In order to resolve the mental illness, then, these unconscious struggles must be identified and addressed. Psychoanalysis often does this through exploring one’s early childhood experiences that may have continuing repercussions on one’s mental health in the present and later in life. Psychoanalysis is an intensive, long-term approach in which patients and therapists may meet multiple times per week, often for many years.
History of Psychoanalytic Therapy
Freud initially suggested that mental health problems arise from efforts to push inappropriate sexual urges out of conscious awareness (Freud, 1895/1955). Later, Freud suggested more generally that psychiatric problems are the result of tension between different parts of the mind: the id, the superego, and the ego. In Freud’s structural model, the id represents pleasure-driven unconscious urges (e.g., our animalistic desires for sex and aggression), while the superego is the semi-conscious part of the mind where morals and societal judgment are internalized (e.g., the part of you that automatically knows how society expects you to behave). The ego—also partly conscious—mediates between the id and superego.
Freud believed that bringing unconscious struggles like these (where the id demands one thing and the superego another) into conscious awareness would relieve the stress of the conflict (Freud, 1920/1955)—which became the goal of psychoanalytic therapy. Although psychoanalysis is still practiced today, it has largely been replaced by the more broadly defined psychodynamic therapy. This latter approach has the same basic tenets as psychoanalysis, but is briefer, makes more of an effort to put clients in their social and interpersonal context, and focuses more on relieving psychological distress than on changing the person.
Techniques in Psychoanalysis
Psychoanalysts and psychodynamic therapists employ several techniques to explore patients’ unconscious mind. One common technique is called free association. Here, the patient shares any and all thoughts that come to mind, without attempting to organize or censor them in any way. For example, if you took a pen and paper and just wrote down whatever came into your head, letting one thought lead to the next without allowing conscious criticism to shape what you were writing, you would be doing free association. The analyst then uses his or her expertise to discern patterns or underlying meaning in the patient’s thoughts. Sometimes, free association exercises are applied specifically to childhood recollections. That is, psychoanalysts believe a person’s childhood relationships with caregivers often determine the way that person relates to others and predict later psychiatric difficulties. Thus, exploring these childhood memories, through free association or otherwise, can provide therapists with insights into a patient’s psychological makeup.
Because we don’t always have the ability to consciously recall these deep memories, psychoanalysts also discuss their patients’ dreams. In Freudian theory, dreams contain not only manifest (or literal) content, but also latent (or symbolic) content (Freud, 1900/1955). For example, someone may have a dream that his/her teeth are falling out—the manifest or actual content of the dream. However, dreaming that one’s teeth are falling out could be a reflection of the person’s unconscious concern about losing his or her physical attractiveness—the latent or metaphorical content of the dream. It is the therapist’s job to help discover the latent content underlying one’s manifest content through dream analysis.
In psychoanalytic and psychodynamic therapy, the therapist plays a receptive role—interpreting the patient’s thoughts and behavior based on clinical experience and psychoanalytic theory. For example, if during therapy a patient begins to express unjustified anger toward the therapist, the therapist may recognize this as an act of transference. That is, the patient may be displacing feelings for people in his or her life (e.g., anger toward a parent) onto the therapist. At the same time, though, the therapist has to be aware of his or her own thoughts and emotions, for, in a related process, called countertransference, the therapist may displace his/her own emotions onto the patient. The key to psychoanalytic therapy is to have patients uncover the buried, conflicting content of their mind, and therapists use various tactics—such as seating patients to face away from them—to promote a freer self-disclosure. And, as a therapist spends more time with a patient, the therapist can come to view his or her relationship with the patient as another reflection of the patient’s mind.
Advantages and Disadvantages of Psychoanalytic Therapy
Psychoanalysis was once the only type of psychotherapy available, but presently the number of therapists practicing this approach is decreasing around the world. Psychoanalysis is not appropriate for some types of patients, including those with severe psychopathology or mental retardation. Further, psychoanalysis is often expensive because treatment usually lasts many years. Still, some patients and therapists find the prolonged and detailed analysis very rewarding. Perhaps the greatest disadvantage of psychoanalysis and related approaches is the lack of empirical support for their effectiveness. The limited research that has been conducted on these treatments suggests that they do not reliably lead to better mental health outcomes (e.g., Driessen et al., 2010). And, although there are some reviews that seem to indicate that long-term psychodynamic therapies might be beneficial (e.g., Leichsenring & Rabung, 2008), other researchers have questioned the validity of these reviews. Nevertheless, psychoanalytic theory was history’s first attempt at formal treatment of mental illness, setting the stage for the more modern approaches used today.
Humanistic and Person-Centered Therapy
One of the next developments in therapy for mental illness, which arrived in the mid-20th century, is called humanistic or person-centered therapy (PCT). Here, the belief is that mental health problems result from an inconsistency between patients’ behavior and their true personal identity. Thus, the goal of PCT is to create conditions under which patients can discover their self-worth, feel comfortable exploring their own identity, and alter their behavior to better reflect this identity.
History of Person-Centered Therapy
PCT was developed by a psychologist named Carl Rogers, during a time of significant growth in the movements of humanistic theory and human potential. These perspectives were based on the idea that humans have an inherent drive to realize and express their own capabilities and creativity. Rogers, in particular, believed that all people have the potential to change and improve, and that the role of therapists is to foster self-understanding in an environment where adaptive change is most likely to occur (Rogers, 1951). Rogers suggested that the therapist and patient must engage in a genuine, egalitarian relationship in which the therapist is nonjudgmental and empathetic. In PCT, the patient should experience both a vulnerability to anxiety, which motivates the desire to change, and an appreciation for the therapist’s support.
Techniques in Person-Centered Therapy
Humanistic and person-centered therapy, like psychoanalysis, involves a largely unstructured conversation between the therapist and the patient. Unlike psychoanalysis, though, a therapist using PCT takes a passive role, guiding the patient toward his or her own self-discovery. Rogers’s original name for PCT was non-directive therapy, and this notion is reflected in the flexibility found in PCT. Therapists do not try to change patients’ thoughts or behaviors directly. Rather, their role is to provide the therapeutic relationship as a platform for personal growth. In these kinds of sessions, the therapist tends only to ask questions and doesn’t provide any judgment or interpretation of what the patient says. Instead, the therapist is present to provide a safe and encouraging environment for the person to explore these issues for him- or herself.
An important aspect of the PCT relationship is the therapist’s unconditional positive regard for the patient’s feelings and behaviors. That is, the therapist is never to condemn or criticize the patient for what s/he has done or thought; the therapist is only to express warmth and empathy. This creates an environment free of approval or disapproval, where patients come to appreciate their value and to behave in ways that are congruent with their own identity.
Advantages and Disadvantages of Person-Centered Therapy
One key advantage of person-centered therapy is that it is highly acceptable to patients. In other words, people tend to find the supportive, flexible environment of this approach very rewarding. Furthermore, some of the themes of PCT translate well to other therapeutic approaches. For example, most therapists of any orientation find that clients respond well to being treated with nonjudgmental empathy. The main disadvantage to PCT, however, is that findings about its effectiveness are mixed. One possibility for this could be that the treatment is primarily based on nonspecific treatment factors. That is, rather than using therapeutic techniques that are specific to the patient and the mental problem (i.e., specific treatment factors), the therapy focuses on techniques that can be applied to anyone (e.g., establishing a good relationship with the patient) (Cuijpers et al., 2012; Friedli, King, Lloyd, & Horder, 1997). Similar to how “one-size-fits-all” doesn’t really fit every person, PCT uses the same practices for everyone, which may work for some people but not others. Further research is necessary to evaluate its utility as a therapeutic approach.
Cognitive Behavioral Therapy
Although both psychoanalysis and PCT are still used today, another therapy, cognitive-behavioral therapy (CBT), has gained more widespread support and practice. CBT refers to a family of therapeutic approaches whose goal is to alleviate psychological symptoms by changing the cognitions and behaviors that underlie them. The premise of CBT is that thoughts, behaviors, and emotions interact and contribute to various mental disorders. For example, let’s consider how a CBT therapist would view a patient who compulsively washes her hands for hours every day. First, the therapist would identify the patient’s maladaptive thought: “If I don’t wash my hands like this, I will get a disease and die.” The therapist then identifies how this maladaptive thought leads to a maladaptive emotion: the feeling of anxiety when her hands aren’t being washed. And finally, this maladaptive emotion leads to the maladaptive behavior: the patient washing her hands for hours every day.
CBT is a present-focused therapy (i.e., focused on the “now” rather than causes from the past, such as childhood relationships) that uses behavioral goals to improve one’s mental illness. Often, these behavioral goals involve between-session homework assignments. For example, the therapist may give the hand-washing patient a worksheet to take home; on this worksheet, the woman is to write down every time she feels the urge to wash her hands, how she deals with the urge, and what behavior she replaces that urge with. When the patient has her next therapy session, she and the therapist review her “homework” together. CBT is a relatively brief intervention of 12 to 16 weekly sessions, closely tailored to the nature of the psychopathology and treatment of the specific mental disorder.
And, as the empirical data show, CBT has proven to be highly efficacious for virtually all psychiatric illnesses (Hofmann, Asnaani, Vonk, Sawyer, & Fang, 2012).
History of Cognitive Behavioral Therapy
CBT developed from clinical work conducted in the mid-20th century by Dr. Aaron T. Beck, a psychiatrist, and Albert Ellis, a psychologist. Beck used the term automatic thoughts to refer to the thoughts depressed patients report experiencing spontaneously. He observed that these thoughts arise from three belief systems, or schemas: beliefs about the self, beliefs about the world, and beliefs about the future. Treatment initially focuses on identifying automatic thoughts (e.g., “If I don’t wash my hands constantly, I’ll get a disease”), testing their validity, and replacing maladaptive thoughts with more adaptive thoughts (e.g., “Washing my hands three times a day is sufficient to prevent a disease”). In later stages of treatment, the patient’s maladaptive schemas are examined and modified. Ellis (1957) took a comparable approach, in what he called rational emotive behavior therapy (REBT), which also encourages patients to evaluate their own thoughts about situations.
Techniques in CBT
Beck and Ellis strove to help patients identify maladaptive appraisals, or the untrue judgments and evaluations of certain thoughts. For example, if it’s your first time meeting new people, you may have the automatic thought, “These people won’t like me because I have nothing interesting to share.” That thought itself is not what’s troublesome; the appraisal (or evaluation) that it might have merit is what’s troublesome. The goal of CBT is to help people make adaptive, instead of maladaptive, appraisals (e.g., “I do know interesting things!”). This technique of reappraisal, or cognitive restructuring, is a fundamental aspect of CBT. With cognitive restructuring, it is the therapist’s job to help point out when a person has an inaccurate or maladaptive thought, so that the patient can either eliminate it or modify it to be more adaptive.
In addition to thoughts, though, another important treatment target of CBT is maladaptive behavior. Every time a person engages in maladaptive behavior (e.g., never speaking to someone in new situations), he or she reinforces the validity of the maladaptive thought, thus maintaining or perpetuating the psychological illness. In treatment, the therapist and patient work together to develop healthy behavioral habits (often tracked with worksheet-like homework), so that the patient can break this cycle of maladaptive thoughts and behaviors.
For many mental health problems, especially anxiety disorders, CBT incorporates what is known as exposure therapy. During exposure therapy, a patient confronts a problematic situation and fully engages in the experience instead of avoiding it. For example, imagine a man who is terrified of spiders. Whenever he encounters one, he immediately screams and panics. In exposure therapy, the man would be forced to confront and interact with spiders, rather than simply avoiding them as he usually does. The goal is to reduce the fear associated with the situation through extinction learning, a neurobiological and cognitive process by which the patient “unlearns” the irrational fear. For example, exposure therapy for someone terrified of spiders might begin with him looking at a cartoon of a spider, followed by him looking at pictures of real spiders, and later, him handling a plastic spider.
After weeks of this incremental exposure, the patient may even be able to hold a live spider. After repeated exposure (starting small and building one’s way up), the patient experiences less physiological fear and fewer maladaptive thoughts about spiders, breaking his tendency for anxiety and subsequent avoidance.
Advantages and Disadvantages of CBT
CBT interventions tend to be relatively brief, making them cost-effective for the average consumer. In addition, CBT is an intuitive treatment that makes logical sense to patients. It can also be adapted to suit the needs of many different populations. One disadvantage, however, is that CBT does involve significant effort on the patient’s part, because the patient is an active participant in treatment. Therapists often assign “homework” (e.g., worksheets for recording one’s thoughts and behaviors) between sessions to maintain the cognitive and behavioral habits the patient is working on. The greatest strength of CBT is the abundance of empirical support for its effectiveness. Studies have consistently found CBT to be equally or more effective than other forms of treatment, including medication and other therapies (Butler, Chapman, Forman, & Beck, 2006; Hofmann et al., 2012). For this reason, CBT is considered a first-line treatment for many mental disorders.
Focus Topic: Pioneers of CBT
The central notion of CBT is the idea that a person’s behavioral and emotional responses are causally influenced by one’s thinking. The Stoic Greek philosopher Epictetus is quoted as saying, “Men are not moved by things, but by the view they take of them.” Meaning, it is not the event per se, but rather one’s assumptions (including interpretations and perceptions) of the event that are responsible for one’s emotional response to it. Beck calls these assumptions about events and situations automatic thoughts (Beck, 1979), whereas Ellis (1962) refers to these assumptions as self-statements. The cognitive model assumes that these cognitive processes cause the emotional and behavioral responses to events or stimuli. This causal chain is illustrated in Ellis’s ABC model, in which A stands for the antecedent event, B stands for belief, and C stands for consequence. During CBT, the person is encouraged to carefully observe the sequence of events and the response to them, and then explore the validity of the underlying beliefs through behavioral experiments and reasoning, much like a detective or scientist.
Acceptance and Mindfulness-Based Approaches
Unlike the preceding therapies, which were developed in the 20th century, this next one was born out of age-old Buddhist and yoga practices. Mindfulness, a process that tries to cultivate a nonjudgmental, yet attentive, mental state, is a therapy that focuses on one’s awareness of bodily sensations, thoughts, and the outside environment. Whereas other therapies work to modify or eliminate these sensations and thoughts, mindfulness focuses on nonjudgmentally accepting them (Kabat-Zinn, 2003; Baer, 2003). For example, whereas CBT may actively confront and work to change a maladaptive thought, mindfulness therapy works to acknowledge and accept the thought, understanding that the thought is spontaneous and not what the person truly believes. There are two important components of mindfulness: (1) self-regulation of attention, and (2) orientation toward the present moment (Bishop et al., 2004).
Mindfulness is thought to improve mental health because it draws attention away from past and future stressors, encourages acceptance of troubling thoughts and feelings, and promotes physical relaxation.
Techniques in Mindfulness-Based Therapy
Psychologists have adapted the practice of mindfulness as a form of psychotherapy, generally called mindfulness-based therapy (MBT). Several types of MBT have become popular in recent years, including mindfulness-based stress reduction (MBSR) (e.g., Kabat-Zinn, 1982) and mindfulness-based cognitive therapy (MBCT) (e.g., Segal, Williams, & Teasdale, 2002). MBSR uses meditation, yoga, and attention to physical experiences to reduce stress. The hope is that reducing a person’s overall stress will allow that person to more objectively evaluate his or her thoughts. In MBCT, rather than reducing one’s general stress to address a specific problem, attention is focused on one’s thoughts and their associated emotions. For example, MBCT helps prevent relapses in depression by encouraging patients to evaluate their own thoughts objectively and without value judgment (Baer, 2003). Although cognitive behavioral therapy (CBT) may seem similar to this, it focuses on “pushing out” the maladaptive thought, whereas mindfulness-based cognitive therapy focuses on “not getting caught up” in it. MBCT techniques have been used to address a wide range of illnesses, including depression, anxiety, chronic pain, coronary artery disease, and fibromyalgia (Hofmann, Sawyer, Witt & Oh, 2010).
Mindfulness and acceptance—in addition to being therapies in their own right—have also been used as “tools” in other cognitive-behavioral therapies, particularly in dialectical behavior therapy (DBT) (e.g., Linehan, Armstrong, Suarez, Allmon, & Heard, 1991). DBT, often used in the treatment of borderline personality disorder, focuses on skills training. That is, it often employs mindfulness and cognitive behavioral therapy practices, but it also works to teach its patients “skills” they can use to correct maladaptive tendencies. One skill DBT teaches patients is called distress tolerance, or ways to cope with maladaptive thoughts and emotions in the moment. For example, people who feel an urge to cut themselves may be taught to snap a rubber band against their arm instead. The primary difference between DBT and CBT is that DBT employs techniques that address the symptoms of the problem (e.g., cutting oneself) rather than the problem itself (e.g., understanding the psychological motivation to cut oneself). CBT does not teach such skills training because of the concern that the skills—even though they may help in the short-term—may be harmful in the long-term, by maintaining maladaptive thoughts and behaviors.
DBT is founded on the perspective of a dialectical worldview. That is, rather than thinking of the world as “black and white,” or “only good and only bad,” it focuses on accepting that some things can have characteristics of both “good” and “bad.” So, in a case involving maladaptive thoughts, instead of teaching that a thought is entirely bad, DBT tries to help patients be less judgmental of their thoughts (as with mindfulness-based therapy) and encourages change through therapeutic progress, using cognitive-behavioral techniques as well as mindfulness exercises. Another form of treatment that also uses mindfulness techniques is acceptance and commitment therapy (ACT) (Hayes, Strosahl, & Wilson, 1999).
In this treatment, patients are taught to observe their thoughts from a detached perspective (Hayes et al., 1999). ACT encourages patients not to attempt to change or avoid thoughts and emotions they observe in themselves, but to recognize which are beneficial and which are harmful. However, the differences among ACT, CBT, and other mindfulness-based treatments are a topic of controversy in the current literature.
Advantages and Disadvantages of Mindfulness-Based Therapy
Two key advantages of mindfulness-based therapies are their acceptability and accessibility to patients. Because yoga and meditation are already widely known in popular culture, consumers of mental healthcare are often interested in trying related psychological therapies. Currently, psychologists have not come to a consensus on the efficacy of MBT, though growing evidence supports its effectiveness for treating mood and anxiety disorders. For example, one review of MBT studies for anxiety and depression found that mindfulness-based interventions generally led to moderate symptom improvement (Hofmann et al., 2010).
Emerging Treatment Strategies
With growth in research and technology, psychologists have been able to develop new treatment strategies in recent years. Often, these approaches focus on enhancing existing treatments, such as cognitive-behavioral therapies, through the use of technological advances. For example, internet- and mobile-delivered therapies make psychological treatments more available, through smartphones and online access. Clinician-supervised online CBT modules allow patients to access treatment from home on their own schedule—an opportunity particularly important for patients with less geographic or socioeconomic access to traditional treatments. Furthermore, smartphones help extend therapy to patients’ daily lives, allowing for symptom tracking, homework reminders, and more frequent therapist contact.
Another technology-based strategy is cognitive bias modification. Here, patients are given exercises, often through the use of video games, aimed at changing their problematic thought processes. For example, researchers might use a mobile app to train alcohol abusers to avoid stimuli related to alcohol. One version of this game flashes four pictures on the screen—three alcohol cues (e.g., a can of beer, the front of a bar) and one health-related image (e.g., someone drinking water). The goal is for the patient to tap the healthy picture as fast as s/he can. Games like these aim to target patients’ automatic, subconscious thoughts that may be difficult to direct through conscious effort. That is, by repeatedly tapping the healthy image, the patient learns to “ignore” the alcohol cues, so when those cues are encountered in the environment, they will be less likely to trigger the urge to drink. Approaches like these are promising because of their accessibility; however, they require further research to establish their effectiveness.
Yet another emerging treatment employs CBT-enhancing pharmaceutical agents. These are drugs used to improve the effects of therapeutic interventions. Based on research from animal experiments, researchers have found that certain drugs influence the biological processes known to be involved in learning. Thus, if people take these drugs while going through psychotherapy, they are better able to “learn” the techniques for improvement.
For example, the antibiotic d-cycloserine improves treatment for anxiety disorders by facilitating the learning processes that occur during exposure therapy. Ongoing research in this exciting area may prove to be quite fruitful.

Pharmacological Treatments

Up until this point, all the therapies we have discussed have been talk-based or meditative practices. However, psychiatric medications are also frequently used to treat mental disorders, including schizophrenia, bipolar disorder, depression, and anxiety disorders. Psychiatric drugs are commonly used, in part, because they can be prescribed by general medical practitioners, whereas effective psychotherapy requires a specially trained mental health professional to deliver. While drugs and CBT tend to be almost equally effective, choosing the best intervention depends on the disorder and individual being treated, as well as other factors—such as treatment availability and comorbidity (i.e., having multiple mental or physical disorders at once). Although many new drugs have been introduced in recent decades, there is still much we do not understand about their mechanisms in the brain. Further research is needed to refine our understanding of both pharmacological and behavioral treatments before we can make firm claims about their effectiveness.

Integrative and Eclectic Psychotherapy

In discussing therapeutic orientations, it is important to note that some clinicians incorporate techniques from multiple approaches, a practice known as integrative or eclectic psychotherapy. For example, a therapist may employ distress tolerance skills from DBT (to resolve short-term problems), cognitive reappraisal from CBT (to address long-standing issues), and mindfulness-based meditation from MBCT (to reduce overall stress). In fact, between 13% and 42% of therapists have identified their own approaches as integrative or eclectic (Norcross & Goldfried, 2005).

Conclusion

Throughout human history we have had to deal with mental illness in one form or another. Over time, several schools of thought have emerged for treating these problems. Although various therapies have been shown to work for specific individuals, cognitive behavioral therapy is currently the treatment most widely supported by empirical research. Still, practices like psychodynamic therapies, person-centered therapy, mindfulness-based treatments, and acceptance and commitment therapy have also shown success. And, with recent advances in research and technology, clinicians are able to enhance these and other therapies to treat more patients more effectively than ever before. However, what is important in the end is that people actually seek out mental health specialists to help them with their problems. One of the biggest deterrents to doing so is that people don’t understand what psychotherapy really entails. By understanding how current practices work, we can not only better educate people about how to get the help they need, but also continue to advance our treatments to be more effective in the future.

Outside Resources

Article: A personal account of the benefits of mindfulness-based therapy
https://www.theguardian.com/lifeandstyle/2014/jan/11/julie-myerson-mindfulness-based-cognitive-therapy
Article: The Effect of Mindfulness-Based Therapy on Anxiety and Depression: A Meta-Analytic Review
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2848393/
Video: An example of a person-centered therapy session.
Video: Carl Rogers, the founder of the humanistic, person-centered approach to psychology, discusses the position of the therapist in PCT.
Video: CBT (cognitive behavioral therapy) is one of the most common treatments for a range of mental health problems, including anxiety, depression, bipolar disorder, OCD, and schizophrenia. This animation explains the basics and how you can decide whether it’s best for you or not.
Web: An overview of the purpose and practice of cognitive behavioral therapy (CBT)
http://psychcentral.com/lib/in-depth-cognitive-behavioral-therapy/
Web: The history and development of psychoanalysis
http://www.freudfile.org/psychoanalysis/history.html

Discussion Questions

1. Psychoanalytic theory is no longer the dominant therapeutic approach, because it lacks empirical support. Yet many consumers continue to seek psychoanalytic or psychodynamic treatments. Do you think psychoanalysis still has a place in mental health treatment? If so, why?
2. What might be some advantages and disadvantages of technological advances in psychological treatment? What will psychotherapy look like 100 years from now?
3. Some people have argued that all therapies are about equally effective, and that they all effect change through common factors such as the involvement of a supportive therapist. Does this claim sound reasonable to you? Why or why not?
4. When choosing a psychological treatment for a specific patient, what factors besides the treatment’s demonstrated efficacy should be taken into account?

Vocabulary

Acceptance and commitment therapy
A therapeutic approach designed to foster nonjudgmental observation of one’s own mental processes.
Automatic thoughts
Thoughts that occur spontaneously; often used to describe problematic thoughts that maintain mental disorders.
Cognitive bias modification
Using exercises (e.g., computer games) to change problematic thinking habits.
Cognitive-behavioral therapy (CBT)
A family of approaches with the goal of changing the thoughts and behaviors that influence psychopathology.
Comorbidity
Describes a state of having more than one psychological or physical disorder at a given time.
Dialectical behavior therapy (DBT)
A treatment often used for borderline personality disorder that incorporates both cognitive-behavioral and mindfulness elements.
Dialectical worldview
A perspective in DBT that emphasizes the joint importance of change and acceptance.
Exposure therapy
A form of intervention in which the patient engages with a problematic (usually feared) situation without avoidance or escape.
Free association
In psychodynamic therapy, a process in which the patient reports all thoughts that come to mind without censorship, and these thoughts are interpreted by the therapist.
Integrative or eclectic psychotherapy
An approach combining techniques from multiple therapeutic orientations (e.g., CBT with psychoanalytic elements).
Mindfulness
A process that reflects a nonjudgmental, yet attentive, mental state.
Mindfulness-based therapy
A form of psychotherapy grounded in mindfulness theory and practice, often involving meditation, yoga, body scan, and other features of mindfulness exercises.
Person-centered therapy
A therapeutic approach focused on creating a supportive environment for self-discovery.
Psychoanalytic therapy
Sigmund Freud’s therapeutic approach focusing on resolving unconscious conflicts.
Psychodynamic therapy
Treatment applying psychoanalytic principles in a briefer, more individualized format.
Reappraisal, or cognitive restructuring
The process of identifying, evaluating, and changing maladaptive thoughts in psychotherapy.
Schema
A mental representation or set of beliefs about something.
Unconditional positive regard
In person-centered therapy, an attitude of warmth, empathy, and acceptance adopted by the therapist in order to foster feelings of inherent worth in the patient.
By Richard Milich and Walter Roberts, University of Kentucky

Attention-Deficit/Hyperactivity Disorder (ADHD) is a psychiatric disorder that is most often diagnosed in school-aged children. Many children with ADHD find it difficult to focus on tasks and follow instructions, and these characteristics can lead to problems in school and at home. How children with ADHD are diagnosed and treated is a topic of controversy, and many people, including scientists and nonscientists alike, hold strong beliefs about what ADHD is and how people with the disorder should be treated. This module will familiarize the reader with the scientific literature on ADHD. First, we will review how ADHD is diagnosed in children, with a focus on how mental health professionals distinguish between ADHD and normal behavior problems in childhood. Second, we will describe what is known about the causes of ADHD. Third, we will describe the treatments that are used to help children with ADHD and their families. The module will conclude with a brief discussion of how we expect the diagnosis and treatment of ADHD to change over the coming decades.

learning objectives

• Distinguish childhood behavior disorders from phases of typical child development.
• Describe the factors contributing to Attention-Deficit/Hyperactivity Disorder (ADHD).
• Understand the controversies surrounding the legitimacy and treatment of childhood behavior disorders.
• Describe the empirically supported treatments for Attention-Deficit/Hyperactivity Disorder (ADHD).

Introduction

Childhood is a stage of life characterized by rapid and profound development. Starting at birth, children develop the skills necessary to function in the world around them at a rate that is faster than at any other time in life. This is no small accomplishment! By the end of their first decade of life, most children have mastered the complex cognitive operations required to comply with rules, such as stopping themselves from acting impulsively, paying attention to parents and teachers in the face of distraction, and sitting still despite boredom. Indeed, acquiring self-control is an important developmental task for children (Mischel, Shoda, & Rodriguez, 1989), because they are expected to comply with directions from adults, stay on task at school, and play appropriately with peers. For children with Attention-Deficit/Hyperactivity Disorder (ADHD), however, exercising self-control is a unique challenge. These children, oftentimes despite their best intentions, struggle to comply with adults’ instructions, and they are often labeled as “problem children” and “rule breakers.” Historically, people viewed these children as willfully noncompliant due to a moral or motivational defect (Still, 1902). However, scientists now know that the noncompliance observed in children with ADHD can be explained by a number of factors, including neurological dysfunction. The goal of this module is to review the classification, causes, consequences, and treatment of ADHD. ADHD is somewhat unique among the psychiatric disorders in that most people hold strong opinions about the disorder, perhaps due to its more controversial qualities. When applicable, we will discuss some of the controversial beliefs held by social critics and laypeople, as well as by scientists who study the disorder. Our hope is that a discussion of these controversies will allow you to reach your own conclusions about the legitimacy of the disorder.

Why Diagnose Children’s Behavior Problems?
When a family is referred to a mental health professional for help dealing with their child’s problematic behaviors, the clinician’s first goal is to identify the nature and cause of the child’s problems. Accurately diagnosing children’s behavior problems is an important step in the intervention process, because a child’s diagnosis can guide clinical decision making. Childhood behavior problems often arise from different causes, require different treatment methods, and have different developmental courses. Arriving at a diagnosis will allow the clinician to make inferences about how each child will respond to different treatments and provide predictive information to the family about how the disorder will affect the child as he or she develops. Despite the utility of the current diagnostic system, the practice of diagnosing children’s behavior problems is controversial. Many adults feel strongly that labeling children as “disordered” is stigmatizing and harmful to children’s self-concept. There is some truth in this concern. One study found that children have more negative attitudes toward a play partner if they are led to believe that their partner has ADHD, regardless of whether or not their partner actually has the disorder (Harris, Milich, Corbitt, Hoover, & Brady, 1992). Others have criticized the use of the diagnostic system because they believe it pathologizes normal behavior in children. Despite these criticisms, the diagnostic system has played a central role in research and treatment of child behavior disorders, and it is unlikely to change substantially in the near future. This section will describe ADHD as a diagnostic category and discuss controversies surrounding the legitimacy of this disorder. ADHD is the most commonly diagnosed childhood behavior disorder. It affects 3% to 7% of children in the United States (American Psychiatric Association, 2000), and approximately 65% of children diagnosed with ADHD will continue to experience symptoms as adults (Faraone, Biederman, & Mick, 2006). The core symptoms of ADHD are organized into two clusters: hyperactivity/impulsivity and inattention. The hyperactive symptom cluster describes children who are perpetually in motion, even during times when they are expected to be still, such as during class or in the car. The impulsive symptom cluster describes difficulty delaying responses and acting without considering the repercussions of one’s behavior. Hyperactive and impulsive symptoms are closely related, and boys are more likely than girls to experience symptoms from this cluster (Hartung & Widiger, 1998). Inattentive symptoms describe difficulty with organization and task follow-through, as well as a tendency to be distracted by external stimuli. Two children diagnosed with ADHD can have very different symptom presentations. In fact, children can be diagnosed with different subtypes of the disorder (i.e., Combined Type, Predominantly Inattentive Type, or Predominantly Hyperactive-Impulsive Type) according to the number of symptoms they have in each cluster.

Are These Diagnoses Valid?

Many laypeople and social critics argue that ADHD is not a “real” disorder. These individuals claim that children with ADHD are only “disordered” because parents and school officials have trouble managing their behavior.
These criticisms raise an interesting question about what constitutes a psychiatric disorder in children: How do scientists distinguish between clinically significant ADHD symptoms and normal instances of childhood impulsivity, hyperactivity, and inattention? After all, many 4-year-old boys are hyperactive and cannot focus on a task for very long. To address this issue, several criteria are used to distinguish between normal and disordered behavior:

1. The symptoms must significantly impair the child’s functioning in important life domains (e.g., school, home).
2. The symptoms must be inappropriate for the child’s developmental level.

One goal of this module will be to examine whether ADHD meets the criteria of a “true” disorder. The first criterion states that children with ADHD should show impairment in major functional domains. This is certainly true for children with ADHD. These children have lower academic achievement compared with their peers. They are more likely to repeat a grade or be suspended and less likely to graduate from high school (Loe & Feldman, 2007). Children with ADHD are often unpopular among their peers, and many of these children are actively disliked and socially rejected (Landau, Milich, & Diener, 1998). Children with ADHD are likely to experience comorbid psychological problems such as learning disorders, depression, anxiety, and oppositional defiant disorder. As they grow up, adolescents and adults with ADHD are at risk of abusing alcohol and other drugs (Molina & Pelham, 2003) and of experiencing other adverse outcomes (see Focus Topic 1). In sum, there is sufficient evidence to conclude that children diagnosed with ADHD are significantly impaired by their symptoms.

Focus Topic 1: Adult outcomes of children with ADHD

Children with ADHD often continue to experience symptoms of the disorder as adults. Historically, this fact was not recognized by the medical community; instead, it was believed that children “matured out” of their symptoms as they entered adulthood. Fortunately, opinions have changed over time, and it is now generally accepted that ADHD can be present among adults. A recent prevalence estimate suggests that 4.4% of adults in the United States meet criteria for ADHD (Kessler et al., 2006). This study also found that the majority of adults with ADHD are not receiving treatment for their disorder. Adult ADHD, if left untreated, can cause numerous negative outcomes, including:

• Depression and poor self-concept, personality disorder, and other psychiatric comorbidity (Kessler et al., 2006)
• Substance abuse (Molina & Pelham, 2003)
• Poor work performance, termination from jobs, chronic unemployment, and poor academic achievement (Barkley, Fischer, Smallish, & Fletcher, 2006)
• Divorce and problems with interpersonal relationships (Biederman et al., 2006)
• High-risk sexual behaviors and early parenthood (Barkley et al., 2006; Flory, Molina, Pelham, Gnagy, & Smith, 2006)
• Impairments in driving ability (Weafer, Fillmore, & Milich, 2009)
• Obesity (Cortese et al., 2008)

Despite the list of negative outcomes associated with adult ADHD, adults with the disorder are not doomed to live unfulfilling lives of limited accomplishment. Many adults with ADHD have benefited from treatment and are able to overcome their symptoms. For example, pharmacological treatment of adult ADHD has been shown to reduce the risk of criminal behavior (Lichtenstein et al., 2012).
Others have succeeded by avoiding careers in which their symptoms would be particularly problematic (e.g., those with heavy organizational demands). In any case, it is important that people with ADHD are identified and treated early, because early treatment predicts more positive outcomes in adulthood (Kessler et al., 2006). It is also important to determine that a child’s symptoms are not caused by normal patterns of development. Many of the behaviors that are diagnostic of ADHD in some children would be considered developmentally appropriate for a younger child. This is true for many psychological and psychiatric disorders in childhood. For example, bedwetting is quite common in 3-year-old children; at this age, most children have not gained control over nighttime urination. For this reason, a 3-year-old child who wets the bed would not be diagnosed with enuresis (i.e., the clinical term for chronic bedwetting), because his or her behavior is developmentally appropriate. Bedwetting in an 8-year-old child, however, is developmentally inappropriate. At this age, children are expected to remain dry overnight, and failure to master this skill would prevent children from sleeping over at friends’ houses or attending overnight camps. A similar example of developmentally appropriate versus inappropriate hyperactivity and noncompliance is provided in Focus Topic 2.

Focus Topic 2: Two children referred for problems with noncompliance and hyperactivity

Case 1 - Michael

Michael, a 4-year-old boy, was referred to a child psychologist to be evaluated for ADHD. His parents reported that Michael would not comply with their instructions. They also complained that Michael would not remain seated during “quality time” with his father. The evaluating psychologist interviewed the family, and by all accounts Michael was noncompliant and often left his seat. Specifically, when Michael’s mother asked him to prepare his preschool lunch, Michael would leave the kitchen and play with his toys soon after opening his lunch box. Further, the psychologist found that quality time involved Michael and his father sitting down for several hours to watch movies. In other settings, such as preschool, Michael was compliant with his teacher’s requests and no more active than his peers. In this case, Michael’s parents held unrealistic expectations for a child at Michael’s developmental level. The psychologist would likely educate Michael’s parents about normative child development rather than diagnosing Michael with ADHD.

Case 2 - Jake

Jake, a 10-year-old boy, was referred to the same psychologist as Michael. Jake’s mother was concerned because Jake was not getting ready for school on time. Jake also had trouble remaining seated during dinner, which interrupted mealtime for the rest of the family. The psychologist found that in the morning, Jake would complete one or two steps of his routine before he became distracted and switched activities, despite his mother’s constant reminders. During dinnertime, Jake would leave his seat between 10 and 15 times over the course of the meal. Jake’s teachers were worried because Jake was only able to complete 50% of his homework. Further, his classmates would not pick Jake for team sports during recess because he often became distracted and wandered off during the game. In this case, Jake’s symptoms would not be considered developmentally appropriate for a 10-year-old child. Further, his symptoms caused him to experience impairment at home and school.
Unlike Michael, Jake probably would be diagnosed with ADHD.

Why Do Some Children Develop Behavior Disorders?

The reasons that some children develop ADHD are complex, and it is generally recognized that a single cause is insufficient to explain why an individual child does or does not have the disorder. Researchers have attempted to identify risk factors that predispose a child to develop ADHD. These risk factors range in scope from genetic (e.g., specific gene polymorphisms) to familial (e.g., poor parenting) to cultural (e.g., low socioeconomic status). This section will identify some of the risk factors that are thought to contribute to ADHD. It will conclude by reviewing some of the more controversial ideas about the causes of ADHD, such as poor parenting and children’s diets, and some of the evidence pertaining to these causes. Most experts believe that genetic and neurophysiological factors cause the majority of ADHD cases. Indeed, ADHD is primarily a genetic disorder—twin studies find that whether or not a child develops ADHD is due in large part (75%) to genetic variations (Faraone et al., 2005). Further, children with a family history of ADHD are more likely to develop ADHD themselves (Faraone & Biederman, 1994). Specific genes that have been associated with ADHD are linked to neurotransmitters such as dopamine and serotonin. In addition, neuroimaging studies have found that children with ADHD show reduced brain volume in some regions of the brain, such as the prefrontal cortex, the corpus callosum, the anterior cingulate cortex, the basal ganglia, and the cerebellum (Seidman, Valera, & Makris, 2005). Among their other functions, these regions of the brain are implicated in organization, impulse control, and motor activity, so the reduced volume of these structures in children with ADHD may cause some of their symptoms. Although genetics appear to be a main cause of ADHD, recent studies have shown that environmental risk factors may cause a minority of ADHD cases. Many of these environmental risk factors increase the risk for ADHD by disrupting early development and compromising the integrity of the central nervous system. Environmental influences such as low birth weight, malnutrition, and maternal alcohol and nicotine use during pregnancy can increase the likelihood that a child will develop ADHD (Mick, Biederman, Faraone, Sayer, & Kleinman, 2002). Additionally, recent studies have shown that exposure to environmental toxins, such as lead and pesticides, early in a child’s life may also increase the risk of developing ADHD (Nigg, 2006).

Controversies on Causes of ADHD

Controversial explanations for the development of ADHD have risen and fallen in popularity since the 1960s. Some of these ideas arise from cultural folklore; others can be traced to “specialists” trying to market an easy fix for ADHD based on their proposed cause. Some other ideas contain a kernel of truth but have been falsely cast as causing the majority of ADHD cases. Some critics have proposed that poor parenting is a major cause of ADHD. This explanation is popular because it is intuitively appealing—one can imagine how a child who is not being disciplined at home may be noncompliant in other settings. Although it is true that parents of children with ADHD use discipline less consistently, and a lack of structure and discipline in the home can exacerbate symptoms in children with ADHD (Campbell, 2002), it is unlikely that poor parenting alone causes ADHD in the first place.
To the contrary, research suggests that noncompliance and impulsivity on the child’s part can cause caregivers to use discipline less effectively. In a classic series of studies, Cunningham and Barkley (1979) showed that mothers of children with ADHD were less attentive to their children and imposed more structure on their playtime relative to mothers of typically developing children. However, these researchers also showed that when the children were given stimulant medication, their compliance increased and their mothers’ parenting behavior improved to the point where it was comparable to that of the mothers of children without ADHD (Barkley & Cunningham, 1979). This research suggests that instead of poor parenting causing children to develop ADHD, it is the stressful effects of managing an impulsive child that cause parenting problems in their caregivers. One can imagine how raising a child with ADHD could be stressful for parents. In fact, one study showed that a brief interaction with an impulsive and noncompliant child caused parents to increase their alcohol consumption—presumably these parents were drinking to cope with the stress of dealing with the impulsive child (Pelham et al., 1997). It is, therefore, important to consider the reciprocal effects of noncompliant children on parenting behavior, rather than assuming that parenting ability has a unidirectional effect on child behavior. Other purported causes of ADHD are dietary. For example, it was long believed that excessive sugar intake causes children to become hyperactive. This myth has largely been disproven (Milich, Wolraich, & Lindgren, 1986). However, other diet-oriented explanations for ADHD, such as sensitivity to certain food additives, have been proposed (Feingold, 1976). These theories have received somewhat more support than the sugar hypothesis (Pelsser et al., 2011). In fact, the possibility that certain food additives may cause hyperactivity in children led to a ban on several artificial food colorings in the United Kingdom, although the Food and Drug Administration rejected similar measures in the United States. Even if artificial food dyes do cause hyperactivity in a subgroup of children, research does not support these food additives as a primary cause of ADHD. Further, research support for elimination diets as a treatment for ADHD has been inconsistent at best. In sum, scientists are still working to determine what causes children to develop ADHD, and despite substantial progress over the past four decades, there are still many unanswered questions. In most cases, ADHD is probably caused by a combination of genetic and environmental factors. For example, a child with a genetic predisposition to ADHD may develop the disorder after his or her mother uses tobacco during her pregnancy, whereas a child without the genetic predisposition may not develop the disorder in the same environment. Fortunately, the causes of ADHD are relatively unimportant for the families of children with ADHD who wish to receive treatment, because what caused the disorder for an individual child generally does not influence how it is treated.

Methods of Treating ADHD in Children

There are several types of evidence-based treatment available to families of children with ADHD. The type of treatment that might be used depends on many factors, including the child’s diagnosis and treatment history, as well as parent preference.
To treat children with less severe noncompliance problems, parents can be trained to systematically use contingency management (i.e., rewards and punishments) to manage their children’s behavior more effectively (Kazdin, 2005). For children with ADHD, however, more intensive treatments are often necessary.

Medication

The most common method of treating ADHD is to prescribe stimulant medications such as Adderall™. These medications treat many of the core symptoms of ADHD—treated children show improved impulse control, time-on-task, and compliance with adults, and decreased hyperactivity and disruptive behavior. However, stimulant medication also has negative side effects, such as growth and appetite suppression, increased blood pressure, insomnia, and changes in mood (Barkley, 2006). Although these side effects can be unpleasant for children, they can often be avoided with careful monitoring and dosage adjustments. Opinions differ on whether stimulants should be used to treat children with ADHD. Proponents argue that stimulants are relatively safe and effective, and that untreated ADHD poses a much greater risk to children (Barkley, 2006). Critics argue that because many stimulant medications are similar to illicit drugs, such as cocaine and methamphetamine, long-term use may cause cardiovascular problems or predispose children to abuse illicit drugs. However, longitudinal studies have shown that people taking these medications are not more likely to experience cardiovascular problems or to abuse drugs (Biederman, Wilens, Mick, Spencer, & Faraone, 1999; Cooper et al., 2011). On the other hand, it is not entirely clear how long-term stimulant treatment can affect the brain, particularly in adults who have been medicated for ADHD since childhood. Finally, critics of psychostimulant medication have proposed that stimulants are increasingly being used to manage energetic but otherwise healthy children. It is true that the percentage of children prescribed stimulant medication has increased since the 1980s. This increase in use is not unique to stimulant medication, however. Prescription rates have similarly increased for most types of psychiatric medication (Olfson, Marcus, Weissman, & Jensen, 2002). As parents and teachers become more aware of ADHD, one would expect that more children with ADHD will be identified and treated with stimulant medication. Further, the percentage of children in the United States being treated with stimulant medication is lower than the estimated prevalence of children with ADHD in the general population (Nigg, 2006).

Parent Management Training

Parenting children with ADHD can be challenging. Parents of these children are understandably frustrated by their children’s misbehavior. Standard discipline tactics, such as warnings and privilege removal, can feel ineffective for children with ADHD. This often leads to ineffective parenting, such as yelling at or ridiculing the child with ADHD. This cycle can leave parents feeling hopeless and children with ADHD feeling alienated from their family. Fortunately, parent management training can provide parents with a number of tools to cope with and effectively manage their child’s impulsive and oppositional behavior. Parent management training teaches parents to use immediate, consistent, and powerful consequences (i.e., rewards and punishments), because children with ADHD respond well to these types of behavioral contingencies (Luman, Oosterlaan, & Sergeant, 2005).
Other, more intensive psychosocial treatments use similar behavioral principles in summer camp–based settings (Pelham, Fabiano, Gnagy, Greiner, & Hoza, 2004), and school-based intervention programs are becoming more popular. A school-based intervention program for ADHD is described in Focus Topic 3.

Focus Topic 3: Treating ADHD in Schools

Succeeding at school is one of the most difficult challenges faced by children with ADHD and their parents. Teachers expect students to attend to lessons, complete lengthy assignments, and comply with rules for approximately seven hours every day. One can imagine how a child with hyperactive and inattentive behaviors would struggle under these demands, and this mismatch can lead to frustration for the student and his or her teacher. Disruptions caused by the child with ADHD can also distract and frustrate peers. Succeeding at school is an important goal for children, so researchers have developed and validated intervention strategies based on behavioral principles of contingency management that can help children with ADHD adhere to rules in the classroom (described in DuPaul & Stoner, 2003). Illustrative characteristics of an effective school-based contingency management system are described below:

Token reinforcement program
This program allows a student to earn tokens (points, stars, etc.) by meeting behavioral goals and not breaking rules. These tokens act as secondary reinforcers because they can be redeemed for privileges or goods. Parents and teachers work with the students to identify problem behaviors and create concrete behavioral goals. For example, if a student is disruptive during silent reading time, then a goal might be for him or her to remain seated for at least 80% of reading time. Token reinforcement programs are most effective when tokens are provided for appropriate behavior and removed for inappropriate behavior.

Time out
Time out can be an effective punishment when used correctly. Teachers should place a student in time out only when the student fails to respond to token removal or engages in a severely disruptive behavior (e.g., physical aggression). When placed in time out, the student should not have access to any type of reinforcement (e.g., toys, social interaction), and the teacher should monitor the student’s behavior throughout the time out.

Daily report card
The teacher keeps track of whether or not the student meets his or her goals and records this information on a report card. This information is sent home with the student each day so parents can integrate the student’s performance at school into a home-based contingency management program.

Educational services and accommodations
Students with ADHD often show deficits in specific academic skills (e.g., reading skills, math skills), and these deficits can be improved through direct intervention. Students with ADHD may spend several hours each week working one-on-one with an educator to improve their academic skills. Environmental accommodations can also help a student with ADHD be successful. For example, a student who has difficulty focusing during a test can be allowed extra time in a low-distraction setting.

What Works Best? The Multimodal Treatment Study

Recently, a large-scale study, the Multimodal Treatment Study (MTA) of Children with ADHD, compared pharmacological and behavioral treatment of ADHD (MTA Cooperative Group, 1999).
This study compared the outcomes of children with ADHD in four different treatment conditions: standard community care, intensive behavioral treatment, stimulant medication management, and the combination of intensive behavioral treatment and stimulant medication. In terms of core symptom relief, stimulant medication was the most effective treatment, and combined treatment was no more effective than stimulant medication alone (MTA Cooperative Group, 1999). Behavioral treatment was advantageous in other ways, however. For example, children who received combined treatment were less disruptive at school than children receiving stimulant medication alone (Hinshaw et al., 2000). Other studies have found that children who receive behavioral treatment require lower doses of stimulant medication to achieve the desired outcomes (Pelham et al., 2005). This is important because children are better able to tolerate lower doses of stimulant medication. Further, parents report being more satisfied with treatment when behavioral management is included as a component in the program (Jensen et al., 2001). In sum, stimulant medication and behavioral treatment each have advantages and disadvantages that complement the other, and the best outcomes likely occur when both forms of treatment are used to improve children’s behavior.

The Future of ADHD

It is difficult to predict the future; however, based on trends in research and public discourse, we can predict how the field may change as time progresses. This section will discuss two areas of research and public policy that will shape how we understand and treat ADHD in the coming decades.

Controlling Access to Stimulant Medication

It is no secret that many of the drugs used to treat ADHD are popular drugs of abuse among high school and college students, and this problem seems to be getting worse. The rate of illicit stimulant use has steadily risen over the past several decades (Teter, McCabe, Cranford, Boyd, & Guthrie, 2005), and it is probably not a coincidence that prescription rates for stimulant medication have increased during the same time period (Setlik, Bond, & Ho, 2009). Students who abuse stimulants often report doing so because the drugs act as an academic performance enhancer by boosting alertness and concentration. Although they may enhance performance in the short term, nonmedical use of these drugs can lead to dependence and other adverse health consequences, especially when they are taken in ways other than prescribed (e.g., crushed and snorted) (Volkow & Swanson, 2003). Stimulants can be particularly dangerous when they are taken without supervision from a physician, because this may lead to adverse drug interactions or side effects. Because this increase in prescription stimulant abuse represents a threat to public health, an important goal for policy makers will be to reduce the availability of prescription stimulants to those who would use them for nonmedical reasons. One of the first steps for addressing prescription stimulant abuse will be understanding how illicit users gain access to medication. Probably the most common method of obtaining stimulants is through drug diversion. The majority of college students who abuse stimulants report obtaining them from peers with valid prescriptions (McCabe & Boyd, 2005). Another way that would-be abusers may gain access to medication is by malingering (i.e., faking) symptoms of ADHD (Quinn, 2003).
These individuals will knowingly exaggerate their symptoms to a physician in order to obtain a prescription. Other sources of illicit prescription drugs have been identified (e.g., pharmacy websites) (Califano, 2004), but more research is needed to understand how much these sources contribute to the problem. As we gain an understanding of how people gain access to illicit medication, policy makers and researchers can make efforts to curtail the rate of stimulant misuse. For example, because drug diversion is a major source of illicit stimulants, policymakers have enacted prescription monitoring programs to keep track of patients’ prescription-seeking behavior (Office of Drug Control Policy, 2011), and, in some cases, patients are required to pass drug screens before receiving their prescriptions. To address malingering, researchers are working to develop psychological tests that can identify individuals who are faking symptoms (Jasinski et al., 2011). Finally, pharmacologists are working to develop stimulant medications that do not carry the same risk of abuse as the currently available drugs (e.g., lisdexamfetamine) (Biederman et al., 2007). Although all of these measures will reduce illicit users’ access to stimulant medication, it is important to consider how the policies will affect access among people who need these medications to treat their ADHD symptoms. Prescription tracking programs may reduce physicians’ willingness to prescribe stimulants out of fear of being investigated by law enforcement. Patients with ADHD who have comorbid substance abuse problems may be denied access to stimulant medication because they are considered at high risk for drug diversion. Similarly, lengthy psychological evaluations to assess for malingering and mandated drug screenings may be prohibitively expensive for less affluent individuals with ADHD. These measures to reduce illicit drug use are necessary from a public health perspective, but as we move forward and enact policies to reduce stimulant abuse, it will be equally important to consider the impact of such legislation on patients’ access to treatment.

The Role of Neuroscience and Behavioral Genetics in Understanding ADHD

Much of the research on ADHD has been conducted to answer several deceptively complex questions: What causes ADHD? How are people with ADHD different from their typically developing peers? How can ADHD be prevented or treated? Historically, our tools for answering these questions were limited to observing outward human behavior, and our ability to ask questions about the physiology of ADHD was severely limited by the technology of the time. In the past two decades, however, rapid advances in technology (e.g., functional magnetic resonance imaging, genetic analysis) have allowed us to probe the physiological bases of human behavior. An exciting application of this technology is that we are able to extend our understanding of ADHD beyond basic behavior; we are learning about the underlying neurophysiology and genetics of the disorder. As we gain a fuller understanding of ADHD, we may be able to apply this knowledge to improve prevention and treatment of the disorder. Knowledge of the underlying physiology of ADHD may guide efforts to develop new nonstimulant medications, which may not carry the side effects or abuse potential of traditional stimulants. Similarly, these advances may improve our ability to diagnose ADHD.
Although it is extremely unlikely that a perfectly accurate genetic or neuroimaging test for ADHD will ever be developed (Thome et al., 2012), such procedures could be used in conjunction with behavioral evaluation and questionnaires to improve diagnostic accuracy. Finally, identifying genetic traits that predispose children to develop ADHD may allow physicians to use targeted prevention programs that could reduce the chances that children at risk for developing the disorder will experience symptoms.

Discussion Questions

1. Does ADHD meet the definition of a psychiatric disorder?
2. Explain the difference between developmentally appropriate and developmentally inappropriate behavior problems.
3. Do you believe that it is ethical to prescribe stimulant medication to children? Why or why not? What are the risks associated with withholding stimulant medication from children with ADHD?
4. How should society balance the need to treat individuals with ADHD using stimulants with public health concerns about the abuse of these same medications?

Vocabulary

Contingency management
A reward or punishment that systematically follows a behavior. Parents can use contingencies to modify their children’s behavior.
Drug diversion
When a drug that is prescribed to treat a medical condition is given to another individual who seeks to use the drug illicitly.
Malingering
Fabrication or exaggeration of medical symptoms to achieve secondary gain (e.g., receiving medication, avoiding school).
Oppositional defiant disorder
A childhood behavior disorder that is characterized by stubbornness, hostility, and behavioral defiance. This disorder is highly comorbid with ADHD.
Parent management training
A treatment for childhood behavior problems that teaches parents how to use contingencies to more effectively manage their children’s behavior.
Pathologize
To define a trait or collection of traits as medically or psychologically unhealthy or abnormal.
By David H. Barlow and Kristen K. Ellard, Boston University, Massachusetts General Hospital, Harvard Medical School

Anxiety is a natural part of life and, at normal levels, helps us to function at our best. However, for people with anxiety disorders, anxiety is overwhelming and hard to control. Anxiety disorders develop out of a blend of biological (genetic) and psychological factors that, when combined with stress, may lead to the development of these disorders. Primary anxiety-related diagnoses include generalized anxiety disorder, panic disorder, specific phobia, social anxiety disorder (social phobia), posttraumatic stress disorder, and obsessive-compulsive disorder. In this module, we summarize the main clinical features of each of these disorders and discuss their similarities and differences with everyday experiences of anxiety.

learning objectives

• Understand the relationship between anxiety and anxiety disorders.
• Identify key vulnerabilities for developing anxiety and related disorders.
• Identify main diagnostic features of specific anxiety-related disorders.
• Differentiate between disordered and non-disordered functioning.

Introduction

What is anxiety? Most of us feel some anxiety almost every day of our lives. Maybe you have an important test coming up for school. Or maybe there’s that big game next Saturday, or that first date with someone new you are hoping to impress. Anxiety can be defined as a negative mood state that is accompanied by bodily symptoms such as increased heart rate, muscle tension, a sense of unease, and apprehension about the future (APA, 2013; Barlow, 2002). Anxiety is what motivates us to plan for the future, and in this sense, anxiety is actually a good thing. It’s that nagging feeling that motivates us to study for that test, practice harder for that game, or be at our very best on that date. But some people experience anxiety so intensely that it is no longer helpful or useful. They may become so overwhelmed and distracted by anxiety that they actually fail their test, fumble the ball, or spend the whole date fidgeting and avoiding eye contact. If anxiety begins to interfere in a person’s life in a significant way, it is considered a disorder. Anxiety and closely related disorders emerge from “triple vulnerabilities,” a combination of biological, psychological, and specific factors that increase our risk for developing a disorder (Barlow, 2002; Suárez, Bennett, Goldstein, & Barlow, 2009). Biological vulnerabilities refer to specific genetic and neurobiological factors that might predispose someone to develop anxiety disorders. No single gene directly causes anxiety or panic, but our genes may make us more susceptible to anxiety and influence how our brains react to stress (Drabant et al., 2012; Gelernter & Stein, 2009; Smoller, Block, & Young, 2009). Psychological vulnerabilities refer to the influences that our early experiences have on how we view the world. If we were confronted with unpredictable stressors or traumatic experiences at younger ages, we may come to view the world as unpredictable and uncontrollable, even dangerous (Chorpita & Barlow, 1998; Gunnar & Fisher, 2006). Specific vulnerabilities refer to how our experiences lead us to focus and channel our anxiety (Suárez et al., 2009). If we learned that physical illness is dangerous, maybe through witnessing our family’s reaction whenever anyone got sick, we may focus our anxiety on physical sensations.
If we learned that disapproval from others has negative, even dangerous consequences, such as being yelled at or severely punished for even the slightest offense, we might focus our anxiety on social evaluation. If we learn that the “other shoe might drop” at any moment, we may focus our anxiety on worries about the future. None of these vulnerabilities directly causes anxiety disorders on its own—instead, when all of these vulnerabilities are present and we experience some triggering life stress, an anxiety disorder may be the result (Barlow, 2002; Suárez et al., 2009). In the next sections, we will briefly explore each of the major anxiety-based disorders, found in the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) (APA, 2013).

Generalized Anxiety Disorder

Most of us worry some of the time, and this worry can actually be useful in helping us to plan for the future or make sure we remember to do something important. Most of us can set aside our worries when we need to focus on other things or stop worrying altogether whenever a problem has passed. However, for someone with generalized anxiety disorder (GAD), these worries become difficult, or even impossible, to turn off. They may find themselves worrying excessively about a number of different things, both minor and catastrophic. Their worries also come with a host of other symptoms such as muscle tension, fatigue, agitation or restlessness, irritability, difficulties with sleep (either falling asleep, staying asleep, or both), or difficulty concentrating. The DSM-5 criteria specify that at least six months of excessive anxiety and worry of this type must be ongoing, happening more days than not for a good proportion of the day, to receive a diagnosis of GAD. About 5.7% of the population has met criteria for GAD at some point during their lifetime (Kessler, Berglund, et al., 2005), making it one of the most common anxiety disorders (see Table 1). What makes a person with GAD worry more than the average person? Research shows that individuals with GAD are more sensitive and vigilant toward possible threats than people who are not anxious (Aikins & Craske, 2001; Barlow, 2002; Bradley, Mogg, White, Groom, & de Bono, 1999). This may be related to early stressful experiences, which can lead to a view of the world as an unpredictable, uncontrollable, and even dangerous place. Some have suggested that people with GAD worry as a way to gain some control over these otherwise uncontrollable or unpredictable experiences and to guard against uncertain outcomes (Dugas, Gagnon, Ladouceur, & Freeston, 1998). By repeatedly going through all of the possible “What if?” scenarios in their mind, the person might feel like they are less vulnerable to an unexpected outcome, giving them the sense that they have some control over the situation (Wells, 2002). Others have suggested that people with GAD worry as a way to avoid feeling distressed (Borkovec, Alcaine, & Behar, 2004). For example, Borkovec and Hu (1990) found that those who worried when confronted with a stressful situation had less physiological arousal than those who didn’t worry, maybe because the worry “distracted” them in some way. The problem is, all of this “What if?”-ing doesn’t get the person any closer to a solution or an answer and, in fact, might take them away from important things they should be paying attention to in the moment, such as finishing an important project.
Many of the catastrophic outcomes people with GAD worry about are very unlikely to happen, so when the catastrophic event doesn’t materialize, the act of worrying gets reinforced (Borkovec, Hazlett-Stevens, & Diaz, 1999). For example, if a mother spends all night worrying about whether her teenage daughter will get home safe from a night out and the daughter returns home without incident, the mother could easily attribute her daughter’s safe return to her successful “vigil.” What the mother hasn’t learned is that her daughter would have returned home just as safe if she had been focusing on the movie she was watching with her husband, rather than being preoccupied with worries. In this way, the cycle of worry is perpetuated, and, subsequently, people with GAD often miss out on many otherwise enjoyable events in their lives.

Panic Disorder and Agoraphobia

Have you ever been in a near-accident or been taken by surprise in some way? You may have felt a flood of physical sensations, such as a racing heart, shortness of breath, or tingling sensations. This alarm reaction is called the “fight or flight” response (Cannon, 1929) and is your body’s natural reaction to fear, preparing you to either fight or escape in response to threat or danger. It’s likely you weren’t too concerned with these sensations, because you knew what was causing them. But imagine if this alarm reaction came “out of the blue,” for no apparent reason, or in a situation in which you didn’t expect to be anxious or fearful. This is called an “unexpected” panic attack or a false alarm. Because there is no apparent reason or cue for the alarm reaction, you might react to the sensations with intense fear, maybe thinking you are having a heart attack, or going crazy, or even dying. You might begin to associate the physical sensations you felt during this attack with this fear and may start to go out of your way to avoid having those sensations again. Unexpected panic attacks such as these are at the heart of panic disorder (PD). However, to receive a diagnosis of PD, the person must not only have unexpected panic attacks but also must experience continued intense anxiety and avoidance related to the attack for at least one month, causing significant distress or interference in their lives. People with panic disorder tend to interpret even normal physical sensations in a catastrophic way, which triggers more anxiety and, ironically, more physical sensations, creating a vicious cycle of panic (Clark, 1986, 1996). The person may begin to avoid a number of situations or activities that produce the same physiological arousal that was present during the beginnings of a panic attack. For example, someone who experienced a racing heart during a panic attack might avoid exercise or caffeine. Someone who experienced choking sensations might avoid wearing high-necked sweaters or necklaces. Avoidance of these internal bodily or somatic cues for panic has been termed interoceptive avoidance (Barlow & Craske, 2007; Brown, White, & Barlow, 2005; Craske & Barlow, 2008; Shear et al., 1997). The individual may also have experienced an overwhelming urge to escape during the unexpected panic attack. This can lead to a sense that certain places or situations—particularly situations where escape might not be possible—are not “safe.” These situations become external cues for panic.
If the person begins to avoid several places or situations, or still endures these situations but does so with a significant amount of apprehension and anxiety, then the person also has agoraphobia (Barlow, 2002; Craske & Barlow, 1988; Craske & Barlow, 2008). Agoraphobia can cause significant disruption to a person’s life, causing them to go out of their way to avoid situations, such as adding hours to a commute to avoid taking the train or only ordering take-out to avoid having to enter a grocery store. In one tragic case seen by our clinic, a woman suffering from agoraphobia had not left her apartment for 20 years and had spent the past 10 years confined to one small area of her apartment, away from the view of the outside. In some cases, agoraphobia develops in the absence of panic attacks and is therefore a separate disorder in the DSM-5. But agoraphobia often accompanies panic disorder. About 4.7% of the population has met criteria for PD or agoraphobia over their lifetime (Kessler, Chiu, Demler, Merikangas, & Walters, 2005; Kessler et al., 2006) (see Table 1). In all of these cases of panic disorder, what was once an adaptive natural alarm reaction now becomes a learned, and much feared, false alarm.

Specific Phobia

The majority of us might have certain things we fear, such as bees, or needles, or heights (Myers et al., 1984). But what if this fear is so consuming that you can’t go out on a summer’s day, or get vaccines needed to go on a special trip, or visit your doctor in her new office on the 26th floor? To meet criteria for a diagnosis of specific phobia, there must be an irrational fear of a specific object or situation that substantially interferes with the person’s ability to function. For example, a patient at our clinic turned down a prestigious and coveted artist residency because it required spending time near a wooded area, bound to have insects. Another patient purposely left her house two hours early each morning so she could walk past her neighbor’s fenced yard before they let their dog out in the morning. The list of possible phobias is staggering, but four major subtypes of specific phobia are recognized: blood-injury-injection (BII) type, situational type (such as planes, elevators, or enclosed places), natural environment type for events one may encounter in nature (for example, heights, storms, and water), and animal type. A fifth category, “other,” includes phobias that do not fit any of the four major subtypes (for example, fears of choking, vomiting, or contracting an illness). Most phobic reactions cause a surge of activity in the sympathetic nervous system and increased heart rate and blood pressure, maybe even a panic attack. However, people with BII type phobias usually experience a marked drop in heart rate and blood pressure and may even faint. In this way, those with BII phobias almost always differ in their physiological reaction from people with other types of phobia (Barlow & Liebowitz, 1995; Craske, Antony, & Barlow, 2006; Hofmann, Alpers, & Pauli, 2009; Ost, 1992). BII phobia also runs in families more strongly than any other phobic disorder we know of (Antony & Barlow, 2002; Page & Martin, 1998). Specific phobia is one of the most common psychological disorders in the United States, with 12.5% of the population reporting a lifetime history of fears significant enough to be considered a “phobia” (Arrindell et al., 2003; Kessler, Berglund, et al., 2005) (see Table 1).
Most people who suffer from specific phobia tend to have multiple phobias of several types (Hofmann, Lehman, & Barlow, 1997).

Social Anxiety Disorder (Social Phobia)

Many people consider themselves shy, and most people find social evaluation uncomfortable at best and giving a speech somewhat mortifying. Yet, only a small proportion of the population fears these types of situations significantly enough to merit a diagnosis of social anxiety disorder (SAD) (APA, 2013). SAD is more than exaggerated shyness (Bogels et al., 2010; Schneier et al., 1996). To receive a diagnosis of SAD, the fear and anxiety associated with social situations must be so strong that the person avoids them entirely, or, if avoidance is not possible, the person endures them with a great deal of distress. Further, the fear and avoidance of social situations must get in the way of the person’s daily life, or seriously limit their academic or occupational functioning. For example, a patient at our clinic compromised her perfect 4.0 grade point average because she could not complete a required oral presentation in one of her classes, causing her to fail the course. Fears of negative evaluation might make someone repeatedly turn down invitations to social events or avoid having conversations with people, leading to greater and greater isolation. The specific social situations that trigger anxiety and fear range from one-on-one interactions, such as starting or maintaining a conversation; to performance-based situations, such as giving a speech or performing on stage; to assertiveness, such as asking someone to change disruptive or undesirable behaviors. Fear of social evaluation might even extend to such things as using public restrooms, eating in a restaurant, filling out forms in a public place, or even reading on a train. Any type of situation that could potentially draw attention to the person can become a feared social situation. For example, one patient of ours went out of her way to avoid any situation in which she might have to use a public restroom for fear that someone would hear her in the bathroom stall and think she was disgusting. If the fear is limited to performance-based situations, such as public speaking, a diagnosis of SAD performance only is assigned. What causes someone to fear social situations to such a large extent? The person may have learned growing up that social evaluation in particular can be dangerous, creating a specific psychological vulnerability to develop social anxiety (Bruch & Heimberg, 1994; Lieb et al., 2000; Rapee & Melville, 1997). For example, the person’s caregivers may have harshly criticized and punished them for even the smallest mistake, maybe even punishing them physically. Or, someone might have experienced a social trauma that had lasting effects, such as being bullied or humiliated. Interestingly, one group of researchers found that 92% of adults in their study sample with social phobia experienced severe teasing and bullying in childhood, compared with only 35% to 50% among people with other anxiety disorders (McCabe, Antony, Summerfeldt, Liss, & Swinson, 2003). Someone else might react so strongly to the anxiety provoked by a social situation that they have an unexpected panic attack. This panic attack then becomes associated with the social situation (a conditioned response), causing the person to fear they will panic the next time they are in that situation.
This is not considered PD, however, because the person’s fear is more focused on social evaluation than on having unexpected panic attacks, and the fear of having an attack is limited to social situations. As many as 12.1% of the general population suffer from social phobia at some point in their lives (Kessler, Berglund, et al., 2005), making it one of the most common anxiety disorders, second only to specific phobia (see Table 1).

Posttraumatic Stress Disorder

With stories of war, natural disasters, and physical and sexual assault dominating the news, it is clear that trauma is a reality for many people. Many individual traumas that occur every day never even make the headlines, such as a car accident, domestic abuse, or the death of a loved one. Yet, while many people face traumatic events, not everyone who faces a trauma develops a disorder. Some, with the help of family and friends, are able to recover and continue on with their lives (Friedman, 2009). For some, however, the months and years following a trauma are filled with intrusive reminders of the event, a sense of intense fear that another traumatic event might occur, or a sense of isolation and emotional numbing. They may engage in a host of behaviors intended to protect themselves from being vulnerable or unsafe, such as constantly scanning their surroundings to look for signs of potential danger, never sitting with their back to the door, or never allowing themselves to be anywhere alone. This lasting reaction to trauma is what characterizes posttraumatic stress disorder (PTSD). A diagnosis of PTSD begins with the traumatic event itself. An individual must have been exposed to an event that involves actual or threatened death, serious injury, or sexual violence. To receive a diagnosis of PTSD, exposure to the event must include either directly experiencing the event, witnessing the event happening to someone else, learning that the event occurred to a close relative or friend, or having repeated or extreme exposure to details of the event (such as in the case of first responders). The person subsequently re-experiences the event through both intrusive memories and nightmares. Some memories may come back so vividly that the person feels like they are experiencing the event all over again, what is known as having a flashback. The individual may avoid anything that reminds them of the trauma, including conversations, places, or even specific types of people. They may feel emotionally numb or restricted in their ability to feel, which may interfere with their interpersonal relationships. The person may not be able to remember certain aspects of what happened during the event. They may feel a sense of a foreshortened future, believing that they will never marry, have a family, or live a long, full life. They may be jumpy or easily startled, hypervigilant to their surroundings, and quick to anger. The prevalence of PTSD among the population as a whole is relatively low, with 6.8% having experienced PTSD at some point in their life (Kessler, Berglund, et al., 2005) (see Table 1). Combat and sexual assault are the most common precipitating traumas (Kessler, Sonnega, Bromet, Hughes, & Nelson, 1995). Whereas PTSD was previously categorized as an Anxiety Disorder, in the most recent version of the DSM (DSM-5; APA, 2013) it has been reclassified under the more specific category of Trauma- and Stressor-Related Disorders. A person with PTSD is particularly sensitive to both internal and external cues that serve as reminders of their traumatic experience.
For example, as we saw in PD, the physical sensations of arousal present during the initial trauma can become threatening in and of themselves, becoming a powerful reminder of the event. Someone might avoid watching intense or emotional movies in order to prevent the experience of emotional arousal. Avoidance of conversations, reminders, or even of the experience of emotion itself may also be an attempt to avoid triggering internal cues. External stimuli that were present during the trauma can also become strong triggers. For example, if a woman is raped by a man wearing a red t-shirt, she may develop a strong alarm reaction to the sight of red shirts, or perhaps even more indiscriminately to anything with a similar shade of red. A combat veteran who experienced a strong smell of gasoline during a roadside bomb attack may have an intense alarm reaction when pumping gas back at home. Individuals with a psychological vulnerability toward viewing the world as uncontrollable and unpredictable may particularly struggle with the possibility of additional future, unpredictable traumatic events, fueling their need for hypervigilance and avoidance, and perpetuating the symptoms of PTSD.

Obsessive-Compulsive Disorder

Have you ever had a strange thought pop into your mind, such as picturing the stranger next to you naked? Or maybe you walked past a crooked picture on the wall and couldn’t resist straightening it. Most people have occasional strange thoughts and may even engage in some “compulsive” behaviors, especially when they are stressed (Boyer & Liénard, 2008; Fullana et al., 2009). But for most people, these thoughts are nothing more than a passing oddity, and the behaviors are done (or not done) without a second thought. For someone with obsessive-compulsive disorder (OCD), however, these thoughts and compulsive behaviors don’t just come and go. Instead, strange or unusual thoughts are taken to mean something much more important and real, maybe even something dangerous or frightening. The urge to engage in some behavior, such as straightening a picture, can become so intense that it is nearly impossible not to carry it out, or causes significant anxiety if it can’t be carried out. Further, someone with OCD might become preoccupied with the possibility that the behavior wasn’t carried out to completion and feel compelled to repeat the behavior again and again, maybe several times before they are “satisfied.” To receive a diagnosis of OCD, a person must experience obsessive thoughts and/or compulsions that seem irrational or nonsensical, but that keep coming into their mind. Some examples of obsessions include doubting thoughts (such as doubting that a door is locked or an appliance is turned off), thoughts of contamination (such as thinking that touching almost anything might give you cancer), or aggressive thoughts or images that are unprovoked or nonsensical. Compulsions may be carried out in an attempt to neutralize some of these thoughts, providing temporary relief from the anxiety the obsessions cause, or they may be nonsensical in and of themselves. Either way, compulsions are distinct in that they must be repetitive or excessive, the person feels “driven” to carry out the behavior, and the person feels a great deal of distress if they can’t engage in the behavior.
Some examples of compulsive behaviors are repetitive washing (often in response to contamination obsessions); repetitive checking (of locks, door handles, or appliances, often in response to doubting obsessions); ordering and arranging things to ensure symmetry; or doing things according to a specific ritual or sequence (such as getting dressed or ready for bed in a specific order). To meet diagnostic criteria for OCD, engaging in obsessions and/or compulsions must take up a significant amount of the person’s time, at least an hour per day, and must cause significant distress or impairment in functioning. About 1.6% of the population has met criteria for OCD over the course of a lifetime (Kessler, Berglund, et al., 2005) (see Table 1). Whereas OCD was previously categorized as an Anxiety Disorder, in the most recent version of the DSM (DSM-5; APA, 2013) it has been reclassified under the more specific category of Obsessive-Compulsive and Related Disorders. People with OCD often confuse having an intrusive thought with their potential for carrying out the thought. Whereas most people, when they have a strange or frightening thought, are able to let it go, a person with OCD may become “stuck” on the thought and be intensely afraid that they might somehow lose control and act on it. Or, worse, they believe that having the thought is just as bad as doing it. This is called thought-action fusion. For example, one patient of ours was plagued by thoughts that she would cause harm to her young daughter. She experienced intrusive images of throwing hot coffee in her daughter’s face or pushing her face underwater when she was giving her a bath. These images were so terrifying to the patient that she would no longer allow herself any physical contact with her daughter and would leave her daughter in the care of a babysitter if her husband or another family member was not available to “supervise” her. In reality, the last thing she wanted to do was harm her daughter; she had no intention or desire to act on the aggressive thoughts and images, nor does anybody with OCD act on these thoughts. But these thoughts were so horrifying to her that she made every attempt to prevent any possibility of carrying them out, even if it meant not being able to hold, cradle, or cuddle her daughter. These are the types of struggles people with OCD face every day.

Treatments for Anxiety and Related Disorders

Many successful treatments for anxiety and related disorders have been developed over the years. Medications (anti-anxiety drugs and antidepressants) have been found to be beneficial for disorders other than specific phobia, but relapse rates are high once medications are stopped (Heimberg et al., 1998; Hollon et al., 2005), and some classes of medications (minor tranquilizers or benzodiazepines) can be habit forming. Exposure-based cognitive behavioral therapies (CBT) are effective psychosocial treatments for anxiety disorders, and many show greater treatment effects than medication in the long term (Barlow, Allen, & Basden, 2007; Barlow, Gorman, Shear, & Woods, 2000). In CBT, patients are taught skills to help identify and change problematic thought processes, beliefs, and behaviors that tend to worsen symptoms of anxiety, and they practice applying these skills to real-life situations through exposure exercises. Patients learn how the automatic “appraisals” or thoughts they have about a situation affect both how they feel and how they behave.
Similarly, patients learn how engaging in certain behaviors, such as avoiding situations, tends to strengthen the belief that the situation is something to be feared. A key aspect of CBT is exposure exercises, in which the patient learns to gradually approach situations they find fearful or distressing, in order to challenge their beliefs and learn new, less fearful associations with these situations. Typically, 50% to 80% of patients receiving drugs or CBT will show a good initial response, with the effects of CBT being more durable. Newer developments in the treatment of anxiety disorders are focusing on novel interventions, such as the use of certain medications to enhance learning during CBT (Otto et al., 2010), and transdiagnostic treatments targeting core, underlying vulnerabilities (Barlow et al., 2011). As we advance our understanding of anxiety and related disorders, so too will our treatments advance, with the hope that for the many people suffering from these disorders, anxiety can once again become something useful and adaptive, rather than something debilitating.

Outside Resources

American Psychological Association (APA) http://www.apa.org/topics/anxiety/index.aspx
National Institutes of Mental Health (NIMH) http://www.nimh.nih.gov/health/topics/anxiety-disorders/index.shtml
Web: Anxiety and Depression Association of America (ADAA) http://www.adaa.org/
Web: Center for Anxiety and Related Disorders (CARD) http://www.bu.edu/card/

Discussion Questions

1. Name and describe the three main vulnerabilities contributing to the development of anxiety and related disorders. Do you think these disorders could develop out of biological factors alone? Could these disorders develop out of learning experiences alone?
2. Many of the symptoms in anxiety and related disorders overlap with experiences most people have. What features differentiate someone with a disorder from someone without?
3. What is an “alarm reaction”? If someone experiences an alarm reaction when they are about to give a speech in front of a room full of people, would you consider this a “true alarm” or a “false alarm”?
4. Many people are shy. What differentiates someone who is shy from someone with social anxiety disorder? Do you think shyness should be considered an anxiety disorder?
5. Is anxiety ever helpful? What about worry?

Vocabulary

Agoraphobia
A type of anxiety disorder distinguished by feelings that a place is uncomfortable or may be unsafe because it is significantly open or crowded.

Anxiety
A mood state characterized by negative affect, muscle tension, and physical arousal in which a person apprehensively anticipates future danger or misfortune.

Biological vulnerability
A specific genetic and neurobiological factor that might predispose someone to develop anxiety disorders.

Conditioned response
A learned reaction following classical conditioning, or the process by which an event that automatically elicits a response is repeatedly paired with another neutral stimulus (conditioned stimulus), resulting in the ability of the neutral stimulus to elicit the same response on its own.

External cues
Stimuli in the outside world that serve as triggers for anxiety or as reminders of past traumatic events.

Fight or flight response
A biological reaction to alarming stressors that prepares the body to resist or escape a threat.
Flashback
Sudden, intense re-experiencing of a previous event, usually trauma-related.

Generalized anxiety disorder (GAD)
Excessive worry about everyday things that is at a level out of proportion to the specific causes of worry.

Internal bodily or somatic cues
Physical sensations that serve as triggers for anxiety or as reminders of past traumatic events.

Interoceptive avoidance
Avoidance of situations or activities that produce sensations of physical arousal similar to those occurring during a panic attack or intense fear response.

Obsessive-compulsive disorder (OCD)
A disorder characterized by the desire to engage in certain behaviors excessively or compulsively in hopes of reducing anxiety. Behaviors include things such as cleaning, repeatedly opening and closing doors, hoarding, and obsessing over certain thoughts.

Panic disorder (PD)
A condition marked by regular strong panic attacks, and which may include significant levels of worry about future attacks.

Posttraumatic stress disorder (PTSD)
A sense of intense fear, triggered by memories of a past traumatic event, that another traumatic event might occur. PTSD may include feelings of isolation and emotional numbing.

Psychological vulnerabilities
Influences that our early experiences have on how we view the world.

Reinforced response
Following the process of operant conditioning, the strengthening of a response following either the delivery of a desired consequence (positive reinforcement) or escape from an aversive consequence.

SAD performance only
Social anxiety disorder that is limited to certain situations the sufferer perceives as requiring some type of performance.

Social anxiety disorder (SAD)
A condition marked by acute fear of social situations that leads to worry and diminished day-to-day functioning.

Specific vulnerabilities
How our experiences lead us to focus and channel our anxiety.

Thought-action fusion
The tendency to overestimate the relationship between a thought and an action, such that one mistakenly believes a “bad” thought is the equivalent of a “bad” action.
By Todd Kashdan George Mason University

Social anxiety occurs when we are overly concerned about being humiliated, embarrassed, evaluated, or rejected by others in social situations. Everyone experiences social anxiety some of the time, but for a minority of people, the frequency and intensity of social anxiety are intense enough to interfere with meaningful activities (e.g., relationships, academics, career aspirations). When a person’s level of social anxiety is excessive, social interactions are either dreaded or avoided, social cues and emotions are difficult to understand, and positive thoughts and emotions are rare, that person may be diagnosed with social anxiety disorder (or social phobia). There are effective treatments for this problem, with both medications and psychotherapy. Unfortunately, only a small proportion of people with social anxiety disorder actually seek treatment.

learning objectives

• Distinguish social anxiety from social anxiety disorder.
• Identify commonly feared social situations.
• Know the prevalence and treatment rates of social anxiety disorder.
• Understand how social anxiety influences thoughts, feelings, and behaviors.
• Identify effective treatments for social anxiety disorder.

Introduction

A public speaker waits backstage before her name is called. She visualizes what will happen in a few moments: the audience will cheer as she walks out and then turn silent, with all eyes on her. She imagines this will cause her to feel uncomfortable and, instead of standing balanced, she will lean to one side, not quite sure what to do with her hands. And when her mouth opens, instead of words, guttural sounds will emerge from a parched throat before her mind goes blank. In front of friends, family, and strangers, she is paralyzed with fear and embarrassment. Physically, in the moments leading up to the performance, she sweats, trembles, has difficulty breathing, notices a racing heartbeat, and feels nauseated. When someone asks her a question, she loses her voice or its pitch rises a few octaves. She attempts to hide her anxiety by tensing her muscles or telling herself to breathe and stay calm. Behaviorally, she seeks ways to escape the audience’s gaze (e.g., by playing a video and asking the audience questions), and she tries to get through the performance as quickly as possible (e.g., rushing off the stage). Later, she works hard to avoid similar situations, passing up future speaking opportunities. Welcome to the often terrifying world of social anxiety. People have a fundamental need to feel like they belong and are liked, so it is painful when we feel rejected or left out by those who matter to us. In response, we often become acutely aware of the impression we make on others, and we avoid doing things that may cause others to be upset with us. Social anxiety is the excessive concern about being in social situations where scrutiny is likely. When people are socially anxious, they become overly concerned about embarrassing themselves, and they tend to reveal these signs of discomfort through sweating or blushing; they worry that their character flaws will be exposed and result in rejection. See Figure 9.5.1 for examples of situations that commonly evoke social anxiety. The term anxiety describes a general apprehension about possible future danger, rather than a reaction to an immediate threat (i.e., fear).
Nevertheless, like fear, the experience of social anxiety may involve physical, emotional, and behavioral symptoms like those described in the example above. Nearly everyone experiences some social anxiety at one point or another. It is particularly common before performing in front of an audience or meeting new people on one’s own, and this is normal. Social anxiety provides information about the demands required of us to handle an ongoing challenge (Frijda, 1996). It lets us know that the situation is meaningful, and that the impression we make on other people may be important to our social standing. Most people are able to “power through” the situation, eventually feeling more comfortable and learning that it was not as bad as expected. This is a fundamentally important point: people think that the anxiety they feel leading up to a situation (anticipatory feelings) will only increase further in the actual situation, when, in fact, our anxiety tends to peak in the moments before a situation. Sometimes, people experience more than the “normal” amount of anxiety. For people with excessive social anxiety, their anxiety often arises in a broader array of situations, is more intense, and does not subside as quickly. For those people, negative social outcomes are viewed as highly probable and costly, and their attention during social interactions tends to be inwardly directed (e.g., “Did my comment sound stupid? Can she tell that I’m sweating?”). This running internal commentary prevents people from focusing on the situation at hand, and even simple social interactions may become overwhelming (Bögels & Mansell, 2004).

Social Anxiety Disorder

When social anxiety and avoidance interfere with a person’s ability to function in important roles (e.g., as a student, worker, friend), the condition is called social anxiety disorder (SAD), also known as social phobia (American Psychiatric Association, 2013). In the United States, SAD affects approximately 12.1% of people in their lifetimes and 7.1% of adults in a given year (Ruscio et al., 2008). About 1 of every 4 people report at least one significant social fear in their lifetimes—most commonly, public speaking (see Figure 9.5.1). To be diagnosed with SAD, a person must report an impairing fear of multiple social situations that has persisted for at least six months. Most people with SAD fear eight or more distinct social situations, such as initiating a conversation with a stranger, maintaining conversations, going on a first date, going to a work party/function, talking with an authority figure, talking in front of a group of people, and eating in front of other people (Ruscio et al., 2008). SAD is one of the most common anxiety disorders recognized by the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders (DSM-5; APA, 2013). SAD affects men and women about equally, and the majority of people with SAD report that their fears began in early adolescence, typically around age 13 (Kessler et al., 2005). Unfortunately, this condition tends to be chronic, and few people recover on their own without an intervention. Despite the availability of effective treatments, few people seek help for their social fears (see Figure 9.5.2). In an epidemiological study, only 5.4% of people with SAD (and no other psychiatric disorders) ever received mental health treatment (Schneier, Johnson, Hornig, Liebowitz, & Weissman, 1992).
There are several explanations for why people with SAD avoid treatment—for starters, the fear of being evaluated by a therapist and the stigma of seeking psychological services. Thus, the very features of the disorder may prevent a person from seeking treatment for it. Another explanation is that many physicians, teachers, parents, and peers do not believe that social anxiety disorder is a real condition and, instead, view it as nothing more than extreme shyness or inhibition. Finally, health care providers are often ill-equipped to assess SAD and may not be aware of evidence-based treatments (Kashdan, Christopher Frueh, Knapp, Hebert, & Magruder, 2006), and clients often do not know enough about social fears to discuss them with their doctors. Sadly, 60% to 80% of people with SAD suffer from symptoms for at least two decades (Ruscio et al., 2008). Thus, it is important to understand not only what social anxiety is but also what perpetuates social fears.

Fear of Evaluation

A central component of the social anxiety experience is how a person thinks about him- or herself, about others, and about social situations. According to the self-presentation theory of social anxiety (Leary & Kowalski, 1995), people feel socially anxious when they wish to make a good impression on others but doubt their ability to do so. People with excessive social anxiety are likely to view themselves as having more flaws or deficits, compared to those who rarely feel social anxiety (Clark & Wells, 1995); thus, for SAD sufferers, social interactions may seem like dangerous places where flaws can be observed and scrutinized (Moscovitch, 2009). At first, researchers believed that the core feature of SAD was a fear of negative evaluation—being preoccupied with the possibility of being unfavorably judged or rejected by others (Watson & Friend, 1969). Recent evidence has suggested that people with SAD are actually concerned with both positive and negative evaluation. Fear of positive evaluation is the dread associated with success and public favorable evaluation, which raises the expectations for subsequent social interactions. The fear of being positively evaluated is particularly relevant when a social comparison occurs, such as when a person gets a promotion at work (Weeks, Heimberg, Rodebaugh, & Norton, 2008; Weeks, Heimberg, & Rodebaugh, 2008). Both of these fears of evaluation contribute to social anxiety (Weeks, Heimberg, Rodebaugh, Goldin, & Gross, 2012). Why might socially anxious people dread being praised? Gilbert’s (2001) evolutionary theory suggests that social anxiety is a mechanism that evolved to facilitate group cohesion. In a society where people hold different social ranks, a person lower on the social hierarchy (e.g., an entry-level employee) would experience anxiety when interacting with higher-ranking group members (e.g., bosses). Such anxiety would lead a person to display submissive behavior (e.g., avoiding eye contact) and prompt them to avoid doing anything that could cause conflict. Anything that increases social status—such as receiving a promotion or dating an attractive romantic partner—can cause tension and conflict with others of higher status. Whereas fear of negative evaluation is relevant to other psychological conditions, such as depression and eating disorders, fear of positive evaluation is unique to SAD (Fergus et al., 2009; Weeks, Heimberg, Rodebaugh, et al., 2008).
Furthermore, when people are successfully treated for SAD, this fear of positive evaluation declines (Weeks et al., 2012).

Biased Attention and Interpretation

If you were to observe what people with SAD pay attention to in a social interaction, you would find that they are quick to recognize any signs of social threat. For instance, they are faster at detecting angry faces in a crowd (see Gilboa-Schechtman, Foa, & Amir, 1999). Imagine looking at the audience as you give a speech and the first faces you notice are scowling back! At the same time, SAD sufferers’ attention is biased away from positive, rewarding information (see Taylor, Bomyea, & Amir, 2010). This means that people with SAD are unlikely to notice the smiling, nodding faces in the crowd, and they fail to pick up the subtle hints that somebody wants to spend more time with them or to be asked out on a romantic date. These interpretation and attention biases are obstacles to starting and maintaining social relationships. When you attend only to negativity, you start to believe that you are unlovable and that the world is a hostile, unfriendly place. Complete the following sentence: “As I passed a group of people in the hall, they burst out in laughter, because . . .” People with SAD are more likely to complete the sentence with a statement suggesting that there is something wrong with their behavior or appearance (e.g., “they thought I looked ridiculous”) as opposed to a neutral explanation (e.g., “one of them made a joke”). The problem is that when you assume people are attacking you, you feel more self-conscious and are less likely to stay in a situation and to interact with that group of people or others in the future. Our thoughts influence our behavior, and the negative interpretations and predictions of people with SAD only serve to feed their social avoidance patterns (Amir, Beard, & Bower, 2005).

Deficient Positive Experiences

The strongest predictor of a happy, meaningful, long-lasting life is the presence of satisfying, healthy relationships (Berscheid & Reis, 1998). Thus, the fact that people with SAD frequently avoid social interactions—even those with the potential for fun or intimacy—means that they miss out on an important source of positive experiences. By studying people’s day-to-day experiences, researchers have discovered several positivity deficits in the lives of socially anxious people. For example, Kashdan and Collins (2010) gave participants portable electronic devices that randomly prompted them to describe what they were feeling and doing multiple times per day for several weeks. During these random assessments, socially anxious people reported less intense positive emotions (e.g., joy, happiness, calm), regardless of whether they were around other people (whereas less anxious people reported more intense positive emotions when socializing). Socially anxious people experience less frequent positive emotions even when spending time with close friends and family members (Brown, Silvia, Myin-Germeys, & Kwapil, 2007; Vittengl & Holt, 1998). In fact, even in the most intimate of situations—during sexual encounters with romantic partners—socially anxious people report less intense pleasure and less intimacy (Kashdan, Adams, et al., 2011). All of these findings highlight the vast reach of excessive social anxiety in people’s lives and how it detracts from the relationships and activities that hold the greatest promise for happiness and meaning in life (Kashdan, Weeks, & Savostyanova, 2011).
Problematic Emotion Regulation

A possible explanation for the distress and diminished positive experiences seen in SAD is that the sufferers’ ability to respond to and manage their emotions is impaired. Emotion regulation refers to how people recognize, interpret, experience, and attempt to alter emotional states (Gross, 1998). One symptom of SAD is the concern that one’s anxiety will be visible to others (APA, 2013). Given this concern, socially anxious people spend considerable time and effort preparing for and avoiding anxiety-related thoughts, sensations, and behaviors. They engage in safety behaviors, such as rehearsing exactly what to say in a conversation, asking questions of others to deflect attention from themselves, and holding a drink or food to have an excuse to pause before responding to a question (Clark & Wells, 1995). Because there is only so much we can pay attention to in a given moment, excessive self-focused attention detracts from a person’s ability to be mindful in a social encounter. In effect, by devoting effort to controlling emotions and minimizing the potential for rejection, a person paradoxically increases the likelihood of misunderstanding others or appearing aloof. Such encounters are also less enjoyable and hold less potential for deepening relationships. Socially anxious people believe that openly expressing emotions is likely to have negative consequences (Juretic & Zivcic-Becirevic, 2013). In turn, they are more apt to suppress or hide their negative emotions (Spokas, Luterek, & Heimberg, 2009) and to avoid anything that is distressing (Kashdan, Morina, & Priebe, 2009). Emotion suppression is often ineffective: the more we try not to think about something, the more we end up thinking about it (Richards & Gross, 1999). Unfortunately, people with SAD report being less skilled at using more effective emotion regulation strategies, such as finding alternative, constructive ways of thinking about a situation (Werner, Goldin, Ball, Heimberg, & Gross, 2011). Socially anxious people also respond to positive emotions in an unexpected way. Whereas most people not only enjoy positive emotions but also seek them out and attempt to savor them, socially anxious people often fear intense positive emotions (Turk, Heimberg, Luterek, Mennin, & Fresco, 2005). When positive emotions arise, just like negative emotions, SAD sufferers make efforts to suppress them (Eisner, Johnson, & Carver, 2009; Farmer & Kashdan, 2012). Why downplay positive emotions? It is possible that avoiding public displays of positive emotions is another way that people with SAD can avoid scrutiny (e.g., not laughing because others might not find a joke funny) and prevent the wrath of powerful others (e.g., not expressing excitement about a personal triumph because others might be envious) (Weeks et al., 2008). A recent study sampled the day-to-day social interactions of people with and without SAD to uncover what distinguishes these two groups. The researchers found that the amount of social anxiety felt during social interactions was less important in distinguishing people with SAD from healthy adults than the intense effort put into avoiding anxious feelings and the infrequent positive emotions experienced when spending time with other people (Kashdan et al., 2013). The ego depletion model (Muraven & Baumeister, 2000) proposes that people have a limited capacity for physical and mental self-control (e.g., physical endurance, attention).
When we perform tasks that require significant effort and energy (e.g., suppressing emotions), we deplete these self-control resources, leaving us with less capacity to focus on subsequent tasks or to make good decisions (Vohs, Baumeister, & Ciarocco, 2005). When depleted or mentally exhausted, we tend to opt for whatever is immediately rewarding as opposed to pursuing meaningful goals (Hayes, Luoma, Bond, Masuda, & Lillis, 2006). For socially anxious people, what is immediately rewarding tends to be escaping or avoiding social situations in order to minimize the potential for unpleasant feelings. In other words, the way people with high social anxiety control their emotions not only makes their social situations less pleasant in the moment but also limits their capacity for pursuing rewarding opportunities afterward. Consistent with this, Farmer and Kashdan (2012) demonstrated that over the course of two weeks, when socially anxious people used more emotion suppression, they experienced fewer pleasant social events and less intense positive emotions on the following day. Taken together, this research suggests that socially anxious people respond to their emotions in ways that have far-reaching effects on their well-being, likely maintaining the fears associated with social anxiety.

Treatments

Although SAD tends to be a chronic condition if left untreated (Wittchen, Fuetsch, Sonntag, Müller, & Liebowitz, 2000), the good news is that there are effective treatments that reduce social fears. Currently, there are two gold-standard treatments for SAD: cognitive behavioral therapy (CBT) and pharmacotherapy (Gould, Buckminster, Pollack, & Otto, 1997). The frontrunner among psychotherapy options, CBT, is an approach mental health professionals (e.g., licensed clinical psychologists) use to help people with SAD learn to think, behave, and feel differently so that they can feel more comfortable in social situations and improve their quality of life (Heimberg & Becker, 2002). The most effective strategy to treat SAD is exposure (Feske & Chambless, 1995), in which clients repeatedly confront their feared situations without the use of safety behaviors, starting with situations that are only slightly anxiety provoking (e.g., imagining a conversation with an attractive stranger) and gradually working their way up to more frightening situations (e.g., starting conversations with trained actors during therapy sessions). Additional exposures are then assigned between sessions so that people can experiment with feared situations in their daily lives (e.g., saying hello to a passing pedestrian). As another example, someone who avoids riding elevators due to fear of interacting with other riders might start out by taking an elevator during off-peak hours, then during popular times, then practicing talking with a stranger while riding an elevator, and so on. After taking part in these exposures, people learn that feared social situations are not as probable or dangerous as previously believed. Cognitive techniques form the other component of CBT. These are strategies therapists use to help people develop more realistic and helpful thoughts and expectations about social situations. For example, people with SAD often have unrealistic beliefs that contribute to anxiety (e.g., “Everyone can see that I’m sweating”).
The therapist helps them challenge such thoughts and develop more helpful expectations about situations (e.g., from “If I pause while speaking, then everyone will think I’m stupid” to “It is OK to pause; the silence may seem longer to me than to others”). These techniques are most effective in combination with behavioral techniques that help clients test out some of their assumptions in real situations (Taylor, 1996). For instance, a behavioral experiment might involve giving a cashier the wrong change (making a mistake), to test whether the feared consequence (“the cashier will laugh at me”) actually arises and, if so, whether it is as painful as expected. Pharmacotherapy for SAD involves using medications to reduce people’s anxiety level so that they are able to stop avoiding situations and enjoy a better quality of life. The current first-line prescribed medications are selective serotonin re-uptake inhibitors (SSRIs), such as escitalopram, paroxetine, and sertraline, as well as serotonin norepinephrine reuptake inhibitors (SNRIs) like venlafaxine (Bandelow et al., 2012). Both of these medication classes are typically used as antidepressants and act on the neurotransmitter serotonin, which plays a big role in how the amygdala responds to possibly threatening information. SNRIs also act on norepinephrine at higher doses. These medications have few side effects and are also likely to improve other anxiety or depression symptoms that often co-occur with SAD. Other classes of medications with some evidence of helpfulness for SAD symptoms include benzodiazepines and monoamine oxidase inhibitors, though these medications often produce a number of negative side effects (Blanco, Bragdon, Schneier, & Liebowitz, 2013). Both approaches—CBT and medications—are moderately helpful at reducing social anxiety symptoms in approximately 60% of clients; however, the majority of people with SAD do not fully remit, and many experience a return of symptoms after treatment ends (Fedoroff & Taylor, 2001). Notably, CBT tends to have more lasting effects, and there is some modest evidence for possible added benefit when combining medications with psychotherapy (Blanco et al., 2010). It is obvious, though, that existing treatments are insufficient. Current recommended treatments may not address some of the deficits discussed earlier. Specifically, people may come to avoid fewer social situations, but they might still have fewer, less intense positive life experiences. Researchers are constantly improving available treatments via new techniques and medications. Some new developments in the treatment of SAD include the encouragement of mindful awareness (vs. self-focus) and acceptance (vs. avoidance) of experiences (Dalrymple & Herbert, 2007; Goldin & Gross, 2010). As new treatments develop, it will be important to see whether these treatments not only improve SAD symptoms but also help sufferers achieve greater happiness, meaning in life, and success.

Conclusions

In this module, we discussed the normal experience of social anxiety as well as the clinically impairing distress suffered by people with SAD. It is important to remember that nearly every psychological experience and characteristic you will read about exists on a continuum. What appears to distinguish people with SAD from healthy adults is not the presence of intense social anxiety but the unwillingness to experience anxious thoughts, feelings, and sensations, and the immense effort put into avoiding this discomfort.
Other problems linked to excessive social anxiety include infrequent positive events, diminished positive experiences, and a tendency to view benign and even positive social situations as threatening. Together, these symptoms prevent people from initiating and maintaining healthy social relationships, and lead to diminished well-being. When social fears become overwhelming, it is important to remember that effective treatments are available to improve one’s quality of life.

Outside Resources

Institution: Andrew Kukes Foundation for Social Anxiety http://akfsa.org/
Institution: Anxiety and Depression Association of America http://www.adaa.org/
Video: Social Anxiety Documentary - Afraid of People
Web: CalmClinic http://www.calmclinic.com/
Web: Which Celebrities Suffer with Social Anxiety? https://www.verywell.com/which-celeb...nxiety-3024283

Discussion Questions

1. What differentiates people who are shy from those with social anxiety disorder?
2. Because the most effective treatment for social anxiety disorder is exposure to feared situations, what kinds of exposures would you devise for someone who fears talking in front of an audience? Engaging in small talk? Writing or eating in front of others? Speaking up in a small group? Talking to strangers?
3. Why might social anxiety disorder typically begin in late childhood/early adolescence?
4. How does culture influence fears of negative and positive evaluation? After all, social groups differ in their adherence to a vertical social hierarchy.
5. What may be some reasons people with severe social anxiety might not seek or receive treatment? How would you remove these obstacles?

Vocabulary

Amygdala
A brain structure in the limbic system involved in fear reactivity and implicated in the biological basis for social anxiety disorder.

Anxiety
A state of worry or apprehension about future events or possible danger that usually involves negative thoughts, unpleasant physical sensations, and/or a desire to avoid harm.

Cognitive behavioral therapy (CBT)
A psychotherapy approach that incorporates cognitive techniques (targeting unhelpful thoughts) and behavioral techniques (changing behaviors) to improve psychological symptoms.

Ego depletion
The idea that people have a limited pool of mental resources for self-control (e.g., regulating emotions, willpower), and this pool can be used up (depleted).

Emotion regulation
The ability to recognize emotional experiences and respond to situations by engaging in strategies to manage emotions as necessary.

Exposure treatment
A technique used in behavior therapy that involves a patient repeatedly confronting a feared situation, without danger, to reduce anxiety.

Fear of negative evaluation
The preoccupation with and dread of the possibility of being judged negatively by others.

Fear of positive evaluation
The dread associated with favorable public evaluation or acknowledgment of success, particularly when it involves social comparison.

Pharmacotherapy
A treatment approach that involves using medications to alter a person’s neural functioning to reduce psychological symptoms.

Safety behaviors
Actions people take to reduce the likelihood of embarrassment or to minimize anxiety in a situation (e.g., not making eye contact, planning what to say).

Selective serotonin re-uptake inhibitors (SSRIs)
A class of antidepressant medications often used to treat SAD that increase the concentration of the neurotransmitter serotonin in the brain.
Serotonin norepinephrine reuptake inhibitors (SNRIs)
A class of antidepressant medications often used to treat SAD that increase the concentration of serotonin and norepinephrine in the brain.

Social anxiety
Excessive anticipation and distress about social situations in which one may be evaluated negatively, rejected, or scrutinized.

Social anxiety disorder (SAD)
An anxiety disorder marked by severe and persistent social anxiety and avoidance that interferes with a person’s ability to fulfill their roles in important life domains.
By Dalena van Heugten - van der Kloet Maastricht University

In psychopathology, dissociation happens when thoughts, feelings, and experiences of our consciousness and memory do not collaborate well with each other. This module provides an overview of the dissociative disorders, including the definitions of dissociation, its origins and competing theories, and its relation to traumatic experiences and sleep problems.

learning objectives

• Define the basic terminology and historical origins of dissociative symptoms and dissociative disorders.
• Describe the posttraumatic model of dissociation and the sleep-dissociation model, and the controversies and debate between these competing theories.
• Explain the innovative angle of the sleep-dissociation model.
• Describe how the two models can be combined into one conceptual scheme.

Introduction

Think about the last time you were daydreaming. Perhaps it was while you were driving or attending class. Some portion of your attention was on the activity at hand, but most of your conscious mind was wrapped up in fantasy. Now imagine that you could not control your daydreams. What if they intruded on your waking consciousness unannounced, causing you to lose track of reality or to experience the loss of time? Imagine how difficult it would be for you. This is similar to what people who suffer from dissociative disorders may experience. Of the many disorders listed in the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) (American Psychiatric Association, 2013), dissociative disorders rank among the most puzzling and controversial. Dissociative disorders encompass an array of symptoms ranging from memory loss (amnesia) for autobiographical events, to changes in identity and the experience of everyday reality (American Psychiatric Association, 2013).

Is it real?

Let’s start with a little history. Multiple personality disorder, or dissociative identity disorder—as it is known now—used to be a mere curiosity. This is a disorder in which people present with more than one personality. For example, at times they might act and identify as an adult, while at other times they might identify and behave like a child. The disorder was rarely diagnosed until the 1980s. That’s when multiple personality disorder became an official diagnosis in the DSM-III. From then on, the number of “multiples” increased rapidly. In the 1990s, there were hundreds of people diagnosed with multiple personality in every major city in the United States (Hacking, 1995). How could this “epidemic” be explained? One possible explanation might be the media attention that was given to the disorder. It all started with the book The Three Faces of Eve (Thigpen & Cleckley, 1957). This book, and later the movie, was one of the first to speak of multiple personality disorder. However, it wasn’t until years later, when the fictional “as told to” book Sybil (Schreiber, 1973) became known worldwide, that the prototype of what it was like to be a “multiple personality” was born. Sybil tells the story of how a clinician—Cornelia Wilbur—unravels the different personalities of her patient Sybil during a long course of treatment (over 2,500 office hours!). She was one of the first to relate multiple personality to childhood sexual abuse. This relation between childhood abuse and dissociation probably fueled the increase in the number of multiples from that time on. It motivated therapists to actively seek clues of childhood abuse in their dissociative patients.
This fit well with the mindset of the 1980s, as childhood abuse was a sensitive issue then in psychology as well as in politics (Hacking, 1995). From then on, many movies and books were made on the subject of multiple personality, and nowadays, we see patients with dissociative identity disorder as guests visiting the Oprah Winfrey show, as if they were our modern-day circus acts.

Defining dissociation

The DSM-5 defines dissociation as “a disruption and/or discontinuity in the normal integration of consciousness, memory, identity, emotion, perception, body representation, motor control and behavior” (American Psychiatric Association, 2013, p. 291). A distinction is often made between dissociative states and dissociative traits (e.g., Bremner, 2010; Bremner & Brett, 1997). State dissociation is viewed as a transient symptom, which lasts for a few minutes or hours (e.g., dissociation during a traumatic event). Trait dissociation is viewed as an integral aspect of personality. Dissociative symptoms occur not only in patients but also in the general population, like you and me. Therefore, dissociation has commonly been conceptualized as lying on a continuum, ranging from nonsevere manifestations of daydreaming to the more severe disturbances typical of dissociative disorders (Bernstein & Putnam, 1986). The dissociative disorders include:

1. Dissociative Amnesia (extensive forgetting typically associated with highly aversive events);
2. Dissociative Fugue (short-lived, reversible amnesia for personal identity, involving unplanned travel or “bewildered wandering.” Dissociative fugue is not viewed as a separate disorder but is a feature of some, but not all, cases of dissociative amnesia);
3. Depersonalization/Derealization Disorder (feeling as though one is an outside observer of one’s body); and
4. Dissociative Identity Disorder (DID; experiencing two or more distinct identities that recurrently take control over one’s behavior) (American Psychiatric Association, 2000).

Although the concept of dissociation lacks a generally accepted definition, the Structured Clinical Interview for DSM-IV Dissociative Disorders (SCID-D) (Steinberg, 2001) assesses five symptom clusters that encompass key features of the dissociative disorders. These clusters are also found in the DSM-5:

1. depersonalization,
2. derealization,
3. dissociative amnesia,
4. identity confusion, and
5. identity alteration.

Depersonalization refers to a “feeling of detachment or estrangement from one’s self.” Imagine that you are outside of your own body, looking at yourself from a distance as though you were looking at somebody else. Maybe you can also imagine what it would be like if you felt like a robot, deprived of all feelings. These are examples of depersonalization. Derealization is defined as “an alteration in the perception of one’s surroundings so that a sense of reality of the external world is lost” (Steinberg, 2001, p. 101). Imagine that the world around you seems as if you are living in a movie, or looking through a fog. These are examples of derealization. Dissociative amnesia does not refer to permanent memory loss, similar to the erasure of a computer disk, but rather to the hypothetical disconnection of memories from conscious inspection (Steinberg, 2001). Thus, the memory is still there somewhere, but you cannot reach it. Identity confusion is defined by Steinberg as “… thoughts and feelings of uncertainty and conflict a person has related to his or her identity” (Steinberg, 2001, p. 101),
whereas identity alteration describes the behavioral acting out of this uncertainty and conflict (Bernstein & Putnam, 1986). Dissociative disorders are not as uncommon as you would expect. Several studies in a variety of patient groups show that dissociative disorders are prevalent in a 4%–29% range (Ross, Anderson, Fleischer, & Norton, 1991; Sar, Tutkun, Alyanak, Bakim, & Baral, 2000; Tutkun et al., 1998; for reviews, see Foote, Smolin, Kaplan, Legatt, & Lipschitz, 2006; Spiegel et al., 2011). Studies generally find a much lower prevalence in the general population, with rates in the order of 1%–3% (Lee, Kwok, Hunter, Richards, & David, 2010; Rauschenberger & Lynn, 1995; Sandberg & Lynn, 1992). Importantly, dissociative symptoms are not limited to the dissociative disorders. Certain diagnostic groups, notably patients with borderline personality disorder, posttraumatic stress disorder (PTSD), obsessive-compulsive disorder (Rufer, Fricke, Held, Cremer, & Hand, 2006), and schizophrenia (Allen & Coyne, 1995; Merckelbach, à Campo, Hardy, & Giesbrecht, 2005; Yu et al., 2010), also display heightened levels of dissociation.

Measuring dissociation

The Dissociative Experiences Scale (DES) (Bernstein & Putnam, 1986; Carlson & Putnam, 2000; Wright & Loftus, 1999) is the most widely used self-report measure of dissociation. A self-report measure is a type of psychological test in which a person completes a survey or questionnaire with or without the help of an investigator. This scale measures dissociation with items such as (a) “Some people sometimes have the experience of feeling as though they are standing next to themselves or watching themselves do something, and they actually see themselves as if they were looking at another person” and (b) “Some people find that sometimes they are listening to someone talk, and they suddenly realize that they did not hear part or all of what was said.” The DES is suitable only as a screening tool. When somebody scores a high level of dissociation on this scale, this does not necessarily mean that he or she is suffering from a dissociative disorder. It does, however, give an indication to investigate the symptoms more extensively. This is usually done with a structured clinical interview, called the Structured Clinical Interview for DSM-IV Dissociative Disorders (Steinberg, 1994), which is performed by an experienced clinician. With the publication of the new DSM-5, there has been an updated version of this instrument.

Dissociation and Trauma

The most widely held perspective on dissociative symptoms is that they reflect a defensive response to highly aversive events, mostly trauma experiences during the childhood years (Bremner, 2010; Spiegel et al., 2011; Spitzer, Vogel, Barnow, Freyberger, & Grabe, 2007). One prominent interpretation of the origins of dissociative disorders is that they are the direct result of exposure to traumatic experiences. We will refer to this interpretation as the posttraumatic model (PTM). According to the PTM, dissociative symptoms can best be understood as mental strategies to cope with or avoid the impact of highly aversive experiences (e.g., Spiegel et al., 2011). In this view, individuals rely on dissociation to escape from painful memories (Gershuny & Thayer, 1999). Once they have learned to use this defensive coping mechanism, it can become automatized and habitual, even emerging in response to minor stressors (Van der Hart & Horst, 1989).
The idea that dissociation can serve a defensive function can be traced back to Pierre Janet (1899/1973), one of the first scholars to link dissociation to psychological trauma (Hacking, 1995). The PTM casts the clinical observation that dissociative disorders are linked to a trauma history in straightforward causal terms, that is, one causes the other (Gershuny & Thayer, 1999). For example, Vermetten and colleagues (Vermetten, Schmahl, Lindner, Loewenstein, & Bremner, 2006) found that the DID patients in their study all suffered from posttraumatic stress disorder and concluded that DID should be conceptualized as an extreme form of early-abuse–related posttraumatic stress disorder (Vermetten et al., 2006).

Causality and evidence

The empirical evidence that trauma leads to dissociative symptoms is the subject of intense debate (Bremner, 2010; Giesbrecht, Lynn, Lilienfeld, & Merckelbach, 2010; Kihlstrom, 2005). Three limitations of the PTM will be described below.

First, the majority of studies reporting links between self-reported trauma and dissociation are based on cross-sectional designs. This means that the data are collected at one point in time. When analyzing this type of data, one can only state whether scoring high on a particular questionnaire (for example, a trauma questionnaire) is indicative of also scoring high on another questionnaire (for example, the DES). This makes it difficult to state whether one thing led to another, and therefore whether the relation between the two is causal. Thus, the data that these designs yield do not allow for strong causal claims (Merckelbach & Muris, 2002).

Second, whether somebody has experienced a trauma is often established using a questionnaire that the person completes himself or herself. This is called a self-report measure. Herein lies the problem. Individuals suffering from dissociative symptoms typically have high fantasy proneness, a personality trait involving the tendency to engage in extensive and vivid fantasizing. The tendency to fantasize a lot may increase the risk of exaggerating or understating self-reports of traumatic experiences (Merckelbach et al., 2005; Giesbrecht, Lynn, Lilienfeld, & Merckelbach, 2008).

Third, high dissociative individuals report more cognitive failures than low dissociative individuals. Cognitive failures are everyday slips and lapses, such as failing to notice signposts on the road, forgetting appointments, or bumping into people. This can be seen, in part, in the DSM-5 criteria for DID, in which people may have difficulty recalling everyday events as well as those that are traumatic. People who frequently make such slips and lapses often mistrust their own cognitive capacities. They also tend to overvalue the hints and cues provided by others (Merckelbach, Horselenberg, & Schmidt, 2002; Merckelbach, Muris, Rassin, & Horselenberg, 2000). This makes them vulnerable to suggestive information, which may distort self-reports, and thus limits the conclusions that can be drawn from studies that rely solely on self-reports to investigate the trauma-dissociation link (Merckelbach & Jelicic, 2004).

Most important, however, is that the PTM does not tell us how trauma produces dissociative symptoms. Therefore, researchers in the field have searched for other explanations. They proposed that, due to their dreamlike character, dissociative symptoms such as derealization, depersonalization, and absorption are associated with sleep-related experiences.
They further noted that sleep-related experiences can explain the relation between highly aversive events and dissociative symptoms (Giesbrecht et al., 2008; Watson, 2001). In the following paragraphs, the relation between dissociation and sleep will be discussed.

Dissociation and Sleep

A little history

Researchers (Watson, 2001) have proposed that dissociative symptoms, such as absorption, derealization, and depersonalization, originate in sleep. This idea is not entirely new. In the 19th century, double consciousness (or dédoublement), the historical precursor of dissociative identity disorder (DID; formerly known as multiple personality disorder), was often described as “somnambulism,” which refers to a state of sleepwalking. Patients suffering from this disorder were referred to as “somnambules” (Hacking, 1995). Many 19th-century scholars believed that these patients were switching between a “normal state” and a “somnambulistic state.” Hughlings Jackson, a well-known English neurologist from this era, viewed dissociation as the uncoupling of normal consciousness, which would result in what he termed “the dreamy state” (Meares, 1999). Interestingly, a century later, Levitan (1967) hypothesized that “depersonalization is a compromise state between dreaming and waking” (p. 157). Arlow (1966) observed that the dissociation between the “experiencing self” and the “observing self” serves as the basis of depersonalized states, emphasizing its occurrence especially in dreams. Likewise, Franklin (1990) considered dreamlike thoughts, the amnesia one usually has for dreams, and the lack of orientation to time, place, and person during dreams to be strikingly similar to the amnesia DID patients often report for their traumas. Relatedly, Barrett (1994, 1995) described the similarity between dream characters and “alter personalities” in DID, with respect to cognitive and sensory abilities, movement, amnesia, and continuity with normal waking. The many similarities between dreaming states and dissociative symptoms are also a recurrent theme in the more recent clinical literature (e.g., Bob, 2004).

Sleep problems in patients with dissociative disorders

Anecdotal evidence supports the idea that sleep disruptions are linked to dissociation. For example, in patients with depersonalization, symptoms are worst when they are tired (Simeon & Abugel, 2006). Interestingly, among participants who report memories of childhood sexual abuse, experiences of sleep paralysis typically are accompanied by raised levels of dissociative symptoms (McNally & Clancy, 2005; Abrams, Mulligan, Carleton, & Asmundson, 2008). Patients with mood disorders, anxiety disorders, schizophrenia, and borderline personality disorder—conditions with relatively high levels of dissociative symptoms—as a rule exhibit sleep abnormalities. Recent research points to fairly specific relationships between certain sleep complaints (e.g., insomnia, nightmares) and certain forms of psychopathology (e.g., depression, posttraumatic stress disorder) (Koffel & Watson, 2009).

Studying the relationship between dissociation and sleep

In the general population, both dissociative symptoms and sleep problems are highly prevalent. For example, 29 percent of American adults report sleep problems (National Sleep Foundation, 2005). This allows researchers to study the relationship between dissociation and sleep not only in patients but also in the general population.
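Because both dissociative symptoms and unusual sleep experiences can be measured with questionnaires in the general population, the basic analysis behind many of the studies reviewed below is a simple cross-sectional correlation. The sketch below simulates such a study; the sample size, scale names, and effect size are illustrative assumptions, not data from any of the cited papers, and the DES is summarized here only as a mean score on a 0–100 scale.

```python
# A minimal sketch of a cross-sectional dissociation-sleep analysis.
# All numbers are simulated for illustration; they are not real data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(seed=0)
n = 300  # hypothetical general-population sample

# Hypothetical "unusual sleep experiences" scale and DES mean scores
# (the DES averages 28 items that each range from 0 to 100).
sleep_experiences = rng.normal(loc=50, scale=15, size=n)
des_scores = np.clip(0.4 * sleep_experiences + rng.normal(0, 12, n), 0, 100)

r, p = pearsonr(sleep_experiences, des_scores)
print(f"r = {r:.2f}, p = {p:.3g}")
# Note: a significant r only shows that the two measures covary at one
# point in time; as discussed earlier, cross-sectional data like these
# cannot establish a causal direction.
```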
In a pioneering study, Watson (2001) showed that dissociative symptoms—measured by the DES—are linked to self-reports of vivid dreams, nightmares, recurrent dreams, and other unusual sleep phenomena. This relationship has been studied extensively ever since, leading to three important conclusions.

First, Watson’s (2001) basic findings have been reproduced time and again. This means that the same results (namely, that dissociation and sleep problems are related) have been found in many different studies, using different samples and different materials. All lead to the conclusion that unusual sleep experiences and dissociative symptoms are linked.

Second, the connection between sleep and dissociation is specific. It seems that unusual sleep phenomena that are difficult to control, including nightmares and waking dreams, are related to dissociative symptoms, whereas lucid dreaming—dreams that are controllable—is only weakly related to dissociative symptoms. For example, dream recall frequency was related to dissociation (Suszek & Kopera, 2005). Individuals who reported three or more nightmares over a three-week period showed higher levels of dissociation compared to individuals reporting two nightmares or fewer (Levin & Fireman, 2002), and a relation was found between dream intensity and dissociation (Yu et al., 2010).

Third, the sleep-dissociation link is apparent not only in general population groups—people such as you and me—but also in patient groups. For example, one group of researchers reported nightmare disorder in 17 out of 30 DID patients (Agargun et al., 2003). They also found a 27.5% prevalence of nocturnal dissociative episodes in patients with dissociative disorders (Agargun et al., 2001). Another study investigated a group of borderline personality disorder patients and found that 49% of them suffered from nightmare disorder. Moreover, the patients with nightmare disorder displayed higher levels of dissociation than patients not suffering from nightmare disorder (Semiz, Basoglu, Ebrinc, & Cetin, 2008). Additionally, Ross (2011) found that patients suffering from DID reported higher rates of sleepwalking compared to a group of psychiatric outpatients and a sample from the general population.

To sum up, there seems to be a strong relationship between dissociative symptoms and unusual sleep experiences that is evident in a range of phenomena, including waking dreams, nightmares, and sleepwalking.

Inducing and reducing sleep problems

Sleep problems can be induced in healthy participants by keeping them awake for an extended period of time. This is called sleep deprivation. If dissociative symptoms are fueled by a labile sleep-wake cycle, then sleep loss would be expected to intensify dissociative symptoms. Early evidence came in 2001, when soldiers who underwent U.S. Army survival training that included sleep deprivation showed increases in dissociative symptoms (Morgan et al., 2001). Other researchers conducted a study that tracked 25 healthy volunteers during one day and one night of sleep loss. They found that dissociative symptoms increased substantially after one night of sleep loss (Giesbrecht, Smeets, Leppink, Jelicic, & Merckelbach, 2007). To further examine the causal link between dissociative experiences and sleep, we (van der Kloet, Giesbrecht, Lynn, Merckelbach, & de Zutter, 2011) investigated the relationship between unusual sleep experiences and dissociation in a patient group at a private clinic.
The patients completed questionnaires upon arrival at the clinic and again when they departed eight weeks later. During their stay, they followed a strict program designed to improve sleep problems. And it worked! In most patients, sleep quality had improved after eight weeks. We found a robust link between sleep experiences and dissociative symptoms and determined that sleep normalization was accompanied by a reduction in dissociative symptoms. An exciting interpretation of the link between dissociative symptoms and unusual sleep phenomena (see also Watson, 2001) may be this: A disturbed sleep–wake cycle may lead to dissociative symptoms. However, we should be cautious. Although studies support a causal arrow leading from sleep disruption to dissociative symptoms, the associations between sleep and dissociation may be more complex. For example, causal links may be bi-directional, such that dissociative symptoms may lead to sleep problems and vice versa, and other psychopathology may influence the link between sleep and dissociative symptoms (van der Kloet et al., 2011).

Implications and Conclusions

The sleep-dissociation model offers a fresh and exciting perspective on dissociative symptoms. This model may seem remote from the PTM. However, both models can be integrated into a single conceptual scheme in which traumatic childhood experiences may lead to disturbed sleep patterns, which may be the final common pathway to dissociative symptoms. Accordingly, the sleep-dissociation model may explain both: (a) how traumatic experiences disrupt the sleep–wake cycle and increase vulnerability to dissociative symptoms, and (b) why dissociation, trauma, fantasy proneness, and cognitive failures overlap. Future studies can also discern which characteristic disruptions of the sleep–wake cycle are most reliably related to dissociative disorders, and then establish treatment programs, including medication regimens, to address these problems. This would constitute an entirely novel and exciting approach to the treatment of dissociative symptoms. In closing, the sleep-dissociation model can serve as a framework for studies that address a wide range of fascinating questions about dissociative symptoms and disorders. We now have good reason to be confident that research on sleep and dissociative symptoms will inform psychiatry, clinical science, and psychotherapeutic practice in meaningful ways in the years to come.

Outside Resources

Article: Extreme Dissociative Fugue: A Life, Interrupted - A recent case of extreme dissociative fugue. The article is particularly powerful as it relates the story of a seemingly typical person, a young teacher, who suddenly experiences a dissociative fugue. http://www.nytimes.com/2009/03/01/nyregion/thecity/01miss.html?_r=0
Book: Schreiber, F. R. (1973). Sybil. Chicago: Regnery.
Film: Debate Persists Over Diagnosing Mental Health Disorders, Long After ‘Sybil’ - This short film would be useful to provide students with perspectives on the debate over diagnoses. It could be used to introduce the debate and provide students with evidence to argue for or against the diagnosis. http://www.nytimes.com/2014/11/24/us/debate-persists-over-diagnosing-mental-health-disorders-long-after-sybil.html
Structured Clinical Interview for DSM-5 (SCID-5): https://www.appi.org/products/structured-clinical-interview-for-dsm-5-scid-5
Video: Patient Switching on Command and in Brain Scanner - This eight-minute video depicts the controversy regarding the existence of DID and relates some of the debate between clinicians and researchers on the topics of brain imaging, recovered memories, and false memories.

Discussion Questions
1. Why are dissociation and trauma related to each other?
2. How is dissociation related to sleep problems?
3. Are dissociative symptoms induced or merely increased by sleep disturbances?
4. Do you have any ideas regarding treatment possibilities for dissociative disorders?
5. Does DID really exist?

Vocabulary
Amnesia The loss of memory.
Anxiety disorder A group of diagnoses in the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV-TR) classification system in which anxiety is central to the person’s dysfunction. Typical symptoms include excessive rumination, worrying, uneasiness, apprehension, and fear about future uncertainties, based on either real or imagined events. These symptoms may affect both physical and psychological health. The anxiety disorders are subdivided into panic disorder, specific phobia, social phobia, posttraumatic stress disorder, obsessive-compulsive disorder, and generalized anxiety disorder.
Borderline Personality Disorder This personality disorder is defined by a chronic pattern of instability. This instability manifests itself in interpersonal relationships, mood, self-image, and behavior, and can interfere with social functioning or work. It may also cause grave emotional distress.
Cognitive failures Everyday slips and lapses, also called absentmindedness.
Consciousness The quality or state of being aware of an external object or something within oneself. It has been defined as the ability to experience or to feel, wakefulness, having a sense of selfhood, and the executive control system of the mind.
Cross-sectional design Research method that involves observation of all of a population, or a representative subset, at one specific point in time.
Defensive coping mechanism An unconscious process that protects an individual from unacceptable or painful ideas, impulses, or memories.
DES Dissociative Experiences Scale.
DID Dissociative identity disorder, formerly known as multiple personality disorder, is at the far end of the dissociative disorder spectrum. It is characterized by at least two distinct, and dissociated, personality states. These personality states, or ‘alters,’ alternately control a person’s behavior. The sufferer therefore experiences significant memory impairment for important information that is not explained by ordinary forgetfulness.
Dissociation A disruption in the usually integrated function of consciousness, memory, identity, or perception of the environment.
Fantasy proneness The tendency toward extensive fantasizing or daydreaming.
General population A sample of people representative of the average individual in our society.
Insomnia A sleep disorder in which there is an inability to fall asleep or to stay asleep as long as desired. Symptoms also include waking up too early, experiencing many awakenings during the night, and not feeling rested during the day.
Lucid dreams Any dream in which one is aware that one is dreaming.
Mood disorder A group of diagnoses in the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV-TR) classification system in which a disturbance in the person’s mood is the primary dysfunction. Mood disorders include major depressive disorder, bipolar disorder, dysthymic disorder, and cyclothymic disorder.
Nightmares An unpleasant dream that can cause a strong negative emotional response from the mind, typically fear or horror, but also despair, anxiety, and great sadness. The dream may contain situations of danger, discomfort, or psychological or physical terror. Sufferers usually awaken in a state of distress and may be unable to return to sleep for a prolonged period of time.
Obsessive-Compulsive Disorder This anxiety disorder is characterized by intrusive thoughts (obsessions), by repetitive behaviors (compulsions), or both. Obsessions produce uneasiness, fear, or worry. Compulsions are then aimed at reducing the associated anxiety. Examples of compulsive behaviors include excessive washing or cleaning; repeated checking; extreme hoarding; and nervous rituals, such as switching the light on and off a certain number of times when entering a room. Intrusive thoughts are often sexual, violent, or religious in nature.
Prevalence The number of cases of a specific disorder present in a given population at a certain time.
PTM Posttraumatic model of dissociation.
Recurrent dreams The same dream narrative or dreamscape is experienced over different occasions of sleep.
Schizophrenia This mental disorder is characterized by a breakdown of thought processes and emotional responses. Symptoms include auditory hallucinations, paranoid or bizarre delusions, and disorganized speech and thinking. Sufferers of this disorder experience grave dysfunction in their social functioning and in work.
SCID-D Structured Clinical Interview for DSM-IV Dissociative Disorders.
Self-report measure A type of psychological test in which a person fills out a survey or questionnaire with or without the help of an investigator.
Sleep deprivation A sufficient lack of restorative sleep over a cumulative period so as to cause physical or psychiatric symptoms and affect routine performance of tasks.
Sleep paralysis Sleep paralysis occurs when the normal paralysis during REM sleep manifests when falling asleep or awakening, often accompanied by hallucinations of danger or a malevolent presence in the room.
Sleep-wake cycle A daily rhythmic activity cycle, based on 24-hour intervals, that is exhibited by many organisms.
State When a symptom is acute, or transient, lasting from a few minutes to a few hours.
Trait When a symptom forms part of the personality or character.
Trauma An event or situation that causes great distress and disruption, and that creates substantial, lasting damage to the psychological development of a person.
Vivid dreams A dream that is very clear, in which the individual can remember the dream in great detail.
By Anda Gershon and Renee Thompson, Stanford University and Washington University in St. Louis

Everyone feels down or euphoric from time to time, but this is different from having a mood disorder such as major depressive disorder or bipolar disorder. Mood disorders are extended periods of depressed, euphoric, or irritable moods that, in combination with other symptoms, cause the person significant distress and interfere with his or her daily life, often resulting in social and occupational difficulties. In this module, we describe major mood disorders, including their symptom presentations, general prevalence rates, and how and why the rates of these disorders tend to vary by age, gender, and race. In addition, biological and environmental risk factors that have been implicated in the development and course of mood disorders, such as heritability and stressful life events, are reviewed. Finally, we provide an overview of treatments for mood disorders, covering treatments with demonstrated effectiveness, as well as new treatment options showing promise.

learning objectives
• Describe the diagnostic criteria for mood disorders.
• Understand age, gender, and ethnic differences in prevalence rates of mood disorders.
• Identify common risk factors for mood disorders.
• Know effective treatments of mood disorders.

The actress Brooke Shields published a memoir titled Down Came the Rain: My Journey through Postpartum Depression in which she described her struggles with depression following the birth of her daughter. Despite the fact that about one in 20 women experience depression after the birth of a baby (American Psychiatric Association [APA], 2013), postpartum depression—recently renamed “perinatal depression”—continues to be veiled by stigma, owing in part to a widely held expectation that motherhood should be a time of great joy. In an opinion piece in the New York Times, Shields revealed that entering motherhood was a profoundly overwhelming experience for her. She vividly describes experiencing a sense of “doom” and “dread” in response to her newborn baby. Because motherhood is conventionally thought of as a joyous event and not associated with sadness and hopelessness, responding to a newborn baby in this way can be shocking to the new mother as well as those close to her. It may also involve a great deal of shame for the mother, making her reluctant to divulge her experience to others, including her doctors and family. Feelings of shame are not unique to perinatal depression. Stigma applies to other types of depressive and bipolar disorders and contributes to people not always receiving the necessary support and treatment for these disorders. In fact, the World Health Organization ranks both major depressive disorder (MDD) and bipolar disorder (BD) among the top 10 leading causes of disability worldwide. Further, MDD and BD carry a high risk of suicide. It is estimated that 25%–50% of people diagnosed with BD will attempt suicide at least once in their lifetimes (Goodwin & Jamison, 2007).

What Are Mood Disorders?

Mood Episodes

Everyone experiences brief periods of sadness, irritability, or euphoria. This is different from having a mood disorder, such as MDD or BD, which is characterized by a constellation of symptoms that causes people significant distress or impairs their everyday functioning.
Major Depressive Episode

A major depressive episode (MDE) refers to symptoms that co-occur for at least two weeks and cause significant distress or impairment in functioning, such as interfering with work, school, or relationships. Core symptoms include feeling down or depressed or experiencing anhedonia—loss of interest or pleasure in things that one typically enjoys. According to the fifth edition of the Diagnostic and Statistical Manual (DSM-5; APA, 2013), the criteria for an MDE require five or more of the following nine symptoms, including one or both of the first two symptoms, for most of the day, nearly every day:

1. depressed mood
2. diminished interest or pleasure in almost all activities
3. significant weight loss or gain or an increase or decrease in appetite
4. insomnia or hypersomnia
5. psychomotor agitation or retardation
6. fatigue or loss of energy
7. feeling worthless or excessive or inappropriate guilt
8. diminished ability to concentrate or indecisiveness
9. recurrent thoughts of death, suicidal ideation, or a suicide attempt

These symptoms cannot be caused by the physiological effects of a substance or a general medical condition (e.g., hypothyroidism).

Manic or Hypomanic Episode

The core criterion for a manic or hypomanic episode is a distinct period of abnormally and persistently euphoric, expansive, or irritable mood and persistently increased goal-directed activity or energy. The mood disturbance must be present for one week or longer in mania (unless hospitalization is required) or four days or longer in hypomania. Concurrently, at least three of the following symptoms must be present in the context of euphoric mood (or at least four in the context of irritable mood):

1. inflated self-esteem or grandiosity
2. increased goal-directed activity or psychomotor agitation
3. reduced need for sleep
4. racing thoughts or flight of ideas
5. distractibility
6. increased talkativeness
7. excessive involvement in risky behaviors

Manic episodes are distinguished from hypomanic episodes by their duration and associated impairment; whereas manic episodes must last at least one week and are defined by a significant impairment in functioning, hypomanic episodes are shorter and not necessarily accompanied by impairment in functioning.

Mood Disorders

Unipolar Mood Disorders

Two major types of unipolar disorders described by the DSM-5 (APA, 2013) are major depressive disorder and persistent depressive disorder (PDD; dysthymia). MDD is defined by one or more MDEs but no history of manic or hypomanic episodes. The criteria for PDD are feeling depressed most of the day, for more days than not, for at least two years. At least two of the following symptoms are also required to meet criteria for PDD:

1. poor appetite or overeating
2. insomnia or hypersomnia
3. low energy or fatigue
4. low self-esteem
5. poor concentration or difficulty making decisions
6. feelings of hopelessness

As with MDD, these symptoms must cause significant distress or impairment and cannot be due to the effects of a substance or a general medical condition. To meet criteria for PDD, a person cannot be without symptoms for more than two months at a time. PDD has overlapping symptoms with MDD. If someone meets criteria for an MDE during a PDD episode, the person will receive diagnoses of both PDD and MDD.

Bipolar Mood Disorders

Three major types of BDs are described by the DSM-5 (APA, 2013). Bipolar I Disorder (BD I), which was previously known as manic depression, is characterized by a single (or recurrent) manic episode.
A depressive episode is not necessary for a diagnosis of BD I, but it is commonly present. Bipolar II Disorder is characterized by single (or recurrent) hypomanic episodes and depressive episodes. Another type of BD is cyclothymic disorder, characterized by numerous and alternating periods of hypomania and depression, lasting at least two years. To qualify for cyclothymic disorder, the periods of depression cannot meet full diagnostic criteria for an MDE; the person must experience symptoms at least half the time, with no more than two consecutive symptom-free months; and the symptoms must cause significant distress or impairment. It is important to note that the DSM-5 was published in 2013, and findings based on the updated manual will be forthcoming. Consequently, the research presented below was largely based on a similar, but not identical, conceptualization of mood disorders drawn from the DSM-IV (APA, 2000).

How Common Are Mood Disorders? Who Develops Mood Disorders?

Depressive Disorders

In a nationally representative sample, the lifetime prevalence rate for MDD was 16.6% (Kessler, Berglund, Demler, Jin, Merikangas, & Walters, 2005). This means that roughly one in six Americans will meet the criteria for MDD during their lifetime. The 12-month prevalence—the proportion of people who meet criteria for a disorder during a 12-month period—for PDD is approximately 0.5% (APA, 2013). Although the onset of MDD can occur at any time throughout the lifespan, the average age of onset is the mid-20s, with the age of onset decreasing among people born more recently (APA, 2000). The prevalence of MDD among older adults is much lower than it is for younger cohorts (Kessler, Birnbaum, Bromet, Hwang, Sampson, & Shahly, 2010). The duration of MDEs varies widely. Recovery begins within three months for 40% of people with MDD and within 12 months for 80% (APA, 2013). MDD tends to be a recurrent disorder, with about 40%–50% of those who experience one MDE experiencing a second MDE (Monroe & Harkness, 2011). An earlier age of onset predicts a worse course. About 5%–10% of people who experience an MDE will later experience a manic episode (APA, 2000), thus no longer meeting criteria for MDD but instead meeting criteria for BD I. Diagnoses of other disorders across the lifetime are common for people with MDD: 59% experience an anxiety disorder, 32% experience an impulse control disorder, and 24% experience a substance use disorder (Kessler, Merikangas, & Wang, 2007). Women experience rates of MDD two to three times higher than do men (Nolen-Hoeksema & Hilt, 2009). This gender difference emerges during puberty (Conley & Rudolph, 2009). Before puberty, boys exhibit similar or higher prevalence rates of MDD than do girls (Twenge & Nolen-Hoeksema, 2002). MDD is inversely correlated with socioeconomic status (SES), a person’s economic and social position based on income, education, and occupation. Higher prevalence rates of MDD are associated with lower SES (Lorant, Deliege, Eaton, Robert, Philippot, & Ansseau, 2003), particularly for adults over 65 years old (Kessler et al., 2010). Independent of SES, results from a nationally representative sample found that European Americans had a higher prevalence rate of MDD than did African Americans and Hispanic Americans, whose rates were similar (Breslau, Aguilar-Gaxiola, Kendler, Su, Williams, & Kessler, 2006). The course of MDD for African Americans is often more severe and less often treated than it is for European Americans, however (Williams et al., 2007).
Native Americans have a higher prevalence rate than do European Americans, African Americans, or Hispanic Americans (Hasin, Goodwin, Stinson, & Grant, 2005). Depression is not limited to industrialized or western cultures; it is found in all countries that have been examined, although the symptom presentation as well as prevalence rates vary across cultures (Chentsova-Dutton & Tsai, 2009).

Bipolar Disorders

The lifetime prevalence rate of bipolar spectrum disorders in the general U.S. population is estimated at approximately 4.4%, with BD I constituting about 1% of this rate (Merikangas et al., 2007). Prevalence estimates, however, are highly dependent on the diagnostic procedures used (e.g., interviews vs. self-report) and whether or not sub-threshold forms of the disorder are included in the estimate. BD often co-occurs with other psychiatric disorders. Approximately 65% of people with BD meet diagnostic criteria for at least one additional psychiatric disorder, most commonly anxiety disorders and substance use disorders (McElroy et al., 2001). The co-occurrence of BD with other psychiatric disorders is associated with a poorer illness course, including higher rates of suicidality (Leverich et al., 2003). A recent cross-national study of more than 60,000 adults from 11 countries estimated the worldwide prevalence of BD at 2.4%, with BD I constituting 0.6% of this rate (Merikangas et al., 2011). In this study, the prevalence of BD varied somewhat by country. Whereas the United States had the highest lifetime prevalence (4.4%), India had the lowest (0.1%). Variation in prevalence rates was not necessarily related to SES, as in the case of Japan, a high-income country with a very low prevalence rate of BD (0.7%). With regard to ethnicity, data from studies not confounded by SES or inaccuracies in diagnosis are limited, but available reports suggest that rates of BD among European Americans are similar to those found among African Americans (Blazer et al., 1985) and Hispanic Americans (Breslau, Kendler, Su, Gaxiola-Aguilar, & Kessler, 2005). Another large community-based study found that although prevalence rates of mood disorders were similar across ethnic groups, Hispanic Americans and African Americans with a mood disorder were more likely to remain persistently ill than European Americans (Breslau et al., 2005). Compared with European Americans with BD, African Americans tend to be underdiagnosed for BD (and overdiagnosed for schizophrenia) (Kilbourne, Haas, Mulsant, Bauer, & Pincus, 2004; Minsky, Vega, Miskimen, Gara, & Escobar, 2003), and Hispanic Americans with BD have been shown to receive fewer psychiatric medication prescriptions and specialty treatment visits (Gonzalez et al., 2007). Misdiagnosis of BD can result in the underutilization of treatment or the utilization of inappropriate treatment, and thus profoundly impact the course of illness. As with MDD, adolescence is known to be a significant risk period for BD; mood symptoms start by adolescence in roughly half of BD cases (Leverich et al., 2007; Perlis et al., 2004). Longitudinal studies show that those diagnosed with BD prior to adulthood experience a more pernicious course of illness relative to those with adult onset, including more episode recurrence, higher rates of suicidality, and profound social, occupational, and economic repercussions (e.g., Lewinsohn, Seeley, Buckley, & Klein, 2002). The prevalence of BD is substantially lower in older adults compared with younger adults (1% vs. 4%) (Merikangas et al., 2007).
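Before turning to etiology, note that the episode definitions given earlier are essentially explicit checklists, which makes them easy to express as a procedure. The sketch below encodes only the MDE symptom-count rule described above (five or more of the nine symptoms, including at least one core symptom); it is a teaching illustration, not a diagnostic instrument, and it deliberately omits the two-week duration, distress/impairment, and substance/medical-exclusion requirements. All names in the code are hypothetical.

```python
# Illustrative sketch of the DSM-5 MDE symptom-count rule (teaching
# example only; duration, impairment, and exclusion criteria are not
# modeled here).
MDE_SYMPTOMS = [
    "depressed mood",                      # core symptom
    "diminished interest or pleasure",     # core symptom (anhedonia)
    "significant weight or appetite change",
    "insomnia or hypersomnia",
    "psychomotor agitation or retardation",
    "fatigue or loss of energy",
    "worthlessness or excessive guilt",
    "diminished concentration or indecisiveness",
    "recurrent thoughts of death or suicidal ideation",
]
CORE_SYMPTOMS = set(MDE_SYMPTOMS[:2])

def meets_mde_symptom_criterion(endorsed):
    """Five or more of the nine symptoms, at least one of them core."""
    endorsed = set(endorsed) & set(MDE_SYMPTOMS)
    return len(endorsed) >= 5 and bool(endorsed & CORE_SYMPTOMS)

# Example: five symptoms, one of which is a core symptom -> True
print(meets_mde_symptom_criterion([
    "depressed mood",
    "insomnia or hypersomnia",
    "fatigue or loss of energy",
    "diminished concentration or indecisiveness",
    "recurrent thoughts of death or suicidal ideation",
]))
```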
What Are Some of the Factors Implicated in the Development and Course of Mood Disorders?

Mood disorders are complex disorders resulting from multiple factors. Causal explanations can be attempted at various levels, including biological and psychosocial levels. Below, several of the key factors that contribute to the onset and course of mood disorders are highlighted.

Depressive Disorders

Family and twin studies have provided support for the idea that genetic factors are implicated in the development of MDD. Twin studies suggest that familial influence on MDD is mostly due to genetic effects and that individual-specific environmental effects (e.g., romantic relationships) play an important role, too. By contrast, the contribution of shared environmental effects by siblings is negligible (Sullivan, Neale, & Kendler, 2000). The mode of inheritance is not fully understood, although no single genetic variation has been found to increase the risk of MDD significantly. Instead, several genetic variants and environmental factors most likely contribute to the risk for MDD (Lohoff, 2010). One environmental stressor that has received much support in relation to MDD is stressful life events. In particular, severe stressful life events, namely those that have long-term consequences and involve loss of a significant relationship (e.g., divorce) or economic stability (e.g., unemployment), are strongly related to depression (Brown & Harris, 1989; Monroe et al., 2009). Stressful life events are more likely to predict the first MDE than subsequent episodes (Lewinsohn, Allen, Seeley, & Gotlib, 1999). In contrast, minor events may play a larger role in subsequent episodes than in the initial episode (Monroe & Harkness, 2005). Depression research has not been limited to examining reactivity to stressful life events. Much research, particularly brain imaging research using functional magnetic resonance imaging (fMRI), has centered on examining neural circuitry—the interconnections that allow multiple brain regions to perceive, generate, and encode information in concert. A meta-analysis of neuroimaging studies showed that, when viewing negative stimuli (e.g., a picture of an angry face, a picture of a car accident), participants with MDD have greater activation in brain regions involved in stress response, and reduced activation in brain regions involved in positively motivated behaviors, compared with healthy control participants (Hamilton, Etkin, Furman, Lemus, Johnson, & Gotlib, 2012). Other environmental factors related to increased risk for MDD include experiencing early adversity (e.g., childhood abuse or neglect; Widom, DuMont, & Czaja, 2007), chronic stress (e.g., poverty), and interpersonal factors. For example, marital dissatisfaction predicts increases in depressive symptoms in both men and women. On the other hand, depressive symptoms also predict increases in marital dissatisfaction (Whisman & Uebelacker, 2009). Research has found that people with MDD generate some of their interpersonal stress (Hammen, 2005). People with MDD whose relatives or spouses can be described as critical and emotionally overinvolved have higher relapse rates than do those living with people who are less critical and emotionally overinvolved (Butzlaff & Hooley, 1998). People’s attributional styles, or their general ways of thinking, interpreting, and recalling information, have also been examined in the etiology of MDD (Gotlib & Joormann, 2010).
People with a pessimistic attributional style tend to make internal (versus external), global (versus specific), and stable (versus unstable) attributions for negative events, which serves as a vulnerability to developing MDD. For example, someone who, when he fails an exam, thinks that it was his fault (internal), that he is stupid (global), and that he will always do poorly (stable) has a pessimistic attributional style. Several influential theories of depression incorporate attributional styles (Abramson, Metalsky, & Alloy, 1989; Abramson, Seligman, & Teasdale, 1978).

Bipolar Disorders

Although there have been important advances in research on the etiology, course, and treatment of BD, there remains a need to understand the mechanisms that contribute to episode onset and relapse. There is compelling evidence for biological causes of BD, which is known to be highly heritable (McGuffin, Rijsdijk, Andrew, Sham, Katz, & Cardno, 2003). It may be argued that a high rate of heritability demonstrates that BD is fundamentally a biological phenomenon. However, there is much variability in the course of BD, both within a person across time and across people (Johnson, 2005). The triggers that determine how and when this genetic vulnerability is expressed are not yet understood; however, there is evidence to suggest that psychosocial triggers may play an important role in BD risk (e.g., Johnson et al., 2008; Malkoff-Schwartz et al., 1998). In addition to the genetic contribution, biological explanations of BD have also focused on brain function. Many of the studies using fMRI techniques to characterize BD have focused on the processing of emotional stimuli, based on the idea that BD is fundamentally a disorder of emotion (APA, 2000). Findings show that regions of the brain thought to be involved in emotional processing and regulation are activated differently in people with BD relative to healthy controls (e.g., Altshuler et al., 2008; Hassel et al., 2008; Lennox, Jacob, Calder, Lupson, & Bullmore, 2004). However, there is little consensus as to whether a particular brain region becomes more or less active in response to an emotional stimulus among people with BD compared with healthy controls. Mixed findings are in part due to samples consisting of participants who are at various phases of illness at the time of testing (manic, depressed, inter-episode). Sample sizes also tend to be relatively small, making comparisons between subgroups difficult. Additionally, the use of a standardized stimulus (e.g., a facial expression of anger) may not elicit a sufficiently strong response. Personally engaging stimuli, such as recalling a memory, may be more effective in inducing strong emotions (Isacowitz, Gershon, Allard, & Johnson, 2013). At the psychosocial level, research has focused on the environmental contributors to BD. A series of studies show that environmental stressors, particularly severe stressors (e.g., loss of a significant relationship), can adversely impact the course of BD. People with BD have a substantially increased risk of relapse (Ellicott, Hammen, Gitlin, Brown, & Jamison, 1990) and suffer more depressive symptoms (Johnson, Winett, Meyer, Greenhouse, & Miller, 1999) following a severe life stressor. Interestingly, positive life events can also adversely impact the course of BD. People with BD suffer more manic symptoms after life events involving the attainment of a desired goal (Johnson et al., 2008). Such findings suggest that people with BD may have a hypersensitivity to rewards.
Evidence from the life stress literature has also suggested that people with mood disorders may have a circadian vulnerability that renders them sensitive to stressors that disrupt their sleep or rhythms. According to social zeitgeber theory (Ehlers, Frank, & Kupfer, 1988; Frank et al., 1994), stressors that disrupt sleep, or that disrupt the daily routines that entrain the biological clock (e.g., meal times), can trigger episode relapse. Consistent with this theory, studies have shown that life events that involve a disruption in sleep and daily routines, such as overnight travel, can increase bipolar symptoms in people with BD (Malkoff-Schwartz et al., 1998).

What Are Some of the Well-Supported Treatments for Mood Disorders?

Depressive Disorders

There are many treatment options available for people with MDD. First, a number of antidepressant medications are available, all of which target one or more of the neurotransmitters implicated in depression. The earliest antidepressant medications were monoamine oxidase inhibitors (MAOIs). MAOIs inhibit monoamine oxidase, an enzyme involved in deactivating dopamine, norepinephrine, and serotonin. Although effective in treating depression, MAOIs can have serious side effects. Patients taking MAOIs may develop dangerously high blood pressure if they take certain drugs (e.g., antihistamines) or eat foods containing tyramine, an amino acid commonly found in foods such as aged cheeses, wine, and soy sauce. Tricyclics, the second-oldest class of antidepressant medications, block the reabsorption of norepinephrine, serotonin, or dopamine at synapses, resulting in their increased availability. Tricyclics are most effective for treating the vegetative and somatic symptoms of depression. Like MAOIs, they have serious side effects, the most concerning of which is cardiotoxicity. Selective serotonin reuptake inhibitors (SSRIs; e.g., fluoxetine) and serotonin and norepinephrine reuptake inhibitors (SNRIs; e.g., duloxetine) are the most recently introduced antidepressant medications. SSRIs, the most commonly prescribed antidepressant medications, block the reabsorption of serotonin, whereas SNRIs block the reabsorption of serotonin and norepinephrine. SSRIs and SNRIs have fewer serious side effects than do MAOIs and tricyclics. In particular, they are less cardiotoxic, less lethal in overdose, and produce fewer cognitive impairments. They are not, however, without their own side effects, which include but are not limited to difficulty having orgasms, gastrointestinal issues, and insomnia. Other biological treatments for people with depression include electroconvulsive therapy (ECT), transcranial magnetic stimulation (TMS), and deep brain stimulation. ECT involves inducing a seizure after a patient takes muscle relaxants and is under general anesthesia. ECT is a viable treatment for patients with severe depression or who show resistance to antidepressants, although the mechanisms through which it works remain unknown. Common side effects are confusion and memory loss, which are usually short-term (Schulze-Rauschenbach, Harms, Schlaepfer, Maier, Falkai, & Wagner, 2005). Repetitive TMS is a noninvasive technique administered while a patient is awake. Brief pulsating magnetic fields are delivered to the cortex, inducing electrical activity.
TMS has fewer side effects than ECT (Schulze-Rauschenbach et al., 2005), and while outcome studies are mixed, there is evidence that TMS is a promising treatment for patients with MDD who have shown resistance to other treatments (Rosa et al., 2006). Most recently, deep brain stimulation is being examined as a treatment option for patients who did not respond to more traditional treatments like those already described. Deep brain stimulation involves implanting an electrode in the brain. The electrode is connected to an implanted neurostimulator, which electrically stimulates that particular brain region. Although there is some evidence of its effectiveness (Mayberg et al., 2005), additional research is needed. Several psychosocial treatments have received strong empirical support, meaning that independent investigations have achieved similarly positive results—a high threshold for examining treatment outcomes. These treatments include but are not limited to behavior therapy, cognitive therapy, and interpersonal therapy. Behavior therapies focus on increasing the frequency and quality of experiences that are pleasant or help the patient achieve mastery. Cognitive therapies primarily focus on helping patients identify and change distorted automatic thoughts and assumptions (e.g., Beck, 1967). Cognitive-behavioral therapies are based on the rationale that thoughts, behaviors, and emotions affect and are affected by each other. Interpersonal Therapy for Depression focuses largely on improving interpersonal relationships by targeting problem areas, specifically unresolved grief, interpersonal role disputes, role transitions, and interpersonal deficits. Finally, there is also some support for the effectiveness of Short-Term Psychodynamic Therapy for Depression (Leichsenring, 2001). This short-term treatment focuses on a limited number of important issues, and the therapist tends to be more actively involved than in more traditional psychodynamic therapy.

Bipolar Disorders

Patients with BD are typically treated with pharmacotherapy. Antidepressants such as SSRIs and SNRIs are the primary choice of treatment for depression, whereas for BD, lithium is the first-line treatment choice. This is because SSRIs and SNRIs have the potential to induce mania or hypomania in patients with BD. Lithium acts on several neurotransmitter systems in the brain through complex mechanisms, including reducing excitatory (dopamine and glutamate) neurotransmission and increasing inhibitory (GABA) neurotransmission (Lenox & Hahn, 2000). Lithium has strong efficacy for the treatment of BD (Geddes, Burgess, Hawton, Jamison, & Goodwin, 2004). However, a number of side effects can make lithium treatment difficult for patients to tolerate. Side effects include impaired cognitive function (Wingo, Wingo, Harvey, & Baldessarini, 2009), as well as physical symptoms such as nausea, tremor, weight gain, and fatigue (Dunner, 2000). Some of these side effects can improve with continued use; however, medication noncompliance remains an ongoing concern in the treatment of patients with BD. Anticonvulsant medications (e.g., carbamazepine, valproate) are also commonly used to treat patients with BD, either alone or in conjunction with lithium. There are several adjunctive treatment options for people with BD.
Interpersonal and social rhythm therapy (IPSRT; Frank et al., 1994) is a psychosocial intervention focused on addressing the mechanism of action posited in social zeitgeber theory to predispose patients who have BD to relapse, namely sleep disruption. A growing body of literature provides support for the central role of sleep dysregulation in BD (Harvey, 2008). Consistent with this literature, IPSRT aims to increase the rhythmicity of patients’ lives and encourage vigilance in maintaining a stable rhythm. The therapist and patient work to develop and maintain a healthy balance of activity and stimulation such that the patient does not become overly active (e.g., by taking on too many projects) or inactive (e.g., by avoiding social contact). The efficacy of IPSRT has been demonstrated in that patients who received this treatment show reduced risk of episode recurrence and are more likely to remain well (Frank et al., 2005).

Conclusion

Everyone feels down or euphoric from time to time. For some people, these feelings can last for long periods of time and can also co-occur with other symptoms that, in combination, interfere with their everyday lives. When people experience an MDE or a manic episode, they see the world differently. During an MDE, people often feel hopeless about the future, and may even experience suicidal thoughts. During a manic episode, people often behave in ways that are risky or place them in danger. They may spend money excessively or have unprotected sex, often expressing deep shame over these decisions after the episode. MDD and BD cause significant problems for people at school, at work, and in their relationships, and affect people regardless of gender, age, nationality, race, religion, or sexual orientation. If you or someone you know is suffering from a mood disorder, it is important to seek help. Effective treatments are available and continually improving. If you have an interest in mood disorders, there are many ways to contribute to their understanding, prevention, and treatment, whether by engaging in research or clinical work.

Outside Resources

Books: Recommended memoirs include Darkness Visible: A Memoir of Madness by William Styron (MDD); The Noonday Demon: An Atlas of Depression by Andrew Solomon (MDD); and An Unquiet Mind: A Memoir of Moods and Madness by Kay Redfield Jamison (BD).
Web: Visit the Association for Behavioral and Cognitive Therapies to find a list of recommended therapists and evidence-based treatments. http://www.abct.org
Web: Visit the Depression and Bipolar Support Alliance for educational information and social support options. http://www.dbsalliance.org/

Discussion Questions
1. What factors might explain the large gender difference in the prevalence rates of MDD?
2. Why might American ethnic minority groups experience more persistent BD than European Americans?
3. Why might the age of onset for MDD be decreasing over time?
4. Why might overnight travel constitute a potential risk for a person with BD?
5. What are some reasons positive life events may precede the occurrence of a manic episode?

Vocabulary
Anhedonia Loss of interest or pleasure in activities one previously found enjoyable or rewarding.
Attributional style The tendency by which a person infers the cause or meaning of behaviors or events.
Chronic stress Discrete or related problematic events and conditions which persist over time and result in prolonged activation of the biological and/or psychological stress response (e.g., unemployment, ongoing health difficulties, marital discord).
Early adversity Single or multiple acute or chronic stressful events, which may be biological or psychological in nature (e.g., poverty, abuse, childhood illness or injury), occurring during childhood and resulting in a biological and/or psychological stress response.
Grandiosity Inflated self-esteem or an exaggerated sense of self-importance and self-worth (e.g., believing one has special powers or superior abilities).
Hypersomnia Excessive daytime sleepiness, including difficulty staying awake or napping, or prolonged sleep episodes.
Psychomotor agitation Increased motor activity associated with restlessness, including physical actions (e.g., fidgeting, pacing, feet tapping, handwringing).
Psychomotor retardation A slowing of physical activities in which routine activities (e.g., eating, brushing teeth) are performed in an unusually slow manner.
Social zeitgeber Zeitgeber is German for “time giver.” Social zeitgebers are environmental cues, such as meal times and interactions with other people, that entrain biological rhythms and thus sleep-wake cycle regularity.
Socioeconomic status (SES) A person’s economic and social position based on income, education, and occupation.
Suicidal ideation Recurring thoughts about suicide, including considering or planning for suicide, or preoccupation with suicide.
By Cristina Crego and Thomas Widiger, University of Kentucky

The purpose of this module is to define what is meant by a personality disorder, identify the five domains of general personality (i.e., neuroticism, extraversion, openness, agreeableness, and conscientiousness), identify the six personality disorders proposed for retention in the 5th edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) (i.e., borderline, antisocial, schizotypal, avoidant, obsessive-compulsive, and narcissistic), summarize the etiology for antisocial and borderline personality disorder, and identify the treatment for borderline personality disorder (i.e., dialectical behavior therapy and mentalization therapy).

learning objectives
• Define what is meant by a personality disorder.
• Identify the five domains of general personality.
• Identify the six personality disorders proposed for retention in DSM-5.
• Summarize the etiology for antisocial and borderline personality disorder.
• Identify the treatment for borderline personality disorder.

Introduction

Everybody has their own unique personality; that is, their characteristic manner of thinking, feeling, behaving, and relating to others (John, Robins, & Pervin, 2008). Some people are typically introverted, quiet, and withdrawn; whereas others are more extraverted, active, and outgoing. Some individuals are invariably conscientious, dutiful, and efficient; whereas others might be characteristically undependable and negligent. Some individuals are consistently anxious, self-conscious, and apprehensive; whereas others are routinely relaxed, self-assured, and unconcerned. Personality traits refer to these characteristic, routine ways of thinking, feeling, and relating to others. There are signs or indicators of these traits in childhood, but they become particularly evident when the person is an adult. Personality traits are integral to each person’s sense of self, as they involve what people value, how they think and feel about things, what they like to do, and, basically, what they are like most every day throughout much of their lives. There are literally hundreds of different personality traits. All of these traits can be organized into the broad dimensions referred to as the Five-Factor Model (John, Naumann, & Soto, 2008). These five broad domains are inclusive; there do not appear to be any personality traits that lie outside of the Five-Factor Model. This even applies to traits that you may use to describe yourself. Table I provides illustrative traits for both poles of the five domains of this model of personality. A number of the traits that you see in this table may describe you. If you can think of some other traits that describe yourself, you should be able to place them somewhere in this table.

DSM-5 Personality Disorders

When personality traits result in significant distress, social impairment, and/or occupational impairment, they are considered to be a personality disorder (American Psychiatric Association, 2013). The authoritative manual for what constitutes a personality disorder is provided by the American Psychiatric Association’s (APA) Diagnostic and Statistical Manual of Mental Disorders (DSM), the current version of which is DSM-5 (APA, 2013). The DSM provides a common language and standard criteria for the classification and diagnosis of mental disorders. This manual is used by clinicians, researchers, health insurance companies, and policymakers.
DSM-5 includes 10 personality disorders: antisocial, avoidant, borderline, dependent, histrionic, narcissistic, obsessive-compulsive, paranoid, schizoid, and schizotypal. All 10 of these personality disorders were carried over from DSM-IV-TR into DSM-5. This list of 10, though, does not fully cover all of the different ways in which a personality can be maladaptive. DSM-5 also includes a “wastebasket” diagnosis of other specified personality disorder (OSPD) and unspecified personality disorder (UPD). This diagnosis is used when a clinician believes that a patient has a personality disorder but the traits that constitute this disorder are not well covered by one of the 10 existing diagnoses. OSPD and UPD (or PDNOS, personality disorder not otherwise specified, as this diagnosis was referred to in previous editions) are among the most frequently used diagnoses in clinical practice, suggesting that the current list of 10 is not adequately comprehensive (Widiger & Trull, 2007).

Description

Each of the 10 DSM-5 (and DSM-IV-TR) personality disorders is a constellation of maladaptive personality traits, rather than just one particular personality trait (Lynam & Widiger, 2001). In this regard, personality disorders are “syndromes.” For example, avoidant personality disorder is a pervasive pattern of social inhibition, feelings of inadequacy, and hypersensitivity to negative evaluation (APA, 2013), which is a combination of traits from introversion (e.g., socially withdrawn, passive, and cautious) and neuroticism (e.g., self-consciousness, apprehensiveness, anxiousness, and worry). Dependent personality disorder includes submissiveness, clinging behavior, and fears of separation (APA, 2013), for the most part a combination of traits of neuroticism (anxious, uncertain, pessimistic, and helpless) and maladaptive agreeableness (e.g., gullible, guileless, meek, subservient, and self-effacing). Antisocial personality disorder is, for the most part, a combination of traits from antagonism (e.g., dishonest, manipulative, exploitative, callous, and merciless) and low conscientiousness (e.g., irresponsible, immoral, lax, hedonistic, and rash). See the 1967 movie, Bonnie and Clyde, starring Warren Beatty, for a nice portrayal of someone with antisocial personality disorder. Some of the DSM-5 personality disorders are confined largely to traits within one of the basic domains of personality. For example, obsessive-compulsive personality disorder is largely a disorder of maladaptive conscientiousness, including such traits as workaholism, perfectionism, punctiliousness, rumination, and doggedness; schizoid is confined largely to traits of introversion (e.g., withdrawn, cold, isolated, placid, and anhedonic); borderline personality disorder is largely a disorder of neuroticism, including such traits as emotionally unstable, vulnerable, overwhelmed, rageful, depressive, and self-destructive (watch the 1987 movie, Fatal Attraction, starring Glenn Close, for a nice portrayal of this personality disorder); and histrionic personality disorder is largely a disorder of maladaptive extraversion, including such traits as attention-seeking, seductiveness, melodramatic emotionality, and strong attachment needs (see the 1951 film adaptation of Tennessee Williams’s play, A Streetcar Named Desire, starring Vivien Leigh, for a nice portrayal of this personality disorder).
It should be noted though that a complete description of each DSM-5 personality disorder would typically include at least some traits from other domains. For example, antisocial personality disorder (or psychopathy) also includes some traits from low neuroticism (e.g., fearlessness and glib charm) and extraversion (e.g., excitement-seeking and assertiveness); borderline includes some traits from antagonism (e.g., manipulative and oppositional) and low conscientiousness (e.g., rash); and histrionic includes some traits from antagonism (e.g., vanity) and low conscientiousness (e.g., impressionistic). Narcissistic personality disorder includes traits from neuroticism (e.g., reactive anger, reactive shame, and need for admiration), extraversion (e.g., exhibitionism and authoritativeness), antagonism (e.g., arrogance, entitlement, and lack of empathy), and conscientiousness (e.g., acclaim-seeking). Schizotypal personality disorder includes traits from neuroticism (e.g., social anxiousness and social discomfort), introversion (e.g., social withdrawal), unconventionality (e.g., odd, eccentric, peculiar, and aberrant ideas), and antagonism (e.g., suspiciousness). The APA currently conceptualizes personality disorders as qualitatively distinct conditions; distinct from each other and from normal personality functioning. However, included within an appendix to DSM-5 is an alternative view that personality disorders are simply extreme and/or maladaptive variants of normal personality traits, as suggested herein. Nevertheless, many leading personality disorder researchers do not hold this view (e.g., Gunderson, 2010; Hopwood, 2011; Shedler et al., 2010). They suggest that there is something qualitatively unique about persons suffering from a personality disorder, usually understood as a form of pathology in the sense of self and interpersonal relatedness that is considered to be distinct from personality traits (APA, 2012; Skodol, 2012). For example, it has been suggested that antisocial personality disorder includes impairments in identity (e.g., egocentrism), self-direction, empathy, and capacity for intimacy, which are said to be different from such traits as arrogance, impulsivity, and callousness (APA, 2012). Validity It is quite possible that in future revisions of the DSM some of the personality disorders included in DSM-5 and DSM-IV-TR will no longer be included. In fact, it was originally proposed that four be deleted from DSM-5: the histrionic, schizoid, paranoid, and dependent personality disorders (APA, 2012). The rationale for the proposed deletions was, in large part, that these diagnoses are said to have less empirical support than the diagnoses that were to be retained (Skodol, 2012). There is agreement within the field with regard to the empirical support for the borderline, antisocial, and schizotypal personality disorders (Mullins-Sweat, Bernstein, & Widiger, 2012; Skodol, 2012). However, there is a difference of opinion with respect to the empirical support for the dependent personality disorder (Bornstein, 2012; Livesley, 2011; Miller, Widiger, & Campbell, 2010; Mullins-Sweat et al., 2012). Little is known about the specific etiology of most of the DSM-5 personality disorders. Because each personality disorder represents a constellation of personality traits, the etiology for the syndrome will involve a complex interaction of an array of different neurobiological vulnerabilities and dispositions with a variety of environmental and psychosocial events.
Antisocial personality disorder, for instance, is generally considered to be the result of an interaction of genetic dispositions for low anxiousness, aggressiveness, impulsivity, and/or callousness with a tough, urban environment, inconsistent parenting, poor parental role modeling, and/or peer support (Hare, Neumann, & Widiger, 2012). Borderline personality disorder is generally considered to be the result of a genetic disposition to negative affectivity interacting with a malevolent, abusive, and/or invalidating family environment (Hooley, Cole, & Gironde, 2012). To the extent that one considers the DSM-5 personality disorders to be maladaptive variants of general personality structure, as described, for instance, within the Five-Factor Model, there would be a considerable body of research to support the validity of all of the personality disorders, including even the histrionic, schizoid, and paranoid. There is compelling multivariate behavior genetic support with respect to the precise structure of the Five-Factor Model (e.g., Yamagata et al., 2006), along with evidence concerning its childhood antecedents (Caspi, Roberts, & Shiner, 2005), universality (Allik, 2005), temporal stability across the lifespan (Roberts & DelVecchio, 2000), ties with brain structure (DeYoung, Hirsh, Shane, Papademetris, Rajeevan, & Gray, 2010), and even molecular genetic support for neuroticism (Widiger, 2009). Treatment Personality disorders are relatively unique because they are often “ego-syntonic”; that is, most people are largely comfortable with themselves, with their characteristic manner of behaving, feeling, and relating to others. As a result, people rarely seek treatment for their antisocial, narcissistic, histrionic, paranoid, and/or schizoid personality disorder. People typically lack insight into the maladaptivity of their personality. One clear exception though is borderline personality disorder (and perhaps as well avoidant personality disorder). Neuroticism is the domain of general personality structure that concerns inherent feelings of emotional pain and suffering, including feelings of distress, anxiety, depression, self-consciousness, helplessness, and vulnerability. Persons who have very high elevations on neuroticism (such as persons with borderline personality disorder) experience life as one of pain and suffering, and they will seek treatment to alleviate this severe emotional distress. People with avoidant personality disorder may also seek treatment for their high levels of neuroticism (anxiousness and self-consciousness) and introversion (social isolation). In contrast, narcissistic individuals will rarely seek treatment to reduce their arrogance; paranoid persons rarely seek treatment to reduce their feelings of suspiciousness; and antisocial people rarely, at least voluntarily, seek treatment to reduce their disposition for criminality, aggression, and irresponsibility. Nevertheless, maladaptive personality traits will be evident in many individuals seeking treatment for other mental disorders, such as anxiety, mood, or substance use disorders. Many of the people with a substance use disorder will have antisocial personality traits; many of the people with a mood disorder will have borderline personality traits. The prevalence of personality disorders within clinical settings is estimated to be well above 50% (Torgersen, 2012). As many as 60% of inpatients within some clinical settings are diagnosed with borderline personality disorder (APA, 2000).
Antisocial personality disorder may be diagnosed in as many as 50% of inmates within a correctional setting (Hare et al., 2012). It is estimated that 10% to 15% of the general population meets criteria for at least one of the 10 DSM-IV-TR personality disorders (Torgersen, 2012), and quite a few more individuals are likely to have maladaptive personality traits not covered by one of the 10 DSM-5 diagnoses. The presence of a personality disorder will often have an impact on the treatment of other mental disorders, typically inhibiting or impairing responsivity. Antisocial persons will tend to be irresponsible and negligent; borderline persons can form intense, manipulative attachments to their therapists; paranoid patients will be unduly suspicious and accusatory; narcissistic patients can be dismissive and denigrating; and dependent patients can become overly attached to and feel helpless without their therapists. It is a misconception, though, to suggest that personality disorders cannot themselves be treated. Personality disorders are among the most difficult of disorders to treat because they involve well-established behaviors that can be integral to a client’s self-image (Millon, 2011). Nevertheless, much has been written on the treatment of personality disorder (e.g., Beck, Freeman, Davis, & Associates, 1990; Gunderson & Gabbard, 2000), and there is empirical support for clinically and socially meaningful changes in response to psychosocial and pharmacologic treatments (Perry & Bond, 2000). The development of an ideal or fully healthy personality structure is unlikely to occur through the course of treatment, but given the considerable social, public health, and personal costs associated with some of the personality disorders, such as the antisocial and borderline, even just moderate adjustments in personality functioning can represent quite significant and meaningful change. Nevertheless, manualized and/or empirically validated treatment protocols have been developed for only one personality disorder, borderline (APA, 2001). Focus Topic: Treatment of Borderline Personality Disorder Dialectical behavior therapy (Lynch & Cuyper, 2012) and mentalization therapy (Bateman & Fonagy, 2012): Dialectical behavior therapy is a form of cognitive-behavior therapy that draws on principles from Zen Buddhism, dialectical philosophy, and behavioral science. The treatment has four components: individual therapy, group skills training, telephone coaching, and a therapist consultation team, and it typically lasts a full year. As such, it is a relatively expensive form of treatment, but research has indicated that its benefits far outweigh its costs, both financially and socially. It is unclear why specific and explicit treatment manuals have not been developed for the other personality disorders. This may reflect a regrettable assumption that personality disorders are unresponsive to treatment. It may also reflect the complexity of their treatment. As noted earlier, each DSM-5 disorder is a heterogeneous constellation of maladaptive personality traits. In fact, two persons can each meet diagnostic criteria for the same personality disorder (whether antisocial, borderline, schizoid, schizotypal, narcissistic, or avoidant) and yet share only one diagnostic criterion. For example, only five of nine features are necessary for the diagnosis of borderline personality disorder; therefore, two persons can meet criteria for this disorder and yet have only one feature in common.
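The combinatorics behind this last claim are easy to check. The short sketch below is an illustration only, not a clinical tool; the nine borderline criteria are represented as numbers rather than by their clinical content. It verifies that two diagnosable five-criterion presentations can overlap on exactly one criterion, and counts how many distinct qualifying symptom profiles the 5-of-9 rule allows.

```python
from itertools import combinations
from math import comb

# DSM-5 borderline personality disorder: any 5 of 9 criteria suffice.
# Number the criteria 1..9; the numbers stand in for the actual features.
criteria = range(1, 10)

# Smallest possible overlap between two diagnosable 5-criterion profiles.
# Since 5 + 5 = 10 > 9, some overlap is unavoidable, but it can be as
# small as a single shared criterion.
min_overlap = min(
    len(set(a) & set(b))
    for a in combinations(criteria, 5)
    for b in combinations(criteria, 5)
)
print(min_overlap)  # -> 1

# Number of distinct symptom profiles that meet the diagnostic threshold
# (any subset of 5 or more of the 9 criteria).
qualifying_profiles = sum(comb(9, k) for k in range(5, 10))
print(qualifying_profiles)  # -> 256
```

The 256 figure makes the heterogeneity problem concrete: a single diagnostic label can cover hundreds of distinct symptom configurations.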
In addition, patients meeting diagnostic criteria for one personality disorder will often meet diagnostic criteria for another. This degree of diagnostic overlap and heterogeneity of membership tremendously hinders any effort to identify a specific etiology, pathology, or treatment for a respective personality disorder, as there is so much variation within any particular group of patients sharing the same diagnosis (Smith & Zapolski, 2009). Of course, this diagnostic overlap and complexity did not prevent researchers and clinicians from developing dialectical behavior therapy and mentalization therapy. A further reason for the slow progress in treatment development is that, as noted earlier, persons rarely seek treatment for their personality disorder. It would be difficult to obtain a sufficiently large group of people with, for instance, narcissistic or obsessive-compulsive personality disorder to participate in a treatment outcome study, with one group receiving the manualized treatment protocol and the other receiving treatment as usual. Conclusions It is evident that all individuals have a personality, as indicated by their characteristic way of thinking, feeling, behaving, and relating to others. For some people, these traits result in a considerable degree of distress and/or impairment, constituting a personality disorder. A considerable body of research has accumulated to help understand the etiology, pathology, and/or treatment for some personality disorders (i.e., antisocial, schizotypal, borderline, dependent, and narcissistic), but not so much for others (e.g., histrionic, schizoid, and paranoid). However, researchers and clinicians are now shifting toward a more dimensional understanding of personality disorders, wherein each is understood as a maladaptive variant of general personality structure, thereby bringing to bear all that is known about general personality functioning on an understanding of these maladaptive variants. Outside Resources Structured Clinical Interview for DSM-5 (SCID-5) https://www.appi.org/products/structured-clinical-interview-for-dsm-5-scid-5 Web: DSM-5 website discussion of personality disorders http://www.dsm5.org/ProposedRevision...Disorders.aspx Discussion Questions 1. Do you think that any of the personality disorders, or some of their specific traits, are ever good or useful to have? 2. If someone with a personality disorder commits a crime, what is the right way for society to respond? For example, does or should meeting diagnostic criteria for antisocial personality disorder mitigate (lower) a person’s responsibility for committing a crime? 3. Given what you know about personality disorders and the traits that comprise each one, would you say there is any personality disorder that is likely to be diagnosed in one gender more than the other? Why or why not? 4. Do you believe that personality disorders can be best understood as a constellation of maladaptive personality traits, or do you think that there is something more involved for individuals suffering from a personality disorder? 5. The authors suggested Clyde Barrow as an example of antisocial personality disorder and Blanche Dubois for histrionic personality disorder. Can you think of a person from the media or literature who would have at least some of the traits of narcissistic personality disorder? Vocabulary Antisocial A pervasive pattern of disregard and violation of the rights of others. These behaviors may be aggressive or destructive and may involve breaking laws or rules, deceit or theft.
Avoidant A pervasive pattern of social inhibition, feelings of inadequacy, and hypersensitivity to negative evaluation. Borderline A pervasive pattern of instability of interpersonal relationships, self-image, and affects, and marked impulsivity. Dependent A pervasive and excessive need to be taken care of that leads to submissive and clinging behavior and fears of separation. Five-Factor Model Five broad domains or dimensions that are used to describe human personality. Histrionic A pervasive pattern of excessive emotionality and attention seeking. Narcissistic A pervasive pattern of grandiosity (in fantasy or behavior), need for admiration, and lack of empathy. Obsessive-compulsive A pervasive pattern of preoccupation with orderliness, perfectionism, and mental and interpersonal control, at the expense of flexibility, openness, and efficiency. Paranoid A pervasive distrust and suspiciousness of others such that their motives are interpreted as malevolent. Personality Characteristic, routine ways of thinking, feeling, and relating to others. Personality disorders When personality traits result in significant distress, social impairment, and/or occupational impairment. Schizoid A pervasive pattern of detachment from social relationships and a restricted range of expression of emotions in interpersonal settings. Schizotypal A pervasive pattern of social and interpersonal deficits marked by acute discomfort with, and reduced capacity for, close relationships as well as perceptual distortions and eccentricities of behavior.
By Chris Patrick Florida State University Psychopathy (or “psychopathic personality”) is a topic that has long fascinated the public at large as well as scientists and clinical practitioners. However, it has also been subject to considerable confusion and scholarly debate over the years. This module reviews alternative conceptions of psychopathy that have been proposed historically, and reviews major instruments currently in use for the assessment of psychopathic tendencies in clinical and nonclinical samples. An integrative theoretic framework, the Triarchic model, is presented that provides a basis for reconciling differing historic conceptions and assessment approaches. Implications of the model for thinking about causal hypotheses of psychopathy, and for resolving longstanding points of contention in the field, are discussed. Learning Objectives • Learn about Cleckley’s classic account of psychopathy, presented in his book The Mask of Sanity, along with other historic conceptions. • Compare and contrast differing inventories currently in use for assessing psychopathy in differing samples (e.g., adults and younger individuals, within clinical-forensic and community settings). • Become familiar with the Triarchic model of psychopathy and its constituent constructs of boldness, meanness, and disinhibition. • Learn about alternative theories regarding the causal origins of psychopathy. • Consider how longstanding matters of debate regarding the nature, definition, and origins of psychopathy can be addressed from the perspective of the Triarchic model. Introduction For many in the public at large, the term “psychopath” conjures up images of ruthless homicidal maniacs and criminal masterminds. This impression is reinforced on an ongoing basis by depictions of psychopathic individuals in popular books and films, such as No Country for Old Men, The Silence of the Lambs, and Catch Me If You Can, and by media accounts of high-profile criminals ranging from Charles Manson to Jeffrey Dahmer to Bernie Madoff. However, the concept of psychopathy (“psychopathic personality”) held by experts in the mental health field differs sharply from this common public perception—emphasizing distinct dispositional tendencies as opposed to serious criminal acts of one sort or another. This module reviews historic and contemporary conceptions of psychopathy as a clinical disorder, describes methods for assessing it, and discusses how a new conceptual model can help to address key questions regarding its nature and origins that have long been debated. It will be seen from this review that the topic remains no less fascinating or socially relevant when considered from a clinical–scientific perspective. Historic Conceptions Early writers characterized psychopathy as an atypical form of mental illness in which rational faculties appeared normal but everyday behavior and social relationships were markedly disrupted. French physician Philippe Pinel (1806/1962) documented cases of what he called manie sans delire (“insanity without delirium”), in which dramatic episodes of recklessness and aggression occurred in individuals not suffering from obvious clouding of the mind. German psychiatrist Julius Koch (1888) introduced the disease-oriented term psychopathic to convey the idea that conditions of this type had a strong constitutional-heritable basis.
In his seminal book The Mask of Sanity, which focused on patients committed for hospital treatment, American psychiatrist Hervey Cleckley (1941/1976) described psychopathy as a deep-rooted emotional pathology concealed by an outward appearance of good mental health. In contrast with other psychiatric patients, psychopathic individuals present as confident, sociable, and well adjusted. However, their underlying disorder reveals itself over time through their actions and attitudes. To facilitate identification of psychopathic individuals in clinical settings, Cleckley provided 16 diagnostic criteria distilled from his clinical case summaries, encompassing indicators of apparent psychological stability (e.g., charm and intelligence, absence of nervousness) along with symptoms of behavioral deviancy (e.g., irresponsibility, failure to plan) and impaired affect and social connectedness (e.g., absence of remorse, deceptiveness, inability to love). Notably, Cleckley did not characterize psychopathic patients as inherently cruel, violent, or dangerous. Although some engaged in repetitive violent acts, more often the harm they caused was nonphysical and the product of impulsive self-centeredness as opposed to viciousness. Indeed, Cleckley’s case histories included examples of “successful psychopaths” who ascended to careers as professors, medical doctors, or businessmen, along with examples of more aimless dysfunctional types. In contrast with this, other writers from Cleckley’s time who were concerned with criminal expressions of psychopathy placed greater emphasis on symptoms of emotional coldness, aggression, and predatory victimization. For example, McCord and McCord (1964) described the condition in more generally pathologic terms, highlighting “guiltlessness” (lack of remorse) and “lovelessness” (lack of attachment capacity) as central defining features. Cleckley’s conception served as a referent for the diagnosis of psychopathy in the first two editions of the official American psychiatric nosology, the Diagnostic and Statistical Manual of Mental Disorders (DSM). However, a dramatic shift occurred in the third edition of the DSM, with the introduction of behaviorally oriented symptom definitions for most disorders to address longstanding problems of reliability. The Cleckley-oriented conception of psychopathy in prior editions was replaced by antisocial personality disorder (ASPD), defined by specific indicants of behavioral deviancy in childhood (e.g., fighting, lying, stealing, truancy) continuing into adulthood (manifested as repeated rulebreaking, impulsiveness, irresponsibility, aggressiveness, etc.). Concerns with this new conception were expressed by psychopathy experts, who noted that ASPD provided limited coverage of interpersonal-affective symptoms considered essential to psychopathy (e.g., charm, deceitfulness, selfishness, shallow affect; Hare, 1983). Nonetheless, ASPD was retained in much the same form in the fourth edition of the DSM (DSM-IV; American Psychiatric Association [APA], 2000), and remained unchanged in the fifth edition of the DSM (American Psychiatric Association, 2013). That said, the DSM-5 does include a new, dimensional-trait approach to characterizing personality pathology (Strickland, Drislane, Lucy, Krueger, & Patrick, in press). Contemporary assessment methods Modern approaches to the assessment of psychopathy, consisting of rating instruments and self-report scales, reflect the foregoing historic conceptions to differing degrees. 
Psychopathy in adult criminal offenders The most widely used instrument for diagnosing psychopathy in correctional and forensic settings is the Psychopathy Checklist-Revised (PCL-R; Hare, 2003), which comprises 20 items rated on the basis of interview and file-record information. The items of the PCL-R effectively capture the interpersonal-affective deficits and behavioral deviance features identified by Cleckley, but include only limited, indirect coverage of positive adjustment features. The manual for the PCL-R recommends the use of a cutoff score of 30 out of 40 for assigning a diagnosis of psychopathy. High overall PCL-R scores are correlated with impulsive and aggressive tendencies, low empathy, Machiavellianism, lack of social connectedness, and persistent violent offending. Given these correlates, and the omission of positive adjustment indicators, psychopathy as assessed by the PCL-R appears more similar to the predatory-aggressive conception of McCord and McCord than to Cleckley’s conception. Although the PCL-R was developed to index psychopathy as a unitary condition, structural analyses of its items reveal distinct interpersonal-affective and antisocial deviance subdimensions (factors). Although moderately (about .5) correlated, these factors show contrasting relations with external criterion measures. The interpersonal-affective factor relates to indices of narcissism, low empathy, and proactive aggression (Hare, 2003), and to some extent (after controlling for its overlap with the antisocial factor) adaptive tendencies such as high social assertiveness and low fear, distress, and depression (Hicks & Patrick, 2006). High scores on the antisocial deviance factor, by contrast, are associated mainly with maladaptive tendencies and behaviors, including impulsiveness, sensation seeking, alienation and mistrust, reactive aggression, early and persistent antisocial deviance, and substance-related problems. Psychopathy in noncriminal adults Psychopathy has most typically been assessed in noncriminal adult samples using self-report-based measures. Older measures of this type emphasized the antisocial deviancy component of psychopathy with limited coverage of interpersonal-affective features (Hare, 2003). Some newer instruments provide more balanced coverage of both. One example is the now widely used Psychopathic Personality Inventory (PPI; Lilienfeld & Andrews, 1996), which was developed to index personality dispositions embodied within historic conceptions of psychopathy. Its current revised form (PPI-R; Lilienfeld & Widows, 2005) contains 154 items, organized into eight facet scales. Like the items of the PCL-R, the subscales of the PPI cohere around two distinguishable factors: a fearless dominance (FD) factor reflecting social potency, stress immunity, and fearlessness, and a self-centered impulsivity (SCI) factor reflecting egocentricity, exploitativeness, hostile rebelliousness, and lack of planning. However, unlike the factors of the PCL-R, the two PPI factors are uncorrelated, and thus even more distinct in their external correlates. Scores on PPI-FD are associated with indices of positive psychological adjustment (e.g., higher well-being; lower anxiety and depression), along with measures of narcissism, low empathy, and thrill/adventure seeking (Benning, Patrick, Blonigen, Hicks, & Iacono, 2005).
Given this, PPI-FD has been interpreted as capturing a more adaptive expression of dispositional fearlessness (i.e., boldness; see below) than the interpersonal-affective factor of the PCL-R—which can be viewed as tapping a more pathologic (antagonistic or “mean”) expression of fearlessness. Scores on PPI-SCI, like Factor 2 of the PCL-R, are associated with multiple indicators of deviancy—including impulsivity and aggressiveness, child and adult antisocial behavior, substance abuse problems, heightened distress and dysphoria, and suicidal ideation. Psychopathy in child and adolescent clinical samples Different inventories exist for assessing psychopathic tendencies in children and adolescents. The best-known consist of rating-based measures developed, using the PCL-R as a referent, to identify psychopathic individuals among youth convicted of crimes or referred for treatment of conduct problems. The emphasis in work of this type has been on the importance of psychopathic features for predicting greater severity and persistence of conduct problems. Termed “callous-unemotional” traits, these features encompass low empathy, deficient remorse or guilt, shallow affect, and lack of concern about performance in school and other contexts (Frick & Moffitt, 2010). One extensively researched measure for assessing psychopathic tendencies in youth is the Antisocial Process Screening Device (APSD; Frick & Marsee, 2006), used with clinic-referred children ages 6 through 13. The APSD includes 20 items completed by parents or teachers. As with the PCL-R and PPI, the items of the APSD tap two distinct factors: a Callous-Unemotional (CU) traits factor, reflecting emotional insensitivity and disregard for others; and an Impulsive/Conduct Problems (I/CP) factor, reflecting impulsivity, behavioral deviancy, and inflated self-importance. Children high on the I/CP factor alone show below-average intelligence, heightened emotional responsiveness to stressors, and angry (reactive) aggression (Frick & Marsee, 2006). By contrast, children high on both APSD factors show average or above-average intelligence, low reported levels of anxiety and nervousness, reduced reactivity to stressful events, and a preference for activities entailing novelty and risk. They also learn less readily from punishment, engage in high levels of premeditated as well as reactive aggression, and exhibit more persistent violent behavior across time. Given the documented importance of CU traits in moderating the expression of conduct disorder, the fifth edition of the DSM includes criteria for designating a distinct CU variant of child conduct disorder (Frick & Moffitt, 2010). Core ingredients of psychopathy: disinhibition, boldness, and meanness The foregoing material highlights the fact that historic conceptions of psychopathy and available instruments for assessing it place differing emphasis on different symptomatic features. This has contributed to longstanding disagreements among scholars about what psychopathy entails and what causes it. A theoretic conceptualization formulated recently to reconcile alternative perspectives is the Triarchic model (Patrick, Fowles, & Krueger, 2009). This model conceives of psychopathy as encompassing three separable symptomatic components—disinhibition, boldness, and meanness—that can be viewed as thematic building blocks for differing conceptions of psychopathy.
Definitions Disinhibition as described in the Triarchic model encompasses tendencies toward impulsiveness, weak behavioral restraint, hostility and mistrust, and difficulties in regulating emotion. Meanness entails deficient empathy, lack of affiliative capacity, contempt toward others, predatory exploitativeness, and empowerment through cruelty and destructiveness. Referents for disinhibition and meanness include the finding of distinct I/CP and CU factors in the child psychopathy literature and corresponding evidence for distinct disinhibitory and callous-aggression factors underlying impulse control (externalizing) problems in adults (Krueger, Markon, Patrick, Benning, & Kramer, 2007). The third construct in the model, Boldness, encompasses dominance, social assurance, emotional resiliency, and venturesomeness. Referents for this construct include the “mask” elements of Cleckley’s conception, Lykken’s (1995) low fear theory of psychopathy, the FD factor of the PPI, and developmental research on fearless temperament as a possible precursor to psychopathy (Patrick et al., 2009). From the perspective of the Triarchic model, Cleckley’s conception of psychopathy emphasized boldness and disinhibition, whereas criminally oriented conceptions (and affiliated measures, including the PCL-R and APSD) emphasize meanness and disinhibition more so. According to the model, individuals high in disinhibitory tendencies would warrant a diagnosis of psychopathy if also high in boldness or meanness (or both), but individuals high on only one of these tendencies would not. Individuals with differing relative elevations on these three symptomatic components would account for contrasting variants (subtypes) of psychopathy as described in the literature (Hicks, Markon, Patrick, Krueger, & Newman, 2004; Karpman, 1941; Skeem, Johansson, Andershed, Kerr, & Louden, 2007). An inventory designed specifically to operationalize this model is the Triarchic Psychopathy Measure (TriPM; Patrick, 2010). The TriPM contains 58 items comprising three subscales that correspond to the constructs of the model (see Table 1). The items of the Disinhibition and Meanness scales (20 and 19 items, respectively) are taken from the Externalizing Spectrum Inventory (ESI; Krueger et al., 2007), a measure of problems and traits associated with externalizing psychopathology. The TriPM Boldness scale was developed to index fearless tendencies in social, affective-experiential, and activity preference domains, with reference to the FD factor of the PPI and the general factor shown to underlie differing scale measures of fear and fearlessness (Kramer, Patrick, Gasperi, & Krueger, 2012). Although the TriPM is relatively new, promising evidence for its convergent and discriminant validity has begun to appear (e.g., Sellbom & Phillips, 2013; Strickland et al., in press; see also Venables & Patrick, 2012). Given that the inventory is freely available online, and that several foreign-language translations now exist (including Brazilian-Portuguese, Dutch, Finnish, German, Italian, Portuguese, Swedish, and Spanish), it can be expected that additional validity data will accumulate rapidly over time. Work is also being done to evaluate whether effective scale measures of the Triarchic constructs can be derived from items of other existing psychopathy inventories such as the PPI.
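To make the structure of such an inventory concrete, the following sketch scores a triarchic-style instrument by summing item responses into three subscales. This is a hypothetical mock-up rather than the actual TriPM scoring key: the 20-item Disinhibition and 19-item Meanness counts come from the text, the 19-item Boldness count is inferred by subtraction from the 58-item total, and the particular item ordering, 0-3 response format, and omission of reverse-keyed items are illustrative assumptions.

```python
# Hypothetical subscale scoring for a 58-item triarchic-style inventory.
# Item counts follow the text (20 Disinhibition, 19 Meanness, and, by
# subtraction, 19 Boldness); the item-to-scale mapping and 0-3 response
# format are assumptions, and reverse-keying is omitted for simplicity.
SCALES = {
    "disinhibition": range(0, 20),
    "meanness": range(20, 39),
    "boldness": range(39, 58),
}

def score_subscales(responses):
    """Sum 0-3 item responses into the three subscale totals."""
    if len(responses) != 58:
        raise ValueError("expected 58 item responses")
    return {name: sum(responses[i] for i in items)
            for name, items in SCALES.items()}

# Example: a respondent answering every item with a rating of 1.
print(score_subscales([1] * 58))
# -> {'disinhibition': 20, 'meanness': 19, 'boldness': 19}
```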
As discussed in the last part of this section, research examining the common and distinctive correlates of these three components of psychopathy is likely to be helpful for addressing and perhaps resolving ongoing points of uncertainty and debate in the field. Causal factors Considerable research has been devoted over many years to investigation of causal factors in psychopathy. Existing theories are of two types: (1) theories emphasizing core deficits in emotional sensitivity or responsiveness, and (2) theories positing basic impairments in cognitive-attentional processing (Patrick & Bernat, 2009). In support of these alternative theories, differing neurobiological correlates of psychopathy have been reported. One of the most consistent entails a lack of normal enhancement of the startle blink reflex to abrupt noises occurring during viewing of aversive foreground stimuli (e.g., scary or disturbing pictorial images) as compared with neutral or pleasant stimuli (see Figure 9.9.1). This result, akin to a failure to “jump” upon hearing a trash can tip while walking alone in a dark alley, has been interpreted as reflecting a lack of normal defensive (fear) reactivity. Another fairly consistent finding involves reduced amplitude of brain potential response to intermittent target stimuli, or following incorrect responses, within cognitive performance tasks—indicative of reduced cortical-attentional processing or impaired action monitoring (Patrick & Bernat, 2009). Yet other research using functional neuroimaging has demonstrated deficits in basic subcortical (amygdala) reactivity to interpersonal distress cues (e.g., fearful human faces) in high-psychopathic individuals (Jones, Laurens, Herba, Barker, & Viding, 2009; Marsh et al., 2008). The Triarchic model may prove to be of use for reconciling alternative causal models of psychopathy that have been proposed based on contrasting neurobiological and behavioral findings. For example, lack of startle enhancement during aversive cuing has been tied specifically to the interpersonal-affective factor of the PCL-R and the counterpart FD factor of the PPI (Figure 9.9.1)—suggesting a link to the boldness component of psychopathy. By contrast, reduced brain potential responses in cognitive tasks appear more related to impulsive-externalizing tendencies associated with the disinhibition component of psychopathy (Carlson, Thái, & McLaron, 2009; Patrick & Bernat, 2009). On the other hand, the finding of reduced subcortical response to affective facial cues has been tied to the CU traits factor of child/adolescent psychopathy, a referent for meanness in the Triarchic model. However, further research is needed to determine whether this finding reflects fear deficits common to meanness and boldness, or deficits in affiliative capacity or empathy specific to meanness. Triarchic model perspective on long-debated issues regarding psychopathy As highlighted in the foregoing sections, scholars have grappled with issues of definition since psychopathy was first identified as a condition of clinical concern, and questions regarding its essential features and alternative expressions continue to be debated and studied. This final subsection discusses how some of the major issues of debate are addressed by the Triarchic model. One key issue is whether psychological/emotional stability is characteristic or not of psychopathy. 
Cleckley’s (1941/1976) view was that psychopathy entails a salient presentation of good mental health, and his diagnostic criteria included indicators of positive adjustment. By contrast, the dominant clinical assessment approaches to psychopathy, the PCL-R and the DSM’s ASPD criteria, are heavily oriented toward deviancy and include no items that are purely indicative of adjustment. From a Triarchic model standpoint, the more adaptive elements of psychopathy are embodied in its boldness facet, which entails social poise, emotional stability, and enjoyment of novelty and adventure. At the same time, high boldness is also associated with narcissistic tendencies, reduced sensitivity to the feelings of others, and risk-taking (Benning et al., 2005). Thus, the concept of boldness provides a way to think about the intriguing “mask” element of psychopathy. Related to this, another issue is whether lack of anxiety is central to psychopathy, as Cleckley and others (e.g., Fowles & Dindo, 2009; Lykken, 1995) have emphasized. This perspective is challenged by research showing either negligible or somewhat positive associations for overall scores on the PCL-R and other psychopathy measures with anxiety. The Triarchic model helps to address this inconsistency by separating the disorder into subcomponents or facets, which relate differently to measures of trait anxiety: Boldness is correlated negatively with anxiousness (Benning et al., 2005), whereas Disinhibition and Meanness are correlated positively and negligibly, respectively, with anxiety (Venables & Patrick, 2012). Related to this, cluster analytic studies of criminal offenders exhibiting high overall scores on the PCL-R have demonstrated one subtype characterized by low anxiety in particular, and another exhibiting high anxiety along with very high levels of impulsivity and aggression (Hicks et al., 2004; Skeem et al., 2007). The implication is that low anxiousness is central to one variant of criminal psychopathy (the bold-disinhibited, or “primary” type) but not to another variant (the “disinhibited-mean,” “aggressive-externalizing,” or “secondary” type). A further key question is whether violent/aggressive tendencies are typical of psychopathic individuals and should be included in the definition of the disorder. Cleckley’s (1941/1976) view was that “such tendencies should be regarded as the exception rather than as the rule” (p. 262). However, aggressiveness is central to criminally oriented conceptions of psychopathy, and the PCL-R includes an item reflecting hot-temperedness and aggression (“poor behavioral controls”) along with other items scored in part based on indications of cruelty and violence. In the Triarchic model, tendencies toward aggression are represented in both the disinhibition and meanness constructs, and a “mean-disinhibited” type of psychopath clearly exists, marked by the presence of salient aggressive behavior (Frick & Marsee, 2006; Hicks et al., 2004). Thus, Cleckley’s idea of aggression as ancillary to psychopathy may apply more to a variant of psychopathy that entails high boldness in conjunction with high disinhibition (Hicks et al., 2004). Another question is whether criminal or antisocial behavior more broadly represents a defining feature of psychopathy, or a secondary manifestation (Cooke, Michie, Hart, & Clark, 2004). From the standpoint of the Triarchic model, antisocial behavior arises from the complex interplay of different “deviance-promoting” influences—including dispositional boldness, meanness, and disinhibition.
However, whether approaches can be developed for classifying antisocial behaviors in ways that relate more selectively to these and other distinct influences (e.g., through reference to underlying motives, or spontaneity versus premeditation) is an important topic to be addressed in future research. Another key question is whether differing subtypes of psychopathy exist. From the perspective of the Triarchic model, alternative variants of psychopathy reflect differing configurations of boldness, meanness, and disinhibition. Viewed this way, designations such as “bold-disinhibited” and “mean-disinhibited” may prove more useful for research and clinical purposes than labels like “primary” versus “secondary” or “low anxious” versus “high anxious.” An issue from this perspective is whether individuals who are high in boldness and/or meanness but low in disinhibition would qualify for a diagnosis of psychopathy. For example, should a high-bold/high-mean individual (e.g., a ruthless corporate executive, like the one portrayed by actor Michael Douglas in the film Wall Street; Pressman & Stone, 1987)—or an extremely mean and vicious but neither bold nor pervasively disinhibited individual, such as Russian serial murderer Andrei Chikatilo (Cullen, 1993)—be considered psychopathic? Questions of this sort will need to be addressed through elaborations of existing theories in conjunction with further systematic research. Yet another question is whether psychopathy differs in women as compared to men. Cleckley’s descriptive accounts of psychopathic patients included two female case examples along with multiple male cases, and his view was that psychopathy clearly exists in women and reflects the same core deficit (i.e., absence of “major emotional accompaniments” of experience) as in men. However, men exhibit criminal deviance and ASPD at much higher rates than women (APA, 2000), and men in the population at large score higher in general on measures of psychopathy than women (Hare, 2003; Lilienfeld & Widows, 2005). From a Triarchic model perspective, these differences in prevalence may be attributable largely to differences between women and men in average levels of boldness, meanness, and disinhibition. Some supportive evidence exists for this hypothesis (e.g., findings of Hicks et al. [2007] demonstrating mediation of gender differences in ASPD symptoms by levels of externalizing proneness). Beyond this, it is important also to consider whether underlying psychopathic dispositions in men and women may be manifested differently in overt behavior (Verona & Vitale, 2006). Some intriguing evidence exists for this—including twin research findings demonstrating a genetic association between dispositional boldness (as indexed by estimated scores on PPI-FD) and a composite index of externalizing problems in male but not female participants (Blonigen, Hicks, Patrick, Krueger, Iacono, & McGue, 2005). However, more extensive research along these lines, examining all facets of the Triarchic model in relation to behavioral outcomes of differing kinds, will be required to effectively address the question of gender-moderated expression. A final intriguing question is whether “successful” psychopaths exist. Hall and Benning (2006) hypothesized that successful psychopathy entails a preponderance of certain causal influences (resulting in particular symptomatic features) over others.
Drawing on known correlates of PPI-FD (e.g., Benning et al., 2005; Ross, Benning, Patrick, Thompson, & Thurston, 2009) and theories positing separate etiologic mechanisms for differing features of psychopathy (Fowles & Dindo, 2009; Patrick & Bernat, 2009), these authors proposed that the presence of dispositional fearlessness (boldness) may be conducive to success when not accompanied by high externalizing proneness (disinhibition). For example, high-bold/low-disinhibited individuals could be expected to achieve higher success in occupations calling for leadership and/or courage because their psychopathic tendencies are manifested mainly in terms of social effectiveness, affective resilience, and venturesomeness. Data relevant to this idea come from an intriguing study by Lilienfeld, Waldman, Landfield, Rubenzer, and Faschingbauer (2012), who used personality trait ratings of former U.S. presidents provided by expert historians to estimate scores on the FD and SCI factors of the PPI (Ross et al., 2009). They found that higher estimated levels of PPI-FD (boldness) predicted higher ratings of presidential performance, persuasiveness, leadership, and crisis management ability, whereas higher estimated levels of SCI predicted adverse outcomes such as documented abuses of power and impeachment proceedings. Further research on outcomes associated with high levels of boldness and/or meanness in the absence of high disinhibition should yield valuable new insights into dispositional factors underlying psychopathy and alternative ways psychopathic tendencies can be expressed. Outside Resources Book: (Fictional novels or biographies/biographical novels) Capote, T. (1966). In cold blood. New York, NY: Random House. Book: (Fictional novels or biographies/biographical novels) Highsmith, P. (1955). The talented Mr. Ripley. New York, NY: Coward-McCann. Book: (Fictional novels or biographies/biographical novels) Kerouac, J. (1957). On the road. New York, NY: Viking Press. Book: (Fictional novels or biographies/biographical novels) Mailer, N. (1979). The executioner’s song. New York, NY: Little, Brown & Co. Book: (Fictional novels or biographies/biographical novels) McMurtry, L. (1985). Lonesome dove. New York, NY: Simon & Schuster. Book: (Fictional novels or biographies/biographical novels) Rule, A. (1988). Small sacrifices. New York, NY: Signet. Book: (Fictional novels or biographies/biographical novels) Wolff, G. (1979). The duke of deception. New York, NY: Vintage. Book: (Reference) Babiak, P., & Hare, R. D. (2006). Snakes in suits. New York, NY: HarperCollins. Book: (Reference) Blair, R. J. R., Mitchell, D., & Blair, K. (2005). The psychopath: Emotion and the brain. Malden, MA: Blackwell. Book: (Reference) Hare, R. D. (1993). Without conscience. New York, NY: Guilford Press. Book: (Reference) Häkkänen-Nyholm, H., & Nyholm, J. (2012). Psychopathy and law: A practitioner’s guide. New York, NY: Wiley. Book: (Reference) Patrick, C. J. (2006). Handbook of psychopathy. New York, NY: Guilford Press. Book: (Reference) Raine, A. (2013). The anatomy of violence. New York, NY: Random House. Book: (Reference) Salekin, R., & Lynam, D. T. (2010). Handbook of child and adolescent psychopathy. New York, NY: Guilford Press. Measure: Online home of the Triarchic Psychopathy Measure. It serves as a great resource for students who wish to look at how psychopathy is measured a little more deeply. www.phenxtoolkit.org/index.p...ails&id=121601 Movie: (Facets of psychopathy - Boldness) Bigelow, K.
(Producer & Director), Boal, M., Chartier, N., & Shapiro, G. (Producers). (2008). The hurt locker. United States: Universal Studios. Movie: (Facets of psychopathy - Disinhibition) Felner, E. (Producer), Cox, A. (Director). (1986). Sid and Nancy. United States: Samuel Goldwyn. Movie: (Facets of psychopathy - Meanness) Coen, J., Coen, E. (Producers & Directors), & Rudin, S. (Producer). (2007). No country for old men. United States: Miramax. Movie: (Psychopathic criminals) Demme, J., Saraf, P., Saxon, E. (Producers), & Sena, D. (Director). (1993). Kalifornia. United States: Gramercy. Movie: (Psychopathic criminals) Jaffe, S. R., Lansing, S. (Producers), & Lyn, A. (Director). (1987). Fatal attraction. United States: Paramount. Movie: (Psychopathic criminals) Scorsese, M., Harris, R. A., Painten, J. (Producers), & Frears, S. (Director). (1990). The grifters. United States: Miramax. Movie: (Psychopathic criminals) Ward, F., Bozman, R., Utt, K., Demme, J. (Producers), & Armitrage, G. (Director). (1990). Miami blues. United States: Orion. Movie: (Psychopathic hospital patients) Wick, D., Conrad, C. (Producers), & Mangold, J. (Director). (1990). Girl, interrupted. United States: Columbia. Movie: (Psychopathic hospital patients) Zaentz, S., Douglas, M. (Producers), & Forman, M. (Director). (1975). One flew over the cuckoo’s nest. United States: United Artists. Movie: (Psychopaths in business/politics) Bachrach, D. (Producer), & Pierson, F. (Director). (1992). Citizen Cohn. United States: Home Box Office. Movie: (Psychopaths in business/politics) Pressman, E. R. (Producer), & Stone, O. (Director). (1987). Wall Street. United States: 20th Century Fox. Web: A valuable online resource containing information of various types including detailed reference lists is Dr. Robert Hare’s website on the topic of psychopathy. Dr. Hare i the person who created the PCL-R. It has a number of interesting articles to read as well as helpful psychopathy links from about the web. http://www.hare.org/ Web: Hervey Cleckley’s classic book The Mask of Sanity is no longer in print at this time, but a version authorized for nonprofit educational use by his estate can be viewed online at http://www.quantumfuture.net/store/sanity_1.PdF Web: The Triarchic Psychopathy Measure can be accessed online at www.phenxtoolkit.org/index.p...ails&id=121601 Web: The website for the Aftermath Foundation, a nonprofit organization that provides information and support for victims and family members of psychopathic individuals, is http://www.aftermath-surviving-psychopathy.org/ Web: The website for the Society for Scientific Study of Psychopathy. This is the major research hub for psychopathy. It has a section specifically for students that might prove especially interesting. www.psychopathysociety.org/index.php?lang=en-US Discussion Questions 1. What did Cleckley mean when he characterized psychopathy as involving a “Mask of Sanity”? 2. Compare and contrast the Psychopathy Checklist-Revised (PCL-R), the Antisocial Process Screening Device (APSD), and the Psychopathic Personality Inventory (PPI), in terms of the samples they are designed for, the way in which they are administered, and the content and factor structure of their items. 3. Identify and define the three facet constructs of the Triarchic model of psychopathy. Discuss how these facet constructs relate to the factors of the PCL-R, ASPD, and PPI. Discuss how U.S. 
President Teddy Roosevelt and fictional character Anton Chigurh from the film No Country for Old Men might compare in terms of scores on the three Triarchic constructs. 4. Identify alternative types of theories that have been proposed regarding the cause of psychopathy, and how these can be viewed from the perspective of the Triarchic model. 5. Identify two longstanding issues of debate regarding the nature/definition of psychopathy and how these issues are addressed by the Triarchic model. Vocabulary Antisocial personality disorder Counterpart diagnosis to psychopathy included in the third through fifth editions of the Diagnostic and Statistical Manual of Mental Disorders (DSM; APA, 2000). Defined by specific symptoms of behavioral deviancy in childhood (e.g., fighting, lying, stealing, truancy) continuing into adulthood (manifested as repeated rule-breaking, impulsiveness, irresponsibility, aggressiveness, etc.). Psychopathy Synonymous with psychopathic personality, the term used by Cleckley (1941/1976), and adapted from the term psychopathic introduced by German psychiatrist Julius Koch (1888) to designate mental disorders presumed to be heritable. Triarchic model Model formulated to reconcile alternative historic conceptions of psychopathy and differing methods for assessing it. Conceives of psychopathy as encompassing three symptomatic components: boldness, involving social efficacy, emotional resiliency, and venturesomeness; meanness, entailing lack of empathy/emotional-sensitivity and exploitative behavior toward others; and disinhibition, entailing deficient behavioral restraint and lack of control over urges/emotional reactions.
By Deanna M. Barch Washington University in St. Louis Schizophrenia and the other psychotic disorders are some of the most impairing forms of psychopathology, frequently associated with a profound negative effect on the individual’s educational, occupational, and social function. Sadly, these disorders often manifest right at the time of the transition from adolescence to adulthood, just as young people should be evolving into independent young adults. The spectrum of psychotic disorders includes schizophrenia, schizoaffective disorder, delusional disorder, schizotypal personality disorder, schizophreniform disorder, brief psychotic disorder, as well as psychosis associated with substance use or medical conditions. In this module, we summarize the primary clinical features of these disorders, describe the known cognitive and neurobiological changes associated with schizophrenia, describe potential risk factors and/or causes for the development of schizophrenia, and describe currently available treatments for schizophrenia. learning objectives • Describe the signs and symptoms of schizophrenia and related psychotic disorders. • Describe the most well-replicated cognitive and neurobiological changes associated with schizophrenia. • Describe the potential risk factors for the development of schizophrenia. • Describe the controversies associated with “clinical high risk” approaches to identifying individuals at risk for the development of schizophrenia. • Describe the treatments that work for some of the symptoms of schizophrenia. The phenomenology of schizophrenia and related psychotic disorders Most of you have probably had the experience of walking down the street in a city and seeing a person you thought was acting oddly. They may have been dressed in an unusual way, perhaps disheveled or wearing an unusual collection of clothes, makeup, or jewelry that did not seem to fit any particular group or subculture. They may have been talking to themselves or yelling at someone you could not see. If you tried to speak to them, they may have been difficult to follow or understand, or they may have acted paranoid or started telling a bizarre story about the people who were plotting against them. If so, chances are that you have encountered an individual with schizophrenia or another type of psychotic disorder. If you have watched the movie A Beautiful Mind or The Fisher King, you have also seen a portrayal of someone thought to have schizophrenia. Sadly, a few of the individuals who have committed some of the recently highly publicized mass murders may have had schizophrenia, though most people who commit such crimes do not have schizophrenia. It is also likely that you have met people with schizophrenia without ever knowing it, as they may suffer in silence or stay isolated to protect themselves from the horrors they see, hear, or believe are operating in the outside world. As these examples begin to illustrate, psychotic disorders involve many different types of symptoms, including delusions, hallucinations, disorganized speech and behavior, abnormal motor behavior (including catatonia), and negative symptoms such as anhedonia/amotivation and blunted affect/reduced speech. Delusions are false beliefs that are often fixed, hard to change even when the person is presented with conflicting information, and often culturally influenced in their content (e.g., delusions involving Jesus in Judeo-Christian cultures, delusions involving Allah in Muslim cultures).
They can be terrifying for the person, who may remain convinced that they are true even when loved ones and friends present them with clear information that they cannot be true. There are many different types or themes to delusions. The most common delusions are persecutory and involve the belief that individuals or groups are trying to hurt, harm, or plot against the person in some way. These can be people that the person knows (people at work, the neighbors, family members), or more abstract groups (the FBI, the CIA, aliens, etc.). Other types of delusions include grandiose delusions, where the person believes that they have some special power or ability (e.g., I am the new Buddha, I am a rock star); referential delusions, where the person believes that events or objects in the environment have special meaning for them (e.g., that song on the radio is being played specifically for me); or other types of delusions where the person may believe that others are controlling their thoughts and actions, that their thoughts are being broadcast aloud, or that others can read their mind (or that they can read other people’s minds). When you see a person on the street talking to themselves or shouting at other people, chances are they are experiencing hallucinations. These are perceptual experiences that occur even when there is no stimulus in the outside world generating the experiences. They can be auditory, visual, olfactory (smell), gustatory (taste), or somatic (touch). The most common hallucinations in psychosis (at least in adults) are auditory, and can involve one or more voices talking about the person, commenting on the person’s behavior, or giving them orders. The content of the hallucinations is frequently negative (“you are a loser,” “that drawing is stupid,” “you should go kill yourself”) and can be the voice of someone the person knows or a complete stranger. Sometimes the voices sound as if they are coming from outside the person’s head. Other times the voices seem to be coming from inside the person’s head, but are not experienced the same as the person’s inner thoughts or inner speech. Talking to someone with schizophrenia is sometimes difficult, as their speech may be difficult to follow, either because their answers do not clearly flow from your questions, or because one sentence does not logically follow from another. This is referred to as disorganized speech, and it can be present even when the person is writing. Disorganized behavior can include odd dress, odd makeup (e.g., lipstick outlining a mouth for 1 inch), or unusual rituals (e.g., repetitive hand gestures). Abnormal motor behavior can include catatonia, which refers to a variety of behaviors that seem to reflect a reduction in responsiveness to the external environment. This can include holding unusual postures for long periods of time, failing to respond to verbal or motor prompts from another person, or excessive and seemingly purposeless motor activity. Some of the most debilitating symptoms of schizophrenia are difficult for others to see. These include what people refer to as “negative symptoms” or the absence of certain things we typically expect most people to have. For example, anhedonia or amotivation reflect a lack of apparent interest in or drive to engage in social or recreational activities. These symptoms can manifest as a great amount of time spent in physical immobility.
Importantly, anhedonia and amotivation do not seem to reflect a lack of enjoyment in pleasurable activities or events (Cohen & Minor, 2010; Kring & Moran, 2008; Llerena, Strauss, & Cohen, 2012) but rather a reduced drive or ability to take the steps necessary to obtain the potentially positive outcomes (Barch & Dowd, 2010). Flat affect and reduced speech (alogia) reflect a diminished display of emotion through facial expressions, gestures, and speech intonation, as well as a reduced amount of speech and increased pause frequency and duration.

In many ways, the types of symptoms associated with psychosis are the most difficult for us to understand, as they may seem far outside the range of our normal experiences. Unlike depression or anxiety, many of us may not have had experiences that we think of as on the same continuum as psychosis. However, just like many of the other forms of psychopathology described in this book, the types of psychotic symptoms that characterize disorders like schizophrenia are on a continuum with “normal” mental experiences. For example, work by Jim van Os in the Netherlands has shown that a surprisingly large percentage of the general population (10%+) experience psychotic-like symptoms, though many fewer have multiple experiences and most will not continue to experience these symptoms in the long run (Verdoux & van Os, 2002). Similarly, work in a general population of adolescents and young adults in Kenya has also shown that a relatively high percentage of individuals experience one or more psychotic-like experiences (~19%) at some point in their lives (Mamah et al., 2012; Ndetei et al., 2012), though again most will not go on to develop a full-blown psychotic disorder.

Schizophrenia is the primary disorder that comes to mind when we discuss “psychotic” disorders (see Table 1 for diagnostic criteria), though there are a number of other disorders that share one or more features with schizophrenia. In the remainder of this module, we will use the terms “psychosis” and “schizophrenia” somewhat interchangeably, given that most of the research has focused on schizophrenia. In addition to schizophrenia (see Table 1), other psychotic disorders include schizophreniform disorder (a briefer version of schizophrenia), schizoaffective disorder (a mixture of psychosis and depression/mania symptoms), delusional disorder (the experience of only delusions), and brief psychotic disorder (psychotic symptoms that last only a few days or weeks).

The Cognitive Neuroscience of Schizophrenia

As described above, when we think of the core symptoms of psychotic disorders such as schizophrenia, we think of people who hear voices, see visions, and have false beliefs about reality (i.e., delusions). However, problems in cognitive function are also a critical aspect of psychotic disorders and of schizophrenia in particular. This emphasis on cognition in schizophrenia is in part due to the growing body of research suggesting that cognitive problems in schizophrenia are a major source of disability and loss of functional capacity (Green, 2006; Nuechterlein et al., 2011).
The cognitive deficits that are present in schizophrenia are widespread and can include problems with episodic memory (the ability to learn and retrieve new information or episodes in one’s life), working memory (the ability to maintain information over a short period of time, such as 30 seconds), and other tasks that require one to “control” or regulate one’s behavior (Barch & Ceaser, 2012; Bora, Yucel, & Pantelis, 2009a; Fioravanti, Carlone, Vitale, Cinti, & Clare, 2005; Forbes, Carrick, McIntosh, & Lawrie, 2009; Mesholam-Gately, Giuliano, Goff, Faraone, & Seidman, 2009). Individuals with schizophrenia also have difficulty with what is referred to as “processing speed” and are frequently slower than healthy individuals on almost all tasks. Importantly, these cognitive deficits are present prior to the onset of the illness (Fusar-Poli et al., 2007) and are also present, albeit in a milder form, in the first-degree relatives of people with schizophrenia (Snitz, Macdonald, & Carter, 2006). This suggests that cognitive impairments in schizophrenia reflect part of the risk for the development of psychosis, rather than being an outcome of developing psychosis. Further, people with schizophrenia who have more severe cognitive problems also tend to have more severe negative symptoms and more disorganized speech and behavior (Barch, Carter, & Cohen, 2003; Barch et al., 1999; Dominguez Mde, Viechtbauer, Simons, van Os, & Krabbendam, 2009; Ventura, Hellemann, Thames, Koellner, & Nuechterlein, 2009; Ventura, Thames, Wood, Guzik, & Hellemann, 2010). In addition, people with more cognitive problems have worse function in everyday life (Bowie et al., 2008; Bowie, Reichenberg, Patterson, Heaton, & Harvey, 2006; Fett et al., 2011).

Some people with schizophrenia also show deficits in what is referred to as social cognition, though it is not clear whether such problems are separate from the cognitive problems described above or the result of them (Hoe, Nakagami, Green, & Brekke, 2012; Kerr & Neale, 1993; van Hooren et al., 2008). This includes problems with the recognition of emotional expressions on the faces of other individuals (Kohler, Walker, Martin, Healey, & Moberg, 2010) and problems inferring the intentions of other people (theory of mind) (Bora, Yucel, & Pantelis, 2009b). Individuals with schizophrenia who have more problems with social cognition also tend to have more negative and disorganized symptoms (Ventura, Wood, & Hellemann, 2011), as well as worse community function (Fett et al., 2011).

The advent of neuroimaging techniques such as structural and functional magnetic resonance imaging and positron emission tomography opened up the ability to try to understand the brain mechanisms of the symptoms of schizophrenia, as well as the cognitive impairments found in psychosis. For example, a number of studies have suggested that delusions in psychosis may be associated with problems in “salience” detection mechanisms supported by the ventral striatum (Jensen & Kapur, 2009; Jensen et al., 2008; Kapur, 2003; Kapur, Mizrahi, & Li, 2005; Murray et al., 2008) and the anterior prefrontal cortex (Corlett et al., 2006; Corlett, Honey, & Fletcher, 2007; Corlett, Murray, et al., 2007a, 2007b). These are regions of the brain that normally increase their activity when something important (aka “salient”) happens in the environment. If these brain regions misfire, it may lead individuals with psychosis to mistakenly attribute importance to irrelevant or unconnected events.
Further, there is good evidence that problems in working memory and cognitive control in schizophrenia are related to problems in the function of a region of the brain called the dorsolateral prefrontal cortex (DLPFC) (Minzenberg, Laird, Thelen, Carter, & Glahn, 2009; Ragland et al., 2009). These problems include changes in how the DLPFC works when people are doing working-memory or cognitive-control tasks, and problems with how this brain region is connected to other brain regions important for working memory and cognitive control, including the posterior parietal cortex (e.g., Karlsgodt et al., 2008; J. J. Kim et al., 2003; Schlosser et al., 2003), the anterior cingulate (Repovs & Barch, 2012), and temporal cortex (e.g., Fletcher et al., 1995; Meyer-Lindenberg et al., 2001). In terms of understanding episodic memory problems in schizophrenia, many researchers have focused on medial temporal lobe deficits, with a specific focus on the hippocampus (e.g., Heckers & Konradi, 2010). This is because there is much data from humans and animals showing that the hippocampus is important for the creation of new memories (Squire, 1992). However, it has become increasingly clear that problems with the DLPFC also make important contributions to episodic memory deficits in schizophrenia (Ragland et al., 2009), probably because this part of the brain is important for controlling our use of memory.

In addition to problems with regions such as the DLPFC and medial temporal lobes described above, magnetic resonance neuroimaging studies have also identified changes in cellular architecture, white matter connectivity, and gray matter volume in a variety of regions that include the prefrontal and temporal cortices (Bora et al., 2011). People with schizophrenia also show reduced overall brain volume, and reductions in brain volume as people get older may be larger in those with schizophrenia than in healthy people (Olabi et al., 2011). Taking antipsychotic medications or taking drugs such as marijuana, alcohol, and tobacco may cause some of these structural changes. However, these structural changes are not completely explained by medications or substance use alone. Further, both functional and structural brain changes are seen, again to a milder degree, in the first-degree relatives of people with schizophrenia (Boos, Aleman, Cahn, Pol, & Kahn, 2007; Brans et al., 2008; Fusar-Poli et al., 2007; MacDonald, Thermenos, Barch, & Seidman, 2009). This again suggests that the neural changes associated with schizophrenia are related to a genetic risk for this illness.

Risk Factors for Developing Schizophrenia

It is clear that there are important genetic contributions to the likelihood that someone will develop schizophrenia, with consistent evidence from family, twin, and adoption studies (Sullivan, Kendler, & Neale, 2003). However, there is no “schizophrenia gene,” and it is likely that the genetic risk for schizophrenia reflects the summation of many different genes that each contribute something to the likelihood of developing psychosis (Gottesman & Shields, 1967; Owen, Craddock, & O'Donovan, 2010). Further, schizophrenia is a very heterogeneous disorder, which means that two different people with “schizophrenia” may each have very different symptoms (e.g., one has hallucinations and delusions, the other has disorganized speech and negative symptoms). This makes it even more challenging to identify specific genes associated with risk for psychosis.
Importantly, many studies also now suggest that at least some of the genes potentially associated with schizophrenia are also associated with other mental health conditions, including bipolar disorder, depression, and autism (Gejman, Sanders, & Kendler, 2011; Y. Kim, Zerwas, Trace, & Sullivan, 2011; Owen et al., 2010; Rutter, Kim-Cohen, & Maughan, 2006).

There are also a number of environmental factors that are associated with an increased risk of developing schizophrenia. For example, problems during pregnancy such as increased stress, infection, malnutrition, and/or diabetes have been associated with increased risk of schizophrenia. In addition, complications that occur at the time of birth and that cause hypoxia (lack of oxygen) are also associated with an increased risk for developing schizophrenia (M. Cannon, Jones, & Murray, 2002; Miller et al., 2011). Children born to older fathers are also at a somewhat increased risk of developing schizophrenia. Further, using cannabis increases the risk for developing psychosis, especially if you have other risk factors (Casadio, Fernandes, Murray, & Di Forti, 2011; Luzi, Morrison, Powell, di Forti, & Murray, 2008). The likelihood of developing schizophrenia is also higher for kids who grow up in urban settings (March et al., 2008) and for some minority ethnic groups (Bourque, van der Ven, & Malla, 2011). Both of these factors may reflect higher social and environmental stress in these settings. Unfortunately, none of these risk factors is specific enough to be particularly useful in a clinical setting, and most people with these “risk” factors do not develop schizophrenia. However, together they are beginning to give us clues as to the neurodevelopmental factors that may lead someone to be at an increased risk for developing this disease.

An important research area on risk for psychosis has been work with individuals who may be at “clinical high risk.” These are individuals who are showing attenuated (milder) symptoms of psychosis that have developed recently and who are experiencing some distress or disability associated with these symptoms. When people with these types of symptoms are followed over time, about 35% of them develop a psychotic disorder (T. D. Cannon et al., 2008), most frequently schizophrenia (Fusar-Poli, McGuire, & Borgwardt, 2012). In order to identify these individuals, a new category of diagnosis, called “Attenuated Psychotic Syndrome,” was added to Section III (the section for disorders in need of further study) of the DSM-5 (see Table 1 for symptoms) (APA, 2013). However, adding this diagnostic category to the DSM-5 created a good deal of controversy (Batstra & Frances, 2012; Fusar-Poli & Yung, 2012). Many scientists and clinicians have been worried that including “risk” states in the DSM-5 would create mental disorders where none exist, that these individuals are often already seeking treatment for other problems, and that it is not clear that we have good treatments to stop these individuals from progressing to psychosis. However, the counterarguments have been that there is evidence that individuals with high-risk symptoms develop psychosis at a much higher rate than individuals with other types of psychiatric symptoms, and that the inclusion of Attenuated Psychotic Syndrome in Section III will spur important research that might have clinical benefits.
Further, there is some evidence that non-invasive treatments such as omega-3 fatty acids and intensive family intervention may help reduce the development of full-blown psychosis (Preti & Cella, 2010) in people who have high-risk symptoms.

Treatment of Schizophrenia

The currently available treatments for schizophrenia leave much to be desired, and the search for more effective treatments for both the psychotic symptoms of schizophrenia (e.g., hallucinations and delusions) and the cognitive deficits and negative symptoms is a highly active area of research. The first line of treatment for schizophrenia and other psychotic disorders is the use of antipsychotic medications. There are two primary types of antipsychotic medications, referred to as “typical” and “atypical.” The fact that “typical” antipsychotics helped some symptoms of schizophrenia was discovered serendipitously more than 60 years ago (Carpenter & Davis, 2012; Lopez-Munoz et al., 2005). These drugs all share the common feature of being strong blockers of the D2 type of dopamine receptor. Although these drugs can help reduce hallucinations, delusions, and disorganized speech, they do little to improve cognitive deficits or negative symptoms and can be associated with distressing motor side effects. The newer generation of antipsychotics is referred to as “atypical” antipsychotics. These drugs have more mixed mechanisms of action in terms of the receptor types that they influence, though most of them also influence D2 receptors. These newer antipsychotics are not necessarily more helpful for schizophrenia but have fewer motor side effects. However, many of the atypical antipsychotics are associated with side effects referred to as the “metabolic syndrome,” which includes weight gain and increased risk for cardiovascular illness, Type-2 diabetes, and mortality (Lieberman et al., 2005).

The evidence that cognitive deficits also contribute to functional impairment in schizophrenia has led to an increased search for treatments that might enhance cognitive function in schizophrenia. Unfortunately, as of yet, there are no pharmacological treatments that work consistently to improve cognition in schizophrenia, though many new types of drugs are currently under exploration. However, there is a type of psychological intervention, referred to as cognitive remediation, that has shown some evidence of helping cognition and function in schizophrenia. In particular, a version of this treatment called Cognitive Enhancement Therapy (CET) has been shown to improve cognition, functional outcome, and social cognition, and to protect against gray matter loss (Eack et al., 2009; Eack, Greenwald, Hogarty, & Keshavan, 2010; Eack et al., 2010; Eack, Pogue-Geile, Greenwald, Hogarty, & Keshavan, 2010; Hogarty, Greenwald, & Eack, 2006) in young individuals with schizophrenia. The development of new treatments such as Cognitive Enhancement Therapy provides some hope that we will be able to develop new and better approaches to improving the lives of individuals with this serious mental health condition and potentially even prevent it some day.

Outside Resources
Book: Ben Behind His Voices: One family’s journey from the chaos of schizophrenia to hope (2011). Randye Kaye. Rowman and Littlefield.
Book: Conquering Schizophrenia: A father, his son, and a medical breakthrough (1997). Peter Wyden. Knopf.
Book: Henry’s Demons: Living with schizophrenia, a father and son’s story (2011). Henry and Patrick Cockburn. Scribner Macmillan.
Book: My Mother’s Keeper: A daughter’s memoir of growing up in the shadow of schizophrenia (1997). Tara Elgin Holley. William Morrow Co.
Book: Recovered, Not Cured: A journey through schizophrenia (2005). Richard McLean. Allen and Unwin.
Book: The Center Cannot Hold: My journey through madness (2008). Elyn R. Saks. Hyperion.
Book: The Quiet Room: A journey out of the torment of madness (1996). Lori Schiller. Grand Central Publishing.
Book: Welcome Silence: My triumph over schizophrenia (2003). Carol North. CSS Publishing.
Web: National Alliance for the Mentally Ill. This is an excellent site for learning more about advocacy for individuals with major mental illnesses such as schizophrenia. http://www.nami.org/
Web: National Institute of Mental Health. This website has information on NIMH-funded schizophrenia research. http://www.nimh.nih.gov/health/topics/schizophrenia/index.shtml
Web: Schizophrenia Research Forum. This is an excellent website that contains a broad array of information about current research on schizophrenia. http://www.schizophreniaforum.org/

Discussion Questions
1. Describe the differences among the major psychotic disorders.
2. How would one be able to tell when an individual is “delusional” versus having non-delusional beliefs that differ from the societal norm? How should cultural and sub-cultural variation be taken into account when assessing psychotic symptoms?
3. Why are cognitive impairments important to understanding schizophrenia?
4. Why has the inclusion of a new diagnosis (Attenuated Psychotic Syndrome) in Section III of the DSM-5 created controversy?
5. What are some of the factors associated with increased risk for developing schizophrenia? If we know whether or not someone has these risk factors, how well can we tell whether they will develop schizophrenia?
6. What brain changes are most consistent in schizophrenia?
7. Do antipsychotic medications work well for all symptoms of schizophrenia? If not, which symptoms respond better to antipsychotic medications?
8. Are there any treatments besides antipsychotic medications that help any of the symptoms of schizophrenia? If so, what are they?

Vocabulary
Alogia
A reduction in the amount of speech and/or increased pausing before the initiation of speech.
Anhedonia/amotivation
A reduction in the drive or ability to take the steps or engage in actions necessary to obtain potentially positive outcomes.
Catatonia
Behaviors that seem to reflect a reduction in responsiveness to the external environment. This can include holding unusual postures for long periods of time, failing to respond to verbal or motor prompts from another person, or excessive and seemingly purposeless motor activity.
Delusions
False beliefs that are often fixed, hard to change even in the presence of conflicting information, and often culturally influenced in their content.
Diagnostic criteria
The specific criteria used to determine whether an individual has a specific type of psychiatric disorder. Commonly used diagnostic criteria are included in the Diagnostic and Statistical Manual of Mental Disorders, 5th Edition (DSM-5) and the International Classification of Diseases, Version 9 (ICD-9).
Disorganized behavior
Behavior or dress that is outside the norm for almost all subcultures. This would include odd dress, odd makeup (e.g., lipstick outlining a mouth for 1 inch), or unusual rituals (e.g., repetitive hand gestures).
Disorganized speech
Speech that is difficult to follow, either because answers do not clearly follow questions or because one sentence does not logically follow from another.
Dopamine
A neurotransmitter in the brain that is thought to play an important role in regulating the function of other neurotransmitters.
Episodic memory
The ability to learn and retrieve new information or episodes in one’s life.
Flat affect
A reduction in the display of emotions through facial expressions, gestures, and speech intonation.
Functional capacity
The ability to engage in self-care (cook, clean, bathe), work, attend school, and/or engage in social relationships.
Hallucinations
Perceptual experiences that occur even when there is no stimulus in the outside world generating the experiences. They can be auditory, visual, olfactory (smell), gustatory (taste), or somatic (touch).
Magnetic resonance imaging
A set of techniques that uses strong magnets to measure either the structure of the brain (e.g., gray matter and white matter) or how the brain functions when a person performs cognitive tasks (e.g., working memory or episodic memory) or other types of tasks.
Neurodevelopmental
Processes that influence how the brain develops either in utero or as the child is growing up.
Positron emission tomography
A technique that uses radio-labelled ligands to measure the distribution of different neurotransmitter receptors in the brain, or to measure how much of a certain type of neurotransmitter is released when a person is given a specific type of drug or performs a particular cognitive task.
Processing speed
The speed with which an individual can perceive auditory or visual information and respond to it.
Psychopathology
Illnesses or disorders that involve psychological or psychiatric symptoms.
Working memory
The ability to maintain information over a short period of time, such as 30 seconds or less.
By Kevin A. Pelphrey Yale University

People with autism spectrum disorder (ASD) suffer from a profound social disability. Social neuroscience is the study of the parts of the brain that support social interactions, or the “social brain.” This module provides an overview of ASD and focuses on understanding how social brain dysfunction leads to ASD. Our increasing understanding of the social brain and its dysfunction in ASD will allow us to better identify the genes that cause ASD and will help us to create and select treatments that better match individuals. Because social brain systems emerge in infancy, social neuroscience can help us to figure out how to diagnose ASD even before the symptoms of ASD are clearly present. This is a hopeful time because social brain systems remain malleable well into adulthood and thus open to creative new interventions that are informed by state-of-the-art science.

learning objectives
• Know the basic symptoms of ASD.
• Distinguish components of the social brain and understand their dysfunction in ASD.
• Appreciate how social neuroscience may facilitate the diagnosis and treatment of ASD.

Defining Autism Spectrum Disorder

Autism Spectrum Disorder (ASD) is a developmental disorder that usually emerges in the first three years and persists throughout the individual’s life. Though the key symptoms of ASD fall into three general categories (see below), each person with ASD exhibits symptoms in these domains in different ways and to varying degrees. This phenotypic heterogeneity reflects the high degree of variability in the genes underlying ASD (Geschwind & Levitt, 2007). Though we have identified genetic differences associated with individual cases of ASD, each accounts for only a small number of the actual cases, suggesting that no single genetic cause will apply in the majority of people with ASD. There is currently no biological test for ASD.

Autism is in the category of pervasive developmental disorders, which includes Asperger's disorder, childhood disintegrative disorder, autistic disorder, and pervasive developmental disorder - not otherwise specified. These disorders, together, are labeled autism spectrum disorder (ASD). ASD is defined by the presence of profound difficulties in social interactions and communication combined with the presence of repetitive or restricted interests, cognitions, and behaviors. The diagnostic process involves a combination of parental report and clinical observation. Children with significant impairments across the social/communication domain who also exhibit repetitive behaviors can qualify for the ASD diagnosis. There is wide variability in the precise symptom profile an individual may exhibit.

Since Kanner first described ASD in 1943, important commonalities in symptom presentation have been used to compile criteria for the diagnosis of ASD. These diagnostic criteria have evolved during the past 70 years and continue to evolve (e.g., see the recent changes to the diagnostic criteria on the American Psychiatric Association’s website, http://www.dsm5.org/), yet impaired social functioning remains a required symptom for an ASD diagnosis. Deficits in social functioning are present in varying degrees, ranging from simple behaviors such as eye contact to complex behaviors like navigating the give and take of a group conversation, and they occur in individuals of all functioning levels (i.e., high or low IQ).
Moreover, difficulties with social information processing occur in both visual (e.g., Pelphrey et al., 2002) and auditory (e.g., Dawson, Meltzoff, Osterling, Rinaldi, & Brown, 1998) sensory modalities. Consider the results of an eye tracking study in which Pelphrey and colleagues (2002) observed that individuals with autism did not make use of the eyes when judging facial expressions of emotion (see right panels of Figure 1). While repetitive behaviors or language deficits are seen in other disorders (e.g., obsessive-compulsive disorder and specific language impairment, respectively), basic social deficits of this nature are unique to ASD. Onset of the social deficits appears to precede difficulties in other domains (Osterling, Dawson, & Munson, 2002) and may emerge as early as 6 months of age (Maestro et al., 2002).

Defining the Social Brain

Within the past few decades, research has elucidated specific brain circuits that support perception of humans and other species. This social perception refers to “the initial stages in the processing of information that culminates in the accurate analysis of the dispositions and intentions of other individuals” (Allison, Puce, & McCarthy, 2000). Basic social perception is a critical building block for more sophisticated social behaviors, such as thinking about the motives and emotions of others. Brothers (1990) first suggested the notion of a social brain, a set of interconnected neuroanatomical structures that process social information, enabling the recognition of other individuals and the evaluation of their mental states (e.g., intentions, dispositions, desires, and beliefs).

The social brain is hypothesized to consist of the amygdala, the orbital frontal cortex (OFC), the fusiform gyrus (FG), and the posterior superior temporal sulcus (STS) region, among other structures. Though all areas work in coordination to support social processing, each appears to serve a distinct role. The amygdala helps us recognize the emotional states of others (e.g., Morris et al., 1996) and also to experience and regulate our own emotions (e.g., LeDoux, 1992). The OFC supports the "reward" feelings we have when we are around other people (e.g., Rolls, 2000). The FG, located on the bottom surface of the temporal lobes, detects faces and supports face recognition (e.g., Puce, Allison, Asgari, Gore, & McCarthy, 1996). The posterior STS region recognizes biological motion, including eye, hand, and other body movements, and helps to interpret and predict the actions and intentions of others (e.g., Pelphrey, Morris, Michelich, Allison, & McCarthy, 2005).

Current Understanding of Social Perception in ASD

The social brain is of great research interest because the social difficulties characteristic of ASD are thought to relate closely to the functioning of this brain network. Functional magnetic resonance imaging (fMRI) and event-related potentials (ERP) are complementary brain imaging methods used to study activity in the brain across the lifespan. Each method measures a distinct facet of brain activity and contributes unique information to our understanding of brain function. fMRI uses powerful magnets to measure the levels of oxygen within the brain, which vary according to changes in neural activity. As the neurons in specific brain regions “work harder,” they require more oxygen. fMRI detects the brain regions that exhibit a relative increase in blood flow (and oxygen levels) while people listen to or view social stimuli in the MRI scanner.
The areas of the brain most crucial for different social processes are thus identified, with spatial information being accurate to the millimeter. In contrast, ERP provides direct measurements of the firing of groups of neurons in the cortex. Non-invasive sensors on the scalp record the small electrical currents created by this neuronal activity while the subject views stimuli or listens to specific kinds of information. While fMRI provides information about where brain activity occurs, ERP specifies when, by detailing the timing of processing at the millisecond pace at which it unfolds. ERP and fMRI are complementary, with fMRI providing excellent spatial resolution and ERP offering outstanding temporal resolution. Together, this information is critical to understanding the nature of social perception in ASD.

To date, the most thoroughly investigated areas of the social brain in ASD are the superior temporal sulcus (STS), which underlies the perception and interpretation of biological motion, and the fusiform gyrus (FG), which supports face perception. Heightened sensitivity to biological motion (for humans, motion such as walking) serves an essential role in the development of humans and other highly social species. Emerging in the first days of life, the ability to detect biological motion helps to orient vulnerable young to critical sources of sustenance, support, and learning, and develops independently of visual experience with biological motion (e.g., Simion, Regolin, & Bulf, 2008). This inborn “life detector” serves as a foundation for the subsequent development of more complex social behaviors (Johnson, 2006). From very early in life, children with ASD display reduced sensitivity to biological motion (Klin, Lin, Gorrindo, Ramsay, & Jones, 2009). Individuals with ASD have reduced activity in the STS during biological motion perception. In contrast, people at increased genetic risk for ASD who do not develop symptoms of the disorder (i.e., unaffected siblings of individuals with ASD) show increased activity in this region, which is hypothesized to be a compensatory mechanism to offset genetic vulnerability (Kaiser et al., 2010).

In typical development, preferential attention to faces and the ability to recognize individual faces emerge in the first days of life (e.g., Goren, Sarty, & Wu, 1975). The special way in which the brain responds to faces usually emerges by three months of age (e.g., de Haan, Johnson, & Halit, 2003) and continues throughout the lifespan (e.g., Bentin et al., 1996). Children with ASD, however, tend to show decreased attention to human faces by six to 12 months (Osterling & Dawson, 1994). Children with ASD also show reduced activity in the FG when viewing faces (e.g., Schultz et al., 2000). Slowed processing of faces (McPartland, Dawson, Webb, Panagiotides, & Carver, 2004) is a characteristic of people with ASD that is shared by parents of children with ASD (Dawson, Webb, & McPartland, 2005) and infants at increased risk for developing ASD because of having a sibling with ASD (McCleery, Akshoomoff, Dobkins, & Carver, 2009). Behavioral and attentional differences in face perception and recognition are evident in children and adults with ASD as well (e.g., Hobson, 1986).
Exploring Diversity in ASD

Because of the limited quality of the behavioral methods used to diagnose ASD and current clinical diagnostic practice, which permits similar diagnoses despite distinct symptom profiles (McPartland, Webb, Keehn, & Dawson, 2011), it is possible that the group of children currently referred to as having ASD may actually represent different syndromes with distinct causes. Examination of the social brain may well reveal diagnostically meaningful subgroups of children with ASD. Measurements of the “where” and “when” of brain activity during social processing tasks provide reliable sources of the detailed information needed to profile children with ASD with greater accuracy. These profiles, in turn, may help to inform treatment of ASD by helping us to match specific treatments to specific profiles.

The integration of imaging methods is critical for this endeavor. Using face perception as an example, the combination of fMRI and ERP could identify who, among individuals with ASD, shows anomalies in the FG and then determine the stage of information processing at which these impairments occur. Because different processing stages often reflect discrete cognitive processes, this level of understanding could encourage treatments that address specific processing deficits at the neural level. For example, differences observed in the early processing stages might reflect problems with low-level visual perception, while later differences would indicate problems with higher-order processes, such as emotion recognition. These same principles can be applied to the broader network of social brain regions and, combined with measures of behavioral functioning, could offer a comprehensive profile of brain-behavior performance for a given individual. A fundamental goal for this kind of subgroup approach is to improve the ability to tailor treatments to the individual.

Another objective is to improve the power of other scientific tools. Most studies of individuals with ASD compare groups of individuals, for example, individuals with ASD compared to typically developing peers. However, studies have also attempted to compare children across the autism spectrum by grouping them according to differential diagnosis (e.g., Asperger’s disorder versus autistic disorder) or by other behavioral or cognitive characteristics (e.g., cognitively able versus intellectually disabled, or anxious versus non-anxious). Yet, the power of a scientific study to detect these kinds of significant, meaningful, individual differences is only as strong as the accuracy of the factor used to define the compared groups. The identification of distinct subgroups within the autism spectrum according to information about the brain would allow for a more accurate and detailed exposition of the individual differences seen in those with ASD. This is especially critical for the success of investigations into the genetic basis of ASD. As mentioned before, the genes discovered thus far account for only a small portion of ASD cases. If meaningful, quantitative distinctions in individuals with ASD are identified, a more focused examination into the genetic causes specific to each subgroup could then be pursued. Moreover, distinct findings from neuroimaging, or biomarkers, can help guide genetic research.
Endophenotypes, or characteristics that are not immediately available to observation but that reflect an underlying genetic liability for disease, expose the most basic components of a complex psychiatric disorder and are more stable across the lifespan than observable behavior (Gottesman & Shields, 1973). By describing the key characteristics of ASD in these objective ways, neuroimaging research will facilitate identification of genetic contributions to ASD.

Atypical Brain Development Before the Emergence of Atypical Behavior

Because autism is a developmental disorder, it is particularly important to diagnose and treat ASD early in life. Early deficits in attention to biological motion, for instance, derail subsequent experiences in attending to higher-level social information, thereby driving development toward more severe dysfunction and stimulating deficits in additional domains of functioning, such as language development. The lack of reliable predictors of the condition during the first year of life has been a major impediment to the effective treatment of ASD. Without early predictors, and in the absence of a firm diagnosis until behavioral symptoms emerge, treatment is often delayed for two or more years, eclipsing a crucial period in which intervention may be particularly successful in ameliorating some of the social and communicative impairments seen in ASD.

In response to the great need for sensitive (able to identify subtle cases) and specific (able to distinguish autism from other disorders) early indicators of ASD, such as biomarkers, many research groups from around the world have been studying patterns of infant development using prospective longitudinal studies of infant siblings of children with ASD and a comparison group of infant siblings without familial risks. Such designs gather longitudinal information about developmental trajectories across the first three years of life for both groups, followed by clinical diagnosis at approximately 36 months. These studies are challenging in that many of the social features of autism do not emerge in typical development until after 12 months of age, and it is not certain that these symptoms will manifest during the limited periods of observation involved in clinical evaluations or in pediatricians’ offices. Moreover, across development, but especially during infancy, behavior is widely variable and often unreliable, and at present, behavioral observation is the only means to detect symptoms of ASD and to confirm a diagnosis. This is quite problematic because even highly sophisticated behavioral methods, such as eye tracking (see Figure 1), do not necessarily reveal reliable differences in infants with ASD (Ozonoff et al., 2010). However, measuring the brain activity associated with social perception can detect differences that do not appear in behavior until much later. The identification of biomarkers utilizing the imaging methods we have described offers promise for earlier detection of atypical social development. ERP measures of brain response predict subsequent development of autism in infants as young as six months old who showed normal patterns of visual fixation (as measured by eye tracking) (Elsabbagh et al., 2012). This suggests the great promise of brain imaging for earlier recognition of ASD. With earlier detection, treatments could move from addressing existing symptoms to preventing their emergence by altering the course of abnormal brain development and steering it toward normality.
Hope for Improved Outcomes

The brain imaging research described above offers hope for the future of ASD treatment. Many of the functions of the social brain demonstrate significant plasticity, meaning that their functioning can be affected by experience over time. In contrast to theories that suggest difficulty processing complex information or communicating across large expanses of cortex (Minshew & Williams, 2007), this malleability of the social brain is a positive prognosticator for the development of treatment. The brains of people with ASD are not wired to process social information optimally, but this does not mean that these systems are irretrievably broken. Given the observed plasticity of the social brain, remediation of these difficulties may be possible with appropriate and timely intervention.

Outside Resources
Web: American Psychiatric Association’s website for the 5th edition of the Diagnostic and Statistical Manual of Mental Disorders http://www.dsm5.org
Web: Autism Science Foundation - organization supporting autism research by providing funding and other assistance to scientists and organizations conducting, facilitating, publicizing and disseminating autism research. The organization also provides information about autism to the general public and serves to increase awareness of autism spectrum disorders and the needs of individuals and families affected by autism. http://www.autismsciencefoundation.org/
Web: Autism Speaks - Autism science and advocacy organization http://www.autismspeaks.org/

Discussion Questions
1. How can neuroimaging inform our understanding of the causes of autism?
2. What are the ways in which neuroimaging, including fMRI and ERP, may benefit efforts to diagnose and treat autism?
3. How can an understanding of the social brain help us to understand ASD?
4. What are the core symptoms of ASD, and why is the social brain of particular interest?
5. What are some of the components of the social brain, and what functions do they serve?

Vocabulary
Endophenotypes
A characteristic that reflects a genetic liability for disease and a more basic component of a complex clinical presentation. Endophenotypes are less developmentally malleable than overt behavior.
Event-related potentials (ERP)
Measures the firing of groups of neurons in the cortex. As a person views or listens to specific types of information, neuronal activity creates small electrical currents that can be recorded from non-invasive sensors placed on the scalp. ERP provides excellent information about the timing of processing, clarifying brain activity at the millisecond pace at which it unfolds.
Functional magnetic resonance imaging (fMRI)
Entails the use of powerful magnets to measure the levels of oxygen within the brain that vary with changes in neural activity. That is, as the neurons in specific brain regions “work harder” when performing a specific task, they require more oxygen. By having people listen to or view social stimuli in an MRI scanner, fMRI specifies the brain regions that evidence a relative increase in blood flow. In this way, fMRI provides excellent spatial information, pinpointing with millimeter accuracy the brain regions most critical for different social processes.
Social brain
The set of neuroanatomical structures that allows us to understand the actions and intentions of other people.
By Susan Barron University of Kentucky

Psychopharmacology is the study of how drugs affect behavior. If a drug changes your perception, or the way you feel or think, the drug exerts effects on your brain and nervous system. We call drugs that change the way you think or feel psychoactive or psychotropic drugs, and almost everyone has used a psychoactive drug at some point (yes, caffeine counts). Understanding some of the basics about psychopharmacology can help us better understand a wide range of things that interest psychologists and others. For example, the pharmacological treatment of certain neurodegenerative diseases such as Parkinson’s disease tells us something about the disease itself. The pharmacological treatments used to treat psychiatric conditions such as schizophrenia or depression have undergone amazing development since the 1950s, and the drugs used to treat these disorders tell us something about what is happening in the brain of individuals with these conditions. Finally, understanding something about the actions of drugs of abuse and their routes of administration can help us understand why some psychoactive drugs are so addictive. In this module, we will provide an overview of some of these topics as well as discuss some current controversial areas in the field of psychopharmacology.

learning objectives
• How do the majority of psychoactive drugs work in the brain?
• How does the route of administration affect how rewarding a drug might be?
• Why is grapefruit dangerous to consume with many psychotropic medications?
• Why might individualized drug doses based on genetic screening be helpful for treating conditions like depression?
• Why is there controversy regarding pharmacotherapy for children, adolescents, and the elderly?

Introduction

Psychopharmacology, the study of how drugs affect the brain and behavior, is a relatively new science, although people have probably been taking drugs to change how they feel from early in human history (consider the eating of fermented fruit, ancient beer recipes, and the chewing of coca leaves for their stimulant properties, as just some examples). The word psychopharmacology itself tells us that this is a field that bridges our understanding of behavior (and brain) and pharmacology, and the range of topics included within this field is extremely broad.

Virtually any drug that changes the way you feel does this by altering how neurons communicate with each other. Neurons (more than 100 billion in your nervous system) communicate with each other by releasing a chemical (neurotransmitter) across a tiny space between two neurons (the synapse). When the neurotransmitter crosses the synapse, it binds to a postsynaptic receptor (protein) on the receiving neuron and the message may then be transmitted onward. Obviously, neurotransmission is far more complicated than this – links at the end of this module can provide some useful background if you want more detail – but the first step is understanding that virtually all psychoactive drugs interfere with or alter how neurons communicate with each other.

There are many neurotransmitters. Some of the most important in terms of psychopharmacological treatment and drugs of abuse are outlined in Table 1. The neurons that release these neurotransmitters, for the most part, are localized within specific circuits of the brain that mediate these behaviors. Psychoactive drugs can either increase activity at the synapse (these are called agonists) or reduce activity at the synapse (antagonists).
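To give a feel for how agonist and antagonist effects can be put in quantitative terms, the short sketch below uses a simple Hill-type dose-response model in which a competitive antagonist shifts the agonist's curve to the right. This is a standard textbook model rather than the mechanism of any particular drug discussed in this module, and every numerical constant here is hypothetical, chosen only for illustration.

```python
import numpy as np

# Illustrative dose-response sketch (hypothetical constants, no real drug).
# Agonist effect follows a simple Hill equation; a competitive antagonist
# shifts the curve rightward by the Gaddum factor (1 + [B]/Kb).
emax, ec50 = 100.0, 1.0   # max effect (%), agonist EC50 (uM) -- hypothetical
kb, b = 0.5, 2.0          # antagonist affinity and concentration (uM) -- hypothetical

agonist = np.logspace(-2, 2, 5)   # agonist concentrations, 0.01 to 100 uM

def effect(a, shift=1.0):
    """Percent of maximal response at agonist concentration a."""
    return emax * a / (a + ec50 * shift)

for a in agonist:
    alone = effect(a)
    blocked = effect(a, shift=1 + b / kb)   # antagonist present
    print(f"[agonist]={a:7.2f} uM: effect {alone:5.1f}% alone, {blocked:5.1f}% with antagonist")
```

Running the sketch shows that the same neurotransmitter concentration produces a much smaller effect when the antagonist is present, which is what "reducing activity at the synapse" means in quantitative terms.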
Different drugs do this by different mechanisms, and some examples of agonists and antagonists are presented in Table 2. For each example, the drug’s trade name, which is the name of the drug provided by the drug company, and generic name (in parentheses) are provided. A very useful link at the end of this module shows the various steps involved in neurotransmission and some ways drugs can alter this.

Table 2 provides examples of drugs and their primary mechanism of action, but it is very important to realize that drugs also have effects on other neurotransmitters. This contributes to the kinds of side effects that are observed when someone takes a particular drug. The reality is that no drug currently available works only exactly where we would like in the brain, or only on a specific neurotransmitter. In many cases, individuals are prescribed one psychotropic drug but then may also have to take additional drugs to reduce the side effects caused by the initial drug. Sometimes individuals stop taking medication because the side effects can be so profound.

Pharmacokinetics: What Is It – Why Is It Important?

While this section may sound more like pharmacology, it is important to realize how important pharmacokinetics can be when considering psychoactive drugs. Pharmacokinetics refers to how the body handles a drug that we take. As mentioned earlier, psychoactive drugs exert their effects on behavior by altering neuronal communication in the brain, and the majority of drugs reach the brain by traveling in the blood. The acronym ADME is often used, with A standing for Absorption (how the drug gets into the blood), D for Distribution (how the drug gets to the organ of interest – in this module, that is the brain), M for Metabolism (how the drug is broken down so it no longer exerts its psychoactive effects), and E for Excretion (how the drug leaves the body). We will talk about a couple of these to show their importance for considering psychoactive drugs.

Drug Administration

There are many ways to take drugs, and these routes of drug administration can have a significant impact on how quickly a drug reaches the brain. The most common route of administration is oral administration, which is relatively slow and – perhaps surprisingly – often the most variable and complex route of administration. Drugs enter the stomach and then get absorbed by the blood supply and capillaries that line the small intestine. The rate of absorption can be affected by a variety of factors, including the quantity and the type of food in the stomach (e.g., fats vs. proteins). This is why the medicine label for some drugs (like antibiotics) may specifically state foods that you should or should NOT consume within an hour of taking the drug, because they can affect the rate of absorption. Two of the most rapid routes of administration include inhalation (i.e., smoking or gaseous anesthesia) and intravenous (IV) injection, in which the drug is injected directly into the vein and hence the blood supply. Both of these routes of administration can get the drug to the brain in less than 10 seconds. IV administration also has the distinction of being the most dangerous, because if there is an adverse drug reaction, there is very little time to administer any antidote, as in the case of an IV heroin overdose. Why might how quickly a drug gets to the brain be important? If a drug activates the reward circuits in the brain AND it reaches the brain very quickly, the drug has a high risk for abuse and addiction.
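The contrast between a fast route (IV) and a slow route (oral) can be made concrete with a minimal one-compartment pharmacokinetic sketch. The oral curve below uses the classic Bateman equation for first-order absorption and elimination, while an IV bolus simply starts at its peak and decays. All parameter values are hypothetical, chosen for illustration rather than taken from any real drug.

```python
import numpy as np

# One-compartment pharmacokinetic sketch (illustrative parameters, not real drug data).
# Oral dosing follows the classic Bateman equation; IV bolus is first-order decay.
D = 100.0   # dose (mg) -- hypothetical
V = 40.0    # volume of distribution (L) -- hypothetical
F = 0.8     # oral bioavailability (fraction reaching the blood) -- hypothetical
ka = 1.5    # absorption rate constant (1/hr), oral route; must differ from ke
ke = 0.2    # elimination rate constant (1/hr)

t = np.linspace(0, 12, 121)                  # hours, 0.1-hr steps
c_iv = (D / V) * np.exp(-ke * t)             # IV: peak concentration at t = 0
c_oral = (F * D * ka) / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

print(f"IV peak:   {c_iv.max():.2f} mg/L at t = {t[c_iv.argmax()]:.1f} hr")
print(f"Oral peak: {c_oral.max():.2f} mg/L at t = {t[c_oral.argmax()]:.1f} hr")
```

Under these illustrative numbers, the IV curve peaks immediately while the oral curve rises slowly to a lower, later peak, which is one reason that, dose for dose, fast routes of administration carry more abuse potential.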
Psychostimulants like amphetamine or cocaine are examples of drugs that have a high risk for abuse because they are agonists at dopamine (DA) neurons involved in reward AND because these drugs exist in forms that can be either smoked or injected intravenously. Some argue that cigarette smoking is one of the hardest addictions to quit, and although part of the reason for this may be that smoking gets the nicotine into the brain very quickly (and indirectly acts on DA neurons), it is a more complicated story. For drugs that reach the brain very quickly, not only is the drug very addictive, but so are the cues associated with the drug (see Rohsenow, Niaura, Childress, Abrams, & Monti, 1990). For a crack user, this could be the pipe that they use to smoke the drug. For a cigarette smoker, however, it could be something as normal as finishing dinner or waking up in the morning (if that is when the smoker usually has a cigarette). For both the crack user and the cigarette smoker, the cues associated with the drug may actually cause craving that is alleviated by (you guessed it) – lighting a cigarette or using crack (i.e., relapse). This is one of the reasons individuals who enroll in drug treatment programs, especially out-of-town programs, are at significant risk of relapse if they later find themselves in proximity to old haunts, friends, etc. But this is much more difficult for a cigarette smoker. How can someone avoid eating, or avoid waking up in the morning? These examples help you begin to understand how important the route of administration can be for psychoactive drugs.

Drug Metabolism

Metabolism involves the breakdown of psychoactive drugs, and this occurs primarily in the liver. The liver produces enzymes (proteins that speed up a chemical reaction), and these enzymes help catalyze a chemical reaction that breaks down psychoactive drugs. Enzymes exist in “families,” and many psychoactive drugs are broken down by the same family of enzymes, the cytochrome P450 superfamily. There is not a unique enzyme for each drug; rather, certain enzymes can break down a wide variety of drugs. Tolerance to the effects of many drugs can occur with repeated exposure; that is, the drug produces less of an effect over time, so more of the drug is needed to get the same effect. This is particularly true for sedative drugs like alcohol or opiate-based painkillers. Metabolic tolerance is one kind of tolerance, and it takes place in the liver. Some drugs (like alcohol) cause enzyme induction – an increase in the enzymes produced by the liver. For example, chronic drinking results in alcohol being broken down more quickly, so the alcoholic needs to drink more to get the same effect – of course, until so much alcohol is consumed that it damages the liver (alcohol can cause fatty liver or cirrhosis).

Recent Issues Related to Psychotropic Drugs and Metabolism

Grapefruit Juice and Metabolism

Certain types of food in the stomach can alter the rate of drug absorption, and other foods can also alter the rate of drug metabolism. The best-known example is grapefruit juice. Grapefruit juice suppresses cytochrome P450 enzymes in the liver, and these liver enzymes normally break down a large variety of drugs (including some of the psychotropic drugs). If the enzymes are suppressed, drug levels can build up to potentially toxic levels, and these effects can persist for an extended period after the grapefruit juice was consumed.
As of 2013, there are at least 85 drugs shown to adversely interact with grapefruit juice (Bailey, Dresser, & Arnold, 2013). Some psychotropic drugs that are likely to interact with grapefruit juice include carbamazepine (Tegretol), prescribed for bipolar disorder; diazepam (Valium), used to treat anxiety, alcohol withdrawal, and muscle spasms; and fluvoxamine (Luvox), used to treat obsessive compulsive disorder and depression. A link at the end of this module gives the latest list of drugs reported to have this unusual interaction.

Individualized Therapy, Metabolic Differences, and Potential Prescribing Approaches for the Future

Mental illnesses contribute to more disability in western countries than all other illnesses, including cancer and heart disease. Depression alone is predicted to be the second largest contributor to disease burden by 2020 (World Health Organization, 2004). The numbers of people affected by mental health issues are pretty astonishing, with estimates that 25% of adults experience a mental health issue in any given year, and this affects not only the individual but their friends and family. One in 17 adults experiences a serious mental illness (Kessler, Chiu, Demler, & Walters, 2005). Newer antidepressants are probably the most frequently prescribed drugs for treating mental health issues, although there is no “magic bullet” for treating depression or other conditions. Pharmacotherapy combined with psychological therapy may be the most beneficial treatment approach for many psychiatric conditions, but there are still many unanswered questions. For example, why does one antidepressant help one individual yet have no effect for another? Antidepressants can take 4 to 6 weeks to start improving depressive symptoms, and we don’t really understand why. Many people do not respond to the first antidepressant prescribed and may have to try different drugs before finding something that works for them. Other people just do not improve with antidepressants (Ioannidis, 2008). The better we understand why individuals differ, the more easily and rapidly we will be able to help people in distress.

One area that has received interest recently has to do with an individualized treatment approach. We now know that there are genetic differences in some of the cytochrome P450 enzymes and their ability to break down drugs. The general population falls into the following four categories:
1. Ultra-extensive metabolizers break down certain drugs (like some of the current antidepressants) very, very quickly.
2. Extensive metabolizers are also able to break down drugs fairly quickly.
3. Intermediate metabolizers break down drugs more slowly than either of the two above groups.
4. Poor metabolizers break down drugs much more slowly than all of the other groups.

Now consider someone receiving a prescription for an antidepressant – what would the consequences be if they were either an ultra-extensive metabolizer or a poor metabolizer? The ultra-extensive metabolizer would be given antidepressants and told it will probably take 4 to 6 weeks to begin working (this is true), but they metabolize the medication so quickly that it will never be effective for them. In contrast, the poor metabolizer given the same daily dose of the same antidepressant may build up such high levels in their blood (because they are not breaking the drug down) that they will have a wide range of side effects and feel really bad – also not a positive outcome.
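The difference between these metabolizer types can be illustrated with another minimal first-order sketch: with once-daily dosing, the trough drug level just before each new dose depends strongly on how fast the drug is eliminated. The elimination rates below are hypothetical stand-ins for the four metabolizer categories, not clinical values for any actual antidepressant.

```python
import math

# Sketch: trough drug levels under once-daily dosing for different
# metabolizer types (hypothetical elimination rates, not clinical values).
def trough_after(days, dose_per_v, ke, tau=24.0):
    """Superposition of first-order decays from each prior daily dose."""
    return sum(dose_per_v * math.exp(-ke * tau * n) for n in range(1, days + 1))

dose_per_v = 1.0   # dose / volume of distribution, arbitrary units
rates = {"ultra-extensive": 0.20, "extensive": 0.10,
         "intermediate": 0.05, "poor": 0.01}   # 1/hr -- hypothetical

for metabolizer, ke in rates.items():
    level = trough_after(days=14, dose_per_v=dose_per_v, ke=ke)
    print(f"{metabolizer:>15}: trough level after 2 weeks = {level:.2f}")
```

Under these illustrative numbers, the poor metabolizer's trough level ends up hundreds of times higher than the ultra-extensive metabolizer's, which captures why the same standard dose can be useless for one patient and toxic for another.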
What if, instead, prior to prescribing an antidepressant, the doctor could take a blood sample and determine which type of metabolizer a patient actually is? They could then make a much more informed decision about the best dose to prescribe. There are new genetic tests now available to better individualize treatment in just this way. A blood sample can determine (at least for some drugs) which category an individual fits into, but we need data to determine whether this actually is effective for treating depression or other mental illnesses (Zhou, 2009). Currently, this genetic test is expensive and few health insurance plans cover the screen, but it may be an important component in the future of psychopharmacology.
Other Controversial Issues
Juveniles and Psychopharmacology
A recent Centers for Disease Control (CDC) report has suggested that as many as 1 in 5 children between the ages of 5 and 17 may have some type of mental disorder (e.g., ADHD, autism, anxiety, depression) (CDC, 2013). The incidence of bipolar disorder in children and adolescents has also increased 40-fold in the past decade (Moreno, Laje, Blanco, Jiang, Schmidt, & Olfson, 2007), and it is now estimated that 1 in 88 children has been diagnosed with an autism spectrum disorder (CDC, 2011). Why has there been such an increase in these numbers? There is no single answer to this important question. Some believe that greater public awareness has contributed to increased teacher and parent referrals. Others argue that the increase stems from changes in the criteria currently used for diagnosis. Still others suggest that environmental factors, acting either prenatally or postnatally, have contributed to this upsurge. We do not have an answer, but the question does bring up an additional controversy related to how we should treat this population of children and adolescents. Many psychotropic drugs used for treating psychiatric disorders have been tested in adults, but few have been tested for safety or efficacy with children or adolescents. The most well-established psychotropics prescribed for children and adolescents are the psychostimulant drugs used for treating attention deficit hyperactivity disorder (ADHD), and there are clinical data on how effective these drugs are. However, we know far less about the safety and efficacy in young populations of the drugs typically prescribed for treating anxiety, depression, or other psychiatric disorders. The young brain continues to mature until probably well after age 20, so some scientists are concerned that drugs that alter neuronal activity in the developing brain could have significant consequences. There is an obvious need for clinical trials in children and adolescents to test the safety and effectiveness of many of these drugs, which also brings up a variety of ethical questions about who decides which children and adolescents will participate in these clinical trials, who can give consent, who receives reimbursements, etc.
The Elderly and Psychopharmacology
Another population that has not typically been included in clinical trials to determine the safety or effectiveness of psychotropic drugs is the elderly. Currently, there is very little high-quality evidence to guide prescribing for older people – clinical trials often exclude people with multiple comorbidities (other diseases, conditions, etc.), which are typical of elderly populations (see Hilmer and Gnjidic, 2008; Pollock, Forsyth, & Bies, 2008).
This is a serious issue because the elderly consume a disproportionate share of all prescription medications. The term polypharmacy refers to the use of multiple drugs, which is very common in elderly populations in the United States. As our population ages, some estimate that the proportion of people 65 or older will reach 20% of the U.S. population by 2030, with this group consuming 40% of the prescribed medications. As shown in Table 3 (from Schwartz and Abernethy, 2008), it is quite clear why the typical clinical trial that looks at the safety and effectiveness of psychotropic drugs can be problematic if we try to interpret these results for an elderly population. Metabolism of drugs is often slowed considerably in elderly populations, so less drug can produce the same effect (or, all too often, too much drug can result in a variety of side effects). One of the greatest risk factors for elderly populations is falling (and breaking bones), which can happen if the elderly person gets dizzy from too much of a drug. There is also evidence that psychotropic medications can reduce bone density (thus worsening the consequences if someone falls) (Brown & Mezuk, 2012). Although we are gaining awareness of some of the issues facing pharmacotherapy in older populations, this is a very complex area with many medical and ethical questions.
This module provided an introduction to some of the important areas in the field of psychopharmacology. It should be apparent that this module only touched on a number of the topics included in this field. It should also be apparent that understanding more about psychopharmacology is important to anyone interested in understanding behavior and that our understanding of issues in this field has important implications for society.
Outside Resources
Video: Neurotransmission
Web: Description of how some drugs work and the brain areas involved - 1
www.drugabuse.gov/news-events...rotransmission
Web: Description of how some drugs work and the brain areas involved - 2
http://learn.genetics.utah.edu/content/addiction/mouse/
Web: Information about how neurons communicate and the reward pathways
http://learn.genetics.utah.edu/content/addiction/rewardbehavior/
Web: National Institute of Alcohol Abuse and Alcoholism
http://www.niaaa.nih.gov/
Web: National Institute of Drug Abuse
http://www.drugabuse.gov/
Web: National Institute of Mental Health
http://www.nimh.nih.gov/index.shtml
Web: Neurotransmission
science.education.nih.gov/su...nsmission.html
Web: Report of the Working Group on Psychotropic Medications for Children and Adolescents: Psychopharmacological, Psychosocial, and Combined Interventions for Childhood Disorders: Evidence Base, Contextual Factors, and Future Directions (2008)
http://www.apa.org/pi/families/resources/child-medications.pdf
Web: Ways drugs can alter neurotransmission
http://thebrain.mcgill.ca/flash/d/d_03/d_03_m/d_03_m_par/d_03_m_par.html
Discussion Questions
1. What are some of the issues surrounding prescribing medications for children and adolescents? How might this be improved?
2. What are some of the factors that can affect relapse to an addictive drug?
3. How might prescribing medications for depression be improved in the future to increase the likelihood that a drug would work and minimize side effects?
Vocabulary
Agonist
A drug that increases or enhances a neurotransmitter's effect.
Antagonist
A drug that blocks a neurotransmitter's effect.
Enzyme
A protein produced by a living organism that allows or helps a chemical reaction to occur.
Enzyme induction
Process through which a drug can enhance the production of an enzyme.
Metabolism
Breakdown of substances.
Neurotransmitter
A chemical substance produced by a neuron that is used for communication between neurons.
Pharmacokinetics
The course of a drug through the body, including its absorption, distribution, metabolism, and excretion.
Polypharmacy
The use of many medications.
Psychoactive drug
A drug that changes mood or the way someone feels.
Psychotropic drug
A drug that changes mood or emotion, usually used when talking about drugs prescribed for various mental conditions (depression, anxiety, schizophrenia, etc.).
Synapse
The tiny space separating neurons.
01: Psychology as Science
1.1: Why Science?
By Edward Diener, University of Utah, University of Virginia
Scientific research has been one of the great drivers of progress in human history, and the dramatic changes we have seen during the past century are due primarily to scientific findings—modern medicine, electronics, automobiles and jets, birth control, and a host of other helpful inventions. Psychologists believe that scientific methods can be used in the behavioral domain to understand and improve the world. Although psychology trails the biological and physical sciences in terms of progress, we are optimistic, based on discoveries to date, that scientific psychology will make many important discoveries that can benefit humanity. This module outlines the characteristics of the science, and the promises it holds for understanding behavior. The ethics that guide psychological research are briefly described. It concludes with the reasons you should learn about scientific psychology.
learning objectives
• Describe how scientific research has changed the world.
• Describe the key characteristics of the scientific approach.
• Discuss a few of the benefits, as well as problems that have been created by science.
• Describe several ways that psychological science has improved the world.
• Describe a number of the ethical guidelines that psychologists follow.
Scientific Advances and World Progress
There are many people who have made positive contributions to humanity in modern times. Take a careful look at the names on the following list. Which of these individuals do you think has helped humanity the most?
1. Mother Teresa
2. Albert Schweitzer
3. Edward Jenner
4. Norman Borlaug
5. Fritz Haber
The usual response to this question is "Who on earth are Jenner, Borlaug, and Haber?" Many people know that Mother Teresa helped thousands of people living in the slums of Kolkata (Calcutta). Others recall that Albert Schweitzer opened his famous hospital in Africa and went on to earn the Nobel Peace Prize. The other three historical figures, on the other hand, are far less well known.
Jenner, Borlaug, and Haber were scientists whose research discoveries saved millions, and even billions, of lives. Dr. Edward Jenner is often considered the "father of immunology" because he was among the first to conceive of and test vaccinations. His pioneering work led directly to the eradication of smallpox. Many other diseases have been greatly reduced because of vaccines discovered using science—measles, pertussis, diphtheria, tetanus, typhoid, cholera, polio, hepatitis—and all are the legacy of Jenner. Fritz Haber and Norman Borlaug saved more than a billion human lives. They created the "Green Revolution" by producing hybrid agricultural crops and synthetic fertilizer. Humanity can now produce food for the seven billion people on the planet, and the starvation that does occur stems from political and economic factors rather than from any collective inability to produce food.
If you examine major social and technological changes over the past century, you will find that most of them can be directly attributed to science. The world in 1914 was very different from the one we see today (Easterbrook, 2003). There were few cars, and most people traveled on foot, on horseback, or by carriage. There were no radios, televisions, birth control pills, artificial hearts, or antibiotics. Only a small portion of the world had telephones, refrigeration, or electricity. These days we find that 80% of all households have television and 84% have electricity. It is estimated that three quarters of the world's population has access to a mobile phone! Life expectancy was 47 years in 1900 and 79 years in 2010. The percentage of hungry and malnourished people in the world has dropped substantially across the globe. Even average levels of I.Q. have risen dramatically over the past century due to better nutrition and schooling. All of these medical advances and technological innovations are the direct result of scientific research and understanding. In the modern age it is easy to grow complacent about the advances of science, but make no mistake about it—science has made fantastic discoveries, and continues to do so. These discoveries have completely changed our world.
What Is Science?
What is this process we call "science," which has so dramatically changed the world? Ancient people were more likely to believe in magical and supernatural explanations for natural phenomena such as solar eclipses or thunderstorms. By contrast, scientifically minded people try to figure out the natural world through testing and observation. Specifically, science is the use of systematic observation in order to acquire knowledge. For example, children in a science class might combine vinegar and baking soda to observe the bubbly chemical reaction. These empirical methods are wonderful ways to learn about the physical and biological world. Science is not magic—it will not solve all human problems, and might not answer all our questions about behavior. Nevertheless, it appears to be the most powerful method we have for acquiring knowledge about the observable world. The essential elements of science are as follows:
1. Systematic observation is the core of science. Scientists observe the world, in a very organized way. We often measure the phenomenon we are observing. We record our observations so that memory biases are less likely to enter into our conclusions.
We are systematic in that we try to observe under controlled conditions, and also systematically vary the conditions of our observations so that we can see variations in the phenomena and understand when they occur and do not occur.
2. Observation leads to hypotheses we can test. When we develop hypotheses and theories, we state them in a way that can be tested. For example, you might make the claim that candles made of paraffin wax burn more slowly than do candles of the exact same size and shape made from beeswax. This claim can be readily tested by timing the burning speed of candles made from these materials.
3. Science is democratic. People in ancient times may have been willing to accept the views of their kings or pharaohs as absolute truth. These days, however, people are more likely to want to be able to form their own opinions and debate conclusions. Scientists are skeptical and have open discussions about their observations and theories. These debates often occur as scientists publish competing findings with the idea that the best data will win the argument.
4. Science is cumulative. We can learn the important truths discovered by earlier scientists and build on them. Any physics student today knows more about physics than Sir Isaac Newton did, even though Newton was possibly the most brilliant physicist of all time. A crucial aspect of scientific progress is that after we learn of earlier advances, we can build upon them and move farther along the path of knowledge.
Psychology as a Science
Even in modern times many people are skeptical that psychology is really a science. To some degree this doubt stems from the fact that many psychological phenomena such as depression, intelligence, and prejudice do not seem to be directly observable in the same way that we can observe the changes in ocean tides or the speed of light. Because thoughts and feelings are invisible, many early psychological researchers chose to focus on behavior. You might have noticed that some people act in a friendly and outgoing way while others appear to be shy and withdrawn. If you have made these types of observations, then you are acting just like early psychologists who used behavior to draw inferences about various types of personality. By using behavioral measures and rating scales it is possible to measure thoughts and feelings. This is similar to how other researchers explore "invisible" phenomena, such as the way that educators measure academic performance or economists measure quality of life. One important pioneering researcher was Francis Galton, a cousin of Charles Darwin who lived in England during the late 1800s. Galton used patches of color to test people's ability to distinguish between them. He also invented the self-report questionnaire, in which people offered their own expressed judgments or opinions on various matters. Galton was able to use self-reports to examine—among other things—people's differing ability to accurately judge distances. Although he lacked a modern understanding of genetics, Galton also had the idea that scientists could look at the behaviors of identical and fraternal twins to estimate the degree to which genetic and social factors contribute to personality; a puzzling issue we currently refer to as the "nature-nurture question." In modern times psychology has become more sophisticated. Researchers now use better measures, stronger study designs, and better statistical analyses to explore human nature.
Simply take the example of studying the emotion of happiness. How would you go about studying happiness? One straightforward method is to simply ask people about their happiness and have them use a numbered scale to indicate their feelings. There are, of course, several problems with this. People might lie about their happiness, might not be able to accurately report on their own happiness, or might not use the numerical scale in the same way. With these limitations in mind, modern psychologists employ a wide range of methods to assess happiness. They use, for instance, "peer report measures" in which they ask close friends and family members about the happiness of a target individual. Researchers can then compare these ratings to the self-report ratings and check for discrepancies. Researchers also use memory measures, with the idea that dispositionally positive people have an easier time recalling pleasant events and negative people have an easier time recalling unpleasant events. Modern psychologists even use biological measures such as saliva cortisol samples (cortisol is a stress-related hormone) or fMRI images of brain activation (the left pre-frontal cortex is one area of brain activity associated with good moods). Despite our various methodological advances, it is true that psychology is still a very young science. While physics and chemistry are hundreds of years old, psychology is barely a hundred and fifty years old, and most of our major findings have occurred only in the last 60 years. There are legitimate limits to psychological science, but it is a science nonetheless.
Psychological Science is Useful
Psychological science is useful for creating interventions that help people live better lives. A growing body of research is concerned with determining which therapies are the most and least effective for the treatment of psychological disorders. For example, many studies have shown that cognitive behavioral therapy can help many people suffering from depression and anxiety disorders (Butler, Chapman, Forman, & Beck, 2006; Hoffman & Smits, 2008). In contrast, research reveals that some types of therapies actually might be harmful on average (Lilienfeld, 2007). In organizational psychology, a number of psychological interventions have been found by researchers to produce greater productivity and satisfaction in the workplace (e.g., Guzzo, Jette, & Katzell, 1985). Human factors engineers have greatly increased the safety and utility of the products we use. For example, the human factors psychologist Alphonse Chapanis and other researchers redesigned the cockpit controls of aircraft to make them less confusing and easier to respond to, and this led to a decrease in pilot errors and crashes. Forensic sciences have made courtroom decisions more valid. We all know of the famous cases of imprisoned persons who have been exonerated because of DNA evidence. Equally dramatic cases hinge on psychological findings. For instance, psychologist Elizabeth Loftus has conducted research demonstrating the limits and unreliability of eyewitness testimony and memory. Thus, psychological findings are having practical importance in the world outside the laboratory. Psychological science has experienced enough success to demonstrate that it works, but there remains a huge amount yet to be learned.
Ethics of Scientific Psychology
Psychology differs somewhat from the natural sciences such as chemistry in that researchers conduct studies with human research participants.
Because of this, there is a natural tendency to want to guard research participants against potential psychological harm. For example, it might be interesting to see how people handle ridicule, but it might not be advisable to ridicule research participants. Scientific psychologists follow a specific set of guidelines for research known as a code of ethics. There are extensive ethical guidelines for how human participants should be treated in psychological research (Diener & Crandall, 1978; Sales & Folkman, 2000). Following are a few highlights:
1. Informed consent. In general, people should know when they are involved in research, and understand what will happen to them during the study. They should then be given a free choice as to whether to participate.
2. Confidentiality. Information that researchers learn about individual participants should not be made public without the consent of the individual.
3. Privacy. Researchers should not make observations of people in private places such as their bedrooms without their knowledge and consent. Researchers should not seek confidential information from others, such as school authorities, without consent of the participant or his or her guardian.
4. Benefits. Researchers should consider the benefits of their proposed research and weigh these against potential risks to the participants. People who participate in psychological studies should be exposed to risk only if they fully understand these risks and only if the likely benefits clearly outweigh the risks.
5. Deception. Some researchers need to deceive participants in order to hide the true nature of the study. This is typically done to prevent participants from modifying their behavior in unnatural ways. Researchers are required to "debrief" their participants after they have completed the study. Debriefing is an opportunity to educate participants about the true nature of the study.
Why Learn About Scientific Psychology?
I once had a psychology professor who asked my class why we were taking a psychology course. Our responses give the range of reasons that people want to learn about psychology:
1. To understand ourselves
2. To understand other people and groups
3. To be better able to influence others, for example, in socializing children or motivating employees
4. To learn how to better help others and improve the world, for example, by doing effective psychotherapy
5. To learn a skill that will lead to a profession such as being a social worker or a professor
6. To learn how to evaluate the research claims you hear or read about
7. Because it is interesting, challenging, and fun!
People want to learn about psychology because this is exciting in itself, regardless of other positive outcomes it might have. Why do we see movies? Because they are fun and exciting, and we need no other reason. Thus, one good reason to study psychology is that it can be rewarding in itself.
Conclusions
The science of psychology is an exciting adventure. Whether you will become a scientific psychologist, an applied psychologist, or an educated person who knows about psychological research, this field can influence your life and provide fun, rewards, and understanding. My hope is that you learn a lot from the modules in this e-text, and also that you enjoy the experience! I love learning about psychology and neuroscience, and hope you will too!
Outside Resources
Web: Science Heroes - A celebration of people who have made lifesaving discoveries.
http://www.scienceheroes.com/index.p...=258&Itemid=27
Discussion Questions
1. Some claim that science has done more harm than good. What do you think?
2. Humanity is faced with many challenges and problems. Which of these are due to human behavior, and which are external to human actions?
3. If you were a research psychologist, what phenomena or behaviors would most interest you?
4. Will psychological scientists be able to help with the current challenges humanity faces, such as global warming, war, inequality, and mental illness?
5. What can science study and what is outside the realm of science? What questions are impossible for scientists to study?
6. Some claim that science will replace religion by providing sound knowledge instead of myths to explain the world. They claim that science is a much more reliable source of solutions to problems such as disease than is religion. What do you think? Will science replace religion, and should it?
7. Are there human behaviors that should not be studied? Are some things so sacred or dangerous that we should not study them?
Vocabulary
Empirical methods
Approaches to inquiry that are tied to actual measurement and observation.
Ethics
Professional guidelines that offer researchers a template for making decisions that protect research participants from potential harm and that help steer scientists away from conflicts of interest or other situations that might compromise the integrity of their research.
Hypotheses
A logical idea that can be tested.
Systematic observation
The careful observation of the natural world with the aim of better understanding it. Observations provide the basic data that allow scientists to track, tally, or otherwise organize information about the natural world.
Theories
Groups of closely related phenomena or observations.
1.2: Conducting Psychology Research in the Real World
By Matthias R. Mehl, University of Arizona
Because of its ability to determine cause-and-effect relationships, the laboratory experiment is traditionally considered the method of choice for psychological science. One downside, however, is that, as it carefully controls conditions and their effects, it can yield findings that are out of touch with reality and have limited use when trying to understand real-world behavior. This module highlights the importance of also conducting research outside the psychology laboratory, within participants' natural, everyday environments, and reviews existing methodologies for studying daily life.
learning objectives
• Identify limitations of the traditional laboratory experiment.
• Explain ways in which daily life research can further psychological science.
• Know what methods exist for conducting psychological research in the real world.
Introduction
The laboratory experiment is traditionally considered the "gold standard" in psychology research. This is because only laboratory experiments can clearly separate cause from effect and therefore establish causality. Despite this unique strength, it is also clear that a scientific field that is mainly based on controlled laboratory studies ends up lopsided. Specifically, it accumulates a lot of knowledge on what can happen—under carefully isolated and controlled circumstances—but it has little to say about what actually does happen under the circumstances that people actually encounter in their daily lives. For example, imagine you are a participant in an experiment that looks at the effect of being in a good mood on generosity, a topic that may have a good deal of practical application. Researchers create an internally valid, carefully controlled experiment where they randomly assign you to watch either a happy movie or a neutral movie, and then you are given the opportunity to help the researcher out by staying longer and participating in another study. If people in a good mood are more willing to stay and help out, the researchers can feel confident that – since everything else was held constant – your positive mood led you to be more helpful. However, what does this tell us about helping behaviors in the real world? Does it generalize to other kinds of helping, such as donating money to a charitable cause? Would all kinds of happy movies produce this behavior, or only this one? What about other positive experiences that might boost mood, like receiving a compliment or a good grade? And what if you were watching the movie with friends, in a crowded theatre, rather than in a sterile research lab? Taking research out into the real world can help answer some of these sorts of important questions. As one of the founding fathers of social psychology remarked, "Experimentation in the laboratory occurs, socially speaking, on an island quite isolated from the life of society" (Lewin, 1944, p. 286). This module highlights the importance of going beyond experimentation and also conducting research outside the laboratory (Reis & Gosling, 2010), directly within participants' natural environments, and reviews existing methodologies for studying daily life.
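Before turning to the rationale for field research, it may help to see the bare logic of the experiment just described in runnable form. The sketch below uses entirely invented numbers (a hypothetical 4-minute "mood boost" and noisy individual differences); it illustrates random assignment and group comparison only and is not a model of any actual study:

```python
import random
from statistics import mean

random.seed(1)  # make the toy example reproducible

def run_mood_experiment(n=200):
    """Toy simulation of the movie-mood experiment: random assignment
    to a happy or neutral film, then minutes spent helping afterward.
    The effect size and the noise level are invented for illustration."""
    results = {"happy": [], "neutral": []}
    for _ in range(n):
        condition = random.choice(["happy", "neutral"])
        helping = random.gauss(10, 3)      # individual differences
        if condition == "happy":
            helping += 4                   # assumed effect of the mood induction
        results[condition].append(max(0.0, helping))
    return results

for condition, minutes in run_mood_experiment().items():
    print(f"{condition:7s} n={len(minutes):3d}  mean helping = {mean(minutes):.1f} min")
```

Because participants land in a condition purely by chance, any reliable difference between the group means can be attributed to the manipulation; that internal-validity strength, and its external-validity cost, is exactly what the next section takes up.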
Rationale for Conducting Psychology Research in the Real World
One important challenge researchers face when designing a study is to find the right balance between ensuring internal validity, or the degree to which a study allows unambiguous causal inferences, and external validity, or the degree to which a study ensures that potential findings apply to settings and samples other than the ones being studied (Brewer, 2000). Unfortunately, these two kinds of validity tend to be difficult to achieve at the same time, in one study. This is because creating a controlled setting, in which all potentially influential factors (other than the experimentally manipulated variable) are controlled, is bound to create an environment that is quite different from what people naturally encounter (e.g., using a happy movie clip to promote helpful behavior). However, it is the degree to which an experimental situation is comparable to the corresponding real-world situation of interest that determines how generalizable potential findings will be. In other words, if an experiment is very far off from what a person might normally experience in everyday life, you might reasonably question just how useful its findings are. Because of the incompatibility of the two types of validity, one is often—by design—prioritized over the other. Due to the importance of identifying true causal relationships, psychology has traditionally emphasized internal over external validity. However, in order to make claims about human behavior that apply across populations and environments, researchers complement traditional laboratory research, where participants are brought into the lab, with field research where, in essence, the psychological laboratory is brought to participants. Field studies allow for the important test of how psychological variables and processes of interest "behave" under real-world circumstances (i.e., what actually does happen rather than what can happen). They can also facilitate "downstream" operationalizations of constructs that measure life outcomes of interest directly rather than indirectly. Take, for example, the fascinating field of psychoneuroimmunology, where the goal is to understand the interplay of psychological factors - such as personality traits or one's stress level - and the immune system. Highly sophisticated and carefully controlled experiments offer ways to isolate the variety of neural, hormonal, and cellular mechanisms that link psychological variables such as chronic stress to biological outcomes such as immunosuppression (a state of impaired immune functioning; Sapolsky, 2004). Although these studies demonstrate impressively how psychological factors can affect health-relevant biological processes, they—because of their research design—remain mute about the degree to which these factors actually do undermine people's everyday health in real life. It is certainly important to show that laboratory stress can alter the number of natural killer cells in the blood. But it is equally important to test to what extent the levels of stress that people experience on a day-to-day basis result in them catching a cold more often or taking longer to recover from one. The goal for researchers, therefore, must be to complement traditional laboratory experiments with less controlled studies under real-world circumstances. The term ecological validity is used to refer to the degree to which an effect has been obtained under conditions that are typical for what happens in everyday life (Brewer, 2000).
In this example, then, people might keep a careful daily log of how much stress they are under, as well as note physical symptoms such as headaches or nausea. Although many factors beyond stress level may be responsible for these symptoms, this more correlational approach can shed light on how the relationship between stress and health plays out outside of the laboratory.
An Overview of Research Methods for Studying Daily Life
Capturing "life as it is lived" has been a strong goal for some researchers for a long time. Wilhelm and his colleagues recently published a comprehensive review of early attempts to systematically document daily life (Wilhelm, Perrez, & Pawlik, 2012). Building on these original methods, researchers have, over the past decades, developed a broad toolbox for measuring experiences, behavior, and physiology directly in participants' daily lives (Mehl & Conner, 2012). Figure 1 provides a schematic overview of the methodologies described below.
Studying Daily Experiences
Starting in the mid-1970s, motivated by a growing skepticism toward highly controlled laboratory studies, a few groups of researchers developed a set of new methods that are now commonly known as the experience-sampling method (Hektner, Schmidt, & Csikszentmihalyi, 2007), ecological momentary assessment (Stone & Shiffman, 1994), or the diary method (Bolger & Rafaeli, 2003). Although variations within this set of methods exist, the basic idea behind all of them is to collect in-the-moment (or close-to-the-moment) self-report data directly from people as they go about their daily lives. This is typically accomplished by asking participants repeatedly (e.g., five times per day) over a period of time (e.g., a week) to report on their current thoughts and feelings. The momentary questionnaires often ask about their location (e.g., "Where are you now?"), social environment (e.g., "With whom are you now?"), activity (e.g., "What are you currently doing?"), and experiences (e.g., "How are you feeling?"). That way, researchers get a snapshot of what was going on in participants' lives at the time at which they were asked to report. Technology has made this sort of research possible, and recent technological advances have altered the different tools researchers are able to easily use. Initially, participants wore electronic wristwatches that beeped at preprogrammed but seemingly random times, at which they completed one of a stack of provided paper questionnaires. With the mobile computing revolution, both the prompting and the questionnaire completion were gradually moved to handheld devices such as smartphones. Being able to collect the momentary questionnaires digitally and time-stamped (i.e., having a record of exactly when participants responded) had major methodological and practical advantages and contributed to experience sampling going mainstream (Conner, Tennen, Fleeson, & Barrett, 2009). Over time, experience sampling and related momentary self-report methods have become very popular, and, by now, they are effectively the gold standard for studying daily life. They have helped make progress in almost all areas of psychology (Mehl & Conner, 2012). These methods yield many measurements from many participants and have further inspired the development of novel statistical methods (Bolger & Laurenceau, 2013).
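To make the sampling scheme concrete, here is a minimal sketch of how a smartphone study might generate a week of seemingly random, time-stamped prompts (five per day within waking hours). The parameters and the question wording are hypothetical; the code illustrates the general logic, not the procedure of any cited study:

```python
import random
from datetime import datetime, timedelta

def make_prompt_schedule(start_date, days=7, prompts_per_day=5,
                         wake_hour=9, sleep_hour=21, seed=42):
    """Generate time-stamped prompt times, one schedule per day.
    Each waking day is split into equal windows with one random
    prompt per window, so prompts feel unpredictable to the
    participant but still cover the whole day."""
    rng = random.Random(seed)
    window = (sleep_hour - wake_hour) / prompts_per_day
    schedule = []
    for d in range(days):
        day = start_date + timedelta(days=d)
        for p in range(prompts_per_day):
            offset = wake_hour + window * p + rng.uniform(0, window)
            schedule.append(day + timedelta(hours=offset))
    return schedule

for t in make_prompt_schedule(datetime(2024, 3, 4))[:5]:
    print(t.strftime("%a %H:%M"), "- Where are you? With whom? How are you feeling?")
```

Stratifying the random times into windows is one common design choice; other studies prompt at fixed times or whenever a particular event (e.g., a social interaction) occurs.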
Finally, and maybe most importantly, they accomplished what they set out to accomplish: to bring attention to what psychology ultimately wants and needs to know about, namely "what people actually do, think, and feel in the various contexts of their lives" (Funder, 2001, p. 213). In short, these approaches have allowed researchers to do research that is more externally valid, or more generalizable to real life, than the traditional laboratory experiment. To illustrate these techniques, consider a classic study by Stone, Reed, and Neale (1987), who tracked positive and negative experiences surrounding a respiratory infection using daily experience sampling. They found that undesirable experiences peaked and desirable ones dipped about four to five days prior to participants coming down with the cold. More recently, Killingsworth and Gilbert (2010) collected momentary self-reports from more than 2,000 participants via a smartphone app. They found that participants were less happy when their mind was in an idling, mind-wandering state, such as surfing the Internet or multitasking at work, than when it was in an engaged, task-focused one, such as working diligently on a paper. These are just two examples that illustrate how experience-sampling studies have yielded findings that could not be obtained with traditional laboratory methods. Recently, the day reconstruction method (DRM) (Kahneman, Krueger, Schkade, Schwarz, & Stone, 2004) has been developed to obtain information about a person's daily experiences without going through the burden of collecting momentary experience-sampling data. In the DRM, participants report their experiences of a given day retrospectively after engaging in a systematic, experiential reconstruction of the day on the following day. As a participant in this type of study, you might look back on yesterday, divide it up into a series of episodes such as "made breakfast," "drove to work," "had a meeting," etc. You might then report who you were with in each episode and how you felt in each. This approach has shed light on what situations lead to moments of positive and negative mood throughout the course of a normal day.
Studying Daily Behavior
Experience sampling is often used to study everyday behavior (i.e., daily social interactions and activities). In the laboratory, behavior is best studied using direct behavioral observation (e.g., video recordings). In the real world, this is, of course, much more difficult. As Funder put it, it seems it would require a "detective's report [that] would specify in exact detail everything the participant said and did, and with whom, in all of the contexts of the participant's life" (Funder, 2007, p. 41). As difficult as this may seem, Mehl and colleagues have developed a naturalistic observation methodology that is similar in spirit. Rather than following participants—like a detective—with a video camera (see Craik, 2000), they equip participants with a portable audio recorder that is programmed to periodically record brief snippets of ambient sounds (e.g., 30 seconds every 12 minutes). Participants carry the recorder (originally a microcassette recorder, now a smartphone app) on them as they go about their days and return it at the end of the study. The recorder provides researchers with a series of sound bites that, together, amount to an acoustic diary of participants' days as they naturally unfold—and that constitute a representative sample of their daily activities and social encounters.
Because it is somewhat similar to having the researcher's ear at the participant's lapel, they called their method the electronically activated recorder, or EAR (Mehl, Pennebaker, Crow, Dabbs, & Price, 2001). The ambient sound recordings can be coded for many things, including participants' locations (e.g., at school, in a coffee shop), activities (e.g., watching TV, eating), interactions (e.g., in a group, on the phone), and emotional expressions (e.g., laughing, sighing). As unnatural or intrusive as it might seem, participants report that they quickly grow accustomed to the EAR and say they soon find themselves behaving as they normally would. In a cross-cultural study, Ramírez-Esparza and her colleagues used the EAR method to study sociability in the United States and Mexico. Interestingly, they found that although American participants rated themselves significantly higher than Mexicans on the question, "I see myself as a person who is talkative," they actually spent almost 10 percent less time talking than Mexicans did (Ramírez-Esparza, Mehl, Álvarez Bermúdez, & Pennebaker, 2009). In a similar way, Mehl and his colleagues used the EAR method to debunk the long-standing myth that women are considerably more talkative than men. Using data from six different studies, they showed that both sexes use on average about 16,000 words per day. The estimated sex difference of 546 words was trivial compared to the immense range of more than 46,000 words between the least and most talkative individuals (695 versus 47,016 words; Mehl, Vazire, Ramírez-Esparza, Slatcher, & Pennebaker, 2007). Together, these studies demonstrate how naturalistic observation can be used to study objective aspects of daily behavior and how it can yield findings quite different from what other methods yield (Mehl, Robbins, & Deters, 2012). A series of other methods and creative ways for assessing behavior directly and unobtrusively in the real world is described in a seminal book on real-world, subtle measures (Webb, Campbell, Schwartz, Sechrest, & Grove, 1981). For example, researchers have used time-lapse photography to study the flow of people and the use of space in urban public places (Whyte, 1980). More recently, they have observed people's personal (e.g., dorm rooms) and professional (e.g., offices) spaces to understand how personality is expressed and detected in everyday environments (Gosling, Ko, Mannarelli, & Morris, 2002). They have even systematically collected and analyzed people's garbage to measure what people actually consume (e.g., empty alcohol bottles or cigarette boxes) rather than what they say they consume (Rathje & Murphy, 2001). Because people often cannot and sometimes may not want to accurately report what they do, the direct—and ideally nonreactive—assessment of real-world behavior is of high importance for psychological research (Baumeister, Vohs, & Funder, 2007).
Studying Daily Physiology
In addition to studying how people think, feel, and behave in the real world, researchers are also interested in how our bodies respond to the fluctuating demands of our lives. What are the daily experiences that make our "blood boil"? How do our neurotransmitters and hormones respond to the stressors we encounter in our lives? What physiological reactions do we show to being loved—or getting ostracized?
You can see how studying these powerful experiences in real life, as they actually happen, may provide richer and more informative data than one might obtain in an artificial laboratory setting that merely mimics these experiences. Also, in pursuing these questions, it is important to keep in mind that what is stressful, engaging, or boring for one person might not be so for another. It is, in part, for this reason that researchers have found only limited correspondence between how people respond physiologically to a standardized laboratory stressor (e.g., giving a speech) and how they respond to stressful experiences in their lives. To give an example, Wilhelm and Grossman (2010) describe a participant who showed rather minimal heart rate increases in response to a laboratory stressor (about five to 10 beats per minute) but quite dramatic increases (almost 50 beats per minute) later in the afternoon while watching a soccer game. Of course, the reverse pattern can happen as well, such as when patients have high blood pressure in the doctor's office but not in their home environment—the so-called white coat hypertension (White, Schulman, McCabe, & Dey, 1989). Ambulatory physiological monitoring – that is, monitoring physiological reactions as people go about their daily lives – has a long history in biomedical research, and an array of monitoring devices exists (Fahrenberg & Myrtek, 1996). Among the biological signals that can now be measured in daily life with portable signal recording devices are the electrocardiogram (ECG), blood pressure, electrodermal activity (or "sweat response"), body temperature, and even the electroencephalogram (EEG) (Wilhelm & Grossman, 2010). Most recently, researchers have added ambulatory assessment of hormones (e.g., cortisol) and other biomarkers (e.g., immune markers) to the list (Schlotz, 2012). The development of ever more sophisticated ways to track what goes on underneath our skin as we go about our lives is a fascinating and rapidly advancing field. In a recent study, Lane, Zareba, Reis, Peterson, and Moss (2011) used experience sampling combined with ambulatory electrocardiography (a so-called Holter monitor) to study how emotional experiences can alter cardiac function in patients with a congenital heart abnormality (e.g., long QT syndrome). Consistent with the idea that emotions may, in some cases, be able to trigger a cardiac event, they found that typical—in most cases even relatively low-intensity—daily emotions had a measurable effect on ventricular repolarization, an important cardiac indicator that, in these patients, is linked to risk of a cardiac event. In another study, Smyth and colleagues (1998) combined experience sampling with momentary assessment of cortisol, a stress hormone. They found that momentary reports of current or even anticipated stress predicted increased cortisol secretion 20 minutes later. Further, and independent of that, the experience of other kinds of negative affect (e.g., anger, frustration) also predicted higher levels of cortisol, and the experience of positive affect (e.g., happy, joyful) predicted lower levels of this important stress hormone. Taken together, these studies illustrate how researchers can use ambulatory physiological monitoring to study how the little—and seemingly trivial or inconsequential—experiences in our lives leave objective, measurable traces in our bodily systems.
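As a toy illustration of the kind of analysis behind the Smyth et al. finding, the sketch below pairs each momentary stress rating with a cortisol sample collected 20 minutes later and computes their correlation. All numbers are invented for this example; real analyses would use many participants and multilevel models rather than a single correlation:

```python
# Requires Python 3.10+ for statistics.correlation
from statistics import correlation

# Hypothetical data for one participant:
# (minutes since start of day, stress rating on a 1-5 scale)
stress_reports = [(0, 2), (90, 4), (180, 1), (270, 5), (360, 3)]
# minutes since start of day -> salivary cortisol (nmol/L)
cortisol_samples = {20: 8.1, 110: 14.6, 200: 6.9, 290: 16.2, 380: 10.4}

LAG = 20  # minutes between the stress report and the hormone sample
stress = [rating for _, rating in stress_reports]
cortisol = [cortisol_samples[t + LAG] for t, _ in stress_reports]

print(f"Lagged stress-cortisol correlation: r = {correlation(stress, cortisol):.2f}")
```

With these made-up numbers the correlation is strongly positive, mirroring the direction (though not the magnitude) of the published finding.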
Studying Online Behavior
Another domain of daily life that has only recently emerged is virtual daily behavior, or how people act and interact with others on the Internet. Irrespective of whether social media will turn out to be humanity's blessing or curse (both scientists and laypeople are currently divided over this question), the fact is that people are spending an ever-increasing amount of time online. In light of that, researchers are beginning to think of virtual behavior as being as serious as "actual" behavior and seek to make it a legitimate target of their investigations (Gosling & Johnson, 2010). One way to study virtual behavior is to make use of the fact that most of what people do on the Web—emailing, chatting, tweeting, blogging, posting—leaves direct (and permanent) verbal traces. For example, differences in the ways in which people use words (e.g., subtle preferences in word choice) have been found to carry a lot of psychological information (Pennebaker, Mehl, & Niederhoffer, 2003). Therefore, a good way to study virtual social behavior is to study virtual language behavior. Researchers can download people's—often public—verbal expressions and communications and analyze them using modern text analysis programs (e.g., Pennebaker, Booth, & Francis, 2007). For example, Cohn, Mehl, and Pennebaker (2004) downloaded blogs of more than a thousand users of livejournal.com, one of the first Internet blogging sites, to study how people responded socially and emotionally to the attacks of September 11, 2001. In going "the online route," they could bypass a critical limitation of coping research, the inability to obtain baseline information; that is, how people were doing before the traumatic event occurred. Through access to the database of public blogs, they downloaded entries from two months prior to two months after the attacks. Their linguistic analyses revealed that in the first days after the attacks, participants, as expected, expressed more negative emotions and were more cognitively and socially engaged, asking questions and sending messages of support. Within two weeks, though, their moods and social engagement returned to baseline, and, interestingly, their use of cognitive-analytic words (e.g., "think," "question") even dropped below their normal level. Over the next six weeks, their mood hovered around their pre-9/11 baseline, but both their social engagement and cognitive-analytic processing stayed remarkably low. This suggests a social and cognitive weariness in the aftermath of the attacks. In using virtual verbal behavior as a marker of psychological functioning, this study was able to draw a fine-grained timeline of how humans cope with disasters. Reflecting their rapidly growing real-world importance, researchers are now beginning to investigate behavior on social networking sites such as Facebook (Wilson, Gosling, & Graham, 2012). Most research looks at psychological correlates of online behavior, such as personality traits and the quality of one's social life, but, importantly, there are also first attempts to export traditional experimental research designs into an online setting. In a pioneering study of online social influence, Bond and colleagues (2012) experimentally tested the effects that peer feedback has on voting behavior. Remarkably, their sample consisted of 16 million (!) Facebook users. They found that online political-mobilization messages (e.g., "I voted" accompanied by selected pictures of their Facebook friends) influenced real-world voting behavior.
This was true not just for users who saw the messages but also for their friends and friends of their friends. Although the intervention effect on a single user was very small, through the enormous number of users and indirect social contagion effects, it resulted cumulatively in an estimated 340,000 additional votes—enough to tilt a close election. In short, although still in its infancy, research on virtual daily behavior is bound to change social science, and it has already helped us better understand both virtual and "actual" behavior.
"Smartphone Psychology"?
A review of research methods for studying daily life would not be complete without a vision of "what's next." Given how common they have become, it is safe to predict that smartphones will not just remain devices for everyday online communication but will also become devices for scientific data collection and intervention (Kaplan & Stone, 2013; Yarkoni, 2012). These devices automatically store vast amounts of real-world user interaction data, and, in addition, they are equipped with sensors to track the physical (e.g., location, position) and social (e.g., wireless connections around the phone) context of these interactions. Miller (2012, p. 234) states, "The question is not whether smartphones will revolutionize psychology but how, when, and where the revolution will happen." Obviously, their immense potential for data collection also brings with it big new challenges for researchers (e.g., privacy protection, data analysis, and synthesis). Yet it is clear that many of the methods described in this module—and many still-to-be-developed ways of collecting real-world data—will, in the future, become integrated into the devices that people naturally and happily carry with them from the moment they get up in the morning to the moment they go to bed.
Conclusion
This module sought to make a case for psychology research conducted outside the lab. If the ultimate goal of the social and behavioral sciences is to explain human behavior, then researchers must also—in addition to conducting carefully controlled lab studies—deal with the "messy" real world and find ways to capture life as it naturally happens. Mortensen and Cialdini (2010) refer to the dynamic give-and-take between laboratory and field research as "full-cycle psychology". Going full cycle, they suggest, means that "researchers use naturalistic observation to determine an effect's presence in the real world, theory to determine what processes underlie the effect, experimentation to verify the effect and its underlying processes, and a return to the natural environment to corroborate the experimental findings" (Mortensen & Cialdini, 2010, p. 53). To accomplish this, researchers have access to a toolbox of research methods for studying daily life that is now more diverse and more versatile than it has ever been before. So, all it takes is to go ahead and—literally—bring science to life.
Outside Resources
Website: Society for Ambulatory Assessment
http://www.ambulatory-assessment.org
Discussion Questions
1. What do you think about the tradeoff between unambiguously establishing cause and effect (internal validity) and ensuring that research findings apply to people's everyday lives (external validity)? Which one of these would you prioritize as a researcher? Why?
2. What challenges do you see that daily-life researchers may face in their studies? How can they be overcome?
3. What ethical issues can come up in daily-life studies? How can (or should) they be addressed?
4. How do you think smartphones and other mobile electronic devices will change psychological research? What are their promises for the field? And what are their pitfalls?
Vocabulary
Ambulatory assessment
An overarching term to describe methodologies that assess the behavior, physiology, experience, and environments of humans in naturalistic settings.
Daily diary method
A methodology where participants complete a questionnaire about their thoughts, feelings, and behavior of the day at the end of the day.
Day reconstruction method (DRM)
A methodology where participants describe their experiences and behavior of a given day retrospectively upon a systematic reconstruction on the following day.
Ecological momentary assessment
An overarching term to describe methodologies that repeatedly sample participants' real-world experiences, behavior, and physiology in real time.
Ecological validity
The degree to which a study finding has been obtained under conditions that are typical for what happens in everyday life.
Electronically activated recorder, or EAR
A methodology where participants wear a small, portable audio recorder that intermittently records snippets of ambient sounds around them.
Experience-sampling method
A methodology where participants report on their momentary thoughts, feelings, and behaviors at different points in time over the course of a day.
External validity
The degree to which a finding generalizes from the specific sample and context of a study to some larger population and broader settings.
Full-cycle psychology
A scientific approach whereby researchers start with an observational field study to identify an effect in the real world, follow up with laboratory experimentation to verify the effect and isolate the causal mechanisms, and return to field research to corroborate their experimental findings.
Generalize
Generalizing, in science, refers to the ability to arrive at broad conclusions based on a smaller sample of observations. For these conclusions to be true, the sample should accurately represent the larger population from which it is drawn.
Internal validity
The degree to which a cause-effect relationship between two variables has been unambiguously established.
Linguistic inquiry and word count
A quantitative text analysis methodology that automatically extracts grammatical and psychological information from a text by counting word frequencies.
Lived day analysis
A methodology where a research team follows an individual around with a video camera to objectively document a person's daily life as it is lived.
White coat hypertension
A phenomenon in which patients exhibit elevated blood pressure in the hospital or doctor's office but not in their everyday lives.
By Zachary Infantolino and Gregory A. Miller University of Delaware, University of California, Los Angeles As a generally noninvasive subset of neuroscience methods, psychophysiological methods are used across a variety of disciplines in order to answer diverse questions about psychology, both mental events and behavior. Many different techniques are classified as psychophysiological. Each technique has its strengths and weaknesses, and knowing them allows researchers to decide what each offers for a particular question. Additionally, this knowledge allows research consumers to evaluate the meaning of the results in a particular experiment. learning objectives • Learn what qualifies as psychophysiology within the broader field of neuroscience. • Review and compare several examples of psychophysiological methods. • Understand advantages and disadvantages of different psychophysiological methods. History In the mid-19th century, a railroad worker named Phineas Gage was in charge of setting explosive charges for blasting through rock in order to prepare a path for railroad tracks. He would lay the charge in a hole drilled into the rock, place a fuse and sand on top of the charge, and pack it all down using a tamping iron (a solid iron rod approximately one yard long and a little over an inch in diameter). On a September afternoon when Gage was performing this task, his tamping iron caused a spark that set off the explosive prematurely, sending the tamping iron flying through the air. Unfortunately for Gage, his head was above the hole and the tamping iron entered the side of his face, passed behind his left eye, and exited out of the top of his head, eventually landing 80 feet away. Gage lost a portion of his left frontal lobe in the accident, but survived and lived for another 12 years. What is most interesting from a psychological perspective is that Gage’s personality changed as a result of this accident. He became more impulsive, he had trouble carrying out plans, and, at times, he engaged in vulgar profanity, which was out of character. This case study leads one to believe that there are specific areas of the brain that are associated with certain psychological phenomena. When studying psychology, the brain is indeed an interesting source of information. Although it would be impossible to replicate the type of damage done to Gage in the name of research, methods have developed over the years that are able to safely measure different aspects of nervous system activity in order to help researchers better understand psychology as well as the relationship between psychology and biology. Introduction Psychophysiology is defined as any research in which the dependent variable (what the researcher measures) is a physiological measure, and the independent variable (what the researcher manipulates) is behavioral or mental. In most cases the work is done noninvasively with awake human participants. Physiological measures take many forms and range from blood flow or neural activity in the brain to heart rate variability and eye movements. These measures can provide information about processes including emotion, cognition, and the interactions between them. In these ways, physiological measures offer a very flexible set of tools for researchers to answer questions about behavior, cognition, and health. Psychophysiological methods are a subset of the very large domain of neuroscience methods. 
Many neuroscience methods are invasive, involving, for example, lesions of neural tissue, injection of neurally active chemicals, or manipulation of neural activity via electrical stimulation. The present survey emphasizes noninvasive methods widely used with human subjects. Crucially, in examining the relationship between physiology and overt behavior or mental events, psychophysiology does not attempt to replace the latter with the former. As an example, happiness is a state of pleasurable contentment and is associated with various physiological measures, but one would not say that those physiological measures are happiness. We can make inferences about someone’s cognitive or emotional state based on his or her self-report, physiology, or overt behavior. Sometimes our interest is primarily in inferences about internal events and sometimes primarily in the physiology itself. Psychophysiology addresses both kinds of goals. Central Nervous System (CNS) This module provides an overview of several popular psychophysiological methods, though it is far from exhaustive. Each method can draw from a broad range of data-analysis strategies to provide an even more expansive set of tools. The psychophysiological methods discussed below focus on the central nervous system. Structural magnetic resonance imaging (sMRI) is a noninvasive technique that allows researchers and clinicians to view anatomical structures within a human. The participant is placed in a magnetic field that may be 66,000 times greater than the Earth’s magnetic field, which causes a small portion of the atoms in his or her body to line up in the same direction. The body is then pulsed with low-energy radio frequencies that are absorbed by the atoms in the body, causing them to tip over. As these atoms return to their aligned state, they give off energy in the form of harmless electromagnetic radiation, which is measured by the machine. The machine then transforms the measured energy into a three-dimensional picture of the tissue within the body. In psychophysiology research, this image may be used to compare the size of structures in different groups of people (e.g., are areas associated with pleasure smaller in individuals with depression?) or to increase the accuracy of spatial locations as measured with functional magnetic resonance imaging (fMRI). Functional magnetic resonance imaging (fMRI) is a method that is used to assess changes in activity of tissue, such as measuring changes in neural activity in different areas of the brain during thought. This technique builds on the principles of sMRI and also uses the property that, when neurons fire, they use energy, which must be replenished. Glucose and oxygen, two key components for energy production, are supplied to the brain from the blood stream as needed. Oxygen is transported through the blood using hemoglobin, which contains binding sites for oxygen. When these sites are saturated with oxygen, it is referred to as oxygenated hemoglobin. When the oxygen molecules have all been released from a hemoglobin molecule, it is known as deoxygenated hemoglobin. As a set of neurons begins firing, oxygen in the blood surrounding those neurons is consumed, leading to a reduction in oxygenated hemoglobin. The body then compensates and provides an abundance of oxygenated hemoglobin in the blood surrounding that activated neural tissue. When activity in that neural tissue declines, the level of oxygenated hemoglobin slowly returns to its original level, which typically takes several seconds.
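To build intuition for this sluggish blood-oxygen response, here is a minimal simulation sketch. The gamma-shaped response function and every parameter value are modeling conveniences assumed for illustration, not measurements from any particular study:

```python
# Minimal sketch: why the blood-oxygen response lags neural activity by seconds.
# A brief burst of neural activity is convolved with a gamma-shaped
# hemodynamic response function (a common modeling convenience).
import math

def hrf(t, shape=6.0, scale=1.0):
    """Simple gamma-function hemodynamic response; peaks about 5 s after an event."""
    if t <= 0:
        return 0.0
    return (t ** (shape - 1) * math.exp(-t / scale)) / (math.gamma(shape) * scale ** shape)

dt = 0.1                                                 # time step in seconds
t = [i * dt for i in range(300)]                         # 30 seconds of simulated time
neural = [1.0 if 1.0 <= x < 1.5 else 0.0 for x in t]     # 0.5-s neural burst at t = 1 s

# Discrete convolution of the neural burst with the response function.
bold = [dt * sum(neural[j] * hrf(t[i] - t[j]) for j in range(i + 1))
        for i in range(len(t))]

peak_index = max(range(len(bold)), key=bold.__getitem__)
print(f"Neural burst at 1 s; simulated blood-oxygen peak at {t[peak_index]:.1f} s")
```

Running the sketch shows the simulated response peaking roughly five seconds after the half-second burst of neural activity and then slowly returning to baseline, mirroring the delay described above.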
fMRI measures the change in the concentration of oxygenated hemoglobin, which is known as the blood-oxygen-level-dependent (BOLD) signal. This leads to two important facts about fMRI. First, fMRI measures blood volume and blood flow, and from this we infer neural activity; fMRI does not measure neural activity directly. Second, fMRI data typically have poor temporal resolution (the precision of measurement with respect to time); however, when combined with sMRI, fMRI provides excellent spatial resolution (the ability to distinguish one object from another in space). Temporal resolution for fMRI is typically on the order of seconds, whereas its spatial resolution is on the order of millimeters. Under most conditions there is an inverse relationship between temporal and spatial resolution—one can increase temporal resolution at the expense of spatial resolution and vice versa. This method is valuable for identifying specific areas of the brain that are associated with different physical or psychological tasks. Clinically, fMRI may be used prior to neurosurgery in order to identify areas that are associated with language so that the surgeon can avoid those areas during the operation. fMRI allows researchers to identify differential or convergent patterns of activation associated with tasks. For example, if participants are shown words on a screen and are expected to indicate the color of the letters, are the same brain areas recruited for this task if the words have emotional content or not? Does this relationship change in psychological disorders such as anxiety or depression? Is there a different pattern of activation even in the absence of overt performance differences? fMRI is an excellent tool for comparing brain activation in different tasks and/or populations. Figure 2.7.1 provides an example of results from fMRI analyses overlaid on an sMRI image. The blue and orange shapes represent areas with significant changes in the BOLD signal, thus changes in neural activation. Electroencephalography (EEG) is another technique for studying brain activation. This technique uses at least two and sometimes up to 256 electrodes to measure the difference in electrical charge (the voltage) between pairs of points on the head. These electrodes are typically fastened to a flexible cap (similar to a swimming cap) that is placed on the participant’s head. From the scalp, the electrodes measure the electrical activity that is naturally occurring within the brain. They do not introduce any new electrical activity. In contrast to fMRI, EEG measures neural activity directly, rather than a correlate of that activity. Electrodes used in EEG can also be placed within the skull, resting directly on the brain itself. This application, called electrocorticography (ECoG), is typically used prior to medical procedures for localizing activity, such as the origin of epileptic seizures. This invasive procedure allows for more precise localization of neural activity, which is essential in medical applications. However, it is generally not justifiable to open a person’s skull solely for research purposes, and instead electrodes are placed on the participant’s scalp, resulting in a noninvasive technique for measuring neural activity. Given that this electrical activity must travel through the skull and scalp before reaching the electrodes, localization of activity is less precise when measuring from the scalp, but it can still be within several millimeters when localizing activity that is near the scalp. 
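Because the voltages recorded at the scalp are tiny relative to ongoing background activity, EEG analyses often average the signal over many repetitions of an event—a point the next paragraph takes up. Here is a minimal sketch of such trial averaging on synthetic numbers; all values are invented for illustration:

```python
# Minimal sketch of EEG trial averaging: a small event-related signal
# buried in noise becomes visible once many trials are averaged, because
# uncorrelated noise shrinks roughly as 1/sqrt(number of trials).
# All values are synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_samples = 60, 500                 # 60 trials, 500 samples each
times = np.arange(n_samples)                  # sample index (roughly ms)

# A 2-microvolt bump around 300 ms, hidden under 10-microvolt noise.
true_signal = 2.0 * np.exp(-((times - 300) ** 2) / (2 * 30 ** 2))
trials = true_signal + rng.normal(0.0, 10.0, size=(n_trials, n_samples))

average = trials.mean(axis=0)                 # average across trials

print(f"Noise SD in a single trial:   {trials[0].std():5.2f} microvolts")
print(f"Expected SD after averaging: ~{10.0 / np.sqrt(n_trials):5.2f} microvolts")
print(f"Averaged peak near 300 ms:    {average[250:350].max():5.2f} microvolts")
```

The averaged waveform recovers the small event-related bump even though it is invisible in any single trial, which is why averaging over dozens of trials is such a common analysis strategy.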
One major advantage of EEG is its temporal resolution. Data can be recorded thousands of times per second, allowing researchers to document events that happen in less than a millisecond. EEG analyses typically investigate the change in amplitude or frequency components of the recorded EEG on an ongoing basis or averaged over dozens of trials (see Figure 2.7.2). Magnetoencephalography (MEG) is another technique for noninvasively measuring neural activity. The flow of electrical charge (the current) associated with neural activity produces very weak magnetic fields that can be detected by sensors placed near the participant’s scalp. The number of sensors used varies from a few to several hundred. Due to the fact that the magnetic fields of interest are so small, special rooms that are shielded from magnetic fields in the environment are needed in order to avoid contamination of the signal being measured. MEG has the same excellent temporal resolution as EEG. Additionally, MEG is not as susceptible to distortions from the skull and scalp. Magnetic fields are able to pass through the hard and soft tissue relatively unchanged, thus providing better spatial resolution than EEG. MEG analytic strategies are nearly identical to those used in EEG. However, the MEG recording apparatus is much more expensive than EEG, so MEG is much less widely available. EEG and MEG are both excellent for elucidating the temporal dynamics of neural processes. For example, if someone is reading a sentence that ends with an unexpected word (e.g., Michelle is going outside to water the book), how long after he or she reads the unexpected word does he or she recognize this as unexpected? In addition to these types of questions, EEG and MEG methods allow researchers to investigate the degree to which different parts of the brain “talk” to each other. This allows for a better understanding of brain networks, such as their role in different tasks and how they may function abnormally in psychopathology. Positron emission tomography (PET) is a medical imaging technique that is used to measure processes in the body, including the brain. This method relies on a positron-emitting tracer atom that is introduced into the blood stream in a biologically active molecule, such as glucose, water, or ammonia. A positron is a particle much like an electron but with a positive charge. One example of a biologically active molecule is fludeoxyglucose, which acts similarly to glucose in the body. Fludeoxyglucose will concentrate in areas where glucose is needed—commonly areas with higher metabolic needs. Over time, this tracer molecule emits positrons, which are detected by a sensor. The spatial location of the tracer molecule in the brain can be determined based on the emitted positrons. This allows researchers to construct a three-dimensional image of the areas of the brain that have the highest metabolic needs, typically those that are most active. Images resulting from PET usually represent neural activity that has occurred over tens of minutes, which is very poor temporal resolution for some purposes. PET images are often combined with computed tomography (CT) images to improve spatial resolution, as fine as several millimeters. Tracers can also be incorporated into molecules that bind to neurotransmitter receptors, which allow researchers to answer some unique questions about the action of neurotransmitters. 
Unfortunately, very few research centers have the equipment required to obtain the images or the special equipment needed to create the positron-emitting tracer molecules, which typically need to be produced on site. Transcranial magnetic stimulation (TMS) is a noninvasive method that causes depolarization or hyperpolarization in neurons near the scalp. This method is not considered psychophysiological because the independent variable is physiological, rather than the dependent. However, it does qualify as a neuroscience method because it deals with the function of the nervous system, and it can readily be combined with conventional psychophysiological methods. In TMS, a coil of wire is placed just above the participant’s scalp. When electricity flows through the coil, it produces a magnetic field. This magnetic field travels through the skull and scalp and affects neurons near the surface of the brain. When the magnetic field is rapidly turned on and off, a current is induced in the neurons, leading to depolarization or hyperpolarization, depending on the number of magnetic field pulses. Single- or paired-pulse TMS depolarizes site-specific neurons in the cortex, causing them to fire. If this method is used over primary motor cortex, it can produce or block muscle activity, such as inducing a finger twitch or preventing someone from pressing a button. If used over primary visual cortex, it can produce sensations of flashes of light or impair visual processes. This has proved to be a valuable tool in studying the function and timing of specific processes such as the recognition of visual stimuli. Repetitive TMS produces effects that last longer than the initial stimulation. Depending on the intensity, coil orientation, and frequency, neural activity in the stimulated area may be either attenuated or amplified. Used in this manner, TMS is able to explore neural plasticity, which is the ability of connections between neurons to change. This has implications for treating psychological disorders as well as understanding long-term changes in neuronal excitability. Peripheral Nervous System The psychophysiological methods discussed above focus on the central nervous system. Considerable research has also focused on the peripheral nervous system. These methods include skin conductance, cardiovascular responses, muscle activity, pupil diameter, eye blinks, and eye movements. Skin conductance, for example, measures the electrical conductance (the inverse of resistance) between two points on the skin, which varies with the level of moisture. Sweat glands are responsible for this moisture and are controlled by the sympathetic nervous system (SNS). Increases in skin conductance can be associated with changes in psychological activity. For example, studying skin conductance allows a researcher to investigate whether psychopaths react to fearful pictures in a normal way. Skin conductance provides relatively poor temporal resolution, with the entire response typically taking several seconds to emerge and resolve. However, it is an easy way to measure SNS response to a variety of stimuli. Cardiovascular measures include heart rate, heart rate variability, and blood pressure. The heart is innervated by the parasympathetic nervous system (PNS) and SNS. Input from the PNS decreases heart rate and contractile strength, whereas input from the SNS increases heart rate and contractile strength. 
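The next paragraph describes how these cardiovascular signals are quantified; as a preview, here is a minimal sketch computing heart rate and two widely used heart rate variability summaries (SDNN and RMSSD) from a series of beat times. The beat times are invented; in practice they would come from peak detection on an electrocardiogram:

```python
# Minimal sketch: heart rate and heart rate variability from beat times.
# The beat times (in seconds) are invented for illustration; in practice
# they would come from R-peak detection on an electrocardiogram.
import math

beat_times = [0.00, 0.81, 1.63, 2.41, 3.24, 4.02, 4.85, 5.63]

# Interbeat intervals: the time between successive heartbeats.
ibis = [b - a for a, b in zip(beat_times, beat_times[1:])]

mean_ibi = sum(ibis) / len(ibis)
heart_rate = 60.0 / mean_ibi                    # beats per minute

# Two widely used variability summaries:
# SDNN  = standard deviation of the interbeat intervals
# RMSSD = root mean square of successive interval differences
sdnn = math.sqrt(sum((x - mean_ibi) ** 2 for x in ibis) / (len(ibis) - 1))
diffs = [b - a for a, b in zip(ibis, ibis[1:])]
rmssd = math.sqrt(sum(d ** 2 for d in diffs) / len(diffs))

print(f"Mean heart rate: {heart_rate:.1f} bpm")
print(f"SDNN: {sdnn * 1000:.1f} ms   RMSSD: {rmssd * 1000:.1f} ms")
```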
Heart rate can easily be monitored using a minimum of two electrodes and is measured by counting the number of heartbeats in a given time period, such as one minute, or by assessing the time between successive heartbeats. Psychological activity can prompt increases and decreases in heart rate, often in less than a second, making heart rate a sensitive measure of cognition. Measures of heart rate variability are concerned with consistency in the time interval between heartbeats. Changes in heart rate variability are associated with stress as well as psychiatric conditions. Figure 2.7.3 is an example of an electrocardiogram, which is used to measure heart rate and heart rate variability. These cardiovascular measures allow researchers to monitor SNS and PNS reactivity to various stimuli or situations. For example, when an arachnophobe views pictures of spiders, does their heart rate increase more than that of a person not afraid of spiders? Electromyography (EMG) measures electrical activity produced by skeletal muscles. Similar to EEG, EMG measures the voltage between two points. This technique can be used to determine when a participant first initiates muscle activity to engage in a motor response to a stimulus or the degree to which a participant begins to engage in an incorrect response (such as pressing the wrong button), even if it is never visibly executed. It has also been used in emotion research to identify activity in muscles that are used to produce smiles and frowns. Using EMG, it is possible to detect very small facial movements that are not observable from looking at the face. The temporal resolution of EMG is similar to that of EEG and MEG. Valuable information can also be gleaned from eye blinks, eye movements, and pupil diameter. Eye blinks are most often assessed using EMG electrodes placed just below the eyelid, but electrical activity associated directly with eye blinks or eye movements can be measured with electrodes placed on the face near the eyes, because there is voltage across the entire eyeball. Another option for the measurement of eye movement is a camera used to record video of an eye. This video method is particularly valuable when determination of absolute direction of gaze (not just change in direction of gaze) is of interest, such as when the eyes scan a picture. With the help of a calibration period in which a participant looks at multiple, known targets, eye position is then extracted from each video frame during the main task and compared with data from the calibration phase, allowing researchers to identify the sequence, direction, and duration of gaze fixations. For example, when viewing pleasant or unpleasant images, people spend different amounts of time looking at the most arousing parts. This, in turn, can vary as a function of psychopathology. Additionally, the diameter of a participant’s pupil can be measured and recorded over time from the video record. As with heart rate, pupil diameter is controlled by competing inputs from the SNS and PNS. Pupil diameter is commonly used as an index of mental effort when performing a task. When to Use What As the reader, you may be wondering, how do I know what tool is right for a given question? Generally, there are no definitive answers. If you wanted to know the temperature in the morning, would you check your phone? Look outside to see how warm it looks? Ask your roommate what he or she is wearing today? Look to see what other people are wearing? There is not a single way to answer the question. 
The same is true for research questions. However, there are some guidelines that one can consider. For example, if you are interested in what brain structures are associated with cognitive control, you wouldn’t use peripheral nervous system measures. A technique such as fMRI or PET might be more appropriate. If you are interested in how cognitive control unfolds over time, EEG or MEG would be a good choice. If you are interested in studying the bodily response to fear in different groups of people, peripheral nervous system measures might be most appropriate. The key to deciding what method is most appropriate is properly defining the question that you are trying to answer. What aspects are most interesting? Do you care about identifying the most relevant brain structures? Temporal dynamics? Bodily responses? Then, it is important to think about the strengths and weaknesses of the different psychophysiological measures and pick one, or several, whose attributes work best for the question at hand. In fact, it is common to record several at once. Conclusion The outline of psychophysiological methods above provides a glimpse into the exciting techniques that are available to researchers studying a broad range of topics from clinical to social to cognitive psychology. Some of the most interesting psychophysiological studies use several methods, such as in sleep assessments or multimodal neuroimaging. Psychophysiological methods have applications outside of mainstream psychology in areas where psychological phenomena are central, such as economics, health-related decision making, and brain–computer interfaces. Examples of applications for each method are provided above, but this list is by no means exhaustive. Furthermore, the field is continually evolving, with new methods and new applications being developed. The wide variety of methods and applications provide virtually limitless possibilities for researchers. Outside Resources Book: Luck, S. J. (2005). An introduction to the event-related potential technique. Cambridge, MA: MIT Press. Book: Poldrack, R. A., Mumford, J. A., & Nichols, T. E. (2011). Handbook of functional MRI data analysis. New York: Cambridge University Press. Web: For a list of additional psychophysiology teaching materials: www.sprweb.org/teaching/index.cfm Web: For visualizations on MRI physics (requires a free registration): http://www.imaios.com/en/e-Courses/e-MRI/NMR/ Discussion Questions 1. Pick a psychological phenomenon that you would like to know more about. What specific hypothesis would you like to test? What psychophysiological methods might be appropriate for testing this hypothesis and why? 2. What types of questions would require high spatial resolution in measuring brain activity? What types of questions would require high temporal resolution? 3. Take the hypothesis you picked in the first question, and choose what you think would be the best psychophysiological method. What additional information could you obtain using a complementary method? For example, if you want to learn about memory, what two methods could you use that would each provide you with distinct information? 4. The popular press has shown an increasing interest in findings that contain images of brains and neuroscience language. Studies have shown that people often find presentations of results that contain these features more convincing than presentations of results that do not, even if the actual results are the same. 
Why would images of the brain and neuroscience language be more convincing to people? Given that results with these features are more convincing, what do you think is the researcher’s responsibility in reporting results with brain images and neuroscience language? 5. Many claims in the popular press attempt to reduce complex psychological phenomena to biological events. For example, you may have heard it said that schizophrenia is a brain disorder or that depression is simply a chemical imbalance. However, this type of “reductionism” so far does not appear to be tenable. There has been surprisingly little discussion of possible causal relationships, in either direction, between biological and psychological phenomena. We are aware of no such documented causal mechanisms. Do you think that it will ever be possible to explain how a change in biology can result in a change of a psychological phenomenon, or vice versa? Vocabulary Blood-oxygen-level-dependent (BOLD) The signal typically measured in fMRI that results from changes in the ratio of oxygenated hemoglobin to deoxygenated hemoglobin in the blood. Central nervous system The part of the nervous system that consists of the brain and spinal cord. Deoxygenated hemoglobin Hemoglobin not carrying oxygen. Depolarization A change in a cell’s membrane potential, making the inside of the cell more positive and increasing the chance of an action potential. Hemoglobin The oxygen-carrying portion of a red blood cell. Hyperpolarization A change in a cell’s membrane potential, making the inside of the cell more negative and decreasing the chance of an action potential. Invasive Procedure A procedure that involves the skin being broken or an instrument or chemical being introduced into a body cavity. Lesions Abnormalities in the tissue of an organism usually caused by disease or trauma. Neural plasticity The ability of synapses and neural pathways to change over time and adapt to changes in neural process, behavior, or environment. Neuroscience methods A research method that deals with the structure or function of the nervous system and brain. Noninvasive procedure A procedure that does not require the insertion of an instrument or chemical through the skin or into a body cavity. Oxygenated hemoglobin Hemoglobin carrying oxygen. Parasympathetic nervous system (PNS) One of the two major divisions of the autonomic nervous system, responsible for stimulation of “rest and digest” activities. Peripheral nervous system The part of the nervous system that is outside the brain and spinal cord. Positron A particle having the same mass and numerically equal but positive charge as an electron. Psychophysiological methods Any research method in which the dependent variable is a physiological measure and the independent variable is behavioral or mental (such as memory). Spatial resolution The degree to which one can separate a single object in space from another. Sympathetic nervous system (SNS) One of the two major divisions of the autonomic nervous system, responsible for stimulation of “fight or flight” activities. Temporal resolution The degree to which one can separate a single point in time from another. Voltage The difference in electric charge between two points.
By David B. Baker and Heather Sperry University of Akron, The University of Akron This module provides an introduction and overview of the historical development of the science and practice of psychology in America. Ever-increasing specialization within the field often makes it difficult to discern the common roots from which the field of psychology has evolved. By exploring this shared past, students will be better able to understand how psychology has developed into the discipline we know today. learning objectives • Describe the precursors to the establishment of the science of psychology. • Identify key individuals and events in the history of American psychology. • Describe the rise of professional psychology in America. • Develop a basic understanding of the processes of scientific development and change. • Recognize the role of women and people of color in the history of American psychology. Introduction It is always a difficult question to ask, where to begin to tell the story of the history of psychology. Some would start with ancient Greece; others would look to a demarcation in the late 19th century when the science of psychology was formally proposed and instituted. These two perspectives, and all that is in between, are appropriate for describing a history of psychology. The interested student will have no trouble finding an abundance of resources on all of these time frames and perspectives (Goodwin, 2011; Leahey, 2012; Schultz & Schultz, 2007). For the purposes of this module, we will examine the development of psychology in America and use the mid-19th century as our starting point. For the sake of convenience, we refer to this as a history of modern psychology. Psychology is an exciting field and the history of psychology offers the opportunity to make sense of how it has grown and developed. The history of psychology also provides perspective. Rather than a dry collection of names and dates, the history of psychology tells us about the important intersection of time and place that defines who we are. Consider what happens when you meet someone for the first time. The conversation usually begins with a series of questions such as, “Where did you grow up?” “How long have you lived here?” “Where did you go to school?” The importance of history in defining who we are cannot be overstated. Whether you are seeing a physician, talking with a counselor, or applying for a job, everything begins with a history. The same is true for studying the history of psychology; getting a history of the field helps to make sense of where we are and how we got here. A Prehistory of Psychology Precursors to American psychology can be found in philosophy and physiology. Philosophers such as John Locke (1632–1704) and Thomas Reid (1710–1796) promoted empiricism, the idea that all knowledge comes from experience. The work of Locke, Reid, and others emphasized the role of the human observer and the primacy of the senses in defining how the mind comes to acquire knowledge. In American colleges and universities in the early 1800s, these principles were taught as courses on mental and moral philosophy. Most often these courses taught about the mind based on the faculties of intellect, will, and the senses (Fuchs, 2000). Physiology and Psychophysics Philosophical questions about the nature of mind and knowledge were matched in the 19th century by physiological investigations of the sensory systems of the human observer. 
German physiologist Hermann von Helmholtz (1821–1894) measured the speed of the neural impulse and explored the physiology of hearing and vision. His work indicated that our senses can deceive us and are not a mirror of the external world. Such work showed that even though the human senses were fallible, the mind could be measured using the methods of science. In all, it suggested that a science of psychology was feasible. An important implication of Helmholtz’s work was that there is a psychological reality and a physical reality and that the two are not identical. This was not a new idea; philosophers like John Locke had written extensively on the topic, and in the 19th century, philosophical speculation about the nature of mind became subject to the rigors of science. The question of the relationship between the mental (experiences of the senses) and the material (external reality) was investigated by a number of German researchers including Ernst Weber and Gustav Fechner. Their work was called psychophysics, and it introduced methods for measuring the relationship between physical stimuli and human perception that would serve as the basis for the new science of psychology (Fancher & Rutherford, 2011). The formal development of modern psychology is usually credited to the work of German physician, physiologist, and philosopher Wilhelm Wundt (1832–1920). Wundt helped to establish the field of experimental psychology by serving as a strong promoter of the idea that psychology could be an experimental field and by providing classes, textbooks, and a laboratory for training students. In 1875, he joined the faculty at the University of Leipzig and quickly began to make plans for the creation of a program of experimental psychology. In 1879, he complemented his lectures on experimental psychology with a laboratory experience: an event that has served as the popular date for the establishment of the science of psychology. The response to the new science was immediate and global. Wundt attracted students from around the world to study the new experimental psychology and work in his lab. Students were trained to offer detailed self-reports of their reactions to various stimuli, a procedure known as introspection. The goal was to identify the elements of consciousness. In addition to the study of sensation and perception, research was done on mental chronometry, more commonly known as reaction time. The work of Wundt and his students demonstrated that the mind could be measured and the nature of consciousness could be revealed through scientific means. It was an exciting proposition, and one that found great interest in America. After the opening of Wundt’s lab in 1879, it took just four years for the first psychology laboratory to open in the United States (Benjamin, 2007). Scientific Psychology Comes to the United States Wundt’s version of psychology arrived in America most visibly through the work of Edward Bradford Titchener (1867–1927). A student of Wundt’s, Titchener brought to America a brand of experimental psychology referred to as “structuralism.” Structuralists were interested in the contents of the mind—what the mind is. For Titchener, the general adult mind was the proper focus for the new psychology, and he excluded from study those with mental deficiencies, children, and animals (Evans, 1972; Titchener, 1909). Experimental psychology spread rather rapidly throughout North America. By 1900, there were more than 40 laboratories in the United States and Canada (Benjamin, 2000). 
Psychology in America also organized early with the establishment of the American Psychological Association (APA) in 1892. Titchener felt that this new organization did not adequately represent the interests of experimental psychology, so, in 1904, he organized a group of colleagues to create what is now known as the Society of Experimental Psychologists (Goodwin, 1985). The group met annually to discuss research in experimental psychology. Reflecting the times, women researchers were not invited (or welcome). It is interesting to note that Titchener’s first doctoral student was a woman, Margaret Floy Washburn (1871–1939). Despite many barriers, in 1894, Washburn became the first woman in America to earn a Ph.D. in psychology and, in 1921, only the second woman to be elected president of the American Psychological Association (Scarborough & Furumoto, 1987). Striking a balance between the science and practice of psychology continues to this day. In 1988, the American Psychological Society (now known as the Association for Psychological Science) was founded with the central mission of advancing psychological science. Toward a Functional Psychology While Titchener and his followers adhered to a structural psychology, others in America were pursuing different approaches. William James, G. Stanley Hall, and James McKeen Cattell were among a group that became identified with “functionalism.” Influenced by Darwin’s evolutionary theory, functionalists were interested in the activities of the mind—what the mind does. An interest in functionalism opened the way for the study of a wide range of approaches, including animal and comparative psychology (Benjamin, 2007). William James (1842–1910) is regarded as having written perhaps the most influential and important book in the field of psychology, Principles of Psychology, published in 1890. Opposed to the reductionist ideas of Titchener, James proposed that consciousness is ongoing and continuous; it cannot be isolated and reduced to elements. For James, consciousness helped us adapt to our environment in such ways as allowing us to make choices and have personal responsibility over those choices. At Harvard, James occupied a position of authority and respect in psychology and philosophy. Through his teaching and writing, he influenced psychology for generations. One of his students, Mary Whiton Calkins (1863–1930), faced many of the challenges that confronted Margaret Floy Washburn and other women interested in pursuing graduate education in psychology. With much persistence, Calkins was able to study with James at Harvard. She eventually completed all the requirements for the doctoral degree, but Harvard refused to grant her a diploma because she was a woman. Despite these challenges, Calkins went on to become an accomplished researcher and the first woman elected president of the American Psychological Association in 1905 (Scarborough & Furumoto, 1987). G. Stanley Hall (1844–1924) made substantial and lasting contributions to the establishment of psychology in the United States. At Johns Hopkins University, he founded the first psychological laboratory in America in 1883. In 1887, he created the first journal of psychology in America, American Journal of Psychology. In 1892, he founded the American Psychological Association (APA); in 1909, he invited and hosted Freud at Clark University (the only time Freud visited America). Influenced by evolutionary theory, Hall was interested in the process of adaptation and human development.
Using surveys and questionnaires to study children, Hall wrote extensively on child development and education. While graduate education in psychology was restricted for women in Hall’s time, it was all but non-existent for African Americans. In another first, Hall mentored Francis Cecil Sumner (1895–1954) who, in 1920, became the first African American to earn a Ph.D. in psychology in America (Guthrie, 2003). James McKeen Cattell (1860–1944) received his Ph.D. with Wundt but quickly turned his interests to the assessment of individual differences. Influenced by the work of Darwin’s cousin, Francis Galton, Cattell believed that mental abilities such as intelligence were inherited and could be measured using mental tests. Like Galton, he believed society was better served by identifying those with superior intelligence and supported efforts to encourage them to reproduce. Such beliefs were associated with eugenics (the promotion of selective breeding) and fueled early debates about the contributions of heredity and environment in defining who we are. At Columbia University, Cattell developed a department of psychology that became world famous, while also promoting psychological science through advocacy and as a publisher of scientific journals and reference works (Fancher, 1987; Sokal, 1980). The Growth of Psychology Throughout the first half of the 20th century, psychology continued to grow and flourish in America. It was large enough to accommodate varying points of view on the nature of mind and behavior. Gestalt psychology is a good example. The Gestalt movement began in Germany with the work of Max Wertheimer (1880–1943). Opposed to the reductionist approach of Wundt’s laboratory psychology, Wertheimer and his colleagues Kurt Koffka (1886–1941), Wolfgang Kohler (1887–1967), and Kurt Lewin (1890–1947) believed that studying the whole of any experience was richer than studying individual aspects of that experience. The saying “the whole is greater than the sum of its parts” is a Gestalt perspective. Consider that a melody is an additional element beyond the collection of notes that comprise it. The Gestalt psychologists proposed that the mind often processes information simultaneously rather than sequentially. For instance, when you look at a photograph, you see a whole image, not just a collection of pixels of color. Using Gestalt principles, Wertheimer and his colleagues also explored the nature of learning and thinking. Most of the German Gestalt psychologists were Jewish and were forced to flee the Nazi regime due to the threats posed to both academic and personal freedoms. In America, they were able to introduce a new audience to the Gestalt perspective, demonstrating how it could be applied to perception and learning (Wertheimer, 1938). In many ways, the work of the Gestalt psychologists served as a precursor to the rise of cognitive psychology in America (Benjamin, 2007). Behaviorism emerged early in the 20th century and became a major force in American psychology. Championed by psychologists such as John B. Watson (1878–1958) and B. F. Skinner (1904–1990), behaviorism rejected any reference to mind and viewed overt and observable behavior as the proper subject matter of psychology. Through the scientific study of behavior, it was hoped that laws of learning could be derived that would promote the prediction and control of behavior. Russian physiologist Ivan Pavlov (1849–1936) influenced early behaviorism in America.
His work on conditioned learning, popularly referred to as classical conditioning, provided support for the notion that learning and behavior were controlled by events in the environment and could be explained with no reference to mind or consciousness (Fancher, 1987). For decades, behaviorism dominated American psychology. By the 1960s, psychologists began to recognize that behaviorism was unable to fully explain human behavior because it neglected mental processes. The turn toward a cognitive psychology was not new. In the 1930s, British psychologist Frederic C. Bartlett (1886–1969) explored the idea of the constructive mind, recognizing that people use their past experiences to construct frameworks in which to understand new experiences. Some of the major pioneers in American cognitive psychology include Jerome Bruner (1915–), Roger Brown (1925–1997), and George Miller (1920–2012). In the 1950s, Bruner conducted pioneering studies on cognitive aspects of sensation and perception. Brown conducted original research on language and memory, coined the term “flashbulb memory,” and figured out how to study the tip-of-the-tongue phenomenon (Benjamin, 2007). Miller’s research on working memory is legendary. His 1956 paper “The Magic Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information” is one of the most highly cited papers in psychology. A popular interpretation of Miller’s research was that the number of bits of information an average human can hold in working memory is 7 ± 2. Around the same time, the study of computer science was growing and was used as an analogy to explore and understand how the mind works. The work of Miller and others in the 1950s and 1960s has inspired tremendous interest in cognition and neuroscience, both of which dominate much of contemporary American psychology. Applied Psychology in America In America, there has always been an interest in the application of psychology to everyday life. Mental testing is an important example. Modern intelligence tests were developed by the French psychologist Alfred Binet (1857–1911). His goal was to develop a test that would identify schoolchildren in need of educational support. His test, which included tasks of reasoning and problem solving, was introduced in the United States by Henry Goddard (1866–1957) and later standardized by Lewis Terman (1877–1956) at Stanford University. The assessment and meaning of intelligence has fueled debates in American psychology and society for nearly 100 years. Much of this is captured in the nature-nurture debate that raises questions about the relative contributions of heredity and environment in determining intelligence (Fancher, 1987). Applied psychology was not limited to mental testing. What psychologists were learning in their laboratories was applied in many settings including the military, business, industry, and education. The early 20th century was witness to rapid advances in applied psychology. Hugo Munsterberg (1863–1916) of Harvard University made contributions to such areas as employee selection, eyewitness testimony, and psychotherapy. Walter D. Scott (1869–1955) and Harry Hollingworth (1880–1956) produced original work on the psychology of advertising and marketing. Lillian Gilbreth (1878–1972) was a pioneer in industrial psychology and engineering psychology. Working with her husband, Frank, they promoted the use of time and motion studies to improve efficiency in industry.
Lillian also brought the efficiency movement to the home, designing kitchens and appliances including the pop-up trashcan and refrigerator door shelving. Their psychology of efficiency also found plenty of applications at home with their 12 children. The experience served as the inspiration for the movie Cheaper by the Dozen (Benjamin, 2007). Clinical psychology was also an early application of experimental psychology in America. Lightner Witmer (1867–1956) received his Ph.D. in experimental psychology with Wilhelm Wundt and returned to the University of Pennsylvania, where he opened a psychological clinic in 1896. Witmer believed that because psychology dealt with the study of sensation and perception, it should be of value in treating children with learning and behavioral problems. He is credited as the founder of both clinical and school psychology (Benjamin & Baker, 2004). Psychology as a Profession As the roles of psychologists and the needs of the public continued to change, it was necessary for psychology to begin to define itself as a profession. Without standards for training and practice, anyone could use the title psychologist and offer services to the public. As early as 1917, applied psychologists organized to create standards for education, training, and licensure. By the 1930s, these efforts led to the creation of the American Association for Applied Psychology (AAAP). While the American Psychological Association (APA) represented the interests of academic psychologists, AAAP served those in education, industry, consulting, and clinical work. The advent of WWII changed everything. The psychiatric casualties of war were staggering, and there were simply not enough mental health professionals to meet the need. Recognizing the shortage, the federal government urged the AAAP and APA to work together to meet the mental health needs of the nation. The result was the merging of the AAAP and the APA and a focus on the training of professional psychologists. Through the provisions of the National Mental Health Act of 1946, funding was made available that allowed the APA, the Veterans Administration, and the Public Health Service to work together to develop training programs that would produce clinical psychologists. These efforts led to the convening of the Boulder Conference on Graduate Education in Clinical Psychology in 1949 in Boulder, Colorado. The meeting launched doctoral training in psychology and gave us the scientist-practitioner model of training. Similar meetings also helped launch doctoral training programs in counseling and school psychology. Throughout the second half of the 20th century, alternatives to Boulder have been debated. In 1973, the Vail Conference on Professional Training in Psychology proposed the scholar-practitioner model and the Psy.D. degree (Doctor of Psychology). It is a training model that emphasizes clinical training and practice, and it has become more common (Cautin & Baker, in press). Psychology and Society Given that psychology deals with the human condition, it is not surprising that psychologists would involve themselves in social issues. For more than a century, psychology and psychologists have been agents of social action and change. Using the methods and tools of science, psychologists have challenged assumptions, stereotypes, and stigma. Founded in 1936, the Society for the Psychological Study of Social Issues (SPSSI) has supported research and action on a wide range of social issues.
Individually, there have been many psychologists whose efforts have promoted social change. Helen Thompson Woolley (1874–1947) and Leta S. Hollingworth (1886–1939) were pioneers in research on the psychology of sex differences. Working in the early 20th century, when women’s rights were marginalized, Thompson examined the assumption that women were overemotional compared to men and found that emotion did not influence women’s decisions any more than it did men’s. Hollingworth found that menstruation did not negatively impact women’s cognitive or motor abilities. Such work combatted harmful stereotypes and showed that psychological research could contribute to social change (Scarborough & Furumoto, 1987). Among the first generation of African American psychologists, Mamie Phipps Clark (1917–1983) and her husband Kenneth Clark (1914–2005) studied the psychology of race and demonstrated the ways in which school segregation negatively impacted the self-esteem of African American children. Their research was influential in the 1954 Supreme Court ruling in the case of Brown v. Board of Education, which ended school segregation (Guthrie, 2003). In psychology, greater advocacy for issues impacting the African American community was advanced by the creation of the Association of Black Psychologists (ABPsi) in 1968. In 1957, psychologist Evelyn Hooker (1907–1996) published the paper “The Adjustment of the Male Overt Homosexual,” reporting on her research that showed no significant differences in psychological adjustment between homosexual and heterosexual men. Her research helped to de-pathologize homosexuality and contributed to the decision by the American Psychiatric Association to remove homosexuality from the Diagnostic and Statistical Manual of Mental Disorders in 1973 (Garnets & Kimmel, 2003). Conclusion Growth and expansion have been a constant in American psychology. In the latter part of the 20th century, areas such as social, developmental, and personality psychology made major contributions to our understanding of what it means to be human. Today neuroscience is enjoying tremendous interest and growth. As mentioned at the beginning of the module, it is a challenge to cover all the history of psychology in such a short space. Errors of omission and commission are likely in such a selective review. The history of psychology helps to set a stage upon which the story of psychology can be told. This brief summary provides some glimpse into the depth and rich content offered by the history of psychology. The learning modules in the Noba psychology collection are all elaborations on the foundation created by our shared past. It is hoped that you will be able to see these connections and have a greater understanding and appreciation for both the unity and diversity of the field of psychology. Timeline 1600s – Rise of empiricism emphasizing centrality of human observer in acquiring knowledge 1850s - Helmholtz measures neural impulse / Psychophysics studied by Weber & Fechner 1859 - Publication of Darwin's Origin of Species 1879 - Wundt opens lab for experimental psychology 1883 - First psychology lab opens in the United States 1887 – First American psychology journal is published: American Journal of Psychology 1890 – James publishes Principles of Psychology 1892 – APA established 1894 – Margaret Floy Washburn is first U.S. woman to earn Ph.D.
in psychology 1904 - Founding of Titchener's experimentalists 1905 - Mary Whiton Calkins is first woman president of APA 1909 – Freud’s only visit to the United States 1913 - John Watson calls for a psychology of behavior 1920 – Francis Cecil Sumner is first African American to earn Ph.D. in psychology 1921 – Margaret Floy Washburn is second woman president of APA 1930s – Creation and growth of the American Association for Applied Psychology (AAAP) / Gestalt psychology comes to America 1936- Founding of The Society for the Psychological Study of Social Issues 1940s – Behaviorism dominates American psychology 1946 – National Mental Health Act 1949 – Boulder Conference on Graduate Education in Clinical Psychology 1950s – Cognitive psychology gains popularity 1954 – Brown v. Board of Education 1957 – Evelyn Hooker publishes The Adjustment of the Male Overt Homosexual 1968 – Founding of the Association of Black Psychologists 1973 – Psy.D. proposed at the Vail Conference on Professional Training in Psychology 1988 – Founding of the American Psychological Society (now known as the Association for Psychological Science) Outside Resources Podcast: History of Psychology Podcast Series http://www.yorku.ca/christo/podcasts/ Web: Advances in the History of Psychology http://ahp.apps01.yorku.ca/ Web: Center for the History of Psychology http://www.uakron.edu/chp Web: Classics in the History of Psychology http://psychclassics.yorku.ca/ Web: Psychology’s Feminist Voices http://www.feministvoices.com/ Web: This Week in the History of Psychology http://www.yorku.ca/christo/podcasts/ Discussion Questions 1. Why was psychophysics important to the development of psychology as a science? 2. How have psychologists participated in the advancement of social issues? 3. Name some ways in which psychology began to be applied to the general public and everyday problems. 4. Describe functionalism and structuralism and their influences on behaviorism and cognitive psychology. Vocabulary Behaviorism The study of behavior. Cognitive psychology The study of mental processes. Consciousness Awareness of ourselves and our environment. Empiricism The belief that knowledge comes from experience. Eugenics The practice of selective breeding to promote desired traits. Flashbulb memory A highly detailed and vivid memory of an emotionally significant event. Functionalism A school of American psychology that focused on the utility of consciousness. Gestalt psychology An attempt to study the unity of experience. Individual differences Ways in which people differ in terms of their behavior, emotion, cognition, and development. Introspection A method of focusing on internal processes. Neural impulse An electro-chemical signal that enables neurons to communicate. Practitioner-Scholar Model A model of training of professional psychologists that emphasizes clinical practice. Psychophysics Study of the relationships between physical stimuli and the perception of those stimuli. Realism A point of view that emphasizes the importance of the senses in providing knowledge of the external world. Scientist-practitioner model A model of training of professional psychologists that emphasizes the development of both research and clinical skills. Structuralism A school of American psychology that sought to describe the elements of conscious experience. Tip-of-the-tongue phenomenon The inability to pull a word from memory even though there is the sensation that that word is available.
• 2.1: The Brain The human brain is responsible for all behaviors, thoughts, and experiences described in this textbook. This module provides an introductory overview of the brain, including some basic neuroanatomy, and brief descriptions of the neuroscience methods used to study it. • 2.2: The Nervous System The mammalian nervous system is a complex biological organ, which enables many animals including humans to function in a coordinated fashion. The original design of this system is preserved across many animals through evolution; thus, adaptive physiological and behavioral functions are similar across many animal species. • 2.3: Evolutionary Theories in Psychology Evolution or change over time occurs through the processes of natural and sexual selection. In response to problems in our environment, we adapt both physically and psychologically to ensure our survival and reproduction. • 2.4: Hormones and Behavior The goal of this module is to introduce you to the topic of hormones and behavior. This field of study is also called behavioral endocrinology, which is the scientific study of the interaction between hormones and behavior. • 2.5: Biochemistry of Love Love is deeply biological. The evolutionary principles and ancient hormonal and neural systems that support the beneficial and healing effects of loving relationships are described here. • 2.6: Epigenetics in Psychology Early life experiences exert a profound and long-lasting influence on physical and mental health throughout life. In this module, we survey recent developments revealing epigenetic aspects of mental health and review some of the challenges of epigenetic approaches in psychology to help explain how nurture shapes nature. • 2.7: The Nature-Nurture Question People have a deep intuition about what has been called the “nature–nurture question.” Some aspects of our behavior feel as though they originate in our genetic makeup, while others feel like the result of our upbringing or our own hard work. Genes and environments always combine to produce behavior, and the real science is in the discovery of how they combine for a given behavior. 02: Biological Basis of Behavior By Diane Beck and Evelina Tapia University of Illinois at Urbana-Champaign, University of Illinois The human brain is responsible for all behaviors, thoughts, and experiences described in this textbook. This module provides an introductory overview of the brain, including some basic neuroanatomy, and brief descriptions of the neuroscience methods used to study it. Learning Objectives • Name and describe the basic function of the brain stem, cerebellum, and cerebral hemispheres. • Name and describe the basic function of the four cerebral lobes: occipital, temporal, parietal, and frontal cortex. • Describe a split-brain patient and at least two important aspects of brain function that these patients reveal. • Distinguish between gray and white matter of the cerebral hemispheres. • Name and describe the most common approaches to studying the human brain. • Distinguish among four neuroimaging methods: PET, fMRI, EEG, and DOI. • Describe the difference between spatial and temporal resolution with regard to brain function. Introduction Any textbook on psychology would be incomplete without reference to the brain. Every behavior, thought, or experience described in the other modules must be implemented in the brain. A detailed understanding of the human brain can help us make sense of human experience and behavior. 
For example, one well-established fact about human cognition is that it is limited. We cannot do two complex tasks at once: We cannot read and carry on a conversation at the same time, text and drive, or surf the Internet while listening to a lecture, at least not successfully or safely. We cannot even pat our head and rub our stomach at the same time (with exceptions, see “A Brain Divided”). Why is this? Many people have suggested that such limitations reflect the fact that the behaviors draw on the same resource; if one behavior uses up most of the resource, there is not enough left for the other. But what might this limited resource be in the brain?

The brain uses oxygen and glucose, delivered via the blood. The brain is a large consumer of these metabolites, using 20% of the oxygen and calories we consume despite being only 2% of our total weight. However, as long as we are not oxygen-deprived or malnourished, we have more than enough oxygen and glucose to fuel the brain. Thus, insufficient “brain fuel” cannot explain our limited capacity. Nor is it likely that our limitations reflect too few neurons. The average human brain contains about 100 billion neurons. It is also not the case that we use only 10% of our brain, a myth that was likely started to imply we had untapped potential. Modern neuroimaging (see “Studying the Human Brain”) has shown that we use all parts of the brain, just at different times, and certainly more than 10% at any one time.

If we have an abundance of brain fuel and neurons, how can we explain our limited cognitive abilities? Why can’t we do more at once? The most likely explanation is the way these neurons are wired up. We know, for instance, that many neurons in the visual cortex (the part of the brain responsible for processing visual information) are hooked up in such a way as to inhibit each other (Beck & Kastner, 2009). When one neuron fires, it suppresses the firing of other nearby neurons. If two neurons that are hooked up in an inhibitory way both fire, then neither neuron can fire as vigorously as it would otherwise. This competitive behavior among neurons limits how much visual information the brain can respond to at the same time. Similar kinds of competitive wiring among neurons may underlie many of our limitations. Thus, although talking about limited resources provides an intuitive description of our limited-capacity behavior, a detailed understanding of the brain suggests that our limitations more likely reflect the complex way in which neurons talk to each other rather than the depletion of any specific resource.

The Anatomy of the Brain
There are many ways to subdivide the mammalian brain, resulting in some inconsistent and ambiguous nomenclature over the history of neuroanatomy (Swanson, 2000). For simplicity, we will divide the brain into three basic parts: the brain stem, cerebellum, and cerebral hemispheres (see Figure 1.1.1). In Figure 1.1.2, however, we depict other prominent groupings (Swanson, 2000) of the six major subdivisions of the brain (Kandel, Schwartz, & Jessell, 2000).

Brain Stem
The brain stem is sometimes referred to as the “trunk” of the brain. It is responsible for many of the neural functions that keep us alive, including regulating our respiration (breathing), heart rate, and digestion. In keeping with its function, if a patient sustains severe damage to the brain stem he or she will require “life support” (i.e., machines are used to keep him or her alive).
Because of its vital role in survival, in many countries a person who has lost brain stem function is said to be “brain dead,” although other countries also require significant tissue loss in the cortex (of the cerebral hemispheres), which is responsible for our conscious experience, before making the same diagnosis. The brain stem includes the medulla, pons, midbrain, and diencephalon (which consists of the thalamus and hypothalamus). Collectively, these regions are also involved in our sleep–wake cycle, some sensory and motor function, as well as growth and other hormonal behaviors.

Cerebellum
The cerebellum is the distinctive structure at the back of the brain. The Greek philosopher and scientist Aristotle aptly referred to it as the “small brain” (“parencephalon” in Greek, “cerebellum” in Latin) in order to distinguish it from the “large brain” (“encephalon” in Greek, “cerebrum” in Latin). The cerebellum is critical for coordinated movement and posture. More recently, neuroimaging studies (see “Studying the Human Brain”) have implicated it in a range of cognitive abilities, including language. It is perhaps not surprising that the cerebellum’s influence extends beyond movement and posture, given that it contains the greatest number of neurons of any structure in the brain. However, the exact role it plays in these higher functions is still a matter of further study.

Cerebral Hemispheres
The cerebral hemispheres are responsible for our cognitive abilities and conscious experience. They consist of the cerebral cortex and accompanying white matter (“cerebrum” in Latin) as well as the subcortical structures of the basal ganglia, amygdala, and hippocampal formation. The cerebral cortex is the largest and most visible part of the brain, retaining the Latin name (cerebrum) for “large brain” that Aristotle coined. It consists of two hemispheres (literally two half spheres) and gives the brain its characteristic gray and convoluted appearance; the folds and grooves of the cortex are called gyri and sulci (gyrus and sulcus if referring to just one), respectively.

The two cerebral hemispheres can be further subdivided into four lobes: the occipital, temporal, parietal, and frontal lobes. The occipital lobe is responsible for vision, as is much of the temporal lobe. The temporal lobe is also involved in auditory processing, memory, and multisensory integration (e.g., the convergence of vision and audition). The parietal lobe houses the somatosensory (body sensations) cortex and structures involved in visual attention, as well as multisensory convergence zones. The frontal lobe houses the motor cortex and structures involved in motor planning, language, judgment, and decision-making. Not surprisingly then, the frontal lobe is proportionally larger in humans than in any other animal.

The subcortical structures are so named because they reside beneath the cortex. The basal ganglia are critical to voluntary movement and as such make contact with the cortex, the thalamus, and the brain stem. The amygdala and hippocampal formation are part of the limbic system, which also includes some cortical structures. The limbic system plays an important role in emotion and, in particular, in aversion and gratification.

A Brain Divided
The two cerebral hemispheres are connected by a dense bundle of white matter tracts called the corpus callosum. Some functions are replicated in the two hemispheres.
For example, both hemispheres are responsible for sensory and motor function, although the sensory and motor cortices have a contralateral (or opposite-side) representation; that is, the left cerebral hemisphere is responsible for movements and sensations on the right side of the body, and the right cerebral hemisphere is responsible for movements and sensations on the left side of the body. Other functions are lateralized; that is, they reside primarily in one hemisphere or the other. For example, for right-handed and the majority of left-handed individuals, the left hemisphere is most responsible for language.

There are some people whose two hemispheres are not connected, either because the corpus callosum was surgically severed (callosotomy) or due to a genetic abnormality. These split-brain patients have helped us understand the functioning of the two hemispheres. First, because of the contralateral representation of sensory information, if an object is placed in only the left or only the right visual hemifield, then only the right or left hemisphere, respectively, of the split-brain patient will see it. In essence, it is as though the person has two brains in his or her head, each seeing half the world. Interestingly, because language is very often localized in the left hemisphere, if we show the right hemisphere a picture and ask the patient what she saw, she will say she didn’t see anything (because only the left hemisphere can speak, and it didn’t see anything). However, we know that the right hemisphere sees the picture, because if the patient is asked to press a button whenever she sees the image, the left hand (which is controlled by the right hemisphere) will respond despite the left hemisphere’s denial that anything was there. There are also some advantages to having disconnected hemispheres. Unlike those with a fully functional corpus callosum, a split-brain patient can simultaneously search for something in his right and left visual fields (Luck, Hillyard, Mangun, & Gazzaniga, 1989) and can do the equivalent of rubbing his stomach and patting his head at the same time (Franz, Eliason, Ivry, & Gazzaniga, 1996). In other words, split-brain patients exhibit less competition between the hemispheres.

Gray Versus White Matter
The cerebral hemispheres contain both gray and white matter, so called because they appear grayish and whitish in dissections or in an MRI (magnetic resonance imaging; see “Studying the Human Brain”). The gray matter is composed of the neuronal cell bodies (see module, “Neurons”). The cell bodies (or soma) contain the genes of the cell and are responsible for metabolism (keeping the cell alive) and synthesizing proteins. In this way, the cell body is the workhorse of the cell. The white matter is composed of the axons of the neurons, and, in particular, axons that are covered with a sheath of myelin (fatty support cells that are whitish in color). Axons conduct the electrical signals from the cell and are, therefore, critical to cell communication. People use the expression “use your gray matter” when they want a person to think harder. The “gray matter” in this expression is probably a reference to the cerebral hemispheres more generally, the gray cortical sheet (the convoluted surface of the cortex) being the most visible part. However, both the gray matter and white matter are critical to proper functioning of the mind. Losses of either result in deficits in language, memory, reasoning, and other mental functions.
See Figure 1.1.3 for MRI slices showing both the outer gray cortical sheet and the inner white matter that connects its cell bodies.

Studying the Human Brain
How do we know what the brain does? We have gathered knowledge about the functions of the brain from many different methods. Each method is useful for answering distinct types of questions, but the strongest evidence for a specific role or function of a particular brain area is converging evidence; that is, similar findings reported from multiple studies using different methods.

One of the first organized attempts to study the functions of the brain was phrenology, a popular field of study in the first half of the 19th century. Phrenologists assumed that various features of the brain, such as its uneven surface, are reflected on the skull; therefore, they attempted to correlate bumps and indentations of the skull with specific functions of the brain. For example, they would claim that a very artistic person has ridges on the head that vary in size and location from those of someone who is very good at spatial reasoning. Although the assumption that the skull reflects the underlying brain structure has been proven wrong, phrenology nonetheless significantly impacted current-day neuroscience and its thinking about the functions of the brain: different parts of the brain are devoted to very specific functions that can be identified through scientific inquiry.

Neuroanatomy
Dissection of the brain, in either animals or cadavers, has been a critical tool of neuroscientists since 340 BC, when Aristotle first published his dissections. Since then this method has advanced considerably with the discovery of various staining techniques that can highlight particular cells. Because the brain can be sliced very thinly, examined under the microscope, and particular cells highlighted, this method is especially useful for studying specific groups of neurons or small brain structures; that is, it has a very high spatial resolution. Dissections allow scientists to study changes in the brain that occur due to various diseases or experiences (e.g., exposure to drugs or brain injuries). Virtual dissection studies with living humans are also conducted. Here, the brain is imaged using computerized axial tomography (CAT) or MRI scanners; they reveal with very high precision the various structures in the brain and can help detect changes in gray or white matter. These changes in the brain can then be correlated with behavior, such as performance on memory tests, and, therefore, implicate specific brain areas in certain cognitive functions.

Changing the Brain
Some researchers induce lesions or ablate (i.e., remove) parts of the brain in animals. If the animal’s behavior changes after the lesion, we can infer that the removed structure is important for that behavior. Lesions of human brains are studied in patient populations only; that is, patients who have lost a brain region due to a stroke or other injury, or who have had surgical removal of a structure to treat a particular disease (e.g., a callosotomy to control epilepsy, as in split-brain patients). From such case studies, we can infer brain function by measuring changes in the behavior of the patients before and after the lesion. Because the brain works by generating electrical signals, it is also possible to change brain function with electrical stimulation.
Transcranial magnetic stimulation (TMS) refers to a technique whereby a brief magnetic pulse is applied to the head that temporarily induces a weak electrical current in the brain. Although the effects of TMS are sometimes referred to as temporary virtual lesions, it is more appropriate to describe the induced electricity as interference with neurons’ normal communication with each other. TMS allows very precise study of when events in the brain happen, so it has good temporal resolution, but its application is limited to the surface of the cortex and cannot extend to deep areas of the brain.

Transcranial direct current stimulation (tDCS) is similar to TMS except that it uses electrical current directly, rather than inducing it with magnetic pulses, by placing small electrodes on the skull. A brain area is stimulated by a low current (equivalent to an AA battery) for a more extended period of time than in TMS. When used in combination with cognitive training, tDCS has been shown to improve performance of many cognitive functions such as mathematical ability, memory, attention, and coordination (e.g., Brasil-Neto, 2012; Feng, Bowden, & Kautz, 2013; Kuo & Nitsche, 2012).

Neuroimaging
Neuroimaging tools are used to study the brain in action; that is, when it is engaged in a specific task. Positron emission tomography (PET) records blood flow in the brain. The PET scanner detects a radioactive substance that is injected into the bloodstream of the participant just before or while he or she is performing some task (e.g., adding numbers). Because active neuron populations require metabolites, more blood and hence more radioactive substance flows into those regions. PET scanners detect the injected radioactive substance in specific brain regions, allowing researchers to infer that those areas were active during the task. Functional magnetic resonance imaging (fMRI) also relies on blood flow in the brain. This method, however, measures changes in oxygen levels in the blood and does not require any substance to be injected into the participant. Both of these tools have good spatial resolution (although not as precise as dissection studies), but because it takes at least several seconds for the blood to arrive at the active areas of the brain, PET and fMRI have poor temporal resolution; that is, they do not tell us very precisely when the activity occurred.

Electroencephalography (EEG), on the other hand, measures the electrical activity of the brain, and therefore it has much greater temporal resolution (millisecond precision rather than seconds) than PET or fMRI. As in tDCS, electrodes are placed on the participant’s head while he or she is performing a task. In this case, however, many more electrodes are used, and they measure rather than produce activity. Because the electrical activity picked up at any particular electrode can be coming from anywhere in the brain, EEG has poor spatial resolution; that is, we have only a rough idea of which part of the brain generates the measured activity.

Diffuse optical imaging (DOI) can give researchers the best of both worlds: high spatial and temporal resolution, depending on how it is used. Here, one shines infrared light into the brain and measures the light that comes back out. DOI relies on the fact that the properties of the light change when it passes through oxygenated blood, or when it encounters active neurons. Researchers can then infer from the properties of the collected light what regions in the brain were engaged by the task.
When DOI is set up to detect changes in blood oxygen levels, the temporal resolution is low and comparable to PET or fMRI. However, when DOI is set up to directly detect active neurons, it has both high spatial and high temporal resolution. Because the spatial and temporal resolution of each tool varies, the strongest evidence for the role a certain brain area serves comes from converging evidence. For example, we are more likely to believe that the hippocampal formation is involved in memory if multiple studies using a variety of tasks and different neuroimaging tools provide evidence for this hypothesis. The brain is a complex system, and only advances in brain research will show whether the brain can ever really understand itself.

Outside Resources
Video: Brain Bank at Harvard (National Geographic video)
http://video.nationalgeographic.com/video/science/health-human-body-sci/human-body/brain-bank-sci/
Video: Frontal Lobes and Behavior (video #25)
www.learner.org/resources/series142.html
Video: Organization and Evaluation of Human Brain Function video (video #1)
www.learner.org/resources/series142.html
Video: Videos of a split-brain patient
https://youtu.be/ZMLzP1VCANo
Video: Videos of a split-brain patient (video #5)
www.learner.org/resources/series142.html
Web: Atlas of the Human Brain: interactive demos and brain sections
http://www.thehumanbrain.info/
Web: Harvard University Human Brain Atlas: normal and diseased brain scans
http://www.med.harvard.edu/aanlib/home.html

Discussion Questions
1. In what ways does the segmentation of the brain into the brain stem, cerebellum, and cerebral hemispheres provide a natural division?
2. How has the study of split-brain patients been informative?
3. What is behind the expression “use your gray matter,” and why is it not entirely accurate?
4. Why is converging evidence the best kind of evidence in the study of brain function?
5. If you were interested in whether a particular brain area was involved in a specific behavior, what neuroscience methods could you use?
6. If you were interested in the precise time at which a particular brain process occurred, which neuroscience methods could you use?

Vocabulary
Ablation: Surgical removal of brain tissue.
Axial plane: See “horizontal plane.”
Basal ganglia: Subcortical structures of the cerebral hemispheres involved in voluntary movement.
Brain stem: The “trunk” of the brain, comprised of the medulla, pons, midbrain, and diencephalon.
Callosotomy: Surgical procedure in which the corpus callosum is severed (used to control severe epilepsy).
Case study: A thorough study of a patient (or a few patients) with naturally occurring lesions.
Cerebellum: The distinctive structure at the back of the brain, Latin for “small brain.”
Cerebral cortex: The outermost gray matter of the cerebrum; the distinctive convolutions characteristic of the mammalian brain.
Cerebral hemispheres: The cerebral cortex, underlying white matter, and subcortical structures.
Cerebrum: Usually refers to the cerebral cortex and associated white matter, but in some texts includes the subcortical structures.
Contralateral: Literally “opposite side”; used to refer to the fact that the two hemispheres of the brain process sensory information and motor commands for the opposite side of the body (e.g., the left hemisphere controls the right side of the body).
Converging evidence: Similar findings reported from multiple studies using different methods.
Coronal plane: A slice that runs from head to foot; brain slices in this plane are similar to slices of a loaf of bread, with the eyes being the front of the loaf.
Diffuse optical imaging (DOI): A neuroimaging technique that infers brain activity by measuring changes in light as it is passed through the skull and surface of the brain.
Electroencephalography (EEG): A neuroimaging technique that measures electrical brain activity via multiple electrodes on the scalp.
Frontal lobe: The frontmost (anterior) part of the cerebrum; anterior to the central sulcus and responsible for motor output and planning, language, judgment, and decision-making.
Functional magnetic resonance imaging (fMRI): A neuroimaging technique that infers brain activity by measuring changes in oxygen levels in the blood.
Gray matter: The outer grayish regions of the brain comprised of the neurons’ cell bodies.
Gyri (plural): Folds between sulci in the cortex.
Gyrus: A fold between sulci in the cortex.
Horizontal plane: A slice that runs horizontally through a standing person (i.e., parallel to the floor); slices of brain in this plane divide the top and bottom parts of the brain; this plane is similar to slicing a hamburger bun.
Lateralized: To the side; used to refer to the fact that specific functions may reside primarily in one hemisphere or the other (e.g., for the majority of individuals, the left hemisphere is most responsible for language).
Lesion: A region in the brain that suffered damage through injury, disease, or medical intervention.
Limbic system: Includes the subcortical structures of the amygdala and hippocampal formation as well as some cortical structures; responsible for aversion and gratification.
Metabolite: A substance necessary for a living organism to maintain life.
Motor cortex: Region of the frontal lobe responsible for voluntary movement; the motor cortex has a contralateral representation of the human body.
Myelin: Fatty tissue, produced by glial cells (see module, “Neurons”), that insulates the axons of the neurons; myelin is necessary for normal conduction of electrical impulses among neurons.
Nomenclature: Naming conventions.
Occipital lobe: The backmost (posterior) part of the cerebrum; involved in vision.
Parietal lobe: The part of the cerebrum between the frontal and occipital lobes; involved in bodily sensations, visual attention, and integrating the senses.
Phrenology: A now-discredited field of brain study, popular in the first half of the 19th century, that correlated bumps and indentations of the skull with specific functions of the brain.
Positron emission tomography (PET): A neuroimaging technique that measures brain activity by detecting the presence of a radioactive substance in the brain that is initially injected into the bloodstream and then pulled in by active brain tissue.
Sagittal plane: A slice that runs vertically from front to back; slices of brain in this plane divide the left and right sides of the brain; this plane is similar to slicing a baked potato lengthwise.
Somatosensory (body sensations) cortex: The region of the parietal lobe responsible for bodily sensations; the somatosensory cortex has a contralateral representation of the human body.
Spatial resolution: A term that refers to how small the elements of an image are; high spatial resolution means the device or technique can resolve very small elements; in neuroscience it describes how small a structure in the brain can be imaged.
Split-brain patient: A patient who has had most or all of his or her corpus callosum severed.
Subcortical: Structures that lie beneath the cerebral cortex but above the brain stem.
Sulci (plural): Grooves separating folds of the cortex.
Sulcus: A groove separating folds of the cortex.
Temporal lobe: The part of the cerebrum in front of (anterior to) the occipital lobe and below the lateral fissure; involved in vision, auditory processing, memory, and integrating vision and audition.
Temporal resolution: A term that refers to how small a unit of time can be measured; high temporal resolution means capable of resolving very small units of time; in neuroscience it describes how precisely in time a process can be measured in the brain.
Transcranial direct current stimulation (tDCS): A neuroscience technique that passes mild electrical current directly through a brain area by placing small electrodes on the skull.
Transcranial magnetic stimulation (TMS): A neuroscience technique whereby a brief magnetic pulse is applied to the head that temporarily induces a weak electrical current that interferes with ongoing activity.
Transverse plane: See “horizontal plane.”
Visual hemifield: The half of visual space (what we see) on one side of fixation (where we are looking); the left hemisphere is responsible for the right visual hemifield, and the right hemisphere is responsible for the left visual hemifield.
White matter: The inner whitish regions of the cerebrum comprised of the myelinated axons of neurons in the cerebral cortex.
By Aneeq Ahmad
Henderson State University

The mammalian nervous system is a complex biological organ, which enables many animals including humans to function in a coordinated fashion. The original design of this system is preserved across many animals through evolution; thus, adaptive physiological and behavioral functions are similar across many animal species. Comparative study of physiological functioning in the nervous systems of different animals lends insights into their behavior and their mental processing and makes it easier for us to understand the human brain and behavior. In addition, studying the development of the nervous system in a growing human provides a wealth of information about the change in its form and the behaviors that result from this change. The nervous system is divided into central and peripheral nervous systems, and the two heavily interact with one another. The peripheral nervous system controls volitional (somatic nervous system) and nonvolitional (autonomic nervous system) behaviors using cranial and spinal nerves. The central nervous system is divided into forebrain, midbrain, and hindbrain, and each division performs a variety of tasks; for example, the cerebral cortex in the forebrain houses sensory, motor, and associative areas that gather sensory information, process information for perception and memory, and produce responses based on incoming and inherent information. To study the nervous system, a number of methods have evolved over time; these methods include examining brain lesions, microscopy, electrophysiology, electroencephalography, and many scanning technologies.

Learning Objectives
• Describe and understand the development of the nervous system.
• Learn and understand the two important parts of the nervous system.
• Explain the two systems in the peripheral nervous system and what you know about the different regions and areas of the central nervous system.
• Learn and describe different techniques of studying the nervous system. Understand which of these techniques are important for cognitive neuroscientists.
• Describe the reasons for studying different nervous systems in animals other than human beings. Explain what lessons we learn from the evolutionary history of this organ.

Evolution of the Nervous System
Many scientists and thinkers (Cajal, 1937; Crick & Koch, 1990; Edelman, 2004) believe that the human nervous system is the most complex machine known to man. Its complexity points to one undeniable fact—that it has evolved slowly over time from simpler forms. Evolution of the nervous system is intriguing not only because we can marvel at this complicated biological structure, but also because it inherits a lineage from a long history of many less complex nervous systems (Figure 1.2.1) and documents a record of adaptive behaviors observed in life forms other than humans. Thus, evolutionary study of the nervous system is important, and it is the first step in understanding its design, its workings, and its functional interface with the environment.

The brains of some animals, like apes, monkeys, and rodents, are structurally similar to humans’ (Figure 1.2.1), while others are not (e.g., invertebrates, single-celled organisms). Does anatomical similarity of these brains suggest that behaviors that emerge in these species are also similar?
Indeed, many animals display behaviors that are similar to humans’; e.g., apes use nonverbal communication signals with their hands and arms that resemble nonverbal forms of communication in humans (Gardner & Gardner, 1969; Goodall, 1986; Knapp & Hall, 2009). If we study very simple behaviors, like physiological responses made by individual neurons, then brain-based behaviors of invertebrates (Kandel & Schwartz, 1982) look very similar to humans’, suggesting that from time immemorial such basic behaviors have been conserved in the brains of many simple animal forms and in fact are the foundation of more complex behaviors in animals that evolved later (Bullock, 1984).

Even at the micro-anatomical level, we note that individual neurons differ in complexity across animal species. Human neurons exhibit more intricate complexity than those of other animals; for example, neuronal processes (dendrites) in humans have many more branch points, branches, and spines. Complexity in the structure of the nervous system, at both the macro- and micro-levels, gives rise to complex behaviors. We can observe similar movements of the limbs, as in nonverbal communication, in apes and humans, but the variety and intricacy of nonverbal behaviors using hands in humans surpasses that of apes. Deaf individuals who use American Sign Language (ASL) express themselves nonverbally; they use this language with such fine gradation that many accents of ASL exist (Walker, 1987). Complexity of behavior with increasing complexity of the nervous system, especially the cerebral cortex, can be observed in the genus Homo (Figure 1.2.2). If we compare the sophistication of material culture in Homo habilis (2 million years ago; brain volume ~650 cm3) and Homo sapiens (300,000 years ago to now; brain volume ~1400 cm3), the evidence shows that Homo habilis used crude stone tools, whereas Homo sapiens uses modern tools to erect cities, develop written languages, embark on space travel, and study itself. All of this is due to the increasing complexity of the nervous system.

What has led to the complexity of the brain and nervous system through evolution, to its behavioral and cognitive refinement? Darwin (1859, 1871) proposed the two forces of natural and sexual selection as work engines behind this change. He prophesied that “psychology will be based on a new foundation, that of the necessary acquirement of each mental power and capacity by gradation”; that is, psychology will be based on evolution (Rosenzweig, Breedlove, & Leiman, 2002).

Development of the Nervous System
While the study of change in the nervous system over eons is immensely captivating, studying the change in a single brain during individual development is no less engaging. In many ways the ontogeny (development) of the nervous system in an individual mimics the evolutionary advancement of this structure observed across many animal species. During development, the nervous tissue emerges from the ectoderm (one of the three layers of the mammalian embryo) through the process of neural induction. This process causes the formation of the neural tube, which extends in a rostrocaudal (head-to-tail) plane. The tube, which is hollow, closes up along the rostrocaudal direction. In some disease conditions, the neural tube does not close caudally, which results in an abnormality called spina bifida. In this pathological condition, the lumbar and sacral segments of the spinal cord are disrupted.
As gestation progresses, the neural tube balloons up (cephalization) at the rostral end, and the forebrain, midbrain, hindbrain, and spinal cord can be visually delineated (day 40). About 50 days into gestation, six cephalic areas can be anatomically discerned (also see below for a more detailed description of these areas).

The progenitor cells (neuroblasts) that form the lining (neuroepithelium) of the neural tube generate all the neurons and glial cells of the central nervous system. During early stages of this development, neuroblasts rapidly divide and specialize into many varieties of neurons and glial cells, but this proliferation of cells is not uniform along the neural tube—that is why we see the forebrain and hindbrain expand into larger cephalic tissues than the midbrain. The neuroepithelium also generates a group of specialized cells that migrate outside the neural tube to form the neural crest. This structure gives rise to sensory and autonomic neurons in the peripheral nervous system.

The Structure of the Nervous System
The mammalian nervous system is divided into central and peripheral nervous systems.

The Peripheral Nervous System
The peripheral nervous system is divided into somatic and autonomic nervous systems (Figure 1.2.3). Whereas the somatic nervous system consists of cranial nerves (12 pairs) and spinal nerves (31 pairs) and is under the volitional control of the individual in maneuvering bodily muscles, the autonomic nervous system, which also runs through these nerves, allows the individual little control over the muscles and glands it serves. The main divisions of the autonomic nervous system that control visceral structures are the sympathetic and parasympathetic nervous systems. At an appropriate cue (say, a fear-inducing object like a snake), the sympathetic division generally energizes many muscles (e.g., heart) and glands (e.g., adrenals), causing activity and release of hormones that lead the individual to negotiate the fear-causing snake with fight-or-flight responses. Whether the individual decides to fight the snake or run away from it, either action requires energy; in short, the sympathetic nervous system says “go, go, go.” The parasympathetic nervous system, on the other hand, curtails undue energy mobilization into muscles and glands and modulates the response by saying “stop, stop, stop.” This push–pull tandem system regulates fight-or-flight responses in all of us.

The Central Nervous System
The central nervous system is divided into a number of important parts (see Figure 1.2.4), including the spinal cord, each specialized to perform a set of specific functions. The telencephalon, or cerebrum, is a newer development in the evolution of the mammalian nervous system. In humans, it is about the size of a large napkin, and when crumpled into the skull, it forms furrows called sulci (singular form, sulcus). The bulges between sulci are called gyri (singular form, gyrus). The cortex is divided into two hemispheres, and each hemisphere is further divided into four lobes (Figure 1.2.5a), which have specific functions. The division of these lobes is based on two delineating sulci: the central sulcus divides the hemisphere into frontal and parietal-occipital lobes, and the lateral sulcus marks the temporal lobe, which lies below.

Just in front of the central sulcus lies an area called the primary motor cortex (precentral gyrus), which connects to the muscles of the body and, on volitional command, moves them.
From mastication to movements in the genitalia, the body map is represented on this strip (Figure 1.2.6). Some body parts, like fingers, thumbs, and lips, occupy a greater representation on the strip than, say, the trunk. This disproportionate representation of the body on the primary motor cortex is called the magnification factor (Rolls & Cowey, 1970) and is seen in other motor and sensory areas as well. At the lower end of the central sulcus, close to the lateral sulcus, lies Broca’s area (Figure 1.2.8) in the left frontal lobe, which is involved with language production. Damage to this part of the brain led Pierre Paul Broca, a French neuroscientist, in 1861 to document many different forms of aphasias, in which his patients would lose the ability to speak or would retain partial speech impoverished in syntax and grammar (AAAS, 1880). It is no wonder that others have found subvocal rehearsal and central executive processes of working memory in this frontal lobe (Smith & Jonides, 1997, 1999).

Just behind the central sulcus, in the parietal lobe, lies the primary somatosensory cortex (Figure 1.2.7) on the postcentral gyrus, which represents the whole body, receiving inputs from the skin and muscles. The primary somatosensory cortex parallels, abuts, and connects heavily to the primary motor cortex and resembles it in terms of areas devoted to bodily representation. All spinal and some cranial nerves (e.g., the facial nerve) send sensory signals from skin (e.g., touch) and muscles to the primary somatosensory cortex. Close to the lower (ventral) end of this strip, curved inside the parietal lobe, is the taste area (secondary somatosensory cortex), which is involved with taste experiences that originate from the tongue, pharynx, epiglottis, and so forth.

Just below the parietal lobe, and under the caudal end of the lateral fissure, in the temporal lobe, lies Wernicke’s area (Demonet et al., 1992). This area is involved with language comprehension and is connected to Broca’s area through the arcuate fasciculus, nerve fibers that connect these two regions. Damage to Wernicke’s area (Figure 1.2.8) results in many kinds of agnosias; agnosia is defined as an inability to know or understand language and speech-related behaviors. So an individual may show word deafness, which is an inability to recognize spoken language, or word blindness, which is an inability to recognize written or printed language. Close in proximity to Wernicke’s area is the primary auditory cortex, which is involved with audition, and finally the brain region devoted to smell (olfaction) is tucked away inside the primary olfactory cortex (prepyriform cortex).

At the very back of the cerebral cortex lies the occipital lobe, housing the primary visual cortex. Optic nerves travel all the way to the thalamus (lateral geniculate nucleus, LGN) and then to the visual cortex, where the images received on the retina are projected (Hubel, 1995). In the past 50 to 60 years, the visual sense and visual pathways have been studied extensively, and our understanding of them has increased manifold. We now understand that all objects that form images on the retina are transformed (transduction) into neural language and handed down to the visual cortex for further processing.
In the visual cortex, all attributes (features) of the image, such as color, texture, and orientation, are decomposed and processed by different visual cortical modules (Van Essen, Anderson, & Felleman, 1992) and then recombined to give rise to a singular perception of the image in question.

If we cut the cerebral hemispheres down the middle, a new set of structures comes into view. Many of these perform functions vital to our being. For example, the limbic system contains a number of nuclei that process memory (hippocampus and fornix) and attention and emotions (cingulate gyrus); the globus pallidus is involved with motor movements and their coordination; and the hypothalamus and thalamus are involved with drives, motivations, and trafficking of sensory and motor throughputs. The hypothalamus plays a key role in regulating endocrine hormones in conjunction with the pituitary gland, which extends from the hypothalamus through a stalk (infundibulum).

As we descend below the thalamus, the midbrain comes into view, with the superior and inferior colliculi, which process visual and auditory information; the substantia nigra, which is involved with the notorious Parkinson’s disease; and the reticular formation, which regulates arousal, sleep, and temperature. A little lower lies the hindbrain, where the pons processes sensory and motor information employing the cranial nerves and works as a bridge that connects the cerebral cortex with the medulla, reciprocally transferring information back and forth between the brain and the spinal cord. The medulla oblongata processes breathing, digestion, heart and blood vessel function, swallowing, and sneezing. The cerebellum controls motor movement coordination, balance, equilibrium, and muscle tone. The midbrain and the hindbrain, which make up the brain stem, culminate in the spinal cord. Whereas in the cerebral cortex the gray matter (neuronal cell bodies) lies outside and the white matter (myelinated axons) inside, in the spinal cord this arrangement reverses: the gray matter resides inside and the white matter outside. Paired nerves (with their associated ganglia) exit the spinal cord, some directed towards the back (dorsal) and others towards the front (ventral). The dorsal (afferent) nerves receive sensory information from skin and muscles, and the ventral (efferent) nerves send signals to muscles and organs to respond.

Studying the Nervous System
The study of the nervous system involves anatomical and physiological techniques that have improved over the years in efficiency and caliber. Clearly, the gross morphology of the nervous system can be studied with an eye-level view of the brain and the spinal cord. However, to resolve its minute components, optical and electron microscopic techniques are needed. Light microscopes and, later, electron microscopes have changed our understanding of the intricate connections that exist among nerve cells. For example, modern staining procedures (immunocytochemistry) make it possible to see selected neurons that are of one type or another or are affected by growth. With the better resolution of electron microscopes, fine structures like the synaptic cleft between pre- and post-synaptic neurons can be studied in detail. Along with these neuroanatomical techniques, a number of other methodologies aid neuroscientists in studying the function and physiology of the nervous system.
Early on, lesion studies in animals (and the study of neurological damage in humans) provided information about the function of the nervous system: by ablating (removing) parts of the nervous system or using neurotoxins to destroy them, researchers could document the effects on behavior or mental processes. Later, more sophisticated microelectrode techniques were introduced, which led to recording from single neurons in animal brains and investigating their physiological functions. Such studies led to theories about how sensory and motor information are processed in the brain. To study many neurons (millions of them at a time), electroencephalographic (EEG) techniques were introduced. These methods are used to study how large ensembles of neurons, representing different parts of the nervous system, function together, either with stimulation (event-related potentials) or without it.

In addition, many scanning techniques that visualize the brain are used in conjunction with the methods mentioned above to understand the details of the structure and function of the brain. These include computerized axial tomography (CAT), which uses X-rays to capture many pictures of the brain and sandwiches them into 3-D models for study. The resolution of this method is inferior to magnetic resonance imaging (MRI), which is yet another way to capture brain images, using large magnets that bobble (precession) hydrogen nuclei in the brain. Although the resolution of MRI scans is much better than that of CAT scans, MRI does not provide any functional information about the brain. Positron emission tomography (PET) involves the acquisition of physiologic (functional) images of the brain based on the detection of positrons. Radio-labeled isotopes of certain chemicals, such as an analog of glucose (fluorodeoxyglucose), enter active nerve cells and emit positrons, which are captured and mapped into scans. Such scans show how the brain and its many modules become active (or not) when energized by the entering glucose analog. Disadvantages of PET scans include being invasive and having relatively poor spatial resolution. The latter is why modern PET machines are coupled with CAT scanners to gain better resolution of the functioning brain. Finally, to avoid the invasiveness of PET, functional MRI (fMRI) techniques were developed. Brain images based on the fMRI technique visualize brain function via changes in the flow of blood in brain areas over time. These scans provide a wealth of functional information about the brain while the individual engages in a task, which is why the last two methods of brain scanning are very popular among cognitive neuroscientists.

Understanding the nervous system has been a long journey of inquiry, spanning several hundred years of meticulous studies carried out by some of the most creative and versatile investigators in the fields of philosophy, evolution, biology, physiology, anatomy, neurology, neuroscience, cognitive sciences, and psychology. Despite our profound understanding of this organ, its mysteries continue to surprise us, and its intricacies make us marvel at this complex structure unmatched in the universe.

Outside Resources
Video: Pt. 1 video on the anatomy of the nervous system
Video: Pt. 2 video on the anatomy of the nervous system
Video: To look at functions of the brain and neurons, watch
Web: To look at different kinds of brains, visit http://brainmuseum.org/

Discussion Questions
1. Why is it important to study the nervous system in an evolutionary context?
2. How can we compare changes in the nervous system made through evolution to changes made during development?
3. What are the similarities and differences between the somatic and autonomic nervous systems?
4. Describe functions of the midbrain and hindbrain.
5. Describe the anatomy and functions of the forebrain.
6. Compare and contrast electroencephalograms to electrophysiological techniques.
7. Which brain scan methodologies are important for cognitive scientists? Why?

Vocabulary
Afferent nerves: Nerves that carry messages to the brain or spinal cord.
Agnosias: An inability to recognize objects, words, or faces, due to damage to Wernicke’s area.
Aphasia: An inability to produce or understand words, due to damage to Broca’s area.
Arcuate fasciculus: A fiber tract that connects Wernicke’s and Broca’s speech areas.
Autonomic nervous system: A part of the peripheral nervous system that connects to glands and smooth muscles; consists of sympathetic and parasympathetic divisions.
Broca’s area: An area in the frontal lobe of the left hemisphere implicated in language production.
Central sulcus: The major fissure that divides the frontal and parietal lobes.
Cerebellum: A nervous system structure behind and below the cerebrum; controls motor movement coordination, balance, equilibrium, and muscle tone.
Cerebrum: Consists of left and right hemispheres that sit at the top of the nervous system and engage in a variety of higher-order functions.
Cingulate gyrus: A medial cortical portion of the nervous tissue that is a part of the limbic system.
Computerized axial tomography: A noninvasive brain-scanning procedure that uses X-ray absorption around the head.
Ectoderm: The outermost layer of a developing fetus.
Efferent nerves: Nerves that carry messages from the brain to glands and organs in the periphery.
Electroencephalography: A technique used to measure gross electrical activity of the brain by placing electrodes on the scalp.
Event-related potentials: A physiological measure of large electrical changes in the brain produced by sensory stimulation or motor responses.
Forebrain: A part of the nervous system that contains the cerebral hemispheres, thalamus, and hypothalamus.
Fornix (plural form, fornices): A nerve fiber tract that connects the hippocampus to the mammillary bodies.
Frontal lobe: The most forward region (close to the forehead) of the cerebral hemispheres.
Functional magnetic resonance imaging (fMRI): A noninvasive brain-imaging technique that registers changes in blood flow in the brain during a given task (also see magnetic resonance imaging).
Globus pallidus: A nucleus of the basal ganglia.
Gray matter: Composes the bark, or cortex, of the cerebrum; consists of the cell bodies of the neurons (see also white matter).
Gyrus (plural form, gyri): A bulge that is raised between or among fissures of the convoluted brain.
Hippocampus (plural form, hippocampi): A nucleus inside (medial to) the temporal lobe implicated in learning and memory.
Homo habilis: A human ancestor, “handy man,” that lived two million years ago.
Homo sapiens: Modern man, the only surviving form of the genus Homo.
Hypothalamus: Part of the diencephalon; regulates biological drives with the pituitary gland.
Immunocytochemistry: A method of staining tissue, including the brain, using antibodies.
Lateral geniculate nucleus (LGN): A nucleus in the thalamus that is innervated by the optic nerves and sends signals to the visual cortex in the occipital lobe.
Lateral sulcus: The major fissure that delineates the temporal lobe below the frontal and parietal lobes.
Lesion studies: A surgical method in which a part of the animal brain is removed to study its effects on behavior or function.
Limbic system: A loosely defined network of nuclei in the brain involved with learning and emotion.
Magnetic resonance imaging (MRI): A noninvasive brain-imaging technique that uses magnetic energy to generate brain images (also see fMRI).
Magnification factor: Cortical space projected by an area of sensory input (e.g., mm of cortex per degree of visual field).
Medulla oblongata: An area just above the spinal cord that processes breathing, digestion, heart and blood vessel function, swallowing, and sneezing.
Neural crest: A set of primordial neurons that migrate outside the neural tube and give rise to sensory and autonomic neurons in the peripheral nervous system.
Neural induction: A process that causes the formation of the neural tube.
Neuroblasts: Brain progenitor cells that asymmetrically divide into other neuroblasts or nerve cells.
Neuroepithelium: The lining of the neural tube.
Occipital lobe: The back part of the cerebrum, which houses the visual areas.
Parasympathetic nervous system: A division of the autonomic nervous system that is slower than its counterpart, the sympathetic nervous system, and works in opposition to it; generally engaged in “rest and digest” functions.
Parietal lobe: An area of the cerebrum just behind the central sulcus that is engaged with somatosensory and gustatory sensation.
Pons: A bridge that connects the cerebral cortex with the medulla and reciprocally transfers information back and forth between the brain and the spinal cord.
Positron emission tomography (PET): An invasive procedure that captures brain images via positron emissions from the brain after the individual has been injected with radio-labeled isotopes.
Primary motor cortex: A strip of cortex just in front of the central sulcus that is involved with motor control.
Primary somatosensory cortex: A strip of cerebral tissue just behind the central sulcus engaged in sensory reception of bodily sensations.
Rostrocaudal: A front-back plane used to identify anatomical structures in the body and the brain.
Somatic nervous system: A part of the peripheral nervous system that uses cranial and spinal nerves in volitional actions.
Spina bifida: A developmental disease of the spinal cord in which the neural tube does not close caudally.
Sulcus (plural form, sulci): The crevices or fissures formed by convolutions in the brain.
Sympathetic nervous system: A division of the autonomic nervous system that is faster than its counterpart, the parasympathetic nervous system, and works in opposition to it; generally engaged in “fight or flight” functions.
Temporal lobe: An area of the cerebrum that lies below the lateral sulcus; it contains auditory and olfactory (smell) projection regions.
Thalamus: A part of the diencephalon that works as a gateway for incoming and outgoing information.
Transduction: A process in which physical energy is converted into neural energy.
Wernicke’s area: A language area in the temporal lobe where linguistic information is comprehended (also see Broca’s area).
White matter: Regions of the nervous system that represent the axons of the nerve cells; whitish in color because of myelination of the nerve cells.
Working memory: Short transitory memory processed in the hippocampus.
By David M. Buss
University of Texas at Austin

Evolution, or change over time, occurs through the processes of natural and sexual selection. In response to problems in our environment, we adapt both physically and psychologically to ensure our survival and reproduction. Sexual selection theory describes how evolution has shaped us to provide a mating advantage rather than just a survival advantage; it occurs through two distinct pathways: intrasexual competition and intersexual selection. Gene selection theory, the modern explanation behind evolutionary biology, occurs through the desire for gene replication. Evolutionary psychology connects evolutionary principles with modern psychology and focuses primarily on psychological adaptations: changes in the way we think in order to improve our survival. Two major evolutionary psychological theories are described: Sexual strategies theory describes the psychology of human mating strategies and the ways in which women and men differ in those strategies. Error management theory describes the evolution of biases in the way we think about everything.

Learning objectives
• Learn what “evolution” means.
• Define the primary mechanisms by which evolution takes place.
• Identify the two major classes of adaptations.
• Define sexual selection and its two primary processes.
• Define gene selection theory.
• Understand psychological adaptations.
• Identify the core premises of sexual strategies theory.
• Identify the core premises of error management theory, and provide two empirical examples of adaptive cognitive biases.

Introduction
If you have ever been on a first date, you’re probably familiar with the anxiety of trying to figure out what clothes to wear or what perfume or cologne to put on. In fact, you may even consider flossing your teeth for the first time all year. When considering why you put in all this work, you probably recognize that you’re doing it to impress the other person. But how did you learn these particular behaviors? Where did you get the idea that a first date should be at a nice restaurant or someplace unique? It is possible that we have been taught these behaviors by observing others. It is also possible, however, that these behaviors—the fancy clothes, the expensive restaurant—are biologically programmed into us. That is, just as peacocks display their feathers to show how attractive they are, or some lizards do push-ups to show how strong they are, when we style our hair or bring a gift to a date, we’re trying to communicate to the other person: “Hey, I’m a good mate! Choose me! Choose me!”

However, we all know that our ancestors hundreds of thousands of years ago weren’t driving sports cars or wearing designer clothes to attract mates. So how could someone ever say that such behaviors are “biologically programmed” into us? Well, even though our ancestors might not have been doing these specific actions, these behaviors are the result of the same driving force: the powerful influence of evolution. Yes, evolution—certain traits and behaviors developing over time because they are advantageous to our survival. In the case of dating, doing something like offering a gift might represent more than a nice gesture. Just as chimpanzees will give food to mates to show they can provide for them, when you offer gifts to your dates, you are communicating that you have the money or “resources” to help take care of them.
And even though the person receiving the gift may not realize it, the same evolutionary forces are influencing his or her behavior as well. The receiver evaluates not only the gift but also the gift-giver’s clothes, physical appearance, and many other qualities, to determine whether the individual is a suitable mate. But because these evolutionary processes are hardwired into us, it is easy to overlook their influence.

To broaden your understanding of evolutionary processes, this module will present some of the most important elements of evolution as they impact psychology. Evolutionary theory helps us piece together the story of how we humans have prospered. It also helps to explain why we behave as we do on a daily basis in our modern world: why we bring gifts on dates, why we get jealous, why we crave our favorite foods, why we protect our children, and so on. Evolution may seem like a historical concept that applies only to our ancient ancestors but, in truth, it is still very much a part of our modern daily lives.

Basics of Evolutionary Theory
Evolution simply means change over time. Many think of evolution as the development of traits and behaviors that allow us to survive this “dog-eat-dog” world, like strong leg muscles to run fast, or fists to punch and defend ourselves. However, physical survival is only important if it eventually contributes to successful reproduction. That is, even if you live to be 100 years old, if you fail to mate and produce children, your genes will die with your body. Thus, reproductive success, not survival success, is the engine of evolution by natural selection. Every mating success by one person means the loss of a mating opportunity for another. Yet every living human being is an evolutionary success story. Each of us is descended from a long and unbroken line of ancestors who triumphed over others in the struggle to survive (at least long enough to mate) and reproduce. In order for our genes to endure over time—to survive harsh climates, to defeat predators—we have inherited adaptive physical and psychological processes designed to ensure success.

At the broadest level, we can think of organisms, including humans, as having two large classes of adaptations—traits and behaviors that evolved over time to increase our reproductive success. The first class of adaptations is called survival adaptations: mechanisms that helped our ancestors handle the “hostile forces of nature.” For example, in order to survive very hot temperatures, we developed sweat glands to cool ourselves. In order to survive very cold temperatures, we developed shivering mechanisms (the speedy contraction and expansion of muscles to produce warmth). Other examples of survival adaptations include a craving for fats and sugars, which encouraged our ancestors to seek out energy-rich foods that kept them going longer during food shortages. Some threats, such as snakes, spiders, darkness, heights, and strangers, often produce fear in us, which encourages us to avoid them and thereby stay safe. These are also examples of survival adaptations. All of these adaptations, however, are for physical survival, whereas the second class of adaptations are for reproduction and help us compete for mates. These adaptations are described in an evolutionary theory proposed by Charles Darwin, called sexual selection theory.
Sexual Selection Theory
Darwin noticed that there were many traits and behaviors of organisms that could not be explained by “survival selection.” For example, the brilliant plumage of peacocks should actually lower their rates of survival. That is, the peacocks’ feathers act like a neon sign to predators, advertising “Easy, delicious dinner here!” But if these bright feathers only lower peacocks’ chances at survival, why do they have them? The same can be asked of similar characteristics of other animals, such as the large antlers of male stags or the wattles of roosters, which also seem unfavorable to survival. Again, if these traits only make the animals less likely to survive, why did they develop in the first place? And how have these animals continued to survive with these traits over thousands and thousands of years?

Darwin’s answer to this conundrum was the theory of sexual selection: the evolution of characteristics not because of survival advantage but because of mating advantage. Sexual selection occurs through two processes. The first, intrasexual competition, occurs when members of one sex compete against each other, and the winner gets to mate with a member of the opposite sex. Male stags, for example, battle with their antlers, and the winner (often the stronger one with larger antlers) gains mating access to the female. That is, even though large antlers make it harder for the stags to run through the forest and evade predators (which lowers their survival success), they provide the stags with a better chance of attracting a mate (which increases their reproductive success). Similarly, human males sometimes compete against each other in physical contests: boxing, wrestling, karate, or group-on-group sports, such as football. Even though engaging in these activities poses a “threat” to their survival success, as with the stag, the victors are often more attractive to potential mates, increasing their reproductive success. Thus, whatever qualities lead to success in intrasexual competition are then passed on with greater frequency due to their association with greater mating success.

The second process of sexual selection is preferential mate choice, also called intersexual selection. In this process, if members of one sex are attracted to certain qualities in mates—such as brilliant plumage, signs of good health, or even intelligence—those desired qualities get passed on in greater numbers, simply because their possessors mate more often. For example, the colorful plumage of peacocks exists due to a long evolutionary history of peahens (the term for female peafowl) being attracted to males with brilliantly colored feathers.

In all sexually reproducing species, adaptations in both sexes (males and females) exist due to survival selection and sexual selection. However, unlike other animals, where one sex often has dominant control over mate choice, humans have “mutual mate choice.” That is, both women and men typically have a say in choosing their mates. And both sexes value qualities such as kindness, intelligence, and dependability that are beneficial to long-term relationships—qualities that make good partners and good parents.

Gene Selection Theory
In modern evolutionary theory, all evolutionary processes boil down to an organism’s genes. Genes are the basic “units of heredity,” or the information that is passed along in DNA that tells the cells and molecules how to “build” the organism and how that organism should behave.
Genes that are better able to encourage the organism to reproduce, and thus replicate themselves in the organism’s offspring, have an advantage over competing genes that are less able. For example, take female sloths: In order to attract a mate, they will scream as loudly as they can, to let potential mates know where they are in the thick jungle. Now, consider two types of genes in female sloths: one gene that allows them to scream extremely loudly, and another that only allows them to scream moderately loudly. In this case, the sloth with the gene that allows her to shout louder will attract more mates—increasing reproductive success—which makes it more likely that her genes are passed on than those of the quieter sloth.

Essentially, genes can boost their own replicative success in two basic ways. First, they can influence the odds for survival and reproduction of the organism they are in (individual reproductive success, or fitness—as in the example with the sloths). Second, genes can also influence the organism to help other organisms who also likely contain those genes—known as “genetic relatives”—to survive and reproduce (which is called inclusive fitness). For example, why do human parents tend to help their own kids with the financial burdens of a college education and not the kids next door? Well, having a college education increases one’s attractiveness to potential mates, which increases one’s likelihood of reproducing and passing on genes. And because parents’ genes are in their own children (and not the neighborhood children), funding their children’s educations increases the likelihood that the parents’ genes will be passed on.

Understanding gene replication is the key to understanding modern evolutionary theory. It also fits well with many evolutionary psychological theories. However, for the time being, we’ll ignore genes and focus primarily on actual adaptations that evolved because they helped our ancestors survive and/or reproduce.

Evolutionary Psychology
Evolutionary psychology aims the lens of modern evolutionary theory on the workings of the human mind. It focuses primarily on psychological adaptations: mechanisms of the mind that have evolved to solve specific problems of survival or reproduction. These kinds of adaptations stand in contrast to physiological adaptations, which occur in the body as a consequence of one’s environment.

One example of a physiological adaptation is how our skin forms calluses. First, there is an “input,” such as repeated friction to the skin on the bottom of our feet from walking. Second, there is a “procedure,” in which the skin grows new skin cells at the affected area. Third, an actual callus forms as an “output” to protect the underlying tissue—the final outcome of the physiological adaptation (i.e., tougher skin to protect repeatedly scraped areas). On the other hand, a psychological adaptation is a development or change of a mechanism in the mind. For example, take sexual jealousy. First, there is an “input,” such as a romantic partner flirting with a rival. Second, there is a “procedure,” in which the person evaluates the threat the rival poses to the romantic relationship. Third, there is a behavioral output, which might range from vigilance (e.g., snooping through a partner’s email) to violence (e.g., threatening the rival). Evolutionary psychology is fundamentally an interactionist framework: a theory that explains behavior as the joint product of environmental triggers and evolved mechanisms of the mind.
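Because psychological adaptations are conceptualized as information-processing devices, the input-procedure-output schema above can be summarized in a short illustrative sketch. This is a toy model only: the function names, the threshold values, and the output categories below are invented for illustration and are not part of any published model of jealousy.

```python
# Illustrative sketch only: a toy input-procedure-output model of a
# psychological adaptation (sexual jealousy). All names, thresholds,
# and output categories are invented for illustration.

def evaluate_threat(rival_attractiveness: float, partner_receptiveness: float) -> float:
    """Procedure: appraise how much a rival threatens the relationship (0-1)."""
    return min(1.0, rival_attractiveness * partner_receptiveness)

def jealousy_response(rival_attractiveness: float, partner_receptiveness: float) -> str:
    """Input -> procedure -> output, mirroring the callus example in the text."""
    threat = evaluate_threat(rival_attractiveness, partner_receptiveness)
    if threat < 0.2:
        return "no response"      # low threat: ignore the event
    elif threat < 0.6:
        return "vigilance"        # moderate threat: monitor the partner
    else:
        return "mate guarding"    # high threat: active intervention

print(jealousy_response(0.9, 0.8))  # high threat -> "mate guarding"
print(jealousy_response(0.3, 0.2))  # low threat  -> "no response"
```

The sketch makes one point visible: the same input can produce different outputs depending on the evaluated context, which is exactly what the interactionist framework claims.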
The point of the interactionist framework is that jealousy, like a callus, doesn’t simply pop up out of nowhere. There is an “interaction” between the environmental trigger (e.g., the flirting; the repeated rubbing of the skin) and the initial response (e.g., evaluation of the flirter’s threat; the forming of new skin cells) to produce the outcome.

In evolutionary psychology, culture also has a major effect on psychological adaptations. For example, status within one’s group is important in all cultures for achieving reproductive success, because higher status makes someone more attractive to mates. In individualistic cultures, such as the United States, status is heavily determined by individual accomplishments. But in more collectivist cultures, such as Japan, status is more heavily determined by contributions to the group and by that group’s success. For example, consider a group project. If you were to put in most of the effort on a successful group project, the culture in the United States reinforces the psychological adaptation to try to claim that success for yourself (because individual achievements are rewarded with higher status). However, the culture in Japan reinforces the psychological adaptation to attribute that success to the whole group (because collective achievements are rewarded with higher status). Another example of cultural input is the importance of virginity as a desirable quality for a mate. Cultural norms that advise against premarital sex persuade people to set aside their immediate sexual interests because they know that virginity will make them more attractive marriage partners.

Evolutionary psychology, in short, does not predict rigid, robot-like “instincts.” That is, there isn’t one rule that works all the time. Rather, evolutionary psychology studies flexible, environmentally connected and culturally influenced adaptations that vary according to the situation. Psychological adaptations are hypothesized to be wide-ranging, and include food preferences, habitat preferences, mate preferences, and specialized fears. These psychological adaptations also include many traits that improve people’s ability to live in groups, such as the desire to cooperate and make friends, or the inclination to spot and avoid frauds, punish rivals, establish status hierarchies, nurture children, and help genetic relatives. Research programs in evolutionary psychology develop and empirically test predictions about the nature of psychological adaptations. Below, we highlight a few evolutionary psychological theories and their associated research approaches.

Sexual Strategies Theory
Sexual strategies theory is based on sexual selection theory. It proposes that humans have evolved a menu of different mating strategies, both short-term and long-term, that vary depending on culture, social context, parental influence, and personal mate value (desirability in the “mating market”). In its initial formulation, sexual strategies theory focused on the differences between men and women in mating preferences and strategies (Buss & Schmitt, 1993). It started by looking at the minimum parental investment needed to produce a child. For women, even the minimum investment is significant: after becoming pregnant, they have to carry the child for nine months. For men, on the other hand, the minimum investment to produce the same child is considerably smaller—simply the act of sex. These differences in parental investment have an enormous impact on sexual strategies.
For a woman, the risks associated with making a poor mating choice are high. She might get pregnant by a man who will not help to support her and her children, or who might have poor-quality genes. And because the stakes are higher for a woman, wise mating decisions are much more valuable for her. For men, on the other hand, the need to focus on making wise mating decisions isn’t as pressing. That is, unlike women, men (1) don’t biologically have the child growing inside of them for nine months, and (2) do not face as strong a cultural expectation to raise the child. This logic leads to a powerful set of predictions: In short-term mating, women will likely be choosier than men (because the costs of getting pregnant are so high), while men, on average, will likely engage in more casual sexual activities (because this cost is greatly lessened). Because of this, men will sometimes deceive women about their long-term intentions for the benefit of short-term sex, and men are more likely than women to lower their mating standards for short-term mating situations.

An extensive body of empirical evidence supports these and related predictions (Buss & Schmitt, 2011). Men express a desire for a larger number of sex partners than women do. They let less time elapse before seeking sex. They are more willing to consent to sex with strangers and are less likely to require emotional involvement with their sex partners. They have more frequent sexual fantasies and fantasize about a larger variety of sex partners. They are more likely to regret missed sexual opportunities. And they lower their standards in short-term mating, showing a willingness to mate with a larger variety of women as long as the costs and risks are low.

However, in situations where both the man and woman are interested in long-term mating, both sexes tend to invest substantially in the relationship and in their children. In these cases, the theory predicts that both sexes will be extremely choosy when pursuing a long-term mating strategy. Much empirical research supports this prediction as well. In fact, the qualities women and men generally look for when choosing long-term mates are very similar: both want mates who are intelligent, kind, understanding, healthy, dependable, honest, loyal, loving, and adaptable.

Nonetheless, women and men do differ in their preferences for a few key qualities in long-term mating, because of somewhat distinct adaptive problems. Women have inherited evolved preferences for mates who possess resources, who have qualities linked with acquiring resources (e.g., ambition, wealth, industriousness), and who are willing to share those resources with them. Men, on the other hand, more strongly desire youth and health in women, as both are cues to fertility. These male and female differences are universal in humans. They were first documented in 37 different cultures, from Australia to Zambia (Buss, 1989), and have been replicated by dozens of researchers in dozens of additional cultures (for summaries, see Buss, 2012). Of course, simply having these mating preferences (e.g., for men with resources; for fertile women) does not mean that people always get what they want. Countless other factors influence whom people ultimately select as their mates.
For example, the sex ratio (the ratio of men to women in the mating pool), cultural practices (such as arranged marriages, which inhibit individuals’ freedom to act on their preferred mating strategies), the strategies of others (e.g., if everyone else is pursuing short-term sex, it’s more difficult to pursue a long-term mating strategy), and many other factors all influence whom we select as our mates. Sexual strategies theory—anchored in sexual selection theory—predicts specific similarities and differences in men’s and women’s mating preferences and strategies. Whether we seek short-term or long-term relationships, many personality, social, cultural, and ecological factors will all influence who our partners will be.

Error Management Theory
Error management theory (EMT) deals with the evolution of how we think, make decisions, and evaluate uncertain situations—that is, situations where there’s no clear answer for how we should behave (Haselton & Buss, 2000; Haselton, Nettle, & Andrews, 2005). Consider, for example, walking through the woods at dusk. You hear a rustle in the leaves on the path in front of you. It could be a snake. Or, it could just be the wind blowing the leaves. Because you can’t really tell why the leaves rustled, it’s an uncertain situation. The important question then is, what are the costs of errors in judgment? If you conclude that it’s a dangerous snake and avoid the leaves, the cost is minimal: you simply make a short detour around them. However, if you assume the leaves are safe and simply walk over them—when in fact it is a dangerous snake—the decision could cost you your life.

Now, think about our evolutionary history and how generation after generation was confronted with similar decisions, where one option carried a small cost and a large potential benefit (walking around the leaves and not getting bitten) and the other offered a small saving but a potentially catastrophic cost (walking through the leaves and getting bitten). These kinds of choices are called “cost asymmetries.” If during our evolutionary history we encountered decisions like these generation after generation, over time an adaptive bias would be created: we would make sure to err in favor of the least costly (in this case, least dangerous) option (e.g., walking around the leaves). To put it another way, EMT predicts that whenever uncertain situations present us with a safer versus a more dangerous decision, we will psychologically adapt to prefer choices that minimize the cost of errors.

EMT is a general evolutionary psychological theory that can be applied to many different domains of our lives, but a specific example of it is the visual descent illusion. To illustrate: Have you ever thought it would be no problem to jump off of a ledge, but as soon as you stood up there, it suddenly looked much higher than you thought? The visual descent illusion (Jackson & Cormack, 2008) states that people will overestimate the distance when looking down from a height (compared to looking up), so that people will be especially wary of falling from great heights—which would result in injury or death. Another example of EMT is the auditory looming bias: Have you ever noticed how an ambulance seems closer when it’s coming toward you, but suddenly seems far away once it has passed? With the auditory looming bias, people overestimate how close objects are when the sound is moving toward them compared to when it is moving away from them. From our evolutionary history, humans learned, “It’s better to be safe than sorry.” The expected-cost logic behind these biases can be made concrete with the simple calculation sketched below.
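The following back-of-the-envelope sketch shows why a biased rule can outperform an unbiased one when error costs are asymmetric. The probability and cost values are invented purely for illustration; EMT makes no claim about these specific numbers.

```python
# Illustrative sketch only: why a "paranoid" rule can beat an accurate one
# when error costs are asymmetric. All numbers are invented for illustration.

P_SNAKE = 0.01        # rustling leaves rarely hide a snake
COST_DETOUR = 1.0     # walking around the leaves: a minor inconvenience
COST_BITE = 1000.0    # walking onto a snake: potentially fatal

# Expected cost of always detouring (treat every rustle as a snake):
always_avoid = COST_DETOUR

# Expected cost of never detouring (treat every rustle as wind):
never_avoid = P_SNAKE * COST_BITE

print(f"always avoid: {always_avoid:.1f}")  # 1.0
print(f"never avoid:  {never_avoid:.1f}")   # 10.0

# Even though the "snake" judgment is wrong 99% of the time, the biased
# rule has the lower expected cost, so selection can favor erring that way.
```

Under these illustrative numbers, the cautious rule wins by a factor of ten even though it is almost always factually wrong, which is the core insight of error management.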
Given the looming bias, if we perceive a threat as closer when it is moving toward us (because it sounds louder), we will be quicker to act and escape. In this regard, there may be times we run away when we didn’t need to (a false alarm), but wasting that time is a less costly mistake than failing to act when a real threat does exist.

EMT has also been used to predict adaptive biases in the domain of mating. Consider something as simple as a smile. In one case, a smile from a potential mate could be a sign of sexual or romantic interest. On the other hand, it may just signal friendliness. Because of the costs to men of missing out on chances for reproduction, EMT predicts that men have a sexual overperception bias: they often misread sexual interest from a woman when really it’s just a friendly smile or touch. In the mating domain, the sexual overperception bias is one of the best-documented phenomena. It has been shown in studies in which men and women rated the sexual interest between people in photographs and videotaped interactions. It has also been shown in the laboratory with participants engaging in actual “speed dating,” where the men interpreted sexual interest from the women more often than the women actually intended it (Perilloux, Easton, & Buss, 2012). In short, EMT predicts that men, more than women, will over-infer sexual interest based on minimal cues, and empirical research confirms this adaptive mating bias.

Conclusion
Sexual strategies theory and error management theory are two evolutionary psychological theories that have received much empirical support from dozens of independent researchers. But there are many other evolutionary psychological theories, such as social exchange theory, that also make predictions about our modern-day behavior and preferences. The merits of each evolutionary psychological theory, however, must be evaluated separately and treated like any scientific theory. That is, we should only trust their predictions and claims to the extent they are supported by scientific studies. Moreover, even if a theory is scientifically grounded, just because a psychological adaptation was advantageous in our history doesn’t mean it’s still useful today. For example, even though women may have preferred men with resources generations ago, our modern society has advanced such that these preferences are no longer apt or necessary. Nonetheless, it is important to consider how our evolutionary history has shaped our automatic or “instinctual” desires and reflexes of today, so that we can better shape them for the future.

Outside Resources
FAQs: http://www.anth.ucsb.edu/projects/human/evpsychfaq.html
Web: Articles and books on evolutionary psychology: http://homepage.psy.utexas.edu/homep...Group/BussLAB/
Web: Main international scientific organization for the study of evolution and human behavior, HBES: http://www.hbes.com/

Discussion Questions
1. How does change take place over time in the living world?
2. Can you think of two potential psychological adaptations to problems of survival that are not discussed in this module?
3. What are the psychological and behavioral implications of the fact that women bear heavier costs to produce a child than men do?
4. Can you formulate a hypothesis about an error management bias in the domain of social interaction?

Vocabulary
Adaptations: Evolved solutions to problems that historically contributed to reproductive success.
Error management theory (EMT): A theory of selection under conditions of uncertainty in which recurrent cost asymmetries of judgment or inference favor the evolution of adaptive cognitive biases that function to minimize the more costly errors.
Evolution: Change over time.
Gene Selection Theory: The modern theory of evolution by selection, in which differential gene replication is the defining process of evolutionary change.
Intersexual selection: A process of sexual selection by which evolution (change) occurs as a consequence of the mate preferences of one sex exerting selection pressure on members of the opposite sex.
Intrasexual competition: A process of sexual selection by which members of one sex compete with each other, and the victors gain preferential mating access to members of the opposite sex.
Natural selection: Differential reproductive success as a consequence of differences in heritable attributes.
Psychological adaptations: Mechanisms of the mind that evolved to solve specific problems of survival or reproduction; conceptualized as information-processing devices.
Sexual selection: The evolution of characteristics because of the mating advantage they give organisms.
Sexual strategies theory: A comprehensive evolutionary theory of human mating that defines the menu of mating strategies humans pursue (e.g., short-term casual sex, long-term committed mating), the adaptive problems women and men face when pursuing these strategies, and the evolved solutions to these mating problems.
By Randy J. Nelson The Ohio State University

The goal of this module is to introduce you to the topic of hormones and behavior. This field of study is also called behavioral endocrinology, which is the scientific study of the interaction between hormones and behavior. This interaction is bidirectional: hormones can influence behavior, and behavior can sometimes influence hormone concentrations. Hormones are chemical messengers released from endocrine glands that travel through the bloodstream to influence the nervous system and thereby regulate behaviors such as aggression, mating, and parenting.

learning objectives
• Define the basic terminology and basic principles of hormone–behavior interactions.
• Explain the role of hormones in behavioral sex differentiation.
• Explain the role of hormones in aggressive behavior.
• Explain the role of hormones in parental behavior.
• Provide examples of some common hormone–behavior interactions.

Introduction
This module describes the relationship between hormones and behavior. Many readers are likely already familiar with the general idea that hormones can affect behavior. Students are generally familiar with the idea that sex-hormone concentrations increase in the blood during puberty and decrease as we age, especially after about 50 years of age. Sexual behavior shows a similar pattern. Most people also know about the relationship between aggression and anabolic steroid hormones, and they know that administration of artificial steroid hormones sometimes results in uncontrollable, violent behavior called “roid rage.” Many different hormones can influence several types of behavior, but for the purpose of this module, we will restrict our discussion to just a few examples of hormones and behaviors. For example, are behavioral sex differences the result of hormones, the environment, or some combination of factors? Why are men much more likely than women to commit aggressive acts? Are hormones involved in mediating the so-called maternal “instinct”? Behavioral endocrinologists are interested in how the general physiological effects of hormones alter the development and expression of behavior and how behavior may influence the effects of hormones. This module describes, both phenomenologically and functionally, how hormones affect behavior.

To understand the hormone–behavior relationship, it is important to briefly describe hormones. Hormones are organic chemical messengers produced and released by specialized glands called endocrine glands. Hormones are released from these glands into the blood, where they may travel to act on target structures at some distance from their origin. Hormones are similar in function to neurotransmitters, the chemicals used by the nervous system in coordinating animals’ activities. However, hormones can operate over a greater distance and over a much greater temporal range than neurotransmitters (Focus Topic 1). Examples of hormones that influence behavior include steroid hormones such as testosterone (a common type of androgen), estradiol (a common type of estrogen), progesterone (a common type of progestin), and cortisol (a common type of glucocorticoid) (Table 1, A-B). Several types of protein or peptide (small protein) hormones also influence behavior, including oxytocin, vasopressin, prolactin, and leptin.

Focus Topic 1: Neural Transmission versus Hormonal Communication
Although neural and hormonal communication both rely on chemical signals, several prominent differences exist.
Communication in the nervous system is analogous to traveling on a train. You can use the train in your travel plans as long as tracks exist between your proposed origin and destination. Likewise, neural messages can travel only to destinations along existing nerve tracts. Hormonal communication, on the other hand, is like traveling in a car. You can drive to many more destinations than train travel allows because there are many more roads than railroad tracks. Similarly, hormonal messages can travel anywhere in the body via the circulatory system; any cell receiving blood is potentially able to receive a hormonal message.

Neural and hormonal communication differ in other ways as well. To illustrate them, consider the differences between digital and analog technologies. Neural messages are digital, all-or-none events that have rapid onset and offset: neural signals can take place in milliseconds. Accordingly, the nervous system mediates changes in the body that are relatively rapid. For example, the nervous system regulates immediate food intake and directs body movement. In contrast, hormonal messages are analog, graded events that may take seconds, minutes, or even hours to occur. Hormones can mediate long-term processes, such as growth, development, reproduction, and metabolism.

Hormonal and neural messages are both chemical in nature, and they are released and received by cells in a similar manner; however, there are important differences as well. Neurotransmitters, the chemical messengers used by neurons, travel a distance of only 20–30 nanometers (30 × 10⁻⁹ m) across the synapse to the membrane of the postsynaptic neuron, where they bind with receptors. Hormones enter the circulatory system and may travel from 1 millimeter to more than 2 meters before arriving at a target cell, where they bind with specific receptors.

Another distinction between neural and hormonal communication is the degree of voluntary control that can be exerted over their functioning. In general, there is more voluntary control of neural than of hormonal signals. It is virtually impossible to will a change in your thyroid hormone levels, for example, whereas moving your limbs on command is easy. Although these are significant differences, the division between the nervous system and the endocrine system is becoming more blurred as we learn more about how the nervous system regulates hormonal communication. A better understanding of the interface between the endocrine system and the nervous system, called neuroendocrinology, is likely to yield important advances in the future study of the interaction between hormones and behavior.

Hormones coordinate the physiology and behavior of individuals by regulating, integrating, and controlling bodily functions. Over evolutionary time, hormones have often been co-opted by the nervous system to influence behavior to ensure reproductive success. For example, the same hormones, testosterone and estradiol, that cause gamete (egg or sperm) maturation also promote mating behavior. This dual hormonal function ensures that mating behavior occurs when animals have mature gametes available for fertilization. Another example of endocrine regulation of physiological and behavioral function is provided by pregnancy. Estrogen and progesterone concentrations are elevated during pregnancy, and these hormones are often involved in mediating maternal behavior in mothers. Not all cells are influenced by each and every hormone.
Rather, any given hormone can directly influence only cells that have specific receptors for that particular hormone. Cells that have these specific receptors are called target cells for the hormone. The interaction of a hormone with its receptor begins a series of cellular events that eventually lead to activation of enzymatic pathways or, alternatively, turn gene activation on or off to regulate protein synthesis. The newly synthesized proteins may activate or deactivate other genes, causing yet another cascade of cellular events. Importantly, sufficient numbers of appropriate hormone receptors must be available for a specific hormone to produce any effects. For example, testosterone is important for male sexual behavior. If men have too little testosterone, then sexual motivation may be low, and it can be restored by testosterone treatment. However, if men have normal or even elevated levels of testosterone yet display low sexual drive, then a lack of receptors may be the cause, and treatment with additional hormones will not be effective.

How might hormones affect behavior? In terms of their behavior, one can think of humans and other animals conceptually as composed of three interacting components: (1) input systems (sensory systems), (2) integrators (the central nervous system), and (3) output systems, or effectors (e.g., muscles). Hormones do not cause behavioral changes. Rather, hormones influence these three systems so that specific stimuli are more likely to elicit certain responses in the appropriate behavioral or social context. In other words, hormones change the probability that a particular behavior will be emitted in the appropriate situation (Nelson, 2011). This is a critical distinction that can affect how we think of hormone–behavior relationships.

We can apply this three-component behavioral scheme to a simple behavior: singing in zebra finches. Only male zebra finches sing. If the testes of adult male finches are removed, then the birds reduce singing, but castrated finches resume singing if the testes are reimplanted, or if the birds are treated with either testosterone or estradiol. Although we commonly consider androgens to be “male” hormones and estrogens to be “female” hormones, it is common for testosterone to be converted to estradiol in nerve cells (Figure 1.5.1). Thus, many male-like behaviors are associated with the actions of estrogens! Indeed, all estrogens must first be converted from androgens, because of the way steroid hormones are synthesized. If the converting enzyme (aromatase) is low in activity or missing, then it is possible for females to produce excessive androgens and subsequently develop associated male traits. It is also possible for estrogens in the environment to affect the nervous system of animals, including people (e.g., Kidd et al., 2007). Again, singing behavior is most frequent when blood testosterone or estrogen concentrations are high. Males sing to attract mates or ward off potential competitors from their territories.

Although it is apparent from these observations that estrogens are somehow involved in singing, how might the three-component framework just introduced help us to formulate hypotheses to explore estrogen’s role in this behavior? First, by examining input systems, we could determine whether estrogens alter the birds’ sensory capabilities, making the environmental cues that normally elicit singing more salient. If this were the case, then females or competitors might be more easily seen or heard.
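Before turning to the second and third components, the core probabilistic claim above, that hormones change the likelihood that a cue elicits a behavior rather than causing the behavior outright, can be made concrete with a toy sketch. The logistic curve and every parameter value below are assumptions made purely for illustration; this is not an estimated model of finch behavior.

```python
# Illustrative sketch only: a hormone as a modulator of response probability.
# The logistic form and all parameter values are invented for illustration.
import math

def p_sing(stimulus_present: bool, estradiol_level: float) -> float:
    """Probability a male finch sings, given a social cue and a hormone level (0-1)."""
    if not stimulus_present:
        return 0.0  # no eliciting cue, no song: the hormone alone is not a cause
    # Higher estradiol shifts the response curve upward; the midpoint (0.5)
    # and steepness (6.0) are arbitrary choices for the sketch.
    return 1.0 / (1.0 + math.exp(-(estradiol_level - 0.5) * 6.0))

print(p_sing(True, 0.1))   # low hormone: singing unlikely (~0.08)
print(p_sing(True, 0.9))   # high hormone: singing likely (~0.92)
print(p_sing(False, 0.9))  # hormone alone does not produce singing (0.0)
```

Note what the sketch preserves from the text: the behavior still requires the appropriate stimulus and context; the hormone only tilts the odds.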
Second, estrogens could influence the central nervous system: neuronal architecture or the speed of neural processing could change in the presence of estrogens. Higher neural processes (e.g., motivation, attention, or perception) also might be influenced. Finally, the effector organs, muscles in this case, could be affected by the presence of estrogens. Blood estrogen concentrations might somehow affect the muscles of a songbird’s syrinx (the vocal organ of birds). Estrogens, therefore, could affect birdsong by influencing the sensory capabilities, central processing system, or effector organs of an individual bird. We do not understand completely how estrogen, derived from testosterone, influences birdsong, but in most cases, hormones can be considered to affect behavior by influencing one, two, or all three of these components, and this three-part framework can aid in the design of hypotheses and experiments to explore these issues.

How might behaviors affect hormones? The birdsong example demonstrates how hormones can affect behavior, but as noted, the reciprocal relation also occurs; that is, behavior can affect hormone concentrations. For example, the sight of a territorial intruder may elevate blood testosterone concentrations in resident male birds and thereby stimulate singing or fighting behavior. Similarly, male mice or rhesus monkeys that lose a fight show decreased circulating testosterone concentrations for several days or even weeks afterward. Comparable results have also been reported in humans. Testosterone concentrations are affected not only in humans involved in physical combat, but also in those involved in simulated battles. For example, testosterone concentrations were elevated in winners and reduced in losers of regional chess tournaments.

People do not have to be directly involved in a contest to have their hormones affected by its outcome. Male fans of both the Brazilian and Italian teams were recruited to provide saliva samples to be assayed for testosterone before and after the final game of the 1994 World Cup. The final between Brazil and Italy was scoreless, and Brazil won on penalty kicks at the last possible moment. The Brazilian fans were elated and the Italian fans were crestfallen. When the samples were assayed, 11 of 12 Brazilian fans had increased testosterone concentrations, and 9 of 9 Italian fans had decreased testosterone concentrations, compared with pre-game baseline values (Dabbs, 2000).

In some cases, hormones can be affected by the anticipation of behavior. For example, testosterone concentrations also influence sexual motivation and behavior in women. In one study, the interaction between sexual intercourse and testosterone was compared with other activities (cuddling or exercise) in women (van Anders, Hamilton, Schmidt, & Watson, 2007). On three separate occasions, women provided a pre-activity, post-activity, and next-morning saliva sample. The women’s testosterone was elevated before intercourse as compared to the other times, indicating an anticipatory relationship between sexual behavior and testosterone. Testosterone values were also higher post-intercourse compared with exercise, suggesting that engaging in sexual behavior may itself influence hormone concentrations in women.

Sex Differences
Hens and roosters are different. Cows and bulls are different. Men and women are different. Even girls and boys are different.
Humans, like many animals, are sexually dimorphic (di, “two”; morph, “type”) in the size and shape of their bodies, their physiology, and, for our purposes, their behavior. The behavior of boys and girls differs in many ways. Girls generally excel in verbal abilities relative to boys; boys are nearly twice as likely as girls to suffer from dyslexia (reading difficulties) and stuttering, and nearly four times more likely to suffer from autism. Boys are generally better than girls at tasks that require visuospatial abilities. Girls engage in nurturing behaviors more frequently than boys. More than 90% of all anorexia nervosa cases involve young women. Young men are twice as likely as young women to suffer from schizophrenia. Boys are much more aggressive and generally engage in more rough-and-tumble play than girls (Berenbaum, Martin, Hanish, Briggs, & Fabes, 2008). Many sex differences, such as the difference in aggressiveness, persist throughout adulthood. For example, there are many more men than women serving prison sentences for violent behavior. The hormonal differences between men and women may account for adult sex differences that develop during puberty, but what accounts for behavioral sex differences among children prior to puberty and activation of their gonads?

Hormonal secretions from the developing gonads determine whether the individual develops in a male or female manner. The mammalian embryonic testes produce androgens, as well as peptide hormones, that steer the development of the body, central nervous system, and subsequent behavior in a male direction. The embryonic ovaries of mammals are virtually quiescent and do not secrete high concentrations of hormones. In the presence of ovaries, or in the complete absence of any gonads, morphological, neural, and, later, behavioral development follows a female pathway.

Gonadal steroid hormones have organizational (or programming) effects upon brain and behavior (Phoenix, Goy, Gerall, & Young, 1959). The organizing effects of steroid hormones are relatively confined to the early stages of development. An asymmetry exists in the effects of testes and ovaries on the organization of behavior in mammals. Hormone exposure early in life has organizational effects on subsequent rodent behavior; early steroid hormone treatment causes relatively irreversible and permanent masculinization of rodent behavior (mating and aggressive behaviors). These early hormone effects can be contrasted with the reversible behavioral influences of steroid hormones provided in adulthood, which are called activational effects. The activational effects of hormones on adult behavior are temporary and may wane soon after the hormone is metabolized. Thus, typical male behavior requires exposure to androgens during gestation (in humans) or immediately after birth (in rodents) to somewhat masculinize the brain, and also requires androgens during or after puberty to activate these neural circuits. Typical female behavior requires a lack of exposure to androgens early in life, which leads to feminization of the brain, and also requires estrogens to activate these neural circuits in adulthood. But this simple dichotomy, which works well with animals with very distinct sexual dimorphism in behavior, has many caveats when applied to people.

If you walk through any major toy store, you will likely observe a couple of aisles filled with pink boxes and the complete absence of pink packaging on toys in adjacent aisles.
Remarkably, you will also see a strong self-segregation of boys and girls in these aisles. It is rare to see boys in the “pink” aisles and vice versa. Toy manufacturers are often accused of making toys that are gender biased, but it seems more likely that boys and girls enjoy playing with different types and colors of toys. Indeed, toy manufacturers would presumably double their sales if they could sell the same toys to both sexes. Boys generally prefer toys such as trucks and balls, and girls generally prefer toys such as dolls. Although it is doubtful that there are genes on the Y chromosome that encode preferences for toy cars and trucks, it is possible that hormones might shape the development of a child’s brain to prefer certain types of toys or styles of play behavior. It is also reasonable to believe that children learn which types of toys and which styles of play are appropriate to their gender. How can we separate the contributions of physiological mechanisms and learning in explaining sex differences in human behaviors? To untangle these issues, animal models are often used. Unlike the situation in humans, where sex differences are usually only a matter of degree (often slight), in some animals members of only one sex may display a particular behavior. As noted, often only male songbirds sing. Studies of such strongly sex-biased behaviors are particularly valuable for understanding the interaction among behavior, hormones, and the nervous system.

A study of vervet monkeys calls into question the primacy of learning in the establishment of toy preferences (Alexander & Hines, 2002). Female vervet monkeys preferred girl-typical toys, such as dolls or cooking pots, whereas male vervet monkeys preferred boy-typical toys, such as cars or balls. There were no sex differences in preference for gender-neutral toys, such as picture books or stuffed animals. Presumably, monkeys have no prior concept of “boy” or “girl” toys. Young rhesus monkeys also show similar toy preferences. What, then, underlies the sex difference in toy preference? It is possible that certain attributes of toys (or objects) appeal to either boys or girls. Toys that appeal to boys, or to male vervet or rhesus monkeys (in this case, a ball or toy car), are objects that can be moved actively through space and incorporated into active, rough-and-tumble play. The appeal of the toys that girls or female vervet monkeys prefer appears to be based on color: pink and red (the colors of the doll and the pot) may provoke attention to infants. Society may then reinforce such stereotypical responses to gender-typical toys.

The sex differences in toy preferences emerge by 12 or 24 months of age and seem fixed by 36 months of age, but are sex differences in toy preference present during the first year of life? It is difficult to ask pre-verbal infants what they prefer, but in studies that examined the amount of time babies looked at different toys, eye-tracking data indicate that infants as young as 3 months showed sex differences in toy preferences: girls preferred dolls, whereas boys preferred trucks. Another result that suggests, but does not prove, that hormones are involved in toy preferences is the observation that girls diagnosed with congenital adrenal hyperplasia (CAH), whose adrenal glands produce unusually high levels of androgens early in life, played with masculine toys more often than girls without CAH.
Further, a dose-response relationship was observed between the extent of the disorder (i.e., degree of fetal androgen exposure) and the degree of masculinization of play behavior. Are the sex differences in toy preferences or play activity the inevitable consequences of the differential endocrine environments of boys and girls, or are they imposed by cultural practices and beliefs? Are they the result of receiving gender-specific toys from an early age, or do they reflect some combination of endocrine and cultural factors? Again, these are difficult questions to unravel in people.

Even when behavioral sex differences appear early in development, there seems to be some question regarding the influences of societal expectations. One example is the pattern of human play behavior, in which males are more physical; this pattern is seen in a number of other species, including nonhuman primates, rats, and dogs. Is the difference in the frequency of rough-and-tumble play between boys and girls due to biological factors associated with being male or female, or is it due to cultural expectations and learning? If there is a combination of biological and cultural influences mediating the frequency of rough-and-tumble play, then what proportion of the variation between the sexes is due to biological factors and what proportion is due to social influences? Importantly, is it appropriate to talk about “normal” sex differences when these traits virtually always arrange themselves along a continuum rather than in discrete categories?

Sex differences are common in humans and in nonhuman animals. Because males and females differ in the ratio of androgenic and estrogenic steroid hormone concentrations, behavioral endocrinologists have been particularly interested in the extent to which behavioral sex differences are mediated by hormones. The process of becoming female or male is called sexual differentiation. The primary step in sexual differentiation occurs at fertilization. In mammals, the ovum (which always contains an X chromosome) can be fertilized by a sperm bearing either a Y or an X chromosome; this process is called sex determination. The chromosomal sex of homogametic mammals (XX) is female; the chromosomal sex of heterogametic mammals (XY) is male. Chromosomal sex determines gonadal sex, and virtually all subsequent sexual differentiation is the result of differential exposure to gonadal steroid hormones. Thus, gonadal sex determines hormonal sex, which regulates morphological sex. Morphological differences in the central nervous system, as well as in some effector organs, such as muscles, lead to behavioral sex differences.

The process of sexual differentiation is complicated, and the potential for errors is present. Perinatal exposure to androgens is the most common cause of anomalous sexual differentiation among females. The source of androgen may be internal (e.g., secreted by the adrenal glands) or external (e.g., exposure to environmental estrogens). Turner syndrome results when the second X chromosome is missing or damaged; these individuals possess dysgenic ovaries and are not exposed to steroid hormones until puberty. Interestingly, women with Turner syndrome often have impaired spatial memory. Female mammals are considered the “neutral” sex; additional physiological steps are required for male differentiation, and more steps bring more possibilities for errors in differentiation.
Some examples of male anomalous sexual differentiation include 5α-reductase deficiency (in which XY individuals are born with ambiguous genitalia because of a lack of dihydrotestosterone and are reared as females, but masculinization occurs during puberty) and androgen insensitivity syndrome, or TFM (in which XY individuals lack receptors for androgens and develop as females). By studying individuals who do not neatly fall into the dichotic boxes of female or male, and for whom the process of sexual differentiation is atypical, behavioral endocrinologists glean hints about the process of typical sexual differentiation.

We may ultimately want to know how hormones mediate sex differences in the human brain and behavior (to the extent that these differences occur). To understand the mechanisms underlying sex differences in the brain and behavior, we return to the birdsong example. Birds provide the best evidence that behavioral sex differences are the result of hormonally induced structural changes in the brain (Goodson, Saldanha, Hahn, & Soma, 2005). In contrast to mammals, in which structural differences in neural tissues have not been directly linked to behavior, structural differences in avian brains have been directly linked to a sexually dimorphic behavior: birdsong.

Several brain regions in songbirds display significant sex differences in size. Two major brain circuit pathways, (1) the song production motor pathway and (2) the auditory transmission pathway, have been implicated in the learning and production of birdsong. Some parts of the song production pathway of male zebra finches are 3 to 6 times larger than those of female conspecifics. The larger size of these brain areas reflects that neurons in these nuclei are larger, more numerous, and farther apart. Although castration of adult male birds reduces singing, it does not reduce the size of the brain nuclei controlling song production. Similarly, androgen treatment of adult female zebra finches does not induce changes either in singing or in the size of the song control regions. Thus, activational effects of steroid hormones do not account for the sex differences in singing behavior or brain nucleus size in zebra finches. The sex differences in these structures are organized or programmed in the egg by estradiol (which masculinizes) or by the lack of steroids (which feminizes).

Taken together, estrogens appear to be necessary to activate the neural machinery underlying the song system in birds. The testes of birds primarily produce androgens, which enter the circulation. The androgens then enter neurons containing aromatase, which converts them to estrogens. Indeed, the brain is the primary source of estrogens, which activate masculine behaviors in many bird species.

Sex differences in human brain size have been reported for years. More recently, sex differences in specific brain structures have been discovered (Figure 1.5.2). Sex differences in a number of cognitive functions have also been reported. Females are generally more sensitive to auditory information, whereas males are more sensitive to visual information. Females are also typically more sensitive than males to taste and olfactory input. Women display less lateralization of cognitive functions than men. On average, females generally excel in verbal, perceptual, and fine motor skills, whereas males outperform females on quantitative and visuospatial tasks, including map reading and direction finding. Although reliable sex differences can be documented, these differences in ability are slight.
It is important to note that there is more variation within each sex than between the sexes for most cognitive abilities (Figure 1.5.3).

Aggressive Behaviors
The possibility for aggressive behavior exists whenever the interests of two or more individuals are in conflict (Nelson, 2006). Conflicts are most likely to arise over limited resources such as territories, food, and mates. A social interaction decides which animal gains access to the contested resource. In many cases, a submissive posture or gesture on the part of one animal avoids the necessity of actual combat over a resource. Animals may also participate in threat displays or ritualized combat in which dominance is determined but no physical damage is inflicted.

There is overwhelming circumstantial evidence that androgenic steroid hormones mediate aggressive behavior across many species. First, seasonal variations in blood plasma concentrations of testosterone coincide with seasonal variations in aggression. For instance, the incidence of aggressive behavior peaks for male deer in autumn, when they are secreting high levels of testosterone. Second, aggressive behaviors increase at the time of puberty, when the testes become active and blood concentrations of androgens rise; juvenile deer do not participate in the fighting during the mating season. Third, in any given species, males are generally more aggressive than females. This is certainly true of deer: female deer rarely display aggressive behavior, and their rare aggressive acts are qualitatively different from the aggressive behavior of stags. Finally, castration typically reduces aggression in males, and testosterone replacement therapy restores aggression to pre-castration levels. There are some interesting exceptions to these general observations that are outside the scope of this module.

As mentioned, males are generally more aggressive than females, and human males are no exception. Many more men than women are convicted of violent crimes in North America. The sex difference in human aggressiveness appears very early: at every age throughout the school years, many more boys than girls initiate physical assaults. Almost everyone will acknowledge the existence of this sex difference, but assigning a cause to behavioral sex differences in humans always elicits much debate. It is possible that boys are more aggressive than girls because androgens promote aggressive behavior and boys have higher blood concentrations of androgens than girls. It is possible that boys and girls differ in their aggressiveness because the brains of boys are exposed to androgens prenatally and the “wiring” of their brains is thus organized in a way that facilitates the expression of aggression. It is also possible that boys are encouraged, and girls discouraged, by family, peers, or others from acting in an aggressive manner. These three hypotheses are not mutually exclusive, but it is extremely difficult to discriminate among them to account for sex differences in human aggressiveness.

What kinds of studies would be necessary to assess these hypotheses? It is usually difficult to separate out the influences of environment and physiology on the development of behavior in humans. For example, boys and girls differ in their rough-and-tumble play at a very young age, which suggests an early physiological influence on aggression.
However, parents interact with their male and female offspring differently; they usually play more roughly with male infants than with females, which suggests that the sex difference in aggressiveness is partially learned. This difference in parental interaction style is evident by the first week of life. Because of these complexities in the factors influencing human behavior, the study of hormonal effects on sex-differentiated behavior has been pursued in nonhuman animals, for which environmental influences can be held relatively constant. Animal models for which sexual differentiation occurs postnatally are often used so that this process can be easily manipulated experimentally. Again, with the appropriate animal model, we can address the questions posed above: Is the sex difference in aggression due to higher adult blood concentrations of androgens in males than in females, or are males more aggressive than females because their brains are organized differently by perinatal hormones? Are males usually more aggressive than females because of an interaction of early and current blood androgen concentrations? If male mice are castrated prior to their sixth day of life, then treated with testosterone propionate in adulthood, they show low levels of aggression. Similarly, females ovariectomized prior to their sixth day of life but given androgens in adulthood do not express male-like levels of aggression. Treatment of perinatally gonadectomized males or females with testosterone prior to their sixth day of life and also in adulthood results in a level of aggression similar to that observed in typical male mice. Thus, in mice, the proclivity for males to act more aggressively than females is organized perinatally by androgens but also requires the presence of androgens after puberty in order to be fully expressed. In other words, aggression in male mice is both organized and activated by androgens. Testosterone exposure in adulthood without prior organization of the brain by steroid hormones does not evoke typical male levels of aggression. The hormonal control of aggressive behavior in house mice is thus similar to the hormonal mediation of heterosexual male mating behavior in other rodent species. Aggressive behavior is both organized and activated by androgens in many species, including rats, hamsters, voles, dogs, and possibly some primate species.
Parental Behaviors
Parental behavior can be considered to be any behavior that contributes directly to the survival of fertilized eggs or offspring that have left the body of the female. There are many patterns of mammalian parental care. The developmental status of the newborn is an important factor driving the type and quality of parental care in a species. Maternal care is much more common than paternal care. The vast majority of research on the hormonal correlates of mammalian parental behavior has been conducted on rats. Rats bear altricial young, and mothers perform a cluster of stereotyped maternal behaviors, including nest building, crouching over the pups to allow nursing and to provide warmth, pup retrieval, and increased aggression directed at intruders. If you expose nonpregnant female rats (or males) to pups, their most common reaction is to huddle far away from them. Rats avoid new things (neophobia). However, if you expose adult rats to pups every day, they soon begin to behave maternally. This process is called concaveation or sensitization, and it appears to serve to reduce the adult rats’ fear of pups.
Of course, a new mother needs to act maternally as soon as her offspring arrive—not in a week. The onset of maternal behavior in rats is mediated by hormones. Several methods of study, such as hormone removal and replacement therapy, have been used to determine the hormonal correlates of rat maternal behavior. A rapid decline in blood concentrations of progesterone in late pregnancy, after sustained high concentrations of this hormone, in combination with high concentrations of estradiol and probably prolactin and oxytocin, induces female rats to behave maternally almost immediately in the presence of pups. This pattern of hormones at parturition overrides the usual fear response of adult rats toward pups, and it permits the onset of maternal behavior. Thus, the so-called maternal “instinct” requires hormones to increase the approach tendency and lower the avoidance tendency. Laboratory strains of mice and rats are usually docile, but mothers can be quite aggressive toward animals that venture too close to their litter. Progesterone appears to be the primary hormone that induces this maternal aggression in rodents, but species differences exist. The role of maternal aggression in women’s behavior has not been adequately described or tested. A series of elegant experiments by Alison Fleming and her collaborators studied the endocrine correlates of the behavior of human mothers as well as the endocrine correlates of maternal attitudes as expressed in self-report questionnaires. Responses such as patting, cuddling, or kissing the baby were called affectionate behaviors; talking, singing, or cooing to the baby were considered vocal behaviors. Both affectionate and vocal behaviors were considered approach behaviors. Basic caregiving activities, such as changing diapers and burping the infants, were also recorded. In these studies, no relationship was found between hormone concentrations and maternal responsiveness as measured by attitude questionnaires. For example, most women showed an increasingly positive self-image during early pregnancy that dipped during the second half of pregnancy, but recovered after parturition. A related dip in feelings of maternal engagement occurred during late pregnancy, but rebounded substantially after birth in most women. However, when behavior, rather than questionnaire responses, was compared with hormone concentrations, a different story emerged. Blood plasma concentrations of cortisol were positively associated with approach behaviors. In other words, women who had high concentrations of blood cortisol, in samples obtained immediately before or after nursing, engaged in more physically affectionate behaviors and talked more often to their babies than mothers with low cortisol concentrations. Additional analyses from this study revealed that the correlation was even stronger for mothers who had reported positive maternal regard (feelings and attitudes) during gestation. Indeed, nearly half of the variation in maternal behavior among women could be accounted for by cortisol concentrations and positive maternal attitudes during pregnancy. Presumably, cortisol does not induce maternal behaviors directly, but it may act indirectly on the quality of maternal care by evoking an increase in the mother’s general level of arousal, thus increasing her responsiveness to infant-generated cues.
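To make the phrase "variance accounted for" concrete, the sketch below fits a simple two-predictor regression on simulated data and computes R², the proportion of variance explained. This is only an illustration of the statistical idea; the numbers, variable names, and effect sizes are invented and do not reproduce Fleming's actual data or analysis.

```python
# Hypothetical illustration of "variance accounted for" (R^2): regress
# maternal approach behavior on cortisol and prenatal maternal attitudes.
# All values below are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(1)
n = 50
cortisol = rng.normal(0, 1, n)   # standardized blood cortisol (made up)
attitude = rng.normal(0, 1, n)   # standardized prenatal maternal regard (made up)
behavior = 0.5 * cortisol + 0.4 * attitude + rng.normal(0, 1, n)

# Ordinary least-squares fit: behavior = b0 + b1*cortisol + b2*attitude
X = np.column_stack([np.ones(n), cortisol, attitude])
coef, *_ = np.linalg.lstsq(X, behavior, rcond=None)
pred = X @ coef

# R^2 = 1 - (residual variance / total variance)
r_squared = 1 - np.sum((behavior - pred) ** 2) / np.sum((behavior - behavior.mean()) ** 2)
print(f"R^2 = {r_squared:.2f}")  # proportion of variance accounted for
```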
New mothers with high cortisol concentrations were also more attracted to their infant’s odors, were superior in identifying their infants, and generally found cues from infants highly appealing (Fleming, Steiner, & Corter, 1997). The medial preoptic area is critical for the expression of rat maternal behavior. The amygdala appears to tonically inhibit the expression of maternal behavior. Adult rats are fearful of pups, a response that is apparently mediated by chemosensory information. Lesions of the amygdala or afferent sensory pathways from the vomeronasal organ to the amygdala disinhibit the expression of maternal behavior. Hormones or sensitization likely act to disinhibit the amygdala, thus permitting the occurrence of maternal behavior. Although correlations have been established, direct evidence that brain structural changes underlie human maternal behavior remains lacking (Fleming & Gonzalez, 2009). Considered together, there are many examples of hormones influencing behavior and of behavior feeding back to influence hormone secretion. More and more examples of hormone–behavior interactions are being discovered, including roles for hormones in the mediation of food and fluid intake, social interactions, salt balance, learning and memory, and stress coping, as well as in psychopathology, including depression, anxiety disorders, eating disorders, postpartum depression, and seasonal depression. Additional research should reveal how these hormone–behavior interactions are mediated.
Outside Resources
Book: Adkins-Regan, E. (2005). Hormones and animal social behavior. Princeton, NJ: Princeton University Press.
Book: Beach, F. A. (1948). Hormones and behavior. New York: Paul Hoeber.
Article: Beach, F. A. (1975). Behavioral endocrinology: An emerging discipline. American Scientist, 63, 178–187.
Book: Nelson, R. J. (2011). An introduction to behavioral endocrinology (4th ed.). Sunderland, MA: Sinauer Associates.
Book: Pfaff, D. W. (2009). Hormones, brain, and behavior (2nd ed.). New York: Academic Press.
Book: Pfaff, D. W., Phillips, I. M., & Rubin, R. T. (2005). Principles of hormone/behavior relations. New York: Academic Press.
Video: Endocrinology Video (Playlist) - This YouTube playlist contains many helpful videos on the biology of hormones, including reproduction and behavior. This would be a helpful resource for students struggling with hormone synthesis, reproduction, regulation of biological functions, and signaling pathways. https://www.youtube.com/playlist?list=PLqTetbgey0aemiTfD8QkMsSUq8hQzv-vA
Video: Paul Zak: Trust, morality - and oxytocin - This TED talk explores the roles of oxytocin in the body. Paul Zak discusses biological functions of oxytocin, like lactation, as well as potential behavioral functions, like empathy.
Video: Sex Differentiation - This video discusses gonadal differentiation, including the role of androgens in the development of male features.
Video: The Teenage Brain Explained - This is a great video explaining the roles of hormones during puberty.
Web: Society for Behavioral Neuroendocrinology - This website contains resources on current news and research in the field of neuroendocrinology. http://sbn.org/home.aspx
Discussion Questions
1. What are some of the problems associated with attempting to determine causation in a hormone–behavior interaction? What are the best ways to address these problems?
2. Hormones cause changes in the rates of cellular processes or in cellular morphology.
What are some ways that these hormonally induced cellular changes might theoretically produce profound changes in behavior?
3. List and describe some behavioral sex differences that you have noticed between boys and girls. What causes girls and boys to choose different toys? Do you think that the sex differences you have noted arise from biological causes or are learned? How would you go about establishing your opinions as fact?
4. Why is it inappropriate to refer to androgens as “male” hormones and estrogens as “female” hormones?
5. Imagine that you discovered that the brains of architects were different from those of non-architects—specifically, that the “drawstraightem nuclei” of the right temporal lobe were enlarged in architects as compared with non-architects. Would you argue that architects were destined to be architects because of their brain organization or that experience as an architect changed their brains? How would you resolve this issue?
Vocabulary
5α-reductase: An enzyme required to convert testosterone to 5α-dihydrotestosterone.
Aggression: A form of social interaction that includes threat, attack, and fighting.
Aromatase: An enzyme that converts androgens into estrogens.
Chromosomal sex: The sex of an individual as determined by the sex chromosomes (typically XX or XY) received at the time of fertilization.
Defeminization: The removal of the potential for female traits.
Demasculinization: The removal of the potential for male traits.
Dihydrotestosterone (DHT): A primary androgen that is an androgenic steroid product of testosterone and binds strongly to androgen receptors.
Endocrine gland: A ductless gland from which hormones are released into the blood system in response to specific biological signals.
Estrogen: Any of the C18 class of steroid hormones, so named because of their estrus-generating properties in females. Biologically important estrogens include estradiol and estriol.
Feminization: The induction of female traits.
Gonadal sex: The sex of an individual as determined by the possession of either ovaries or testes. Females have ovaries, whereas males have testes.
Hormone: An organic chemical messenger released from endocrine cells that travels through the blood to interact with target cells at some distance to cause a biological response.
Masculinization: The induction of male traits.
Maternal behavior: Parental behavior performed by the mother or other female.
Neurotransmitter: A chemical messenger that travels between neurons to provide communication. Some neurotransmitters, such as norepinephrine, can leak into the blood system and act as hormones.
Oxytocin: A peptide hormone secreted by the pituitary gland that triggers lactation, as well as social bonding.
Parental behavior: Behaviors performed in relation to one’s offspring that contribute directly to the survival of those offspring.
Paternal behavior: Parental behavior performed by the father or other male.
Progesterone: A primary progestin that is involved in pregnancy and mating behaviors.
Progestin: A class of C21 steroid hormones named for their progestational (pregnancy-supporting) effects. Progesterone is a common progestin.
Prohormone: A molecule that can act as a hormone itself or be converted into another hormone with different properties. For example, testosterone can serve as a hormone or as a prohormone for either dihydrotestosterone or estradiol.
Prolactin: A protein hormone that is highly conserved throughout the animal kingdom. It has many biological functions associated with reproduction and synergistic actions with steroid hormones.
Receptor: A chemical structure on the cell surface or inside of a cell that has an affinity for a specific chemical configuration of a hormone, neurotransmitter, or other compound.
Sex determination: The point at which an individual begins to develop as either a male or a female. In animals that have sex chromosomes, this occurs at fertilization. Females are XX and males are XY. All eggs bear X chromosomes, whereas sperm can bear either X or Y chromosomes. Thus, it is the males that determine the sex of the offspring.
Sex differentiation: The process by which individuals develop the characteristics associated with being male or female. Differential exposure to gonadal steroids during early development causes sexual differentiation of several structures, including the brain.
Target cell: A cell that has receptors for a specific chemical messenger (hormone or neurotransmitter).
Testosterone: The primary androgen secreted by the testes of most vertebrate animals, including men.
2.05: Biochemistry of Love
By Sue Carter and Stephen Porges University of North Carolina, Northeastern University - Boston
Love is deeply biological. It pervades every aspect of our lives and has inspired countless works of art. Love also has a profound effect on our mental and physical state. A “broken heart” or a failed relationship can have disastrous effects; bereavement disrupts human physiology and may even precipitate death. Without loving relationships, humans fail to flourish, even if all of their other basic needs are met. As such, love is clearly not “just” an emotion; it is a biological process that is both dynamic and bidirectional in several dimensions. Social interactions between individuals, for example, trigger cognitive and physiological processes that influence emotional and mental states. In turn, these changes influence future social interactions. Similarly, the maintenance of loving relationships requires constant feedback through sensory and cognitive systems; the body seeks love and responds constantly to interactions with loved ones or to the absence of such interactions. The evolutionary principles and ancient hormonal and neural systems that support the beneficial and healing effects of loving relationships are described here.
learning objectives
• Understand the role of oxytocin in social behaviors.
• Articulate the functional differences between vasopressin and oxytocin.
• List sex differences in reaction to stress.
Introduction
Although evidence exists for the healing power of love, only recently has science turned its attention to providing a physiological explanation for love. The study of love in this context offers insight into many important topics, including the biological basis of interpersonal relationships and why and how disruptions in social bonds have such pervasive consequences for behavior and physiology. Some of the answers will be found in our growing knowledge of the neurobiological and endocrinological mechanisms of social behavior and interpersonal engagement.
The evolution of social behavior
“Nothing in biology makes sense except in the light of evolution.” Theodosius Dobzhansky’s famous dictum also holds true for explaining the evolution of love. Life on earth is fundamentally social: The ability to dynamically interact with other living organisms to support mutual homeostasis, growth, and reproduction evolved very early. Social interactions are present in primitive invertebrates and even among prokaryotes: Bacteria recognize and approach members of their own species. Bacteria also reproduce more successfully in the presence of their own kind and are able to form communities with physical and chemical characteristics that go far beyond the capabilities of the individual cell (Ingham & Ben-Jacob, 2008). As another example, various insect species have evolved particularly complex social systems, known as eusociality. Characterized by a division of labor, eusociality appears to have evolved independently at least 11 times in insects. Research on honeybees indicates that a complex set of genes and their interactions regulate eusociality, and that these resulted from an “accelerated form of evolution” (Woodard et al., 2011). In other words, molecular mechanisms favoring high levels of sociality seem to be on an evolutionary fast track. The evolutionary pathways that led from reptiles to mammals allowed the emergence of the unique anatomical systems and biochemical mechanisms that enable social engagement and selectively reciprocal sociality.
Reptiles show minimal parental investment in offspring and form nonselective relationships between individuals. Pet owners may become emotionally attached to their turtle or snake, but this relationship is not reciprocal. In contrast, most mammals show intense parental investment in offspring and form lasting bonds with their children. Many mammalian species—including humans, wolves, and prairie voles—also develop long-lasting, reciprocal, and selective relationships between adults, with several features of what humans experience as “love.” In turn, these reciprocal interactions trigger dynamic feedback mechanisms that foster growth and health.
What is love? An evolutionary and physiological perspective
Human love is more complex than simple feedback mechanisms. Love may create its own reality. The biology of love originates in the primitive parts of the brain—the emotional core of the human nervous system—which evolved long before the cerebral cortex. The brain “in love” is flooded with vague sensations, often transmitted by the vagus nerve, that create much of what we experience as emotion. The modern cortex struggles to interpret love’s primal messages, and weaves a narrative around incoming visceral experiences, potentially reacting to that narrative rather than to reality. It also is helpful to realize that mammalian social behavior is supported by biological components that were repurposed or co-opted over the course of mammalian evolution, eventually permitting lasting relationships between adults.
Is there a hormone of love and other relationships?
One element that repeatedly appears in the biochemistry of love is the neuropeptide oxytocin. In large mammals, oxytocin adopts a central role in reproduction by helping to expel the big-brained baby from the uterus, ejecting milk, and sealing a selective and lasting bond between mother and offspring (Keverne, 2006). Mammalian offspring crucially depend on their mother’s milk for some time after birth. Human mothers also form a strong and lasting bond with their newborns immediately after birth, in a time period that is essential for the nourishment and survival of the baby. However, women who give birth by cesarean section without going through labor, or who opt not to breastfeed, are still able to form a strong emotional bond with their children. Furthermore, fathers, grandparents, and adoptive parents also form lifelong attachments to children. Preliminary evidence suggests that the simple presence of an infant can release oxytocin in adults as well (Feldman, 2012; Kenkel et al., 2012). The baby virtually forces us to love it. The case for a major role for oxytocin in love is strong, but until recently it was based largely on extrapolation from research on parental behavior (Feldman, 2012) or social behaviors in animals (Carter, 1998; Kenkel et al., 2012). However, recent human experiments have shown that intranasal delivery of oxytocin can facilitate social behaviors, including eye contact and social cognition (Meyer-Lindenberg, Domes, Kirsch, & Heinrichs, 2011)—behaviors that are at the heart of love. Of course, oxytocin is not the molecular equivalent of love. Rather, it is just one important component of a complex neurochemical system that allows the body to adapt to highly emotional situations. The systems necessary for reciprocal social interactions involve extensive neural networks through the brain and autonomic nervous system that are dynamic and constantly changing across the life span of an individual.
We also now know that the properties of oxytocin are not predetermined or fixed. Oxytocin’s cellular receptors are regulated by other hormones and epigenetic factors. These receptors change and adapt based on life experiences. Both oxytocin and the experience of love can change over time. In spite of limitations, new knowledge of the properties of oxytocin has proven useful in explaining several enigmatic features of love.
Stress and love
Emotional bonds can form during periods of extreme duress, especially when the survival of one individual depends on the presence and support of another. There also is evidence that oxytocin is released in response to acutely stressful experiences, perhaps serving as hormonal “insurance” against overwhelming stress. Oxytocin may help to ensure that parents and others will engage with and care for infants; develop stable, loving relationships; and seek out and receive support from others in times of need.
Animal models and the biology of social bonds
To dissect the anatomy and chemistry of love, scientists needed a biological equivalent of the Rosetta Stone. Just as the actual stone helped linguists decipher an archaic language by comparison to a known one, animal models are helping biologists draw parallels between ancient physiology and contemporary behaviors. Studies of socially monogamous mammals that form long-lasting social bonds, such as prairie voles, have been especially helpful in understanding the biology of human social behavior.
There is more to love than oxytocin
Research in prairie voles showed that, as in humans, oxytocin plays a major role in social interactions and parental behavior (Carter, 1998; Carter, Boone, Pournajafi-Nazarloo, & Bales, 2009; Kenkel et al., 2012). Of course, oxytocin does not act alone. Its release and actions depend on many other neurochemicals, including endogenous opioids and dopamine (Aragona & Wang, 2009). Particularly important to social bonding are the interactions of oxytocin with a related neuropeptide known as vasopressin. The systems regulated by oxytocin and vasopressin are sometimes redundant. Both peptides are implicated in behaviors that require social engagement by either males or females, such as huddling over an infant (Kenkel et al., 2012). For example, it was necessary in voles to block both oxytocin and vasopressin receptors to induce a significant reduction in social engagement, either among adults or between adults and infants. Blocking only one of these two receptors did not eliminate social approach or contact. However, antagonists for either the oxytocin or vasopressin receptor inhibited selective sociality, which is essential for the expression of a social bond (Bales, Kim, Lewis-Reese, & Carter, 2004; Cho, DeVries, Williams, & Carter, 1999). If we accept selective social bonds, parenting, and mate protection as proxies for love in humans, research in animals supports the hypothesis that oxytocin and vasopressin interact to allow the dynamic behavioral states and behaviors necessary for love. Oxytocin and vasopressin have shared functions, but they are not identical in their actions. The specific behavioral roles of oxytocin and vasopressin are especially difficult to untangle because they are components of an integrated neural network with many points of intersection. Moreover, the genes that regulate the production of oxytocin and vasopressin are located on the same chromosome, possibly allowing coordinated synthesis or release of these peptides.
Both peptides can bind to and have antagonist or agonist effects on each other’s receptors. Furthermore, the pathways necessary for reciprocal social behavior are constantly adapting: These peptides and the systems that they regulate are always in flux. In spite of these difficulties, some of the different functions of oxytocin and vasopressin have been identified.
Functional differences between vasopressin and oxytocin
Vasopressin is associated with physical and emotional mobilization and can help support vigilance and behaviors needed for guarding a partner or territory (Carter, 1998), as well as other forms of adaptive self-defense (Ferris, 2008). Vasopressin also may protect against physiologically “shutting down” in the face of danger. In many mammalian species, mothers exhibit agonistic behaviors in defense of their young, possibly through the interactive actions of vasopressin and oxytocin (Bosch & Neumann, 2012). Prior to mating, prairie voles are generally social, even toward strangers. However, within a day or so of mating, they begin to show high levels of aggression toward intruders (Carter, DeVries, & Getz, 1995), possibly serving to protect or guard a mate, family, or territory. This mating-induced aggression is especially obvious in males. Oxytocin, in contrast, is associated with immobility without fear. This includes relaxed physiological states and postures that permit birth, lactation, and consensual sexual behavior. Although not essential for parenting, the increase of oxytocin associated with birth and lactation may make it easier for a woman to be less anxious around her newborn and to experience and express loving feelings for her child (Carter & Altemus, 1997). In highly social species such as prairie voles (Kenkel et al., 2013), and presumably in humans, the intricate molecular dances of oxytocin and vasopressin fine-tune the coexistence of caretaking and protective aggression.
Fatherhood also has a biological basis
The biology of fatherhood is less well studied than that of motherhood. However, male care of offspring also appears to rely on both oxytocin and vasopressin (Kenkel et al., 2012), probably acting in part through effects on the autonomic nervous system (Kenkel et al., 2013). Even sexually naïve male prairie voles show spontaneous parental behavior in the presence of an infant (Carter et al., 1995). However, the stimuli from infants or the nature of the social interactions that release oxytocin and vasopressin may differ between the sexes (Feldman, 2012).
At the heart of the benefits of love is a sense of safety
Parental care and support in a safe environment are particularly important for mental health in social mammals, including humans and prairie voles. Studies of rodents and of lactating women suggest that oxytocin has the important capacity to modulate the behavioral and autonomic distress that typically follows separation from a mother, child, or partner, reducing defensive behaviors and thereby supporting growth and health (Carter, 1998).
The absence of love in early life can be detrimental to mental and physical health
During early life in particular, trauma or neglect may produce behaviors and emotional states in humans that are socially pathological. Because the processes involved in creating social behaviors and social emotions are delicately balanced, these may be triggered in inappropriate contexts, leading to aggression toward friends or family.
Alternatively, bonds may be formed with prospective partners who fail to provide social support or protection.
Sex differences exist in the consequences of early life experiences
Males seem to be especially vulnerable to the negative effects of early experiences, possibly helping to explain the increased sensitivity of males to various developmental disorders. The implications of sex differences in the nervous system, and in the response to stressful experiences, for social behavior are only slowly becoming apparent (Carter et al., 2009). Both males and females produce vasopressin and oxytocin and are capable of responding to both hormones. However, in brain regions that are involved in defensive aggression, such as the extended amygdala and lateral septum, the production of vasopressin is androgen-dependent. Thus, in the face of a threat, males may be experiencing higher central levels of vasopressin. Oxytocin and vasopressin pathways, including the peptides and their receptors, are regulated by coordinated genetic, hormonal, and epigenetic factors that influence the adaptive and behavioral functions of these peptides across the animal’s life span. As a result, the endocrine and behavioral consequences of a stress or challenge may be different for males and females (DeVries, DeVries, Taymans, & Carter, 1996). For example, when unpaired prairie voles were exposed to an intense but brief stressor, such as a few minutes of swimming, or to an injection of the adrenal hormone corticosterone, the males (but not females) quickly formed new pair bonds. These and other experiments suggest that males and females have different coping strategies and may experience stressful events, and even love, in gender-specific ways. In the context of nature and evolution, sex differences in the nervous system are important. However, sex differences in brain and behavior also may help to explain gender differences in the vulnerability to mental and physical disorders (Taylor et al., 2000). Better understanding these differences will provide clues to the physiology of human mental health in both sexes.
Loving relationships in early life can have epigenetic consequences
Love is “epigenetic.” That is, positive experiences in early life can act upon and alter the expression of specific genes. These changes in gene expression may have behavioral consequences through simple biochemical changes, such as adding a methyl group to a particular site within the genome (Zhang & Meaney, 2010). It is possible that these changes in the genome may even be passed to the next generation. Social behaviors, emotional attachment to others, and long-lasting reciprocal relationships are plastic and adaptive, and so is the biology upon which they are based. For example, infants of traumatized or highly stressed parents might be chronically exposed to vasopressin, either through their own increased production of the peptide or through higher levels of vasopressin in maternal milk. Such increased exposure could sensitize the infant to defensive behaviors or create a lifelong tendency to overreact to threat. Based on research in rats, it seems that in response to adverse early experiences, such as chronic isolation, the genes for vasopressin receptors can become upregulated (Zhang et al., 2012), leading to an increased sensitivity to acute stressors or anxiety that may persist throughout life.
Epigenetic programming triggered by early life experiences is adaptive in allowing neuroendocrine systems to anticipate and plan for future behavioral demands. But epigenetic changes that are long-lasting also can create atypical social or emotional behaviors (Zhang & Meaney, 2010) that may be especially likely to surface in later life, and in the face of social or emotional challenges. Exposure to exogenous hormones in early life also may be epigenetic. For example, prairie voles treated postnatally with vasopressin (especially males) were later more aggressive, whereas those exposed to a vasopressin antagonist showed less aggression in adulthood. Similarly, in voles, the exposure of infants to slightly increased levels of oxytocin during development increased the tendency to show a pair bond. However, these studies also showed that a single exposure to a higher level of oxytocin in early life could disrupt the later capacity to pair bond (Carter et al., 2009). There is little doubt that either early social experiences or the effects of developmental exposure to these neuropeptides hold the potential to have long-lasting effects on behavior. Both parental care and exposure to oxytocin in early life can permanently modify hormonal systems, altering the capacity to form relationships and influencing the expression of love across the life span. Our preliminary findings in voles further suggest that early life experiences affect the methylation of the oxytocin receptor gene and its expression (Connelly, Kenkel, Erickson, & Carter, 2011). Thus, we can plausibly argue that love is epigenetic.
The absence of social interactions, or isolation, also has consequences for the oxytocin system
Given the power of positive social experiences, it is not surprising that a lack of social relationships also may lead to alterations in behavior as well as changes in oxytocin and vasopressin pathways. We have found that social isolation reduced the expression of the gene for the oxytocin receptor, and at the same time increased the expression of genes for the vasopressin peptide. In female prairie voles, isolation also was accompanied by an increase in blood levels of oxytocin, possibly as a coping mechanism. However, over time, isolated prairie voles of both sexes showed increases in measures of depression, anxiety, and physiological arousal, and these changes were observed even when endogenous oxytocin was elevated. Thus, even the hormonal insurance provided by endogenous oxytocin in the face of the chronic stress of isolation was not sufficient to dampen the consequences of living alone. Predictably, when isolated voles were given additional exogenous oxytocin, this treatment did restore many of these functions to normal (Grippo, Trahanas, Zimmerman, Porges, & Carter, 2009). In modern societies, humans can survive, at least after childhood, with little or no human contact. Communication technology, social media, electronic parenting, and many other recent technological advances may reduce social behaviors, placing both children and adults at risk for social isolation and disorders of the autonomic nervous system, including deficits in their capacity for social engagement and love (Porges, 2011).
Social engagement actually helps us to cope with stress
The same hormones and areas of the brain that increase the capacity of the body to survive stress also enable us to better adapt to an ever-changing social and physical environment.
Individuals with strong emotional support and relationships are more resilient in the face of stressors than those who feel isolated or lonely. Lesions in various bodily tissues, including the brain, heal more quickly in animals that are living socially than in those living in isolation (Karelina & DeVries, 2011). The protective effects of positive sociality seem to rely on the same cocktail of hormones that carries a biological message of “love” throughout the body.
Can love—or perhaps oxytocin—be a medicine?
Although research has only begun to examine the physiological effects of these peptides beyond social behavior, there is a wealth of new evidence showing that oxytocin can influence physiological responses to stress and injury. As only one example, the molecules associated with love have restorative properties, including the ability to literally heal a “broken heart.” Oxytocin receptors are expressed in the heart, and precursors for oxytocin appear to be critical for the development of the fetal heart (Danalache, Gutkowska, Slusarz, Berezowska, & Jankowski, 2010). Oxytocin exerts protective and restorative effects in part through its capacity to convert undifferentiated stem cells into cardiomyocytes. Oxytocin can facilitate adult neurogenesis and tissue repair, especially after a stressful experience. We now know that oxytocin has direct anti-inflammatory and antioxidant properties in in vitro models of atherosclerosis (Szeto et al., 2008). The heart seems to rely on oxytocin as part of a normal process of protection and self-healing. Thus, oxytocin exposure early in life not only regulates our ability to love and form social bonds, it also affects our health and well-being. Oxytocin modulates the hypothalamic–pituitary–adrenal (HPA) axis, especially in response to disruptions in homeostasis (Carter, 1998), and coordinates demands on the immune system and energy balance. Long-term, secure relationships provide emotional support and down-regulate reactivity of the HPA axis, whereas intense stressors, including birth, trigger activation of the HPA axis and sympathetic nervous system. The ability of oxytocin to regulate these systems probably explains the exceptional capacity of most women to cope with the challenges of childbirth and childrearing. Dozens of ongoing clinical trials are currently examining the therapeutic potential of oxytocin in disorders ranging from autism to heart disease. Of course, as in hormonal studies in voles, the effects are likely to depend on the history of the individual and the context, and to be dose-dependent. As this research emerges, a variety of individual differences and apparent discrepancies in the effects of exogenous oxytocin are being reported. Most of these studies do not include any information on the endogenous hormones, or on the oxytocin or vasopressin receptors, which are likely to affect the outcome of such treatments.
Conclusion
Research in this field is new and there is much left to understand. However, it is already clear that both love and oxytocin are powerful. Of course, with power comes responsibility. Although research into the mechanisms through which love—or hormones such as oxytocin—may protect us against stress and disease is in its infancy, this knowledge will ultimately increase our understanding of the way that our emotions impact health and disease. The same molecules that allow us to give and receive love also link our need for others with health and well-being.
Acknowledgments
C. Sue Carter and Stephen W.
Porges are both Professors of Psychiatry at the University of North Carolina, Chapel Hill, and also are Research Professors of Psychology at Northeastern University, Boston. Discussions of “love and forgiveness” with members of the Fetzer Institute’s Advisory Committee on Natural Sciences led to this essay and are gratefully acknowledged here. We are especially appreciative of thoughtful editorial input from Dr. James Harris. Studies from the authors’ laboratories were sponsored by the National Institutes of Health. We also express our gratitude for this support and to our colleagues, whose input and hard work informed the ideas expressed in this article. A version of this paper was previously published in EMBO Reports in the series on “Sex and Society”; this paper is reproduced with the permission of the publishers of that journal.
Outside Resources
Book: Carter, C. S., Ahnert, L., et al. (Eds.). (2006). Attachment and bonding: A new synthesis. Cambridge, MA: MIT Press.
Book: Porges, S. W. (2011). The polyvagal theory: Neurophysiological foundations of emotions, attachment, communication and self-regulation. New York, NY: Norton.
Web: Database of publicly and privately supported clinical studies of human participants conducted around the world. http://www.clinicaltrials.gov
Web: PubMed comprises over 22 million citations for biomedical literature from MEDLINE, life science journals, and online books. PubMed citations and abstracts include the fields of biomedicine and health, covering portions of the life sciences, behavioral sciences, chemical sciences, and bioengineering. PubMed also provides access to additional relevant web sites and links to the other NCBI molecular biology resources. http://www.ncbi.nlm.nih.gov/pubmed
Web: Website of author Stephen Porges http://www.stephenporges.com/
Discussion Questions
1. If love is so important in human behavior, why is it so hard to describe and understand?
2. Discuss the role of evolution in understanding what humans call “love” or other forms of prosociality.
3. What are the common biological and neuroendocrine elements that appear in maternal love and adult–adult relationships?
4. Oxytocin and vasopressin are biochemically similar. What are some of the differences between the actions of oxytocin and vasopressin?
5. How may the properties of oxytocin and vasopressin help us understand the biological bases of love?
6. What are common features of the biochemistry of “love” and “safety,” and why are these important to human health?
Vocabulary
Epigenetics: Heritable changes in gene activity that are not caused by changes in the DNA sequence. en.Wikipedia.org/wiki/Epigenetics
Oxytocin: A nine-amino-acid mammalian neuropeptide. Oxytocin is synthesized primarily in the brain, but also in other tissues such as the uterus, heart, and thymus, with local effects. Oxytocin is best known as a hormone of female reproduction due to its capacity to cause uterine contractions and eject milk. Oxytocin has effects on brain tissue, but also acts throughout the body, in some cases as an antioxidant or anti-inflammatory.
Vagus nerve: The 10th cranial nerve. The mammalian vagus has an older unmyelinated branch, which originates in the dorsal motor complex, and a more recently evolved, myelinated branch, with origins in the ventral vagal complex, including the nucleus ambiguus. The vagus is the primary source of autonomic-parasympathetic regulation for various internal organs, including the heart, lungs, and other parts of the viscera.
The vagus nerve is primarily sensory (afferent), transmitting abundant visceral input to the central nervous system.
Vasopressin: A nine-amino-acid mammalian neuropeptide. Vasopressin is synthesized primarily in the brain, but also may be made in other tissues. Vasopressin is best known for its effects on the cardiovascular system (increasing blood pressure) and on the kidneys (causing water retention). Vasopressin has effects on brain tissue, but also acts throughout the body.
By Ian Weaver Dalhousie University
Early life experiences exert a profound and long-lasting influence on physical and mental health throughout life. The efforts to identify the primary causes of this have significantly benefited from studies of the epigenome—a dynamic layer of information associated with DNA that differs between individuals and can be altered through various experiences and environments. The epigenome has been heralded as a key “missing piece” of the etiological puzzle for understanding how development of psychological disorders may be influenced by the surrounding environment, in concordance with the genome. Understanding the mechanisms involved in the initiation, maintenance, and heritability of epigenetic states is thus an important aspect of research in current biology, particularly in the study of learning and memory, emotion, and social behavior in humans. Moreover, epigenetics in psychology provides a framework for understanding how the expression of genes is influenced by experiences and the environment to produce individual differences in behavior, cognition, personality, and mental health. In this module, we survey recent developments revealing epigenetic aspects of mental health and review some of the challenges of epigenetic approaches in psychology to help explain how nurture shapes nature.
2.07: The Nature-Nurture Question
By Eric Turkheimer University of Virginia
People have a deep intuition about what has been called the “nature–nurture question.” Some aspects of our behavior feel as though they originate in our genetic makeup, while others feel like the result of our upbringing or our own hard work. The scientific field of behavior genetics attempts to study these differences empirically, either by examining similarities among family members with different degrees of genetic relatedness, or, more recently, by studying differences in the DNA of people with different behavioral traits. The scientific methods that have been developed are ingenious, but often inconclusive. Many of the difficulties encountered in the empirical science of behavior genetics turn out to be conceptual, and our intuitions about nature and nurture get more complicated the harder we think about them. In the end, it is an oversimplification to ask how “genetic” some particular behavior is. Genes and environments always combine to produce behavior, and the real science is in the discovery of how they combine for a given behavior.
Learning objectives
• Understand what the nature–nurture debate is and why the problem fascinates us.
• Understand why nature–nurture questions are difficult to study empirically.
• Know the major research designs that can be used to study nature–nurture questions.
• Appreciate the complexities of nature–nurture and why questions that seem simple turn out not to have simple answers.
Introduction
There are three related problems at the intersection of philosophy and science that are fundamental to our understanding of our relationship to the natural world: the mind–body problem, the free will problem, and the nature–nurture problem. These great questions have a lot in common. Everyone, even those without much knowledge of science or philosophy, has opinions about the answers to these questions that come simply from observing the world we live in. Our feelings about our relationship with the physical and biological world often seem incomplete.
We are in control of our actions in some ways, but at the mercy of our bodies in others; it feels obvious that our consciousness is some kind of creation of our physical brains, yet at the same time we sense that our awareness must go beyond just the physical. This incomplete knowledge of our relationship with nature leaves us fascinated and a little obsessed, like a cat that climbs into a paper bag and then out again, over and over, mystified every time by a relationship between inner and outer that it can see but can’t quite understand. It may seem obvious that we are born with certain characteristics while others are acquired, and yet of the three great questions about humans’ relationship with the natural world, only nature–nurture gets referred to as a “debate.” In the history of psychology, no other question has caused so much controversy and offense: We are so concerned with nature–nurture because our very sense of moral character seems to depend on it. While we may admire the athletic skills of a great basketball player, we think of his height as simply a gift, a payoff in the “genetic lottery.” For the same reason, no one blames a short person for his height, or blames someone’s congenital disability on poor decisions: To state the obvious, it’s “not their fault.” But we do praise the concert violinist (and perhaps her parents and teachers as well) for her dedication, just as we condemn cheaters, slackers, and bullies for their bad behavior. The problem is, most human characteristics aren’t usually as clear-cut as height or instrument-mastery, affirming our nature–nurture expectations strongly one way or the other. In fact, even the great violinist might have some inborn qualities—perfect pitch, or long, nimble fingers—that support and reward her hard work. And the basketball player might have eaten a diet while growing up that promoted his genetic tendency to be tall. When we think about our own qualities, they seem under our control in some respects, yet beyond our control in others. And often the traits that don’t seem to have an obvious cause are the ones that concern us the most and are far more personally significant. What about how much we drink or worry? What about our honesty, or religiosity, or sexual orientation? They all come from that uncertain zone, neither fixed by nature nor totally under our own control. One major problem with answering nature–nurture questions about people is: how do you set up an experiment? In nonhuman animals, there are relatively straightforward experiments for tackling nature–nurture questions. Say, for example, you are interested in aggressiveness in dogs. You want to test for the more important determinant of aggression: being born to aggressive dogs or being raised by them. You could mate two aggressive dogs—angry Chihuahuas—together, and mate two nonaggressive dogs—happy beagles—together, then switch half the puppies from each litter between the different sets of parents to raise. You would then have puppies born to aggressive parents (the Chihuahuas) but raised by nonaggressive parents (the beagles), and vice versa, in litters that mirror each other in puppy distribution. The big questions are: Would the Chihuahua parents raise aggressive beagle puppies? Would the beagle parents raise nonaggressive Chihuahua puppies? Would the puppies’ nature win out, regardless of who raised them? Or... would the result be a combination of nature and nurture?
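One way to see the logic of this cross-fostering design is to simulate it. The sketch below (plain Python with NumPy; the weights, scores, and function names are invented purely for illustration) generates aggression scores for the swapped puppies under three toy hypotheses, showing what pattern each hypothesis would predict in the data.

```python
# Hypothetical simulation of the cross-fostering design described above.
# "born_aggressive" encodes genetics, "raised_aggressive" encodes rearing;
# the weights w_nature and w_nurture are made-up hypothesis parameters.
import numpy as np

rng = np.random.default_rng(2)

def aggression(born_aggressive, raised_aggressive, w_nature, w_nurture):
    noise = rng.normal(0, 0.1)  # small random individual variation
    return w_nature * born_aggressive + w_nurture * raised_aggressive + noise

for label, w_nat, w_nur in [("pure nature", 1.0, 0.0),
                            ("pure nurture", 0.0, 1.0),
                            ("both combine", 0.5, 0.5)]:
    chi_with_beagles = aggression(1, 0, w_nat, w_nur)   # Chihuahua pup, beagle parents
    beagle_with_chis = aggression(0, 1, w_nat, w_nur)   # beagle pup, Chihuahua parents
    print(f"{label}: Chihuahua pup raised by beagles = {chi_with_beagles:.2f}, "
          f"beagle pup raised by Chihuahuas = {beagle_with_chis:.2f}")
```

Under "pure nature" the swapped puppies keep their birth parents' temperament; under "pure nurture" they take on their adoptive parents'; under "both combine" they land in between, which is exactly the pattern the design is built to detect.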
Much of the most significant nature–nurture research has been done in this way (Scott & Fuller, 1998), and animal breeders have been doing it successfully for thousands of years. In fact, it is fairly easy to breed animals for behavioral traits. With people, however, we can’t assign babies to parents at random, or select parents with certain behavioral characteristics to mate, merely in the interest of science (though history does include horrific examples of such practices, in misguided attempts at “eugenics,” the shaping of human characteristics through intentional breeding). In typical human families, children’s biological parents raise them, so it is very difficult to know whether children act like their parents due to genetic (nature) or environmental (nurture) reasons. Nevertheless, despite our restrictions on setting up human-based experiments, we do see real-world examples of nature-nurture at work in the human sphere—though they only provide partial answers to our many questions. The science of how genes and environments work together to influence behavior is called behavioral genetics. The easiest opportunity we have to observe this is the adoption study. When children are put up for adoption, the parents who give birth to them are no longer the parents who raise them. This setup isn’t quite the same as the experiments with dogs (children aren’t assigned to random adoptive parents in order to suit the particular interests of a scientist) but adoption still tells us some interesting things, or at least confirms some basic expectations. For instance, if the biological child of tall parents were adopted into a family of short people, do you suppose the child’s growth would be affected? What about the biological child of a Spanish-speaking family adopted at birth into an English-speaking family? What language would you expect the child to speak? And what might these outcomes tell you about the difference between height and language in terms of nature-nurture? Another option for observing nature-nurture in humans involves twin studies. There are two types of twins: monozygotic (MZ) and dizygotic (DZ). Monozygotic twins, also called “identical” twins, result from a single zygote (fertilized egg) and have the same DNA. They are essentially clones. Dizygotic twins, also known as “fraternal” twins, develop from two zygotes and share 50% of their DNA. Fraternal twins are ordinary siblings who happen to have been born at the same time. To analyze nature–nurture using twins, we compare the similarity of MZ and DZ pairs. Sticking with the features of height and spoken language, let’s take a look at how nature and nurture apply: Identical twins, unsurprisingly, are almost perfectly similar for height. The heights of fraternal twins, however, are like any other sibling pairs: more similar to each other than to people from other families, but hardly identical. This contrast between twin types gives us a clue about the role genetics plays in determining height. Now consider spoken language. If one identical twin speaks Spanish at home, the co-twin with whom she is raised almost certainly does too. But the same would be true for a pair of fraternal twins raised together. In terms of spoken language, fraternal twins are just as similar as identical twins, so it appears that the genetic match of identical twins doesn’t make much difference. 
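The contrast between these two traits can be captured with a simple calculation. A classic rule of thumb in quantitative genetics, Falconer's formula, estimates heritability as twice the difference between identical-twin and fraternal-twin correlations. The sketch below applies it to made-up correlation values chosen to mirror the height and language examples; the numbers are illustrative assumptions, not real data.

```python
# Falconer's formula: heritability h^2 = 2 * (r_MZ - r_DZ), where r_MZ and
# r_DZ are the trait correlations for identical and fraternal twin pairs.
# (A companion estimate for shared environment is c^2 = 2*r_DZ - r_MZ.)

def falconer_heritability(r_mz: float, r_dz: float) -> float:
    return 2 * (r_mz - r_dz)

# Illustrative (made-up) correlations in the spirit of the examples above:
r_mz_height, r_dz_height = 0.90, 0.45  # MZ nearly identical; DZ like ordinary siblings
r_mz_lang, r_dz_lang = 0.99, 0.99      # spoken language: MZ and DZ equally similar

print(falconer_heritability(r_mz_height, r_dz_height))  # 0.90 -> strong genetic signal
print(falconer_heritability(r_mz_lang, r_dz_lang))      # 0.00 -> no genetic signal
```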
Twin and adoption studies are two instances of a much broader class of methods for observing nature–nurture called quantitative genetics, the scientific discipline in which similarities among individuals are analyzed based on how biologically related they are. We can do these studies with siblings and half-siblings, cousins, twins who have been separated at birth and raised separately (Bouchard, Lykken, McGue, & Segal, 1990; such twins are very rare and play a smaller role than is commonly believed in the science of nature–nurture), or with entire extended families (see Plomin, DeFries, Knopik, & Neiderhiser, 2012, for a complete introduction to research methods relevant to nature–nurture). For better or for worse, contentions about nature–nurture have intensified because quantitative genetics produces a number called a heritability coefficient, varying from 0 to 1, that is meant to provide a single measure of genetics’ influence on a trait. In a general way, a heritability coefficient measures how strongly differences among individuals are related to differences among their genes. But beware: Heritability coefficients, although simple to compute, are deceptively difficult to interpret. Nevertheless, numbers that provide simple answers to complicated questions tend to have a strong influence on the human imagination, and a great deal of time has been spent discussing whether the heritability of intelligence or personality or depression is equal to one number or another. One reason nature–nurture continues to fascinate us so much is that we live in an era of great scientific discovery in genetics, comparable to the times of Copernicus, Galileo, and Newton with regard to astronomy and physics. Every day, it seems, new discoveries are made, new possibilities proposed. When Francis Galton first started thinking about nature–nurture in the late 19th century, he was very influenced by his cousin, Charles Darwin, but genetics per se was unknown. Mendel’s famous work with peas, conducted at about the same time, went unnoticed for more than 30 years; quantitative genetics was developed in the 1920s; the structure of DNA was discovered by Watson and Crick in the 1950s; the human genome was completely sequenced at the turn of the 21st century; and we are now on the verge of being able to obtain the specific DNA sequence of anyone at a relatively low cost. No one knows what this new genetic knowledge will mean for the study of nature–nurture, but as we will see in the next section, answers to nature–nurture questions have turned out to be far more difficult and mysterious than anyone imagined.
What Have We Learned About Nature–Nurture?
It would be satisfying to be able to say that nature–nurture studies have given us conclusive and complete evidence about where traits come from, with some traits clearly resulting from genetics and others almost entirely from environmental factors, such as childrearing practices and personal will; but that is not the case. Instead, everything has turned out to have some footing in genetics. The more genetically related people are, the more similar they are—for everything: height, weight, intelligence, personality, mental illness, etc. Sure, it seems like common sense that some traits have a genetic bias. For example, adopted children resemble their biological parents even if they have never met them, and identical twins are more similar to each other than are fraternal twins.
And while certain psychological traits, such as personality or mental illness (e.g., schizophrenia), seem plausibly influenced by genetics, it turns out that the same is true for political attitudes, how much television people watch (Plomin, Corley, DeFries, & Fulker, 1990), and whether or not they get divorced (McGue & Lykken, 1992).

It may seem surprising, but genetic influence on behavior is a relatively recent discovery. In the middle of the 20th century, psychology was dominated by the doctrine of behaviorism, which held that behavior could only be explained in terms of environmental factors. Psychiatry concentrated on psychoanalysis, which probed for roots of behavior in individuals' early life-histories. The truth is, neither behaviorism nor psychoanalysis is incompatible with genetic influences on behavior, and neither Freud nor Skinner was naive about the importance of organic processes in behavior. Nevertheless, in their day it was widely thought that children's personalities were shaped entirely by imitating their parents' behavior, and that schizophrenia was caused by certain kinds of "pathological mothering." Whatever the outcome of our broader discussion of nature–nurture, the basic fact that the best predictors of an adopted child's personality or mental health are found in the biological parents he or she has never met, rather than in the adoptive parents who raised him or her, presents a significant challenge to purely environmental explanations of personality or psychopathology. The message is clear: You can't leave genes out of the equation. But keep in mind, no behavioral traits are completely inherited, so you can't leave the environment out altogether, either.

Trying to untangle the various ways nature and nurture influence human behavior can be messy, and often common-sense notions can get in the way of good science. One very significant contribution of behavioral genetics, one that has changed psychology for good, is helpful to keep in mind: When your subjects are biologically related, no matter how clearly a situation may seem to point to environmental influence, it is never safe to interpret a behavior as wholly the result of nurture without further evidence. For example, when presented with data showing that children whose mothers read to them often are likely to have better reading scores in third grade, it is tempting to conclude that reading to your kids out loud is important to success in school; this may well be true, but the study as described is inconclusive, because there are genetic as well as environmental pathways between the parenting practices of mothers and the abilities of their children (the simulation sketched below makes this concrete). This is a case where "correlation does not imply causation," as they say. To establish that reading aloud causes success, a scientist can either study the problem in adoptive families (in which the genetic pathway is absent) or find a way to randomly assign children to oral reading conditions.

The outcomes of nature–nurture studies have fallen short of our expectations (of establishing clear-cut bases for traits) in many ways. The most disappointing outcome has been the inability to organize traits from more- to less-genetic.
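Here is a minimal sketch of that confound, with all numbers invented for illustration: mothers' verbal ability drives both how often they read aloud and, through inheritance, their children's scores, while reading aloud itself is given no causal effect at all. A clear correlation still appears.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# A verbal-ability genotype, shared (imperfectly) between mother and child.
mother_genes = rng.normal(size=n)
child_genes = 0.5 * mother_genes + np.sqrt(0.75) * rng.normal(size=n)

# Mothers with higher verbal ability read aloud more often, but in this
# toy world reading aloud has NO causal effect on the child's score.
reading_aloud = mother_genes + rng.normal(size=n)
child_score = child_genes + rng.normal(size=n)

r = np.corrcoef(reading_aloud, child_score)[0, 1]
print(f"reading-aloud vs. reading-score correlation: {r:.2f}")  # about .25
# The correlation arises purely through the genetic pathway, which is why
# the observational study described above is inconclusive on its own.
```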
As noted earlier, everything has turned out to be at least somewhat heritable (passed down), yet nothing has turned out to be absolutely heritable, and there hasn't been much consistency as to which traits are more heritable and which are less heritable once other considerations (such as how accurately the trait can be measured) are taken into account (Turkheimer, 2000). The problem is conceptual: The heritability coefficient, and, in fact, the whole quantitative structure that underlies it, does not match up with our nature–nurture intuitions. We want to know how "important" the roles of genes and environment are to the development of a trait, but in focusing on "important" maybe we're emphasizing the wrong thing. First of all, genes and environment are both crucial to every trait; without genes the environment would have nothing to work on, and genes, in turn, cannot develop in a vacuum. Even more important, because nature–nurture questions look at the differences among people, the cause of a given trait depends not only on the trait itself, but also on the differences in that trait between members of the group being studied.

The classic example of the heritability coefficient defying intuition is the trait of having two arms. No one would argue against the development of arms being a biological, genetic process. But fraternal twins are just as similar for "two-armedness" as identical twins, resulting in a heritability coefficient of zero (strictly speaking, an undefined one, since there are no individual differences to apportion). Normally, according to the heritability model, this result would suggest all nurture, no nature, but we know that's not the case. The reason this result is not a tip-off that arm development is less genetic than we imagine is that people do not vary in the genes related to arm development, which essentially upends the heritability formula. In fact, in this instance, the opposite is likely true: to the extent that people differ in arm number, the difference is likely the result of accidents and, therefore, environmental. For reasons like these, we always have to be very careful when asking nature–nurture questions, especially when we try to express the answer in terms of a single number. The heritability of a trait is not simply a property of that trait, but a property of the trait in a particular context of relevant genes and environmental factors.

Another issue with the heritability coefficient is that it divides traits' determinants into two portions, genes and environment, which are then summed to account for the total variability. This is a little like asking how much of the experience of a symphony comes from the horns and how much from the strings; the ways instruments, or genes, integrate is more complex than that. It turns out to be the case that, for many traits, genetic differences affect behavior under some environmental circumstances but not others, a phenomenon called gene–environment interaction, or G x E. In one well-known example, Caspi et al. (2002) showed that among maltreated children, those who carried a particular allele of the MAOA gene showed a predisposition to violence and antisocial behavior, while those with other alleles did not; in children who had not been maltreated, the allele had no effect (a toy simulation of this pattern follows below). Making matters even more complicated are very recent studies of what is known as epigenetics (see module, "Epigenetics" http://noba.to/37p5cb8v), a process in which the DNA itself is modified by environmental events, and those changes can then be transmitted to children.
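A toy simulation can make the shape of the Caspi et al. pattern concrete. Everything below (the prevalence figures, the effect size) is invented for illustration; the only point is the interaction itself, namely that the allele matters only when maltreatment is present.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000

maltreated = rng.random(n) < 0.10    # hypothetical: ~10% maltreated
risk_allele = rng.random(n) < 0.30   # hypothetical: ~30% carry the allele

# Antisocial-behavior score: background noise plus an allele effect that
# operates ONLY when maltreatment is present (the G x E interaction).
score = rng.normal(size=n) + 1.5 * (maltreated & risk_allele)

for group, label in ((False, "not maltreated"), (True, "maltreated")):
    effect = (score[(maltreated == group) & risk_allele].mean()
              - score[(maltreated == group) & ~risk_allele].mean())
    print(f"{label}: allele effect on score = {effect:+.2f}")
# not maltreated: allele effect ~ +0.00  (the gene is behaviorally silent)
# maltreated:     allele effect ~ +1.50  (the gene matters only under adversity)
```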
Some common questions about nature–nurture are, how susceptible is a trait to change, how malleable is it, and do we "have a choice" about it? These questions are much more complex than they may seem at first glance. For example, phenylketonuria is an inborn error of metabolism caused by a single gene; it prevents the body from metabolizing the amino acid phenylalanine. Untreated, it causes intellectual disability and death. But it can be treated effectively by a straightforward environmental intervention: avoiding foods containing phenylalanine. Height seems like a trait firmly rooted in our nature and unchangeable, but the average height of many populations in Asia and Europe has increased significantly in the past 100 years, due to changes in diet and the alleviation of poverty.

Even the most modern genetics has not provided definitive answers to nature–nurture questions. When it was first becoming possible to measure the DNA sequences of individual people, it was widely thought that we would quickly progress to finding the specific genes that account for behavioral characteristics, but that hasn't happened. There are a few rare genes that have been found to have significant (almost always negative) effects, such as the single gene that causes Huntington's disease, or the Apolipoprotein E gene variant associated with greatly elevated risk of dementia in a small percentage of Alzheimer's cases. Aside from these rare genes of great effect, however, the genetic impact on behavior is broken up over many genes, each with very small effects. For most behavioral traits, the effects are so small and distributed across so many genes that we have not been able to catalog them in a meaningful way (the sketch below illustrates why). In fact, the same is true of environmental effects. We know that extreme environmental hardship causes catastrophic effects for many behavioral outcomes, but fortunately extreme environmental hardship is very rare. Within the normal range of environmental events, the factors responsible for differences (e.g., why some children in a suburban third-grade classroom perform better than others) are much more difficult to grasp.

The difficulties with finding clear-cut solutions to nature–nurture problems bring us back to the other great questions about our relationship with the natural world: the mind–body problem and free will. Investigations into what we mean when we say we are aware of something reveal that consciousness is not simply the product of a particular area of the brain, nor does choice turn out to be an orderly activity that we can apply to some behaviors but not others. So it is with nature and nurture: What at first may seem to be a straightforward matter, able to be indexed with a single number, becomes more and more complicated the closer we look. The many questions we can ask about the intersection among genes, environments, and human traits (how sensitive are traits to environmental change, and how common are those influential environments; are parents or culture more relevant; how sensitive are traits to differences in genes, and how much do the relevant genes vary in a particular population; does the trait involve a single gene or a great many genes; is the trait more easily described in genetic or more-complex behavioral terms?) may have different answers, and the answer to one tells us little about the answers to the others.
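To see why cataloging thousands of tiny genetic effects is so hard, consider a minimal sketch in which a trait is built from a couple of thousand variants. The counts and effect sizes are hypothetical; the point is only how little any single variant explains, even when the variants jointly account for a sizable share of the variance.

```python
import numpy as np

rng = np.random.default_rng(3)
n_people, n_variants = 5_000, 2_000

# Genotypes: each person carries 0, 1, or 2 copies of each variant.
genotypes = rng.binomial(2, 0.3, size=(n_people, n_variants))

# Every variant gets a tiny hypothetical effect; the trait is their sum
# plus environmental noise.
effects = rng.normal(scale=0.02, size=n_variants)
trait = genotypes @ effects + rng.normal(size=n_people)

# The share of trait variance explained by any single variant is minuscule,
# far below what a study of ordinary size could reliably detect.
r = np.corrcoef(genotypes[:, 0], trait)[0, 1]
print(f"variance explained by one variant: {r**2:.4%}")  # a tiny fraction of 1%
```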
It is tempting to predict that as we come to understand more about the wide-ranging effects of genetic differences on all human characteristics, especially behavioral ones, our cultural, ethical, legal, and personal ways of thinking about ourselves will have to undergo profound changes in response. Perhaps criminal proceedings will consider genetic background. Parents, presented with the genetic sequence of their children, will be faced with difficult decisions about reproduction. These hopes or fears are often exaggerated. In some ways, our thinking may need to change; for example, when we consider the meaning behind the fundamental American principle that all men are created equal. Human beings differ, and like all evolved organisms they differ genetically. The Declaration of Independence predates Darwin and Mendel, but it is hard to imagine that Jefferson, whose genius encompassed botany as well as moral philosophy, would have been alarmed to learn about the genetic diversity of organisms. One of the most important things modern genetics has taught us is that almost all human behavior is too complex to be nailed down, even from the most complete genetic information, unless we're looking at identical twins. The science of nature and nurture has demonstrated that genetic differences among people are vital to human moral equality, freedom, and self-determination, not opposed to them. As Mordecai Kaplan said about the role of the past in Jewish theology, genetics gets a vote, not a veto, in the determination of human behavior. We should indulge our fascination with nature–nurture while resisting the temptation to oversimplify it.

Outside Resources

Web: Institute for Behavioral Genetics
http://www.colorado.edu/ibg/

Discussion Questions

1. Is your personality more like one of your parents than the other? If you have a sibling, is his or her personality like yours? In your family, how did these similarities and differences develop? What do you think caused them?
2. Can you think of a human characteristic for which genetic differences would play almost no role? Defend your choice.
3. Do you think the time will come when we will be able to predict almost everything about someone by examining their DNA on the day they are born?
4. Identical twins are more similar than fraternal twins for the trait of aggressiveness, as well as for criminal behavior. Do these facts have implications for the courtroom? If it can be shown that a violent criminal had violent parents, should it make a difference in culpability or sentencing?

Vocabulary

Adoption study
A behavior genetic research method that involves comparison of adopted children to their adoptive and biological parents.

Behavioral genetics
The empirical science of how genes and environments combine to generate behavior.

Heritability coefficient
An easily misinterpreted statistical construct that purports to measure the role of genetics in the explanation of differences among individuals.

Quantitative genetics
Scientific and mathematical methods for inferring genetic and environmental processes based on the degree of genetic and environmental similarity among organisms.

Twin studies
A behavior genetic research method that involves comparison of the similarity of identical (monozygotic; MZ) and fraternal (dizygotic; DZ) twins.