This chapter explores how the presence of other people influences the behavior of individuals, dyads, and groups. Social factors can determine whether human behavior tends toward conflict or harmony.
• Introduction Humans are diverse, and our differences often make it challenging for us to get along. A poignant example is that of Trayvon Martin, a 17-year-old African American who was shot by a neighborhood watch volunteer, George Zimmerman, in a predominantly White neighborhood in 2012. Zimmerman grew suspicious of the boy dressed in a hoodie and pursued Martin. A physical altercation ended with Zimmerman fatally shooting Martin. Zimmerman claimed that he acted in self-defense; Martin was unarmed.
• 12.1: What Is Social Psychology? Social psychology examines how people affect one another, and it looks at the power of the situation. Social psychologists assert that an individual’s thoughts, feelings, and behaviors are very much influenced by social situations. Essentially, people will change their behavior to align with the social situation at hand. If we are in a new situation or are unsure how to behave, we will take our cues from other individuals.
• 12.2: Self-presentation Social psychology is the study of how people affect one another’s thoughts, feelings, and behaviors. We have discussed situational perspectives and social psychology’s emphasis on the ways in which a person’s environment, including culture and other social influences, affect behavior. In this section, we examine situational forces that have a strong influence on human behavior including social roles, social norms, and scripts.
• 12.3: Attitudes and Persuasion Attitude is our evaluation of a person, an idea, or an object. We have attitudes for many things ranging from products that we might pick up in the supermarket to people around the world to political policies. Typically, attitudes are favorable or unfavorable: positive or negative (Eagly & Chaiken, 1993). And, they have three components: an affective component (feelings), a behavioral component (the effect of the attitude on behavior), and a cognitive component (belief and knowledge).
• 12.4: Conformity, Compliance, and Obedience In this section, we discuss additional ways in which people influence others. The topics of conformity, social influence, obedience, and group processes demonstrate the power of the social situation to change our thoughts, feelings, and behaviors. We begin this section with a discussion of a famous social psychology experiment that demonstrated how susceptible humans are to outside social pressures.
• 12.5: Prejudice and Discrimination Human conflict can result in crime, war, and mass murder, such as genocide. Prejudice and discrimination often are root causes of human conflict, which explains how strangers come to hate one another to the extreme of causing others harm. Prejudice and discrimination affect everyone. In this section we will examine the definitions of prejudice and discrimination, examples of these concepts, and causes of these biases.
• 12.6: Aggression People can work together to achieve great things, such as helping each other in emergencies: recall the heroism displayed during the 9/11 terrorist attacks. People also can do great harm to one another, such as conforming to group norms that are immoral and obeying authority to the point of murder: consider the mass conformity of Nazis during WWII. In this section we will discuss a negative side of human behavior—aggression.
• 12.7: Prosocial Behavior Researchers have documented several features of the situation that influence whether we form relationships with others. There are also universal traits that humans find attractive in others. In this section we discuss conditions that make forming relationships more likely, what we look for in friendships and romantic relationships, the different types of love, and a theory explaining how our relationships are formed, maintained, and terminated.
• Critical Thinking Questions
• Key Terms
• Personal Application Questions
• Review Questions
• Summary
Thumbnail: The Scream by Edvard Munch.
12: Social Psychology
Chapter Outline
12.1 What Is Social Psychology?
12.2 Self-presentation
12.3 Attitudes and Persuasion
12.4 Conformity, Compliance, and Obedience
12.5 Prejudice and Discrimination
12.6 Aggression
12.7 Prosocial Behavior
On the night of February 26, 2012, Trayvon Martin, a 17-year-old African American high school student, was shot by a neighborhood watch volunteer, George Zimmerman, in a predominantly White neighborhood. Zimmerman observed the boy dressed in a hoodie and pursued Martin. Zimmerman called the police to report a person acting suspiciously, which he had done on other occasions. According to the 911 call transcript, Zimmerman said on the call, "[expletive] punks. These [expletive], they always get away." The 911 operator told Zimmerman not to follow the teen, as was also stated in the police neighborhood watch guidelines that had been provided to Zimmerman. A physical altercation ended with Zimmerman fatally shooting Martin. Zimmerman claimed that he acted in self-defense. Martin was unarmed, and after his death, there was a nationwide outcry. A Florida jury found Zimmerman not guilty of second-degree murder and not guilty of manslaughter.
There have also been tragic situations with deadly consequences in which police officers have shot innocent civilians. In 2019, Atatiana Jefferson's neighbor used a non-emergency line to call the police because Jefferson's front door was open in the late hours of the night. The police arrived, and an officer went to the back of the yard. Jefferson, not knowing that the police had been called, reached into her purse and got out her legally owned gun. The officer perceived a threat and fired upon Jefferson, killing her. Her 8-year-old nephew, who was playing video games with his aunt, witnessed the incident.
Why did each of these nights end so tragically for those involved? What dynamics contributed to the outcomes? How can these deaths be prevented? Social psychologists examine how the presence of others impacts how a person behaves and reacts, whether that person is an athlete playing a game, a police officer on the job, or a worshiper attending a religious service. Social psychologists believe that a person's behavior is influenced by who else is present in a given situation and the composition of social groups.
Learning Objectives • Define social psychology • Describe situational versus dispositional influences on behavior • Describe the fundamental attribution error • Explain actor-observer bias • Describe self-serving bias • Explain the just-world hypothesis Social psychology examines how people affect one another, and it looks at the power of the situation. Social psychologists assert that an individual’s thoughts, feelings, and behaviors are very much influenced by social situations. Essentially, people will change their behavior to align with the social situation at hand. If we are in a new situation or are unsure how to behave, we will take our cues from other individuals. The field of social psychology studies topics at both the intra- and interpersonal levels. Intrapersonal topics (those that pertain to the individual) include emotions and attitudes, the self, and social cognition (the ways in which we think about ourselves and others). Interpersonal topics (those that pertain to dyads and groups) include helping behavior (See figure \(1\)), aggression, prejudice and discrimination, attraction and close relationships, and group processes and intergroup relationships. Social psychologists focus on how people construe or interpret situations and how these interpretations influence their thoughts, feelings, and behaviors (Ross & Nisbett, 1991). Thus, social psychology studies individuals in a social context and how situational variables interact to influence behavior. In this chapter, we discuss the intrapersonal processes of self-presentation, cognitive dissonance and attitude change, and the interpersonal processes of conformity and obedience, aggression and altruism, and, finally, love and attraction. Situational and Dispositional Influences on Behavior Behavior is a product of both the situation (e.g., cultural influences, social roles, and the presence of bystanders) and of the person (e.g., personality characteristics). Subfields of psychology tend to focus on one influence or behavior over others. Situationism is the view that our behavior and actions are determined by our immediate environment and surroundings. In contrast, dispositionism holds that our behavior is determined by internal factors (Heider, 1958). An internal factor is an attribute of a person and includes personality traits and temperament. Social psychologists have tended to take the situationist perspective, whereas personality psychologists have promoted the dispositionist perspective. Modern approaches to social psychology, however, take both the situation and the individual into account when studying human behavior (Fiske, Gilbert, & Lindzey, 2010). In fact, the field of social-personality psychology has emerged to study the complex interaction of internal and situational factors that affect human behavior (Mischel, 1977; Richard, Bond, & Stokes-Zoota, 2003). Fundamental Attribution Error In the United States, the predominant culture tends to favor a dispositional approach in explaining human behavior. Why do you think this is? We tend to think that people are in control of their own behaviors, and, therefore, any behavior change must be due to something internal, such as their personality, habits, or temperament. According to some social psychologists, people tend to overemphasize internal factors as explanations—or attributions—for the behavior of other people. 
They tend to assume that the behavior of another person is a trait of that person, and to underestimate the power of the situation on the behavior of others. They tend to fail to recognize when the behavior of another is due to situational variables, and thus to the person’s state. This erroneous assumption is called the fundamental attribution error (Ross, 1977; Riggio & Garcia, 2009). To better understand, imagine this scenario: Greg returns home from work, and upon opening the front door his wife happily greets him and inquires about his day. Instead of greeting his wife, Greg yells at her, “Leave me alone!” Why did Greg yell at his wife? How would someone committing the fundamental attribution error explain Greg’s behavior? The most common response is that Greg is a mean, angry, or unfriendly person (his traits). This is an internal or dispositional explanation. However, imagine that Greg was just laid off from his job due to company downsizing. Would your explanation for Greg’s behavior change? Your revised explanation might be that Greg was frustrated and disappointed for losing his job; therefore, he was in a bad mood (his state). This is now an external or situational explanation for Greg’s behavior. The fundamental attribution error is so powerful that people often overlook obvious situational influences on behavior. A classic example was demonstrated in a series of experiments known as the quizmaster study (Ross, Amabile, & Steinmetz, 1977). Student participants were randomly assigned to play the role of a questioner (the quizmaster) or a contestant in a quiz game. Questioners developed difficult questions to which they knew the answers, and they presented these questions to the contestants. The contestants answered the questions correctly only \(4\) out of \(10\) times (See fig. 12.3). After the task, the questioners and contestants were asked to rate their own general knowledge compared to the average student. Questioners did not rate their general knowledge higher than the contestants, but the contestants rated the questioners’ intelligence higher than their own. In a second study, observers of the interaction also rated the questioner as having more general knowledge than the contestant. The obvious influence on performance is the situation. The questioners wrote the questions, so of course they had an advantage. Both the contestants and observers made an internal attribution for the performance. They concluded that the questioners must be more intelligent than the contestants. The halo effect refers to the tendency to let the overall impression of an individual color the way in which we feel about their character. For instance, we might assume that people who are physically attractive are more likely to be good people than less attractive individuals. Another example of how the halo effect might manifest would involve assuming that someone whom we perceive to be outgoing or friendly has a better moral character than someone who is not. As demonstrated in the example above, the fundamental attribution error is considered a powerful influence in how we explain the behaviors of others. However, it should be noted that some researchers have suggested that the fundamental attribution error may not be as powerful as it is often portrayed. 
In fact, a recent review of more than \(173\) published studies suggests that several factors (e.g., high levels of idiosyncrasy of the character and how well hypothetical events are explained) play a role in determining just how influential the fundamental attribution error is (Malle, 2006). Is the Fundamental Attribution Error a Universal Phenomenon? You may be able to think of examples of the fundamental attribution error in your life. Do people in all cultures commit the fundamental attribution error? Research suggests that they do not. People from an individualistic culture, that is, a culture that focuses on individual achievement and autonomy, have the greatest tendency to commit the fundamental attribution error. Individualistic cultures, which tend to be found in western countries such as the United States, Canada, and the United Kingdom, promote a focus on the individual. Therefore, a person’s disposition is thought to be the primary explanation for her behavior. In contrast, people from a collectivistic culture, that is, a culture that focuses on communal relationships with others, such as family, friends, and community (See fig. 12.4), are less likely to commit the fundamental attribution error (Markus & Kitayama, 1991; Triandis, 2001). Why do you think this is the case? Collectivistic cultures, which tend to be found in east Asian countries and in Latin American and African countries, focus on the group more than on the individual (Nisbett, Peng, Choi, & Norenzayan, 2001). This focus on others provides a broader perspective that takes into account both situational and cultural influences on behavior; thus, a more nuanced explanation of the causes of others’ behavior becomes more likely. Table 12.1 below compares individualistic and collectivistic cultures.
Table 12.1 Characteristics of Individualistic and Collectivistic Cultures
Individualistic Culture | Collectivistic Culture
Achievement oriented | Relationship oriented
Focus on autonomy | Focus on group autonomy
Dispositional perspective | Situational perspective
Independent | Interdependent
Analytic thinking style | Holistic thinking style
Masuda and Nisbett (2001) demonstrated that the kinds of information that people attend to when viewing visual stimuli (e.g., an aquarium scene) can differ significantly depending on whether the observer comes from a collectivistic versus an individualistic culture. Japanese participants were much more likely to recognize objects that were presented when they occurred in the same context in which they were originally viewed. Manipulating the context in which object recall occurred had no such impact on American participants. Other researchers have shown similar differences across cultures. For example, Zhang, Fung, Stanley, Isaacowitz, and Zhang (2014) demonstrated differences in the ways that holistic thinking might develop between Chinese and American participants, and Ramesh and Gelfand (2010) demonstrated that job turnover rates are more related to the fit between a person and the organization in which they work in an Indian sample, but the fit between the person and their specific job was more predictive of turnover in an American sample. Actor-Observer Bias Returning to our earlier example, Greg knew that he lost his job, but an observer would not know. So a naïve observer would tend to attribute Greg’s hostile behavior to Greg’s disposition rather than to the true, situational cause. Why do you think we underestimate the influence of the situation on the behaviors of others?
One reason is that we often don’t have all the information we need to make a situational explanation for another person’s behavior. The only information we might have is what is observable. Due to this lack of information we have a tendency to assume the behavior is due to a dispositional, or internal, factor. When it comes to explaining our own behaviors, however, we have much more information available to us. If you came home from school or work angry and yelled at your dog or a loved one, what would your explanation be? You might say you were very tired or feeling unwell and needed quiet time—a situational explanation. The actor-observer bias is the phenomenon of attributing other people’s behavior to internal factors (fundamental attribution error) while attributing our own behavior to situational forces (Jones & Nisbett, 1971; Nisbett, Caputo, Legant, & Marecek, 1973; Choi & Nisbett, 1998). As actors of behavior, we have more information available to explain our own behavior. However, as observers, we have less information available; therefore, we tend to default to a dispositionist perspective. One study on the actor-observer bias investigated reasons male participants gave for why they liked their girlfriend (Nisbett et al., 1973). When asked why they liked their own girlfriends, participants focused on internal, dispositional qualities of their girlfriends (for example, her pleasant personality). The participants’ explanations rarely included causes internal to themselves, such as dispositional traits (for example, “I need companionship.”). In contrast, when speculating why a male friend likes his girlfriend, participants were equally likely to give dispositional and external explanations. This supports the idea that actors tend to provide few internal explanations but many situational explanations for their own behavior. In contrast, observers tend to provide more dispositional explanations for a friend’s behavior (See figure 12.5 below). Self-Serving Bias We can understand self-serving bias by digging more deeply into attribution, a belief about the cause of a result. One model of attribution proposes three main dimensions: locus of control (internal versus external), stability (stable versus unstable), and controllability (controllable versus uncontrollable). In this context, stability refers to the extent to which the circumstances that result in a given outcome are changeable. The circumstances are considered stable if they are unlikely to change. Controllability refers to the extent to which the circumstances that are associated with a given outcome can be controlled. Obviously, those things that we have the power to control would be labeled controllable (Weiner, 1979). Following an outcome, self-serving biases are those attributions that enable us to see ourselves in a favorable light (for example, making internal attributions for success and external attributions for failures). When you do well at a task, for example acing an exam, it is in your best interest to make a dispositional attribution for your behavior (“I’m smart”) instead of a situational one (“The exam was easy”). The tendency of an individual to take credit by making dispositional or internal attributions for positive outcomes but situational or external attributions for negative outcomes is known as the self-serving bias (Miller & Ross, 1975). This bias serves to protect self-esteem.
You can imagine that if people always made situational attributions for their behavior, they would never be able to take credit and feel good about their accomplishments. Consider the example of how we explain our favorite sports team’s wins. Research shows that we make internal, stable, and controllable attributions for our team’s victory (See figure 12.6) (Grove, Hanrahan, & McInman, 1991). For example, we might tell ourselves that our team is talented (internal), consistently works hard (stable), and uses effective strategies (controllable). In contrast, we are more likely to make external, unstable, and uncontrollable attributions when our favorite team loses. For example, we might tell ourselves that the other team has more experienced players or that the referees were unfair (external), the other team played at home (unstable), and the cold weather affected our team’s performance (uncontrollable). Just-World Hypothesis One consequence of westerners’ tendency to provide dispositional explanations for behavior is victim blame (Jost & Major, 2001). When people experience bad fortune, others tend to assume that they somehow are responsible for their own fate. A common ideology, or worldview, in the United States is the just-world hypothesis. The just-world hypothesis is the belief that people get the outcomes they deserve (Lerner & Miller, 1978). In order to maintain the belief that the world is a fair place, people tend to think that good people experience positive outcomes, and bad people experience negative outcomes (Jost, Banaji, & Nosek, 2004; Jost & Major, 2001). The ability to think of the world as a fair place, where people get what they deserve, allows us to feel that the world is predictable and that we have some control over our life outcomes (Jost et al., 2004; Jost & Major, 2001). For example, if you want to experience positive outcomes, you just need to work hard to get ahead in life. Can you think of a negative consequence of the just-world hypothesis? One negative consequence is people’s tendency to blame poor individuals for their plight. What common explanations are given for why people live in poverty? Have you heard statements such as, “The poor are lazy and just don’t want to work” or “Poor people just want to live off the government”? What types of explanations are these, dispositional or situational? These dispositional explanations are clear examples of the fundamental attribution error. Blaming poor people for their poverty ignores situational factors that impact them, such as high unemployment rates, recession, poor educational opportunities, and the familial cycle of poverty (See figure 12.7). Other research shows that people who hold just-world beliefs have negative attitudes toward people who are unemployed and people living with AIDS (Sutton & Douglas, 2005). In the United States and other countries, victims of sexual assault may find themselves blamed for their abuse. Victim advocacy groups, such as Domestic Violence Ended (DOVE), attend court in support of victims to ensure that blame is directed at the perpetrators of sexual violence, not the victims.
Learning Objectives • Describe social roles and how they influence behavior • Explain what social norms are and how they influence behavior • Define script • Describe the findings of Zimbardo’s Stanford prison experiment As you’ve learned, social psychology is the study of how people affect one another’s thoughts, feelings, and behaviors. We have discussed situational perspectives and social psychology’s emphasis on the ways in which a person’s environment, including culture and other social influences, affect behavior. In this section, we examine situational forces that have a strong influence on human behavior including social roles, social norms, and scripts. We discuss how humans use the social environment as a source of information, or cues, on how to behave. Situational influences on our behavior have important consequences, such as whether we will help a stranger in an emergency or how we would behave in an unfamiliar environment. Social Roles One major social determinant of human behavior is our social roles. A social role is a pattern of behavior that is expected of a person in a given setting or group (Hare, 2003). Each one of us has several social roles. You may be, at the same time, a student, a parent, an aspiring teacher, a son or daughter, a spouse, and a lifeguard. How do these social roles influence your behavior? Social roles are defined by culturally shared knowledge. That is, nearly everyone in a given culture knows what behavior is expected of a person in a given role. For example, what is the social role for a student? If you look around a college classroom you will likely see students engaging in studious behavior, taking notes, listening to the professor, reading the textbook, and sitting quietly at their desks (See figure 12.8). Of course you may see students deviating from the expected studious behavior such as texting on their phones or using Facebook on their laptops, but in all cases, the students that you observe are attending class—a part of the social role of students. Social roles, and our related behavior, can vary across different settings. How do you behave when you are engaging in the role of son or daughter and attending a family function? Now imagine how you behave when you are engaged in the role of employee at your workplace. It is very likely that your behavior will be different. Perhaps you are more relaxed and outgoing with your family, making jokes and doing silly things. But at your workplace you might speak more professionally, and although you may be friendly, you are also serious and focused on getting the work completed. These are examples of how our social roles influence and often dictate our behavior to the extent that identity and personality can vary with context (that is, in different social groups) (Malloy, Albright, Kenny, Agatstein & Winquist, 1997). Social Norms As discussed previously, social roles are defined by a culture’s shared knowledge of what is expected behavior of an individual in a specific role. This shared knowledge comes from social norms. A social norm is a group’s expectation of what is appropriate and acceptable behavior for its members—how they are supposed to behave and think (Deutsch & Gerard, 1955; Berkowitz, 2004). How are we expected to act? What are we expected to talk about? What are we expected to wear? In our discussion of social roles we noted that colleges have social norms for students’ behavior in the role of student and workplaces have social norms for employees’ behaviors in the role of employee. 
Social norms are everywhere, including in families, gangs, and on social media outlets. What are some social norms on Facebook? CONNECT THE CONCEPTS: Tweens, Teens, and Social Norms My \(11\)-year-old daughter, Jessica, recently told me she needed shorts and shirts for the summer, and that she wanted me to take her to a store at the mall that is popular with preteens and teens to buy them. I have noticed that many girls have clothes from that store, so I tried teasing her. I said, “All the shirts say ‘Aero’ on the front. If you are wearing a shirt like that and you have a substitute teacher, and the other girls are all wearing that type of shirt, won’t the substitute teacher think you are all named ‘Aero’?” My daughter replied, in typical \(11\)-year-old fashion, “Mom, you are not funny. Can we please go shopping?” I tried a different tactic. I asked Jessica if having clothing from that particular store would make her popular. She replied, “No, it will not make me popular. It is what the popular kids wear. It will make me feel happier.” How can a label or name brand make someone feel happier? Think back to what you’ve learned about lifespan development. What is it about pre-teens and young teens that makes them want to fit in (See figure 12.9)? Does this change over time? Think back to your high school experience, or look around your college campus. What is the main name brand clothing you see? What messages do we get from the media about how to fit in? Scripts Because of social roles, people tend to know what behavior is expected of them in specific, familiar settings. A script is a person’s knowledge about the sequence of events expected in a specific setting (Schank & Abelson, 1977). How do you act on the first day of school, when you walk into an elevator, or when you are at a restaurant? For example, at a restaurant in the United States, if we want the server’s attention, we try to make eye contact. In Brazil, you would make the sound “psst” to get the server’s attention. You can see the cultural differences in scripts. To an American, saying “psst” to a server might seem rude, yet to a Brazilian, trying to make eye contact might not seem an effective strategy. Scripts are important sources of information to guide behavior in given situations. Can you imagine being in an unfamiliar situation and not having a script for how to behave? This could be uncomfortable and confusing. How could you find out about social norms in an unfamiliar culture? Zimbardo's Stanford Prison Experiment The famous Stanford prison experiment, conducted by social psychologist Philip Zimbardo and his colleagues at Stanford University, demonstrated the power of social roles, social norms, and scripts. In the summer of 1971, an advertisement was placed in a California newspaper asking for male volunteers to participate in a study about the psychological effects of prison life. More than \(70\) men volunteered, and these volunteers then underwent psychological testing to eliminate candidates who had underlying psychiatric issues, medical issues, or a history of crime or drug abuse. The pool of volunteers was whittled down to \(24\) healthy male college students. Each student was paid \(\$15\) per day and was randomly assigned to play the role of either a prisoner or a guard in the study. Based on what you have learned about research methods, why is it important that participants were randomly assigned? A mock prison was constructed in the basement of the psychology building at Stanford.
Participants assigned to play the role of prisoners were “arrested” at their homes by Palo Alto police officers, booked at a police station, and subsequently taken to the mock prison. The experiment was scheduled to run for several weeks. To the surprise of the researchers, both the “prisoners” and “guards” assumed their roles with zeal. In fact, on day 2, some of the prisoners revolted, and the guards quelled the rebellion by threatening the prisoners with night sticks. In a relatively short time, the guards came to harass the prisoners in an increasingly sadistic manner, through a complete lack of privacy, lack of basic comforts such as mattresses to sleep on, and through degrading chores and late-night counts. The prisoners, in turn, began to show signs of severe anxiety and hopelessness—they began tolerating the guards’ abuse. Even the Stanford professor who designed the study and was the head researcher, Philip Zimbardo, found himself acting as if the prison was real and his role, as prison supervisor, was real as well. After only six days, the experiment had to be ended due to the participants’ deteriorating behavior. Zimbardo explained:
At this point it became clear that we had to end the study. We had created an overwhelmingly powerful situation—a situation in which prisoners were withdrawing and behaving in pathological ways, and in which some of the guards were behaving sadistically. Even the “good” guards felt helpless to intervene, and none of the guards quit while the study was in progress. Indeed, it should be noted that no guard ever came late for his shift, called in sick, left early, or demanded extra pay for overtime work. (Zimbardo, 2013)
The Stanford prison experiment demonstrated the power of social roles, norms, and scripts in affecting human behavior. The guards and prisoners enacted their social roles by engaging in behaviors appropriate to the roles: The guards gave orders and the prisoners followed orders. Social norms require guards to be authoritarian and prisoners to be submissive. When prisoners rebelled, they violated these social norms, which led to upheaval. The specific acts engaged in by the guards and the prisoners derived from scripts. For example, guards degraded the prisoners by forcing them to do push-ups and by removing all privacy. Prisoners rebelled by throwing pillows and trashing their cells. Some prisoners became so immersed in their roles that they exhibited symptoms of mental breakdown; however, according to Zimbardo, none of the participants suffered long-term harm (Alexander, 2001). The Stanford Prison Experiment has some parallels with the abuse of prisoners of war by U.S. Army troops and CIA personnel at the Abu Ghraib prison in 2003 and 2004. The offenses at Abu Ghraib were documented by photographs of the abuse, some taken by the abusers themselves (See fig. 12.10).
Learning Objectives • Define attitude • Describe how people’s attitudes are internally changed through cognitive dissonance • Explain how people’s attitudes are externally changed through persuasion • Describe the peripheral and central routes to persuasion Social psychologists have documented how the power of the situation can influence our behaviors. Now we turn to how the power of the situation can influence our attitudes and beliefs. Attitude is our evaluation of a person, an idea, or an object. We have attitudes for many things ranging from products that we might pick up in the supermarket to people around the world to political policies. Typically, attitudes are favorable or unfavorable: positive or negative (Eagly & Chaiken, 1993). And, they have three components: an affective component (feelings), a behavioral component (the effect of the attitude on behavior), and a cognitive component (belief and knowledge) (Rosenberg & Hovland, 1960). For example, you may hold a positive attitude toward recycling. This attitude should result in positive feelings toward recycling (such as “It makes me feel good to recycle” or “I enjoy knowing that I make a small difference in reducing the amount of waste that ends up in landfills”). Certainly, this attitude should be reflected in our behavior: You actually recycle as often as you can. Finally, this attitude will be reflected in favorable thoughts (for example, “Recycling is good for the environment” or “Recycling is the responsible thing to do”). Our attitudes and beliefs are not only influenced by external forces, but also by internal influences that we control. Like our behavior, our attitudes and thoughts are not always changed by situational pressures, but they can be consciously changed by our own free will. In this section we discuss the conditions under which we would want to change our own attitudes and beliefs. What is Cognitive Dissonance? Social psychologists have documented that feeling good about ourselves and maintaining positive self-esteem is a powerful motivator of human behavior (Tavris & Aronson, 2008). In the United States, members of the predominant culture typically think very highly of themselves and view themselves as good people who are above average on many desirable traits (Ehrlinger, Gilovich, & Ross, 2005). Often, our behavior, attitudes, and beliefs are affected when we experience a threat to our self-esteem or positive self-image. Psychologist Leon Festinger (1957) defined cognitive dissonance as psychological discomfort arising from holding two or more inconsistent attitudes, behaviors, or cognitions (thoughts, beliefs, or opinions). Festinger’s theory of cognitive dissonance states that when we experience a conflict in our behaviors, attitudes, or beliefs that runs counter to our positive self-perceptions, we experience psychological discomfort (dissonance). For example, if you believe smoking is bad for your health but you continue to smoke, you experience conflict between your belief and behavior (See figure 12.11 below). Later research documented that only conflicting cognitions that threaten individuals’ positive self-image cause dissonance (Greenwald & Ronis, 1978). Additional research found that dissonance is not only psychologically uncomfortable but also can cause physiological arousal (Croyle & Cooper, 1983) and activate regions of the brain important in emotions and cognitive functioning (van Veen, Krug, Schooler, & Carter, 2009). 
When we experience cognitive dissonance, we are motivated to decrease it because it is psychologically, physically, and mentally uncomfortable. We can reduce cognitive dissonance by bringing our cognitions, attitudes, and behaviors in line—that is, making them harmonious. This can be done in different ways, such as: • changing our discrepant behavior (e.g., stop smoking), • changing our cognitions through rationalization or denial (e.g., telling ourselves that health risks can be reduced by smoking filtered cigarettes), • adding a new cognition (e.g., “Smoking suppresses my appetite so I don’t become overweight, which is good for my health.”). A classic example of cognitive dissonance is John, a \(20\)-year-old who enlists in the military. During boot camp he is awakened at 5:00 a.m., is chronically sleep deprived, yelled at, covered in sand flea bites, physically bruised and battered, and mentally exhausted (See figure 12.12). It gets worse. Recruits that make it to week \(11\) of boot camp have to do \(54\) hours of continuous training. Not surprisingly, John is miserable. No one likes to be miserable. In this type of situation, people can change their beliefs, their attitudes, or their behaviors. The last option, a change of behaviors, is not available to John. He has signed on to the military for four years, and he cannot legally leave. If John keeps thinking about how miserable he is, it is going to be a very long four years. He will be in a constant state of cognitive dissonance. As an alternative to this misery, John can change his beliefs or attitudes. He can tell himself, “I am becoming stronger, healthier, and sharper. I am learning discipline and how to defend myself and my country. What I am doing is really important.” If this is his belief, he will realize that he is becoming stronger through his challenges. He then will feel better and not experience cognitive dissonance, which is an uncomfortable state. The Effect of Initiation The military example demonstrates the observation that a difficult initiation into a group influences us to like the group more, due to the justification of effort. We do not want to have wasted time and effort to join a group that we eventually leave. A classic experiment by Aronson and Mills (1959) demonstrated this justification of effort effect. College students volunteered to join a campus group that would meet regularly to discuss the psychology of sex. Participants were randomly assigned to one of three conditions: no initiation, an easy initiation, and a difficult initiation into the group. After participating in the first discussion, which was deliberately made very boring, participants rated how much they liked the group. Participants who underwent a difficult initiation process to join the group rated the group more favorably than did participants with an easy initiation or no initiation (See figure 12.13). Similar effects can be seen in a more recent study of how student effort affects course evaluations. Heckert, Latier, Ringwald-Burton, and Drazen (2006) surveyed \(463\) undergraduates enrolled in courses at a midwestern university about the amount of effort that their courses required of them. In addition, the students were also asked to evaluate various aspects of the course. Given what you’ve just read, it will come as no surprise that those courses that were associated with the highest level of effort were evaluated as being more valuable than those that were not.
Furthermore, students indicated that they learned more in courses that required more effort, regardless of the grades that they received in those courses (Heckert et al., 2006). Besides the classic military example and group initiation, can you think of other examples of cognitive dissonance? Here is one: Marco and Maria live in Fairfield County, Connecticut, which is one of the wealthiest areas in the United States and has a very high cost of living. Marco telecommutes from home and Maria does not work outside of the home. They rent a very small house for more than \(\$3000\) a month. Maria shops at consignment stores for clothes and economizes where she can. They complain that they never have any money and that they cannot buy anything new. When asked why they do not move to a less expensive location, since Marco telecommutes, they respond that Fairfield County is beautiful, they love the beaches, and they feel comfortable there. How does the theory of cognitive dissonance apply to Marco and Maria’s choices? Persuasion In the previous section we discussed that the motivation to reduce cognitive dissonance leads us to change our attitudes, behaviors, and/or cognitions to make them consonant. Persuasion is the process of changing our attitude toward something based on some kind of communication. Much of the persuasion we experience comes from outside forces. How do people convince others to change their attitudes, beliefs, and behaviors (See figure 12.14)? What communications do you receive that attempt to persuade you to change your attitudes, beliefs, and behaviors? A subfield of social psychology studies persuasion and social influence, providing us with a plethora of information on how humans can be persuaded by others. Yale Attitude Change Approach The topic of persuasion has been one of the most extensively researched areas in social psychology (Fiske et al., 2010). During the Second World War, Carl Hovland extensively researched persuasion for the U.S. Army. After the war, Hovland continued his exploration of persuasion at Yale University. Out of this work came a model called the Yale attitude change approach, which describes the conditions under which people tend to change their attitudes. Hovland demonstrated that certain features of the source of a persuasive message, the content of the message, and the characteristics of the audience will influence the persuasiveness of a message (Hovland, Janis, & Kelley, 1953). Features of the source of the persuasive message include the credibility of the speaker (Hovland & Weiss, 1951) and the physical attractiveness of the speaker (Eagly & Chaiken, 1975; Petty, Wegener, & Fabrigar, 1997). Thus, speakers who are credible, or have expertise on the topic, and who are deemed as trustworthy are more persuasive than less credible speakers. Similarly, more attractive speakers are more persuasive than less attractive speakers. The use of famous actors and athletes to advertise products on television and in print relies on this principle. The immediate and long term impact of the persuasion also depends, however, on the credibility of the messenger (Kumkale & Albarracín, 2004). 
Features of the message itself that affect persuasion include subtlety (the quality of being important, but not obvious) (Petty & Cacioppo, 1986; Walster & Festinger, 1962); sidedness (that is, having more than one side) (Crowley & Hoyer, 1994; Igou & Bless, 2003; Lumsdaine & Janis, 1953); timing (Haugtvedt & Wegener, 1994; Miller & Campbell, 1959); and whether both sides are presented. Messages that are more subtle are more persuasive than direct messages. Arguments that occur first, such as in a debate, are more influential if messages are given back-to-back. However, if there is a delay after the first message, and before the audience needs to make a decision, the last message presented will tend to be more persuasive (Miller & Campbell, 1959). Features of the audience that affect persuasion are attention (Albarracín & Wyer, 2001; Festinger & Maccoby, 1964), intelligence, self-esteem (Rhodes & Wood, 1992), and age (Krosnick & Alwin, 1989). In order to be persuaded, audience members must be paying attention. People with lower intelligence are more easily persuaded than people with higher intelligence, whereas people with moderate self-esteem are more easily persuaded than people with higher or lower self-esteem (Rhodes & Wood, 1992). Finally, younger adults aged 18–25 are more persuadable than older adults. Elaboration Likelihood Model An especially popular model that describes the dynamics of persuasion is the elaboration likelihood model of persuasion (Petty & Cacioppo, 1986). The elaboration likelihood model considers the variables of the attitude change approach—that is, features of the source of the persuasive message, contents of the message, and characteristics of the audience are used to determine when attitude change will occur. According to the elaboration likelihood model of persuasion, there are two main routes that play a role in delivering a persuasive message: central and peripheral (See figure 12.15). The central route is logic driven and uses data and facts to convince people of an argument’s worthiness. For example, a car company seeking to persuade you to purchase their model will emphasize the car’s safety features and fuel economy. This is a direct route to persuasion that focuses on the quality of the information. In order for the central route of persuasion to be effective in changing attitudes, thoughts, and behaviors, the argument must be strong and, if successful, will result in lasting attitude change. The central route to persuasion works best when the target of persuasion, or the audience, is analytical and willing to engage in processing of the information. From an advertiser’s perspective, what products would be best sold using the central route to persuasion? What audience would most likely be influenced to buy the product? One example is buying a computer. It is likely, for example, that small business owners might be especially influenced by the focus on the computer’s quality and features such as processing speed and memory capacity. The peripheral route is an indirect route that uses peripheral cues to associate positivity with the message (Petty & Cacioppo, 1986). Instead of focusing on the facts and a product’s quality, the peripheral route relies on association with positive characteristics such as positive emotions and celebrity endorsement. For example, having a popular athlete advertise athletic shoes is a common method used to encourage young adults to purchase the shoes.
This route to attitude change does not require much effort or information processing. This method of persuasion may promote positivity toward the message or product, but it typically results in less permanent attitude or behavior change. The audience does not need to be analytical or motivated to process the message. In fact, a peripheral route to persuasion may not even be noticed by the audience, for example in the strategy of product placement. Product placement refers to putting a product with a clear brand name or brand identity in a TV show or movie to promote the product (Gupta & Lord, 1998). For example, one season of the reality series American Idol prominently showed the panel of judges drinking out of cups that displayed the Coca-Cola logo. What other products would be best sold using the peripheral route to persuasion? Another example is clothing: A retailer may focus on celebrities that are wearing the same style of clothing. Foot-in-the-door Technique Researchers have tested many persuasion strategies that are effective in selling products and changing people’s attitudes, ideas, and behaviors. One effective strategy is the foot-in-the-door technique (Cialdini, 2001; Pliner, Hart, Kohl, & Saari, 1974). Using the foot-in-the-door technique, the persuader gets a person to agree to bestow a small favor or to buy a small item, only to later request a larger favor or purchase of a bigger item. The foot-in-the-door technique was demonstrated in a study by Freedman and Fraser (1966) in which participants who agreed to post a small sign in their yard or sign a petition were more likely to agree to put a large sign in their yard than people who declined the first request (See figure 12.16). Research on this technique also illustrates the principle of consistency (Cialdini, 2001): Our past behavior often directs our future behavior, and we have a desire to maintain consistency once we have committed to a behavior. A common application of foot-in-the-door is when teens ask their parents for a small permission (for example, extending curfew by a half hour) and then ask them for something larger. Having granted the smaller request increases the likelihood that parents will acquiesce to the later, larger request. How would a store owner use the foot-in-the-door technique to sell you an expensive product? For example, say that you are buying the latest model smartphone, and the salesperson suggests you purchase the best data plan. You agree to this. The salesperson then suggests a bigger purchase—the three-year extended warranty. After agreeing to the smaller request, you are more likely to also agree to the larger request. You may have encountered this if you have bought a car. When salespeople realize that a buyer intends to purchase a certain model, they might try to get the customer to pay for many or most available options on the car.
Learning Objectives • Explain the Asch effect • Define conformity and types of social influence • Describe Stanley Milgram’s experiment and its implications • Define groupthink, social facilitation, and social loafing In this section, we discuss additional ways in which people influence others. The topics of conformity, social influence, obedience, and group processes demonstrate the power of the social situation to change our thoughts, feelings, and behaviors. We begin this section with a discussion of a famous social psychology experiment that demonstrated how susceptible humans are to outside social pressures. Conformity Solomon Asch conducted several experiments in the 1950s to determine how people are affected by the thoughts and behaviors of other people. In one study, a group of participants was shown a series of printed line segments of different lengths: \(a\), \(b\), and \(c\) (See figure 12.17). Participants were then shown a fourth line segment: \(x\). They were asked to identify which line segment from the first group (\(a\), \(b\), or \(c\)) most closely resembled the fourth line segment in length. Each group of participants had only one true, naïve subject. The remaining members of the group were confederates of the researcher. A confederate is a person who is aware of the experiment and works for the researcher. Confederates are used to manipulate social situations as part of the research design, and the true, naïve participants believe that confederates are, like them, uninformed participants in the experiment. In Asch’s study, the confederates identified a line segment that was obviously shorter than the target line—a wrong answer. The naïve participant then had to identify aloud the line segment that best matched the target line segment. How often do you think the true participant aligned with the confederates’ response? That is, how often do you think the group influenced the participant, and the participant gave the wrong answer? Asch (1955) found that \(76\%\) of participants conformed to group pressure at least once by indicating the incorrect line. Conformity is the change in a person’s behavior to go along with the group, even if he does not agree with the group. Why would people give the wrong answer? What factors would increase or decrease someone giving in or conforming to group pressure? The Asch effect is the influence of the group majority on an individual’s judgment. What factors make a person more likely to yield to group pressure? Research shows that the size of the majority, the presence of another dissenter, and the public or relatively private nature of responses are key influences on conformity. • The size of the majority: The greater the number of people in the majority, the more likely an individual will conform. There is, however, an upper limit: a point where adding more members does not increase conformity. In Asch’s study, conformity increased with the number of people in the majority—up to seven individuals. At numbers beyond seven, conformity leveled off and decreased slightly (Asch, 1955). • The presence of another dissenter: If there is at least one dissenter, conformity rates drop to near zero (Asch, 1955). • The public or private nature of the responses: When responses are made publicly (in front of others), conformity is more likely; however, when responses are made privately (e.g., writing down the response), conformity is less likely (Deutsch & Gerard, 1955). 
The finding that conformity is more likely to occur when responses are public than when they are private is the reason government elections require voting in secret, so we are not coerced by others (See figure 12.18). The Asch effect can be easily seen in children when they have to publicly vote for something. For example, if the teacher asks whether the children would rather have extra recess, no homework, or candy, once a few children vote, the rest will comply and go with the majority. In a different classroom, the majority might vote differently, and most of the children would comply with that majority. When someone’s vote changes if it is made in public versus private, this is known as compliance. Compliance can be a form of conformity. Compliance is going along with a request or demand, even if you do not agree with the request. In Asch’s studies, the participants complied by giving the wrong answers, but privately did not accept that the obvious wrong answers were correct. Now that you have learned about the Asch line experiments, why do you think the participants conformed? The correct answer to the line segment question was obvious, and it was an easy task. Researchers have categorized the motivation to conform into two types: normative social influence and informational social influence (Deutsch & Gerard, 1955). In normative social influence, people conform to the group norm to fit in, to feel good, and to be accepted by the group. However, with informational social influence, people conform because they believe the group is competent and has the correct information, particularly when the task or situation is ambiguous. What type of social influence was operating in the Asch conformity studies? Since the line judgment task was unambiguous, participants did not need to rely on the group for information. Instead, participants complied to fit in and avoid ridicule, an instance of normative social influence. An example of informational social influence may be what to do in an emergency situation. Imagine that you are in a movie theater watching a film and what seems to be smoke comes in the theater from under the emergency exit door. You are not certain that it is smoke—it might be a special effect for the movie, such as a fog machine. When you are uncertain you will tend to look at the behavior of others in the theater. If other people show concern and get up to leave, you are likely to do the same. However, if others seem unconcerned, you are likely to stay put and continue watching the movie (See figure 12.19). How would you have behaved if you were a participant in Asch’s study? Many students say they would not conform, that the study is outdated, and that people nowadays are more independent. To some extent this may be true. Research suggests that overall rates of conformity may have reduced since the time of Asch’s research. Furthermore, efforts to replicate Asch’s study have made it clear that many factors determine how likely it is that someone will demonstrate conformity to the group. These factors include the participant’s age, gender, and socio-cultural background (Bond & Smith, 1996; Larsen, 1990; Walker & Andrade, 1996). Link to Learning Watch this video of a replication of the Asch experiment to learn more. Stanley Milgram's Experiment Conformity is one effect of the influence of others on our thoughts, feelings, and behaviors. Another form of social influence is obedience to authority. 
Obedience is the change of an individual’s behavior to comply with a demand by an authority figure. People often comply with the request because they are concerned about a consequence if they do not comply. To demonstrate this phenomenon, we review another classic social psychology experiment. Stanley Milgram was a social psychology professor at Yale who was influenced by the trial of Adolf Eichmann, a Nazi war criminal. Eichmann’s defense for the atrocities he committed was that he was “just following orders.” Milgram (1963) wanted to test the validity of this defense, so he designed an experiment and initially recruited 40 men for his experiment. The volunteer participants were led to believe that they were participating in a study to improve learning and memory. The participants were told that they were to teach other students (learners) correct answers to a series of test items. The participants were shown how to use a device that they were told delivered electric shocks of different intensities to the learners. The participants were told to shock the learners if they gave a wrong answer to a test item—that the shock would help them to learn. The participants gave (or believed they gave) the learners shocks, which increased in \(15\)-volt increments, all the way up to \(450\) volts. The participants did not know that the learners were confederates and that the confederates did not actually receive shocks. In response to a string of incorrect answers from the learners, the participants obediently and repeatedly shocked them. The confederate learners cried out for help, begged the participant teachers to stop, and even complained of heart trouble. Yet, when the researcher told the participant-teachers to continue the shock, \(65\%\) of the participants continued the shock to the maximum voltage and to the point that the learner became unresponsive (See figure 12.20). What makes someone obey authority to the point of potentially causing serious harm to another person? Several variations of the original Milgram experiment were conducted to test the boundaries of obedience. When certain features of the situation were changed, participants were less likely to continue to deliver shocks (Milgram, 1965). For example, when the setting of the experiment was moved to an office building, the percentage of participants who delivered the highest shock dropped to \(48\%\). When the learner was in the same room as the teacher, the highest shock rate dropped to \(40\%\). When the teachers’ and learners’ hands were touching, the highest shock rate dropped to \(30\%\). When the researcher gave the orders by phone, the rate dropped to \(23\%\). These variations show that when the humanity of the person being shocked was increased, obedience decreased. Similarly, when the authority of the experimenter decreased, so did obedience. This case is still very applicable today. What does a person do if an authority figure orders something done? What if the person believes it is incorrect, or worse, unethical? In a study by Martin and Bull (2008), midwives privately filled out a questionnaire regarding best practices and expectations in delivering a baby. Then, a more senior midwife and supervisor asked the junior midwives to do something they had previously stated they were opposed to. Most of the junior midwives were obedient to authority, going against their own beliefs. Groupthink When in group settings, we are often influenced by the thoughts, feelings, and behaviors around us. 
Whether it is due to normative or informational social influence, groups have power to influence individuals. Another phenomenon of group conformity is groupthink. Groupthink is the modification of the opinions of members of a group to align with what they believe is the group consensus (Janis, 1972). In group situations, the group often takes action that individuals would not perform outside the group setting because groups make more extreme decisions than individuals do. Moreover, groupthink can hinder opposing trains of thought. This elimination of diverse opinions contributes to faulty decisions by the group. DIG DEEPER: Groupthink in the U.S. Government There have been several instances of groupthink in the U.S. government. One example occurred when the United States led a small coalition of nations to invade Iraq in March 2003. This invasion occurred because a small group of advisors and former President George W. Bush were convinced that Iraq represented a significant terrorism threat with a large stockpile of weapons of mass destruction at its disposal. Although some of these individuals may have had some doubts about the credibility of the information available to them at the time, in the end, the group arrived at a consensus that Iraq had weapons of mass destruction and represented a significant threat to national security. It later came to light that Iraq did not have weapons of mass destruction, but not until the invasion was well underway. As a result, 6000 American soldiers were killed and many more civilians died. How did the Bush administration arrive at its conclusions? Here is a video of Colin Powell discussing the information he had, 10 years after his famous United Nations speech, https://www.youtube.com/watch?v=vU6KMYlDyWc (“Colin Powell regrets,” 2011). Do you see evidence of groupthink? Why does groupthink occur? There are several causes of groupthink, which makes it preventable. When the group is highly cohesive, or has a strong sense of connection, maintaining group harmony may become more important to the group than making sound decisions. If the group leader is directive and makes his opinions known, this may discourage group members from disagreeing with the leader. If the group is isolated from hearing alternative or new viewpoints, groupthink may be more likely. How do you know when groupthink is occurring? There are several symptoms of groupthink including the following: • perceiving the group as invulnerable or invincible—believing it can do no wrong • believing the group is morally correct • self-censorship by group members, such as withholding information to avoid disrupting the group consensus • the quashing of dissenting group members’ opinions • the shielding of the group leader from dissenting views • perceiving an illusion of unanimity among group members • holding stereotypes or negative attitudes toward the out-group or others with differing viewpoints (Janis, 1972) Given the causes and symptoms of groupthink, how can it be avoided? There are several strategies that can improve group decision making including seeking outside opinions, voting in private, having the leader withhold position statements until all group members have voiced their views, conducting research on all viewpoints, weighing the costs and benefits of all options, and developing a contingency plan (Janis, 1972; Mitchell & Eckstein, 2009). Group Polarization Another phenomenon that occurs within group settings is group polarization.
Group polarization (Teger & Pruitt, 1967) is the strengthening of an original group attitude after the discussion of views within a group. That is, if a group initially favors a viewpoint, after discussion the group consensus is likely a stronger endorsement of the viewpoint. Conversely, if the group was initially opposed to a viewpoint, group discussion would likely lead to stronger opposition. Group polarization explains many actions taken by groups that would not be undertaken by individuals. Group polarization can be observed at political conventions, when platforms of the party are supported by individuals who, when not in a group, would decline to support them. A more everyday example is a group’s discussion of how attractive someone is. Does your opinion change if you find someone attractive, but your friends do not agree? If your friends vociferously agree, might you then find this person even more attractive? Social traps refer to situations that arise when individuals or groups of individuals behave in ways that are not in their best interest and that may have negative, long-term consequences. However, once established, a social trap is very difficult to escape. For example, following World War II, the United States and the former Soviet Union engaged in a nuclear arms race. While the presence of nuclear weapons is not in either party's best interest, once the arms race began, each country felt the need to continue producing nuclear weapons to protect itself from the other. Social Loafing Imagine you were just assigned a group project with other students whom you barely know. Everyone in your group will get the same grade. Are you the type who will do most of the work, even though the final grade will be shared? Or are you more likely to do less work because you know others will pick up the slack? Social loafing involves a reduction in individual output on tasks where contributions are pooled. Because each individual's efforts are not evaluated, individuals can become less motivated to perform well. Karau and Williams (1993) and Simms and Nichols (2014) reviewed the research on social loafing and discerned when it was least likely to happen. The researchers noted that social loafing could be alleviated if, among other situations, individuals knew their work would be assessed by a manager (in a workplace setting) or instructor (in a classroom setting), or if a manager or instructor required group members to complete self-evaluations. The likelihood of social loafing in student work groups increases as the size of the group increases (Shepperd & Taylor, 1999). According to Karau and Williams (1993), college students were the population most likely to engage in social loafing. Their study also found that women and participants from collectivistic cultures were less likely to engage in social loafing; the authors suggested that a stronger group orientation may account for this. College students could work around social loafing or “free-riding” by suggesting to their professors the use of a flocking method to form groups. Harding (2018) compared groups of students who had self-selected into groups for class to those who had been formed by flocking, which involves assigning students to groups who have similar schedules and motivations. Not only did she find that students reported less “free riding,” but they also did better on the group assignments than students whose groups were self-selected.
Interestingly, the opposite of social loafing occurs when the task is complex and difficult (Bond & Titus, 1983; Geen, 1989). In a group setting, such as the student work group, if your individual performance cannot be evaluated, there is less pressure for you to do well, and thus less anxiety or physiological arousal (Latané, Williams, & Harkins, 1979). This puts you in a relaxed state in which you can perform your best, if you choose (Zajonc, 1965). If the task is a difficult one, many people feel motivated and believe that their group needs their input to do well on a challenging project (Jackson & Williams, 1985). In other words, social loafing occurs when our individual performance cannot be evaluated separately from the group, so group performance declines on easy tasks (Karau & Williams, 1993): individual group members loaf and let other group members pick up the slack. For example, consider a group of people cooperating to clean litter from the roadside. Some people will exert a great amount of effort, while others will exert little effort, yet the entire job gets done, and it may not be obvious who worked hard and who didn’t. Table 12.2 below summarizes the types of social influence you have learned about in this chapter.
Table 12.2 Types of Social Influence
• Conformity: changing your behavior to go along with the group even if you do not agree with the group
• Compliance: going along with a request or demand
• Normative social influence: conformity to a group norm to fit in, feel good, and be accepted by the group
• Informational social influence: conformity to a group norm prompted by the belief that the group is competent and has the correct information
• Obedience: changing your behavior to please an authority figure or to avoid aversive consequences
• Groupthink: group members modify their opinions to match what they believe is the group consensus
• Group polarization: strengthening of the original group attitude after discussing views within the group
• Social facilitation: improved performance when an audience is watching versus when the individual performs the behavior alone
• Social loafing: exertion of less effort by a person working in a group because individual performance cannot be evaluated separately from the group, thus causing performance decline on easy tasks
Learning Objectives • Define and distinguish among prejudice, stereotypes, and discrimination • Provide examples of prejudice, stereotypes, and discrimination • Explain why prejudice and discrimination exist Human conflict can result in crime, war, and mass murder, such as genocide. Prejudice and discrimination often are root causes of human conflict, which explains how strangers come to hate one another to the extreme of causing others harm. Prejudice and discrimination affect everyone. In this section we will examine the definitions of prejudice and discrimination, examples of these concepts, and causes of these biases. Understanding Prejudice and Discrimination As we discussed in the opening story of Trayvon Martin, humans are very diverse and although we share many similarities, we also have many differences. The social groups we belong to help form our identities (Tajfel, 1974). These differences may be difficult for some people to reconcile, which may lead to prejudice toward people who are different. Prejudice is a negative attitude and feeling toward an individual based solely on one’s membership in a particular social group (Allport, 1954; Brown, 2010). Prejudice is common against people who are members of an unfamiliar cultural group. Thus, certain types of education, contact, interactions, and building relationships with members of different cultural groups can reduce the tendency toward prejudice. In fact, simply imagining interacting with members of different cultural groups might affect prejudice. Indeed, when experimental participants were asked to imagine themselves positively interacting with someone from a different group, this led to an increased positive attitude toward the other group and an increase in positive traits associated with the other group. Furthermore, imagined social interaction can reduce anxiety associated with inter-group interactions (Crisp & Turner, 2009). What are some examples of social groups that you belong to that contribute to your identity? Social groups can include gender, race, ethnicity, nationality, social class, religion, sexual orientation, profession, and many more. And, as is true for social roles, you can simultaneously be a member of more than one social group. An example of prejudice is having a negative attitude toward people who are not born in the United States. Although people holding this prejudiced attitude do not know all people who were not born in the United States, they dislike them due to their status as foreigners. Can you think of a prejudiced attitude you have held toward a group of people? How did your prejudice develop? Prejudice often begins in the form of a stereotype—that is, a specific belief or assumption about individuals based solely on their membership in a group, regardless of their individual characteristics. Stereotypes become overgeneralized and applied to all members of a group. For example, someone holding prejudiced attitudes toward older adults, may believe that older adults are slow and incompetent (Cuddy, Norton, & Fiske, 2005; Nelson, 2004). We cannot possibly know each individual person of advanced age to know that all older adults are slow and incompetent. Therefore, this negative belief is overgeneralized to all members of the group, even though many of the individual group members may in fact be spry and intelligent. Another example of a well-known stereotype involves beliefs about racial differences among athletes. 
As Hodge, Burden, Robinson, and Bennett (2008) point out, Black male athletes are often believed to be more athletic, yet less intelligent, than their White male counterparts. These beliefs persist despite a number of high-profile examples to the contrary. Sadly, such beliefs often influence how these athletes are treated by others and how they view themselves and their own capabilities. Whether or not you agree with a stereotype, stereotypes are generally well known within a given culture (Devine, 1989). Sometimes people will act on their prejudiced attitudes toward a group of people, and this behavior is known as discrimination. Discrimination is negative action toward an individual as a result of one’s membership in a particular group (Allport, 1954; Dovidio & Gaertner, 2004). As a result of holding negative beliefs (stereotypes) and negative attitudes (prejudice) about a particular group, people often treat the target of prejudice poorly, such as excluding older adults from their circle of friends. Table 12.3 below summarizes the characteristics of stereotypes, prejudice, and discrimination. Have you ever been the target of discrimination? If so, how did this negative treatment make you feel?
Table 12.3 Connecting Stereotypes, Prejudice, and Discrimination
• Stereotype. Function: cognitive; thoughts about people. Connection: overgeneralized beliefs about people may lead to prejudice. Example: “Yankees fans are arrogant and obnoxious.”
• Prejudice. Function: affective; feelings about people, both positive and negative. Connection: feelings may influence treatment of others, leading to discrimination. Example: “I hate Yankees fans; they make me angry.”
• Discrimination. Function: behavior; positive or negative treatment of others. Connection: holding stereotypes and harboring prejudice may lead to excluding, avoiding, and biased treatment of group members. Example: “I would never hire nor become friends with a person if I knew he or she were a Yankees fan.”
So far, we’ve discussed stereotypes, prejudice, and discrimination as negative thoughts, feelings, and behaviors because these are typically the most problematic. However, it is important to also point out that people can hold positive thoughts, feelings, and behaviors toward individuals based on group membership; for example, they would show preferential treatment for people who are like themselves—that is, who share the same gender, race, or favorite sports team. Link to Learning Watch this video of a social experiment conducted in a park that demonstrates the concepts of prejudice, stereotypes, and discrimination. In the video, three people try to steal a bike out in the open. The race and gender of the thief are varied: a White male teenager, a Black male teenager, and a White female. Does anyone try to stop them? The treatment of the teenagers in the video demonstrates the concept of racism. Types of Prejudice and Discrimination When we meet strangers we automatically process three pieces of information about them: their race, gender, and age (Ito & Urland, 2003). Why are these aspects of an unfamiliar person so important? Why don’t we instead notice whether their eyes are friendly, whether they are smiling, their height, or the type of clothes they are wearing? Although these secondary characteristics are important in forming a first impression of a stranger, the social categories of race, gender, and age provide a wealth of information about an individual. This information, however, often is based on stereotypes.
We may have different expectations of strangers depending on their race, gender, and age. What stereotypes and prejudices do you hold about people who are from a race, gender, and age group different from your own? Racism Racism is prejudice and discrimination against an individual based solely on one’s membership in a specific racial group (such as toward African Americans, Asian Americans, Latinos, Native Americans, European Americans). What are some stereotypes of various racial or ethnic groups? Research suggests cultural stereotypes for Asian Americans include cold, sly, and intelligent; for Latinos, cold and unintelligent; for European Americans, cold and intelligent; and for African Americans, aggressive, athletic, and more likely to be law breakers (Devine & Elliot, 1995; Fiske, Cuddy, Glick, & Xu, 2002; Sommers & Ellsworth, 2000; Dixon & Linz, 2000). Racism exists for many racial and ethnic groups. For example, Blacks are significantly more likely to have their vehicles searched during traffic stops than Whites, particularly when Blacks are driving in predominately White neighborhoods, (a phenomenon often termed “DWB,” or “driving while Black.”) (Rojek, Rosenfeld, & Decker, 2012) Mexican Americans and other Latino groups also are targets of racism from the police and other members of the community. For example, when purchasing items with a personal check, Latino shoppers are more likely than White shoppers to be asked to show formal identification (Dovidio et al., 2010). In one case of alleged harassment by the police, several East Haven, Connecticut, police officers were arrested on federal charges due to reportedly continued harassment and brutalization of Latinos. When the accusations came out, the mayor of East Haven was asked, “What are you doing for the Latino community today?” The Mayor responded, “I might have tacos when I go home, I’m not quite sure yet” (“East Haven Mayor,” 2012) This statement undermines the important issue of racial profiling and police harassment of Latinos, while belittling Latino culture by emphasizing an interest in a food product stereotypically associated with Latinos. Racism is prevalent toward many other groups in the United States including Native Americans, Arab Americans, Jewish Americans, and Asian Americans. Have you witnessed racism toward any of these racial or ethnic groups? Are you aware of racism in your community? One reason modern forms of racism, and prejudice in general, are hard to detect is related to the dual attitudes model (Wilson, Lindsey, & Schooler, 2000). Humans have two forms of attitudes: explicit attitudes, which are conscious and controllable, and implicit attitudes, which are unconscious and uncontrollable (Devine, 1989; Olson & Fazio, 2003). Because holding egalitarian views is socially desirable (Plant & Devine, 1998), most people do not show extreme racial bias or other prejudices on measures of their explicit attitudes. However, measures of implicit attitudes often show evidence of mild to strong racial bias or other prejudices (Greenwald, McGee, & Schwartz, 1998; Olson & Fazio, 2003). Sexism Sexism is prejudice and discrimination toward individuals based on their sex. Typically, sexism takes the form of men holding biases against women, but either sex can show sexism toward their own or their opposite sex. Like racism, sexism may be subtle and difficult to detect. Common forms of sexism in modern society include gender role expectations, such as expecting women to be the caretakers of the household. 
Sexism also includes people’s expectations for how members of a gender group should behave. For example, women are expected to be friendly, passive, and nurturing, and when women behave in an unfriendly, assertive, or neglectful manner they often are disliked for violating their gender role (Rudman, 1998). Research by Laurie Rudman (1998) finds that when female job applicants self-promote, they are likely to be viewed as competent, but they may be disliked and are less likely to be hired because they violate gender expectations for modesty. Sexism can exist on a societal level, such as in hiring, employment opportunities, and education. Women are less likely to be hired or promoted in male-dominated professions such as engineering, aviation, and construction (See figure 12.22) (Blau, Ferber, & Winkler, 2010; Ceci & Williams, 2011). Have you ever experienced or witnessed sexism? Think about your family members’ jobs or careers. Why do you think there are differences in the jobs women and men have, such as more female nurses but more male surgeons (Betz, 2008)? Ageism People often form judgments and hold expectations about people based on their age. These judgments and expectations can lead to ageism, or prejudice and discrimination toward individuals based solely on their age. Typically, ageism occurs against older adults, but ageism also can occur toward younger adults. Think of expectations you hold for older adults. How could someone’s expectations influence the feelings they hold toward individuals from older age groups? Ageism is widespread in U.S. culture (Nosek, 2005). A common ageist attitude toward older adults is that they are incompetent, physically weak, and slow (Greenberg, Schimel, & Martens, 2002), and some people consider older adults less attractive. Some cultures, however, including some Asian, Latino, and African American cultures, both outside and within the United States, afford older adults respect and honor. As noted above, ageism also can occur toward younger adults. What expectations do you hold toward younger people? Does society expect younger adults to be immature and irresponsible? Are younger generations seen as having it too easy or having weaker characters than older generations? Raymer, Reed, Spiegel, and Purvanova (2017) examined ageism against younger workers. They found that older workers endorsed negative stereotypes of younger workers, believing that they had more work-deficit characteristics (including perceptions of incompetence). How might these forms of ageism affect a younger and an older adult who are applying for a sales clerk position? Homophobia Another form of prejudice is homophobia: prejudice and discrimination against individuals based solely on their sexual orientation. Like ageism, homophobia is a widespread prejudice in U.S. society that is tolerated by many people (Herek & McLemore, 2013; Nosek, 2005). Negative feelings often result in discrimination, such as the exclusion of lesbian, gay, bisexual, and transgender (LGBT) people from social groups and the avoidance of LGBT neighbors and co-workers. This discrimination also extends to employers deliberately declining to hire qualified LGBT job applicants. Have you experienced or witnessed homophobia? If so, what stereotypes, prejudiced attitudes, and discrimination were evident? DIG DEEPER: Research into Homophobia Some people are quite passionate in their hatred for nonheterosexuals in our society.
In some cases, people have been tortured and/or murdered simply because they were not heterosexual. This passionate response has led some researchers to question what motives might exist for homophobic people. Adams, Wright, & Lohr (1996) conducted a study investigating this issue and their results were quite an eye-opener. In this experiment, male college students were given a scale that assessed how homophobic they were; those with extreme scores were recruited to participate in the experiment. In the end, \(64\) men agreed to participate and were split into \(2\) groups: homophobic men and nonhomophobic men. Both groups of men were fitted with a penile plethysmograph, an instrument that measures changes in blood flow to the penis and serves as an objective measurement of sexual arousal. All men were shown segments of sexually explicit videos. One of these videos involved a sexual interaction between a man and a woman (heterosexual clip). One video displayed two females engaged in a sexual interaction (homosexual female clip), and the final video displayed two men engaged in a sexual interaction (homosexual male clip). Changes in penile tumescence were recorded during all three clips, and a subjective measurement of sexual arousal was also obtained. While both groups of men became sexually aroused to the heterosexual and female homosexual video clips, only those men who were identified as homophobic showed sexual arousal to the homosexual male video clip. While all men reported that their erections indicated arousal for the heterosexual and female homosexual clips, the homophobic men indicated that they were not sexually aroused (despite their erections) to the male homosexual clips. Adams et al. (1996) suggest that these findings may indicate that homophobia is related to homosexual arousal that the homophobic individuals either deny or are unaware. Why do Prejudice and Discrimination Exist? Prejudice and discrimination persist in society due to social learning and conformity to social norms. Children learn prejudiced attitudes and beliefs from society: their parents, teachers, friends, the media, and other sources of socialization, such as Facebook (O’Keeffe & Clarke-Pearson, 2011). If certain types of prejudice and discrimination are acceptable in a society, there may be normative pressures to conform and share those prejudiced beliefs, attitudes, and behaviors. For example, public and private schools are still somewhat segregated by social class. Historically, only children from wealthy families could afford to attend private schools, whereas children from middle- and low-income families typically attended public schools. If a child from a low-income family received a merit scholarship to attend a private school, how might the child be treated by classmates? Can you recall a time when you held prejudiced attitudes or beliefs or acted in a discriminatory manner because your group of friends expected you to? Stereotypes and Self-Fulfilling Prophecy When we hold a stereotype about a person, we have expectations that he or she will fulfill that stereotype. A self-fulfilling prophecy is an expectation held by a person that alters his or her behavior in a way that tends to make it true. When we hold stereotypes about a person, we tend to treat the person according to our expectations. This treatment can influence the person to act according to our stereotypic expectations, thus confirming our stereotypic beliefs. 
Research by Rosenthal and Jacobson (1968) found that disadvantaged students whose teachers expected them to perform well had higher grades than disadvantaged students whose teachers expected them to do poorly. Consider this example of cause and effect in a self-fulfilling prophecy: If an employer expects an openly gay male job applicant to be incompetent, the potential employer might treat the applicant negatively during the interview by engaging in less conversation, making little eye contact, and generally behaving coldly toward the applicant (Hebl, Foster, Mannix, & Dovidio, 2002). In turn, the job applicant will perceive that the potential employer dislikes him, and he will respond by giving shorter responses to interview questions, making less eye contact, and generally disengaging from the interview. After the interview, the employer will reflect on the applicant’s behavior, which seemed cold and distant, and the employer will conclude, based on the applicant’s poor performance during the interview, that the applicant was in fact incompetent. Thus, the employer’s stereotype—gay men are incompetent and do not make good employees—is reinforced. Do you think this job applicant is likely to be hired? Treating individuals according to stereotypic beliefs can lead to prejudice and discrimination. Another dynamic that can reinforce stereotypes is confirmation bias. When interacting with the target of our prejudice, we tend to pay attention to information that is consistent with our stereotypic expectations and ignore information that is inconsistent with our expectations. In this process, known as confirmation bias, we seek out information that supports our stereotypes and ignore information that is inconsistent with our stereotypes (Wason & Johnson-Laird, 1972). In the job interview example, the employer may not have noticed that the job applicant was friendly and engaging, and that he provided competent responses to the interview questions in the beginning of the interview. Instead, the employer focused on the job applicant’s performance in the later part of the interview, after the applicant changed his demeanor and behavior to match the interviewer’s negative treatment. Have you ever fallen prey to the self-fulfilling prophecy or confirmation bias, either as the source or target of such bias? How might we stop the cycle of the self-fulfilling prophecy? Social class stereotypes of individuals tend to arise when information about the individual is ambiguous. If information is unambiguous, stereotypes do not tend to arise (Baron et al., 1995). IN-Groups and OUT-Groups As discussed previously in this section, we all belong to a gender, race, age, and social economic group. These groups provide a powerful source of our identity and self-esteem (Tajfel & Turner, 1979). These groups serve as our in-groups. An in-group is a group that we identify with or see ourselves as belonging to. A group that we don’t belong to, or an out-group, is a group that we view as fundamentally different from us. For example, if you are female, your gender in-group includes all females, and your gender out-group includes all males (See figure 12.23). People often view gender groups as being fundamentally different from each other in personality traits, characteristics, social roles, and interests. Because we often feel a strong sense of belonging and emotional connection to our in-groups, we develop in-group bias: a preference for our own group over other groups. 
This in-group bias can result in prejudice and discrimination because the out-group is perceived as different and is less preferred than our in-group. Despite the group dynamics that seem only to push groups toward conflict, there are forces that promote reconciliation between groups: the expression of empathy, the acknowledgment of past suffering on both sides, and the halting of destructive behaviors. One function of prejudice is to help us feel good about ourselves and maintain a positive self-concept. This need to feel good about ourselves extends to our in-groups: we want to feel good about and protect our in-groups. We seek to resolve threats individually and at the in-group level. This often happens by blaming an out-group for the problem. Scapegoating is the act of blaming an out-group when the in-group experiences frustration or is blocked from obtaining a goal (Allport, 1954).
Learning Objectives • Define aggression • Define cyberbullying • Describe the bystander effect Throughout this chapter we have discussed how people interact and influence one another’s thoughts, feelings, and behaviors in both positive and negative ways. People can work together to achieve great things, such as helping each other in emergencies: recall the heroism displayed during the \(9/11\) terrorist attacks. People also can do great harm to one another, such as conforming to group norms that are immoral and obeying authority to the point of murder: consider the mass conformity of Nazis during WWII. In this section we will discuss a negative side of human behavior—aggression. A number of researchers have explored ways to reduce prejudice. One of the earliest was a study by Sherif et al. (1961) known as the Robbers Cave experiment. They found that when two opposing groups at a camp worked together toward a common goal, prejudicial attitudes between the groups decreased (Gaertner, Dovidio, Banker, Houlette, Johnson, & McGlynn, 2000). Focusing on superordinate goals was the key to attitude change in the research. Another study examined the jigsaw classroom, a technique designed by Aronson and Bridgeman in an effort to increase success in desegregated classrooms. In this technique, students work on an assignment in groups inclusive of various races and abilities. They are assigned tasks within their group, then collaborate with peers from other groups who were assigned the same task, and then report back to their original group. Walker and Crogan (1998) noted that the jigsaw classroom reduced potential for prejudice in Australia, as diverse students worked together on projects needing all of the pieces to succeed. This research suggests that anything that can allow individuals to work together toward common goals can decrease prejudicial attitudes. Obviously, the application of such strategies in real-world settings would enhance opportunities for conflict resolution. Aggression Humans engage in aggression when they seek to cause harm or pain to another person. Aggression takes two forms depending on one’s motives: hostile or instrumental. Hostile aggression is motivated by feelings of anger with intent to cause pain; a fight in a bar with a stranger is an example of hostile aggression. In contrast, instrumental aggression is motivated by achieving a goal and does not necessarily involve intent to cause pain (Berkowitz, 1993); a contract killer who murders for hire displays instrumental aggression. There are many different theories as to why aggression exists. Some researchers argue that aggression serves an evolutionary function (Buss, 2004). Men are more likely than women to show aggression (Wilson & Daly, 1985). From the perspective of evolutionary psychology, human male aggression, like that in nonhuman primates, likely serves to display dominance over other males, both to protect a mate and to perpetuate the male’s genes (See figure 12.24). Sexual jealousy is part of male aggression; males endeavor to make sure their mates are not copulating with other males, thus ensuring their own paternity of the female’s offspring. Although aggression provides an obvious evolutionary advantage for men, women also engage in aggression. Women typically display instrumental forms of aggression, with their aggression serving as a means to an end (Dodge & Schwartz, 1997). 
For example, women may express their aggression covertly, for example, by communication that impairs the social standing of another person. Another theory that explains one of the functions of human aggression is frustration aggression theory (Dollard, Doob, Miller, Mowrer, & Sears, 1939). This theory states that when humans are prevented from achieving an important goal, they become frustrated and aggressive. Bullying A modern form of aggression is bullying. As you learn in your study of child development, socializing and playing with other children is beneficial for children’s psychological development. However, as you may have experienced as a child, not all play behavior has positive outcomes. Some children are aggressive and want to play roughly. Other children are selfish and do not want to share toys. One form of negative social interactions among children that has become a national concern is bullying. Bullying is repeated negative treatment of another person, often an adolescent, over time (Olweus, 1993). A one-time incident in which one child hits another child on the playground would not be considered bullying: Bullying is repeated behavior. The negative treatment typical in bullying is the attempt to inflict harm, injury, or humiliation, and bullying can include physical or verbal attacks. However, bullying doesn’t have to be physical or verbal, it can be psychological. Research finds gender differences in how girls and boys bully others (American Psychological Association, 2010; Olweus, 1993). Boys tend to engage in direct, physical aggression such as physically harming others. Girls tend to engage in indirect, social forms of aggression such as spreading rumors, ignoring, or socially isolating others. Based on what you have learned about child development and social roles, why do you think boys and girls display different types of bullying behavior? Bullying involves three parties: the bully, the victim, and witnesses or bystanders. The act of bullying involves an imbalance of power with the bully holding more power—physically, emotionally, and/or socially over the victim. The experience of bullying can be positive for the bully, who may enjoy a boost to self-esteem. However, there are several negative consequences of bullying for the victim, and also for the bystanders. How do you think bullying negatively impacts adolescents? Being the victim of bullying is associated with decreased mental health, including experiencing anxiety and depression (APA, 2010). Victims of bullying may underperform in schoolwork (Bowen, 2011). Bullying also can result in the victim committing suicide (APA, 2010). How might bullying negatively affect witnesses? Although there is not one single personality profile for who becomes a bully and who becomes a victim of bullying (APA, 2010), researchers have identified some patterns in children who are at a greater risk of being bullied (Olweus, 1993): • Children who are emotionally reactive are at a greater risk for being bullied. Bullies may be attracted to children who get upset easily because the bully can quickly get an emotional reaction from them. • Children who are different from others are likely to be targeted for bullying. Children who are overweight, cognitively impaired, or racially or ethnically different from their peer group may be at higher risk. • Gay, lesbian, bisexual, and transgender teens are at very high risk of being bullied and hurt due to their sexual orientation. 
Cyberbullying With the rapid growth of technology and the wide availability of mobile devices and social networking media, a new form of bullying has emerged: cyberbullying (Hoff & Mitchell, 2009). Cyberbullying, like bullying, is repeated behavior that is intended to cause psychological or emotional harm to another person. What is unique about cyberbullying is that it is typically covert, concealed, done in private, and the bully can remain anonymous. This anonymity gives the bully power, and the victim may feel helpless, unable to escape the harassment, and unable to retaliate (Spears, Slee, Owens, & Johnson, 2009). Cyberbullying can take many forms, including harassing a victim by spreading rumors, creating a website defaming the victim, and ignoring, insulting, laughing at, or teasing the victim (Spears et al., 2009). In cyberbullying, it is more common for girls to be the bullies and victims because cyberbullying is nonphysical and is a less direct form of bullying (Hoff & Mitchell, 2009). Interestingly, girls who become cyberbullies often have been the victims of cyberbullying at one time (Vandebosch & Van Cleemput, 2009). The effects of cyberbullying are just as harmful as those of traditional bullying and include the victim feeling frustration, anger, sadness, helplessness, powerlessness, and fear. Victims will also experience lower self-esteem (Hoff & Mitchell, 2009; Spears et al., 2009). Furthermore, recent research suggests that both cyberbullying victims and perpetrators are more likely to experience suicidal ideation, and they are more likely to attempt suicide than individuals who have no experience with cyberbullying (Hinduja & Patchin, 2010). What features of technology make cyberbullying easier and perhaps more accessible to young adults? What can parents, teachers, and social networking websites, like Facebook, do to prevent cyberbullying? The Bystander Effect The discussion of bullying highlights the problem of witnesses not intervening to help a victim. Researchers Latané and Darley (1968) described a phenomenon called the bystander effect. The bystander effect is a phenomenon in which a witness or bystander does not volunteer to help a victim or person in distress. Instead, they just watch what is happening. Social psychologists hold that we make these decisions based on the social situation, not our own personality variables. The impetus behind research on the bystander effect was the murder of a young woman named Kitty Genovese in 1964. The story of her tragic death took on a life of its own when it was reported that none of her neighbors helped her or called the police when she was being attacked. However, Kassin (2017) noted that her killer was apprehended due to neighbors who called the police when they saw him committing a burglary days later. Not only did bystanders indeed intervene in her murder (one man shouted at the killer, a woman said she called the police, and a friend comforted her in her last moments), but other bystanders intervened in the capture of the murderer. Social psychologists claim that diffusion of responsibility is the likely explanation. Diffusion of responsibility is the tendency for no one in a group to help because the responsibility to help is spread throughout the group (Bandura, 1999). Because there were many witnesses to the attack on Genovese, as evidenced by the number of lit apartment windows in the building, individuals assumed someone else must have already called the police.
The responsibility to call the police was diffused across the number of witnesses to the crime. Have you ever passed an accident on the freeway and assumed that a victim or certainly another motorist has already reported the accident? In general, the greater the number of bystanders, the less likely any one person will help.
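To make the logic of diffusion of responsibility concrete, here is a minimal illustrative sketch; the even-split assumption is introduced only for illustration and is not a formula from Latané and Darley's work. If the felt responsibility for helping were divided evenly among \(N\) witnesses, each witness's share would shrink as the group grows:
\[
\text{felt responsibility per witness} \approx \frac{1}{N}, \qquad \frac{1}{1} = 1.0, \quad \frac{1}{2} = 0.5, \quad \frac{1}{10} = 0.1
\]
Under this simple assumption, a lone witness carries the full weight of the decision to help, whereas each member of a crowd of ten feels only a tenth of it, which is consistent with the general pattern that helping becomes less likely as the number of bystanders increases.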
Learning Objectives • Describe altruism • Describe conditions that influence the formation of relationships • Identify what attracts people to each other • Describe the triangular theory of love • Explain social exchange theory in relationships You’ve learned about many of the negative behaviors of social psychology, but the field also studies many positive social interactions and behaviors. What makes people like each other? With whom are we friends? Whom do we date? Researchers have documented several features of the situation that influence whether we form relationships with others. There are also universal traits that humans find attractive in others. In this section we discuss conditions that make forming relationships more likely, what we look for in friendships and romantic relationships, the different types of love, and a theory explaining how our relationships are formed, maintained, and terminated. Prosocial Behavior and Altruism Do you voluntarily help others? Voluntary behavior with the intent to help other people is called prosocial behavior. Why do people help other people? Is personal benefit, such as feeling good about oneself, the only reason people help one another? Research suggests there are many other reasons. Altruism is people’s desire to help others even if the costs outweigh the benefits of helping. In fact, people acting in altruistic ways may disregard the personal costs associated with helping (See figure 12.26). For example, news accounts of the \(9/11\) terrorist attacks on the World Trade Center in New York reported that an employee in the first tower helped his co-workers make it to the exit stairwell. After helping a co-worker to safety, he went back into the burning building to help additional co-workers. In this case the costs of helping were great, and the hero lost his life in the destruction (Stewart, 2002). Some researchers suggest that altruism operates on empathy. Empathy is the capacity to understand another person’s perspective, to feel what he or she feels. An empathetic person makes an emotional connection with others and feels compelled to help (Batson, 1991). Other researchers argue that altruism is a form of selfless helping that is not motivated by benefits or feeling good about oneself. Certainly, after helping, people feel good about themselves, but some researchers argue that this is a consequence of altruism, not a cause. Other researchers argue that helping is always self-serving because our egos are involved, and we receive benefits from helping (Cialdini, Brown, Lewis, Luce, & Neuberg, 1997). It is challenging to determine experimentally the true motivation for helping, whether it is largely self-serving (egoism) or selfless (altruism). Thus, a debate on whether pure altruism exists continues. Link to Learning See this excerpt from the popular TV series Friends in which egoism versus altruism is debated to learn more. Forming Relationships What do you think is the single most influential factor in determining with whom you become friends and form romantic relationships? You might be surprised to learn that the answer is simple: the people with whom you have the most contact. This most important factor is proximity. You are more likely to be friends with people you have regular contact with.
For example, there are decades of research that shows that you are more likely to become friends with people who live in your dorm, your apartment building, or your immediate neighborhood than with people who live farther away (Festinger, Schachler, & Back, 1950). It is simply easier to form relationships with people you see often because you have the opportunity to get to know them. Similarity is another factor that influences who we form relationships with. We are more likely to become friends or lovers with someone who is similar to us in background, attitudes, and lifestyle. In fact, there is no evidence that opposites attract. Rather, we are attracted to people who are most like us (See figure 12.27) (McPherson, Smith-Lovin, & Cook, 2001). Why do you think we are attracted to people who are similar to us? Sharing things in common will certainly make it easy to get along with others and form connections. When you and another person share similar music taste, hobbies, food preferences, and so on, deciding what to do with your time together might be easy. Homophily is the tendency for people to form social networks, including friendships, marriage, business relationships, and many other types of relationships, with others who are similar (McPherson et al., 2001). But, homophily limits our exposure to diversity (McPherson et al., 2001). By forming relationships only with people who are similar to us, we will have homogenous groups and will not be exposed to different points of view. In other words, because we are likely to spend time with those who are most like ourselves, we will have limited exposure to those who are different than ourselves, including people of different races, ethnicities, social-economic status, and life situations. Once we form relationships with people, we desire reciprocity. Reciprocity is the give and take in relationships. We contribute to relationships, but we expect to receive benefits as well. That is, we want our relationships to be a two way street. We are more likely to like and engage with people who like us back. Self-disclosure is part of the two way street. Self-disclosure is the sharing of personal information (Laurenceau, Barrett, & Pietromonaco, 1998). We form more intimate connections with people with whom we disclose important information about ourselves. Indeed, self-disclosure is a characteristic of healthy intimate relationships, as long as the information disclosed is consistent with our own views (Cozby, 1973). Attraction We have discussed how proximity and similarity lead to the formation of relationships, and that reciprocity and self-disclosure are important for relationship maintenance. But, what features of a person do we find attractive? We don’t form relationships with everyone that lives or works near us, so how is it that we decide which specific individuals we will select as friends and lovers? Researchers have documented several characteristics in men and women that humans find attractive. First we look for friends and lovers who are physically attractive. People differ in what they consider attractive, and attractiveness is culturally influenced. Research, however, suggests that some universally attractive features in women include large eyes, high cheekbones, a narrow jaw line, a slender build (Buss, 1989), and a lower waist-to-hip ratio (Singh, 1993). For men, attractive traits include being tall, having broad shoulders, and a narrow waist (Buss, 1989). 
Both men and women with high levels of facial and body symmetry are generally considered more attractive than asymmetric individuals (Fink, Neave, Manning, & Grammer, 2006; Penton-Voak et al., 2001; Rikowski & Grammer, 1999). Social traits that people find attractive in potential female mates include warmth, affection, and social skills; in males, the attractive traits include achievement, leadership qualities, and job skills (Regan & Berscheid, 1997). Although humans want mates who are physically attractive, this does not mean that we look for the most attractive person possible. In fact, this observation has led some to propose what is known as the matching hypothesis which asserts that people tend to pick someone they view as their equal in physical attractiveness and social desirability (Taylor, Fiore, Mendelsohn, & Cheshire, 2011). For example, you and most people you know likely would say that a very attractive movie star is out of your league. So, even if you had proximity to that person, you likely would not ask them out on a date because you believe you likely would be rejected. People weigh a potential partner’s attractiveness against the likelihood of success with that person. If you think you are particularly unattractive (even if you are not), you likely will seek partners that are fairly unattractive (that is, unattractive in physical appearance or in behavior). Sternberg's Triangular Theory of Love We typically love the people with whom we form relationships, but the type of love we have for our family, friends, and lovers differs. Robert Sternberg (1986) proposed that there are three components of love: intimacy, passion, and commitment. These three components form a triangle that defines multiple types of love: this is known as Sternberg’s triangular theory of love (See figure 12.28). Intimacy is the sharing of details and intimate thoughts and emotions. Passion is the physical attraction—the flame in the fire. Commitment is standing by the person—the “in sickness and health” part of the relationship. Sternberg (1986) states that a healthy relationship will have all three components of love—intimacy, passion, and commitment—which is described as consummate love (See figure 12.29). However, different aspects of love might be more prevalent at different life stages. Other forms of love include liking, which is defined as having intimacy but no passion or commitment. Infatuation is the presence of passion without intimacy or commitment. Empty love is having commitment without intimacy or passion. Companionate love, which is characteristic of close friendships and family relationships, consists of intimacy and commitment but no passion. Romantic love is defined by having passion and intimacy, but no commitment. Finally, fatuous love is defined by having passion and commitment, but no intimacy, such as a long term sexual love affair. Can you describe other examples of relationships that fit these different types of love? Social Exchange Theory We have discussed why we form relationships, what attracts us to others, and different types of love. But what determines whether we are satisfied with and stay in a relationship? One theory that provides an explanation is social exchange theory. According to social exchange theory, we act as naïve economists in keeping a tally of the ratio of costs and benefits of forming and maintaining a relationship with others (See figure below) (Rusbult & Van Lange, 2003). 
People are motivated to maximize the benefits of social exchanges, or relationships, and minimize the costs. People prefer to have more benefits than costs, or to have nearly equal costs and benefits, but most people are dissatisfied if their social exchanges create more costs than benefits. Let’s discuss an example. If you have ever decided to commit to a romantic relationship, you probably considered the advantages and disadvantages of your decision. What are the benefits of being in a committed romantic relationship? You may have considered having companionship, intimacy, and passion, but also being comfortable with a person you know well. What are the costs of being in a committed romantic relationship? You may think that over time boredom from being with only one person may set in; moreover, it may be expensive to share activities such as attending movies and going to dinner. However, the benefits of dating your romantic partner presumably outweigh the costs, or you wouldn’t continue the relationship.
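One way to picture the naïve economist of social exchange theory is with a simple bookkeeping sketch. The notation below is illustrative only; it is not a formula given by social exchange theorists, just one compact way to express the cost-benefit tally described above.
\[
\text{Outcome} = \text{Benefits} - \text{Costs}
\]
On this reading, companionship, intimacy, and comfort might each be entered as benefits, while boredom and the expense of shared activities are entered as costs; the relationship tends to feel satisfying, and is likely to continue, when the running outcome is positive or near zero, and dissatisfying when costs consistently outweigh benefits.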
29. Compare and contrast situational influences and dispositional influences and give an example of each. Explain how situational influences and dispositional influences might explain inappropriate behavior.
30. Provide an example of how people from individualistic and collectivistic cultures would differ in explaining why they won an important sporting event.
31. Why didn’t the “good” guards in the Stanford prison experiment object to other guards’ abusive behavior? Were the student prisoners simply weak people? Why didn’t they object to being abused?
32. Describe how social roles, social norms, and scripts were evident in the Stanford prison experiment. How can this experiment be applied to everyday life? Are there any more recent examples where people started fulfilling a role and became abusive?
33. Give an example (one not used in class or your text) of cognitive dissonance and how an individual might resolve this.
34. Imagine that you work for an advertising agency, and you’ve been tasked with developing an advertising campaign to increase sales of Bliss Soda. How would you develop an advertisement for this product that uses a central route of persuasion? How would you develop an ad using a peripheral route of persuasion?
35. Describe how seeking outside opinions can prevent groupthink.
36. Explain why the following situation is not an example of discrimination: A teacher seats students wearing short sleeves on the left half of the room and students wearing long sleeves on the right half of the room.
37. Some people seem more willing to openly display prejudice regarding sexual orientation than prejudice regarding race and gender. Describe why this might be.
38. When people blame a scapegoat, how do you think they choose evidence to support the blame?
39. Compare and contrast hostile and instrumental aggression.
40. What evidence discussed in the previous section suggests that cyberbullying is difficult to detect and prevent?
41. Describe what influences whether relationships will be formed.
42. The evolutionary theory argues that humans are motivated to perpetuate their genes and reproduce. Using an evolutionary perspective, describe traits in men and women that humans find attractive.
Key Terms
actor-observer bias: phenomenon of explaining other people’s behaviors as due to internal factors and our own behaviors as due to situational forces
ageism: prejudice and discrimination toward individuals based solely on their age
aggression: seeking to cause harm or pain to another person
altruism: humans’ desire to help others even if the costs outweigh the benefits of helping
Asch effect: group majority influences an individual’s judgment, even when that judgment is inaccurate
attitude: evaluations of or feelings toward a person, idea, or object that are typically positive or negative
attribution: explanation for the behavior of other people
bullying: a person, often an adolescent, being treated negatively repeatedly and over time
bystander effect: situation in which a witness or bystander does not volunteer to help a victim or person in distress
central route persuasion: logic-driven arguments using data and facts to convince people of an argument’s worthiness
cognitive dissonance: psychological discomfort that arises from a conflict in a person’s behaviors, attitudes, or beliefs that runs counter to one’s positive self-perception
collectivist culture: culture that focuses on communal relationships with others such as family, friends, and community
companionate love: type of love consisting of intimacy and commitment, but not passion; associated with close friendships and family relationships
confederate: person who works for a researcher and is aware of the experiment, but who acts as a participant; used to manipulate social situations as part of the research design
confirmation bias: seeking out information that supports our stereotypes while ignoring information that is inconsistent with our stereotypes
conformity: when individuals change their behavior to go along with the group even if they do not agree with the group
consummate love: type of love occurring when intimacy, passion, and commitment are all present
cyberbullying: repeated behavior that is intended to cause psychological or emotional harm to another person and that takes place online
diffusion of responsibility: tendency for no one in a group to help because the responsibility to help is spread throughout the group
discrimination: negative actions toward individuals as a result of their membership in a particular group
dispositionism: describes a perspective common to personality psychologists, which asserts that our behavior is determined by internal factors, such as personality traits and temperament
empathy: capacity to understand another person’s perspective—to feel what they feel
foot-in-the-door technique: persuasion of one person by another person, encouraging a person to agree to a small favor, or to buy a small item, only to later request a larger favor or purchase of a larger item
fundamental attribution error: tendency to overemphasize internal factors as attributions for behavior and underestimate the power of the situation
group polarization: strengthening of the original group attitude after discussing views within the group
groupthink: group members modify their opinions to match what they believe is the group consensus
homophily: tendency for people to form social networks, including friendships, marriage, business relationships, and many other types of relationships, with others who are similar
homophobia: prejudice and discrimination against individuals based solely on their sexual orientation
hostile aggression: aggression motivated by feelings of anger with intent to cause pain
in-group: group that we identify with or see ourselves as belonging to
in-group bias: preference for our own group over other groups
individualistic culture: culture that focuses on individual achievement and autonomy
informational social influence: conformity to a group norm prompted by the belief that the group is competent and has the correct information
instrumental aggression: aggression motivated by achieving a goal and does not necessarily involve intent to cause pain
internal factor: internal attribute of a person, such as personality traits or temperament
just-world hypothesis: ideology common in the United States that people get the outcomes they deserve
justification of effort: theory that people value goals and achievements more when they have put more effort into them
normative social influence: conformity to a group norm to fit in, feel good, and be accepted by the group
obedience: change of behavior to please an authority figure or to avoid aversive consequences
out-group: group that we don’t belong to—one that we view as fundamentally different from us
peripheral route persuasion: one person persuades another person; an indirect route that relies on association of peripheral cues (such as positive emotions and celebrity endorsement) to associate positivity with a message
persuasion: process of changing our attitude toward something based on some form of communication
prejudice: negative attitudes and feelings toward individuals based solely on their membership in a particular group
prosocial behavior: voluntary behavior with the intent to help other people
racism: prejudice and discrimination toward individuals based solely on their race
reciprocity: give and take in relationships
romantic love: type of love consisting of intimacy and passion, but no commitment
scapegoating: act of blaming an out-group when the in-group experiences frustration or is blocked from obtaining a goal
script: person’s knowledge about the sequence of events in a specific setting
self-disclosure: sharing personal information in relationships
self-fulfilling prophecy: treating stereotyped group members according to our biased expectations only to have this treatment influence the individual to act according to our stereotypic expectations, thus confirming our stereotypic beliefs
self-serving bias: tendency for individuals to take credit by making dispositional or internal attributions for positive outcomes and situational or external attributions for negative outcomes
sexism: prejudice and discrimination toward individuals based on their sex
situationism: describes a perspective that behavior and actions are determined by the immediate environment and surroundings; a view promoted by social psychologists
social exchange theory: humans act as naïve economists in keeping a tally of the ratio of costs and benefits of forming and maintaining a relationship, with the goal to maximize benefits and minimize costs
social loafing: exertion of less effort by a person working in a group because individual performance cannot be evaluated separately from the group, thus causing performance decline on easy tasks
social norm: group’s expectations regarding what is appropriate and acceptable for the thoughts and behavior of its members
social psychology: field of psychology that examines how people impact or affect each other, with particular focus on the power of the situation
social role: socially defined pattern of behavior that is expected of a person in a given setting or group
Stanford prison experiment: Stanford University conducted an experiment in a mock prison that demonstrated the power of social roles, social norms, and scripts
stereotype: specific beliefs or assumptions about individuals based solely on their membership in a group, regardless of their individual characteristics
triangular theory of love: model of love based on three components: intimacy, passion, and commitment; several types of love exist, depending on the presence or absence of each of these components
43. Provide a personal example of an experience in which your behavior was influenced by the power of the situation.
44. Think of an example in the media of a sports figure—player or coach—who gives a self-serving attribution for winning or losing. Examples might include accusing the referee of incorrect calls, in the case of losing, or citing their own hard work and talent, in the case of winning.
45. Try attending a religious service very different from your own and see how you feel and behave without knowing the appropriate script. Or, try attending an important, personal event that you have never attended before, such as a bar mitzvah (a coming-of-age ritual in Jewish culture), a quinceañera (in some Latin American cultures a party is given to a girl who is turning 15 years old), a wedding, a funeral, or a sporting event new to you, such as horse racing or bull riding. Observe and record your feelings and behaviors in this unfamiliar setting for which you lack the appropriate script. Do you silently observe the action, or do you ask another person for help interpreting the behaviors of people at the event? Describe in what ways your behavior would change if you were to attend a similar event in the future.
46. Name and describe at least three social roles you have adopted for yourself. Why did you adopt these roles? What are some roles that are expected of you, but that you try to resist?
47. Cognitive dissonance often arises after making an important decision, called post-decision dissonance (or in popular terms, buyer’s remorse). Describe a recent decision you made that caused dissonance and describe how you resolved it.
48. Describe a time when you or someone you know used the foot-in-the-door technique to gain someone’s compliance.
49. Conduct a conformity study the next time you are in an elevator. After you enter the elevator, stand with your back toward the door. See if others conform to your behavior. Watch this video for a candid camera demonstration of this phenomenon. Did your results turn out as expected?
50. Most students adamantly state that they would never have turned up the voltage in the Milgram experiment. Do you think you would have refused to shock the learner? Looking at your own past behavior, what evidence suggests that you would go along with the order to increase the voltage?
51. Give an example when you felt that someone was prejudiced against you. What do you think caused this attitude? Did this person display any discrimination behaviors and, if so, how?
52. Give an example when you felt prejudiced against someone else. How did you discriminate against them? Why do you think you did this?
53. Have you ever experienced or witnessed bullying or cyberbullying? How did it make you feel? What did you do about it? After reading this section would you have done anything differently?
54. The next time you see someone needing help, observe your surroundings. Look to see if the bystander effect is in action and take measures to make sure the person gets help. If you aren’t able to help, notify an adult or authority figure that can.
55. Think about your recent friendships and romantic relationship(s). What factors do you think influenced the development of these relationships? What attracted you to becoming friends or romantic partners?
56. Have you ever used a social exchange theory approach to determine how satisfied you were in a relationship, either a friendship or romantic relationship? Have you ever had the costs outweigh the benefits of a relationship?
If so, how did you address this imbalance?
1. As a field, social psychology focuses on ________ in predicting human behavior. (a) personality traits (b) genetic predispositions (c) biological forces (d) situational factors
2. Making internal attributions for your successes and making external attributions for your failures is an example of ________. (a) actor-observer bias (b) fundamental attribution error (c) self-serving bias (d) just-world hypothesis
3. Collectivistic cultures are to ________ as individualistic cultures are to ________. (a) dispositional; situational (b) situational; dispositional (c) autonomy; group harmony (d) just-world hypothesis; self-serving bias
4. According to the actor-observer bias, we have more information about ________. (a) situational influences on behavior (b) influences on our own behavior (c) influences on others’ behavior (d) dispositional influences on behavior
5. A(n) ________ is a set of group expectations for appropriate thoughts and behaviors of its members. (a) social role (b) social norm (c) script (d) attribution
6. On his first day of soccer practice, Jose suits up in a t-shirt, shorts, and cleats and runs out to the field to join his teammates. Jose’s behavior is reflective of ________. (a) a script (b) social influence (c) good athletic behavior (d) normative behavior
7. When it comes to buying clothes, teenagers often follow social norms; this is likely motivated by ________. (a) following parents’ rules (b) saving money (c) fitting in (d) looking good
8. In the Stanford prison experiment, even the lead researcher succumbed to his role as a prison supervisor. This is an example of the power of ________ influencing behavior. (a) scripts (b) social norms (c) conformity (d) social roles
9. Attitudes describe our ________ of people, objects, and ideas. (a) treatment (b) evaluations (c) cognitions (d) knowledge
10. Cognitive dissonance causes discomfort because it disrupts our sense of ________. (a) dependency (b) unpredictability (c) consistency (d) power
11. In order for the central route to persuasion to be effective, the audience must be ________ and ________. (a) analytical; motivated (b) attentive; happy (c) intelligent; unemotional (d) gullible; distracted
12. Examples of cues used in peripheral route persuasion include all of the following except ________. (a) celebrity endorsement (b) positive emotions (c) attractive models (d) factual information
13. In the Asch experiment, participants conformed due to ________ social influence. (a) informational (b) normative (c) inspirational (d) persuasive
14. Under what conditions will informational social influence be more likely? (a) when individuals want to fit in (b) when the answer is unclear (c) when the group has expertise (d) both b and c
15. Social loafing occurs when ________. (a) individual performance cannot be evaluated (b) the task is easy (c) both a and b (d) none of the above
16. If group members modify their opinions to align with a perceived group consensus, then ________ has occurred. (a) group cohesion (b) social polarization (c) groupthink (d) social loafing
17. Prejudice is to ________ as discrimination is to ________. (a) feelings; behavior (b) thoughts; feelings (c) feelings; thoughts (d) behavior; feelings
18. Which of the following is not a type of prejudice? (a) homophobia (b) racism (c) sexism (d) individualism
19. ________ occurs when the out-group is blamed for the in-group’s frustration. (a) stereotyping (b) in-group bias (c) scapegoating (d) ageism
20. When we seek out information that supports our stereotypes we are engaged in ________. (a) scapegoating (b) confirmation bias (c) self-fulfilling prophecy (d) in-group bias
21. Typically, bullying from boys is to ________ as bullying from girls is to ________. (a) emotional harm; physical harm (b) physical harm; emotional harm (c) psychological harm; physical harm (d) social exclusion; verbal taunting
22. Which of the following adolescents is least likely to be targeted for bullying? (a) a child with a physical disability (b) a transgender adolescent (c) an emotionally sensitive boy (d) the captain of the football team
23. The bystander effect likely occurs due to ________. (a) desensitization to violence (b) people not noticing the emergency (c) diffusion of responsibility (d) emotional insensitivity
24. Altruism is a form of prosocial behavior that is motivated by ________. (a) feeling good about oneself (b) selfless helping of others (c) earning a reward (d) showing bravery to bystanders
25. After moving to a new apartment building, research suggests that Sam will be most likely to become friends with ________. (a) his next door neighbor (b) someone who lives three floors up in the apartment building (c) someone from across the street (d) his new postal delivery person
26. What trait do both men and women tend to look for in a romantic partner? (a) sense of humor (b) social skills (c) leadership potential (d) physical attractiveness
27. According to the triangular theory of love, what type of love is defined by passion and intimacy but no commitment? (a) consummate love (b) empty love (c) romantic love (d) liking
28. According to social exchange theory, humans want to maximize the ________ and minimize the ________ in relationships. (a) intimacy; commitment (b) benefits; costs (c) costs; benefits (d) passion; intimacy
12.1 What Is Social Psychology?
Social psychology is the subfield of psychology that studies the power of the situation to influence individuals’ thoughts, feelings, and behaviors. Psychologists categorize the causes of human behavior as those due to internal factors, such as personality, or those due to external factors, such as cultural and other social influences. Behavior is better explained, however, by using both approaches. Lay people tend to over-rely on dispositional explanations for behavior and ignore the power of situational influences, a perspective called the fundamental attribution error. People from individualistic cultures are more likely to display this bias than people from collectivistic cultures. Our explanations for our own and others’ behaviors can be biased due to not having enough information about others’ motivations for behaviors and by providing explanations that bolster our self-esteem.

12.2 Self-presentation
Human behavior is largely influenced by our social roles, norms, and scripts. In order to know how to act in a given situation, we have shared cultural knowledge of how to behave depending on our role in society. Social norms dictate the behavior that is appropriate or inappropriate for each role. Each social role has scripts that help humans learn the sequence of appropriate behaviors in a given setting. The famous Stanford prison experiment is an example of how the power of the situation can dictate the social roles, norms, and scripts we follow in a given situation, even if this behavior is contrary to our typical behavior.

12.3 Attitudes and Persuasion
Attitudes are our evaluations or feelings toward a person, idea, or object and typically are positive or negative. Our attitudes and beliefs are influenced not only by external forces, but also by internal influences that we control. An internal form of attitude change is cognitive dissonance, or the tension we experience when our thoughts, feelings, and behaviors are in conflict. In order to reduce dissonance, individuals can change their behavior, attitudes, or cognitions, or add a new cognition. External forces of persuasion include advertising; the features of advertising that influence our behaviors include the source, message, and audience. There are two primary routes to persuasion. The central route to persuasion uses facts and information to persuade potential consumers. The peripheral route uses positive association with cues such as beauty, fame, and positive emotions.

12.4 Conformity, Compliance, and Obedience
The power of the situation can lead people to conform, or go along with the group, even in the face of inaccurate information. Conformity to group norms is driven by two motivations, the desire to fit in and be liked and the desire to be accurate and gain information from the group. Authority figures also have influence over our behaviors, and many people become obedient and follow orders even if the orders are contrary to their personal values. Conformity to group pressures can also result in groupthink, or the faulty decision-making process that results from cohesive group members trying to maintain group harmony. Group situations can improve human behavior through facilitating performance on easy tasks, but inhibiting performance on difficult tasks. The presence of others can also lead to social loafing when individual efforts cannot be evaluated.
12.5 Prejudice and Discrimination
As diverse individuals, humans can experience conflict when interacting with people who are different from each other. Prejudice, or negative feelings and evaluations, is common when people are from a different social group (i.e., out-group). Negative attitudes toward out-groups can lead to discrimination. Prejudice and discrimination against others can be based on gender, race, ethnicity, social class, sexual orientation, or a variety of other social identities. In-group members who feel threatened may blame the out-groups for their plight, thus using the out-group as a scapegoat for their frustration.

12.6 Aggression
Aggression is seeking to cause another person harm or pain. Hostile aggression is motivated by feelings of anger with intent to cause pain, and instrumental aggression is motivated by achieving a goal and does not necessarily involve intent to cause pain. Bullying is an international public health concern that largely affects the adolescent population. Bullying is repeated behaviors that are intended to inflict harm on the victim and can take the form of physical, psychological, emotional, or social abuse. Bullying has negative mental health consequences for youth, including suicide. Cyberbullying is a newer form of bullying that takes place in an online environment, where bullies can remain anonymous and victims are helpless to address the harassment. Despite the social norm of helping others in need, when there are many bystanders witnessing an emergency, diffusion of responsibility will lead to a lower likelihood of any one person helping.

12.7 Prosocial Behavior
Altruism is a pure form of helping others out of empathy, which can be contrasted with egoistic motivations for helping. Forming relationships with others is a necessity for social beings. We typically form relationships with people who are close to us in proximity and people with whom we share similarities. We expect reciprocity and self-disclosure in our relationships. We also want to form relationships with people who are physically attractive, though standards for attractiveness vary by culture and gender. There are many types of love that are determined by various combinations of intimacy, passion, and commitment; consummate love, which is the ideal form of love, contains all three components. When determining satisfaction and whether to maintain a relationship, individuals often use a social exchange approach and weigh the costs and benefits of forming and maintaining a relationship.
• Introduction
Telecommuting is representative of many management innovations that have been made in recent years, largely by tech companies. Telecommuting reflects a belief on the part of companies that employees are responsible, self-motivating, and perhaps work best when they are left alone. It also has an impact on work–family balance, though which way is yet unclear. And telecommuting reflects the more general trend of increasing overlap between workers’ time spent on the job and time spent off the job.
• 13.1: What Is Industrial and Organizational Psychology?
The workday is a significant portion of workers’ time and energy. It impacts their lives and their family’s lives in positive and negative physical and psychological ways. Industrial and organizational (I-O) psychology is a branch of psychology that studies how human behavior and psychology affect work and how they are affected by work.
• 13.2: Industrial Psychology - Selecting and Evaluating Employees
The branch of I-O psychology known as industrial psychology focuses on identifying and matching persons to tasks within an organization. This involves job analysis, which means accurately describing the task or job. Then, organizations must identify the characteristics of applicants for a match to the job analysis. It also involves training employees from their first day on the job throughout their tenure within the organization, and appraising their performance along the way.
• 13.3: Organizational Psychology - The Social Dimension of Work
Organizational psychology is the second major branch of study and practice within the discipline of industrial and organizational psychology. In organizational psychology, the focus is on social interactions and their effect on the individual and on the functioning of the organization. In this section, you will learn about the work organizational psychologists have done to understand job satisfaction, different styles of management, different styles of leadership, organizational culture, and teamwork.
• 13.4: Human Factors Psychology and Workplace Design
Human factors psychology (or ergonomics, a term that is favored in Europe) is the third subject area within industrial and organizational psychology. This field is concerned with the integration of the human-machine interface in the workplace, through design, and specifically with researching and designing machines that fit human requirements. The integration may be physical or cognitive, or a combination of both.
• Critical Thinking Questions
• Key Terms
• Personal Application Questions
• Review Questions
• Summary

Thumbnail: First staff meeting of President Reagan. (Public Domain)

13: Industrial-Organizational Psychology
Chapter Outline
13.1 What Is Industrial and Organizational Psychology?
13.2 Industrial Psychology: Selecting and Evaluating Employees
13.3 Organizational Psychology: The Social Dimension of Work
13.4 Human Factors Psychology and Workplace Design

In October 2019, Social Security Administration Commissioner Andrew Saul announced that the Social Security Administration would end a telework program it began 6 years previously, serving approximately 12,000 of its employees. Then-Deputy Commissioner Grace Kim wrote a letter to Social Security employees explaining the reasons the program was ending and cited an increased workload and a backlog of cases as reasons for ending the pilot program.
This change in the telework policy came on the heels of a negotiation between the American Federal Government Employee Union and the Social Security Administration, a negotiation that had to be brokered by the Federal Services Impasse Panel (a third-party federal organization developed specifically to arbitrate in situations where negotiations between union officials and federal organizations break down and progress halts between the organization and the union representatives) (Wagner, 2019a). The May 2019 decision by the panel gave Social Security Agency managers the ability to limit or restrict telework for employees using their discretion to ensure that all tasks were being completed and wait times were normal. One of the biggest reasons cited for this was that the organization was able to provide evidence that after the implementation of a telework program, the average wait time for individuals temporarily increased, causing a backlog of work to be completed at a later date. Although the Social Security Administration pushed the official end date for all telework in the agency to March of 2020, the program was officially ended. In the wake of the COVID-19 pandemic, Congress requested a review of the telework policy and raised questions about whether it should be revived to serve as a preventative measure for reducing and slowing the spread of the virus (Wagner, 2019b). Could this allow employees to continue working while not coming to the workplace in order to help prevent the spread of illness? What were the benefits versus the costs of implementing a telework policy again for employees as the spread of the virus continued? What did previous research show related to the positive and negative benefits to the organization and the employees with respect to telework?
Learning Objectives • Understand the scope of study in the field of industrial and organizational psychology • Describe the history of industrial and organizational psychology In 2019, people who worked in the United States spent an average of about 42–54 hours per week working (Bureau of Labor Statistics—U.S. Department of Labor, 2019). Sleeping was the only other activity they spent more time on with an average of about 43–62 hours per week. The workday is a significant portion of workers’ time and energy. It impacts their lives and their family’s lives in positive and negative physical and psychological ways. Industrial and organizational (I-O) psychology is a branch of psychology that studies how human behavior and psychology affect work and how they are affected by work. Industrial and organizational psychologists work in four main contexts: academia, government, consulting firms, and business. Most I-O psychologists have a master’s or doctorate degree. The field of I-O psychology can be divided into three broad areas (Figure 13.2 and Figure 13.3): industrial, organizational, and human factors. Industrial psychology is concerned with describing job requirements and assessing individuals for their ability to meet those requirements. In addition, once employees are hired, industrial psychology studies and develops ways to train, evaluate, and respond to those evaluations. As a consequence of its concern for candidate characteristics, industrial psychology must also consider issues of legality regarding discrimination in hiring. Organizational psychology is a discipline interested in how the relationships among employees affect those employees and the performance of a business. This includes studying worker satisfaction, motivation, and commitment. This field also studies management, leadership, and organizational culture, as well as how an organization’s structures, management and leadership styles, social norms, and role expectations affect individual behavior. As a result of its interest in worker wellbeing and relationships, organizational psychology also considers the subjects of harassment, including sexual harassment, and workplace violence. Human factors psychology is the study of how workers interact with the tools of work and how to design those tools to optimize workers’ productivity, safety, and health. These studies can involve interactions as straightforward as the fit of a desk, chair, and computer to a human having to sit on the chair at the desk using the computer for several hours each day. They can also include the examination of how humans interact with complex displays and their ability to interpret them accurately and quickly. In Europe, this field is referred to as ergonomics. Occupational health psychology (OHP) deals with the stress, diseases, and disorders that can affect employees as a result of the workplace. As such, the field is informed by research from the medical, biological, psychological, organizational, human factors, human resources, and industrial fields. Individuals in this field seek to examine the ways in which the organization affects the quality of work life for an employee and the responses that employees have towards their organization or as a result of their organization’s influence on them. The responses for employees are not limited to the workplace as there may be some spillover into their personal lives outside of work, especially if there is not good work-life balance. 
The ultimate goal of an occupational health psychologist is to improve the overall health and well-being of an individual, and, as a result, increase the overall health of the organization (Society for Occupational Health Psychology, 2020).

In 2009, the field of humanitarian work psychology (HWP) was developed as the brainchild of a small group of I-O psychologists who met at a conference. Realizing they had a shared set of goals involving helping those who are underserved and underprivileged, the I-O psychologists formally formed the group in 2012 and have approximately 300 members worldwide. Although this is a small number, the group continues to expand. The group seeks to help marginalized members of society, such as people with low incomes, find work. In addition, they help to determine ways to deliver humanitarian aid during major catastrophes. The Humanitarian Work Psychology group can also reach out to those in the local community who do not have the knowledge, skills, and abilities (KSAs) to be able to find gainful employment that would enable them to not need to receive aid. In both cases, humanitarian work psychologists try to help the underserved individuals develop KSAs that they can use to improve their lives and their current situations. When ensuring these underserved individuals receive training or education, the focus is on skills that, once learned, will never be forgotten and can serve individuals throughout their lifetimes as they seek employment (APA, 2016). Table 13.1 summarizes the main fields in I-O psychology, their focuses, and jobs within each field.

Table 13.1 Fields of Industrial Organizational Psychology

Industrial Psychology
Description: Specializes and focuses on the retention of employees and hiring practices to ensure the least number of firings and the greatest number of hirings relative to the organization’s size.
Types of jobs: Personnel Analyst, Instructional Designer, Professor, Research Analyst

Organizational Psychology
Description: Works with the relationships that employees develop with their organizations and conversely that their organization develops with them. In addition, studies the relationships that develop between co-workers and how that is influenced by organizational norms.
Types of jobs: HR Research Specialist, Professor, Project Consultant, Personnel Psychologist, Test Developer, Training Developer, Leadership Developer, Talent Developer

Human Factors and Engineering
Description: Researches advances and changes in technology in an effort to improve the way technology is used by consumers, whether with consumer products, technologies, transportation, work environments, or communications. Seeks to be better able to predict the ways in which people can and will utilize technology and products in an effort to provide improved safety and reliability.
Types of jobs: Professor, Ergonomist, Safety Scientist, Project Consultant, Inspector, Research Scientist, Marketer, Product Development

Humanitarian Work Psychology
Description: Works to improve the conditions of individuals who have faced serious disaster or who are part of an underserved population. Focuses on labor relations, enhancing public health services, effects on populations due to climate change, recession, and diseases.
Types of jobs: Professor, Instructional Designer, Research Scientist, Counselor, Consultant, Program Manager, Senior Response Officer

Occupational Health Psychology
Description: Concerned with the overall well-being of both employees and organizations.
Types of jobs:
Occupational Therapist, Research Scientist, Consultant, Human Resources (HR) Specialist, Professor

Link to Learning
Find out what I-O psychologists do on the Society for Industrial and Organizational Psychology (SIOP) website—a professional organization for people working in the discipline. This site also offers several I-O psychologist profiles.

The Historical Development of Industrial and Organizational Psychology
Industrial and organizational psychology had its origins in the early 20th century. Several influential early psychologists studied issues that today would be categorized as industrial psychology: James Cattell (1860–1944), Hugo Münsterberg (1863–1916), Walter Dill Scott (1869–1955), Robert Yerkes (1876–1956), Walter Bingham (1880–1952), and Lillian Gilbreth (1878–1972). Cattell, Münsterberg, and Scott had been students of Wilhelm Wundt, the father of experimental psychology. Some of these researchers had been involved in work in the area of industrial psychology before World War I. Cattell’s contribution to industrial psychology is largely reflected in his founding of a psychological consulting company, the Psychological Corporation, which is still operating today, and in the accomplishments of students at Columbia in the area of industrial psychology. In 1913, Münsterberg published Psychology and Industrial Efficiency, which covered topics such as employee selection, employee training, and effective advertising. Scott was one of the first psychologists to apply psychology to advertising, management, and personnel selection. In 1903, Scott published two books: The Theory of Advertising and Psychology of Advertising. These were the first books to describe the use of psychology in the business world. By 1911 he had published two more books, Influencing Men in Business and Increasing Human Efficiency in Business. In 1916 a newly formed division of the Carnegie Institute of Technology hired Scott to conduct applied research on employee selection (Katzell & Austin, 1992).

The focus of all this research was on what we now know as industrial psychology; it was only later in the century that the field of organizational psychology developed as an experimental science (Katzell & Austin, 1992). In addition to their academic positions, these researchers also worked directly for businesses as consultants.

When the United States entered World War I in April 1917, the work of psychologists working in this discipline expanded to include their contributions to military efforts. At that time Yerkes was the president of the 25-year-old American Psychological Association (APA). The APA is a professional association in the United States for clinical and research psychologists. Today the APA performs a number of functions including holding conferences, accrediting university degree programs, and publishing scientific journals. Yerkes organized a group under the Surgeon General’s Office (SGO) that developed methods for screening and selecting enlisted men. They developed the Army Alpha test to measure mental abilities. The Army Beta test was a non-verbal form of the test that was administered to illiterate and non-English-speaking draftees. Scott and Bingham organized a group under the Adjutant General’s Office (AGO) with the goal of developing selection methods for officers. They created a catalogue of occupational needs for the Army, essentially a job-description system and a system of performance ratings and occupational skill tests for officers (Katzell & Austin, 1992).
After the war, work on personnel selection continued. For example, Millicent Pond researched the selection of factory workers, comparing the results of pre-employment tests with various indicators of job performance (Vinchur & Koppes, 2014).

From 1929 to 1932 Elton Mayo (1880–1949) and his colleagues began a series of studies at a plant near Chicago, Western Electric’s Hawthorne Works (Figure 13.4). This long-term project took industrial psychology beyond just employee selection and placement to a study of more complex problems of interpersonal relations, motivation, and organizational dynamics. These studies mark the origin of organizational psychology. They began as research into the effects of the physical work environment (e.g., level of lighting in a factory), but the researchers found that the psychological and social factors in the factory were of more interest than the physical factors. These studies also examined how human interaction factors, such as supervisorial style, increased or decreased productivity.

Analysis of the findings by later researchers led to the term the Hawthorne effect, which describes the increase in performance of individuals who are noticed, watched, and paid attention to by researchers or supervisors (See figure 13.5). What the original researchers found was that any change in a variable, such as lighting levels, led to an improvement in productivity; this was true even when the change was negative, such as a return to poor lighting. The effect faded when the attention faded (Roethlisberger & Dickson, 1939). The Hawthorne-effect concept endures today as an important experimental consideration in many fields and a factor that has to be controlled for in an experiment. In other words, an experimental treatment of some kind may produce an effect simply because it involves greater attention of the researchers on the participants (McCarney et al., 2007).

Link to Learning
Watch this video of first-hand accounts of the original Hawthorne studies to learn more.

In the 1930s, researchers began to study employees’ feelings about their jobs. Kurt Lewin also conducted research on the effects of various leadership styles, team structure, and team dynamics (Katzell & Austin, 1992). Lewin is considered the founder of social psychology, and much of his work and that of his students produced results that had important influences in organizational psychology. Lewin and his students’ research included an important early study that used children to study the effect of leadership style on aggression, group dynamics, and satisfaction (Lewin, Lippitt, & White, 1939). Lewin was also responsible for coining the term group dynamics, and he was involved in studies of group interactions, cooperation, competition, and communication that bear on organizational psychology.

Parallel to these studies in industrial and organizational psychology, the field of human factors psychology was also developing. Frederick Taylor was an engineer who saw that if one could redesign the workplace there would be an increase in both output for the company and wages for the workers. In 1911 he put forward his theory in a book titled The Principles of Scientific Management (Figure 13.6). His book examines management theories, personnel selection and training, as well as the work itself, using time and motion studies. Taylor argued that the principal goal of management should be to make the most money for the employer, along with the best outcome for the employee.
He believed that the best outcome for the employee and management would be achieved through training and development so that each employee could provide the best work. He believed that by conducting time and motion studies for both the organization and the employee, the best interests of both were addressed. Time-motion studies were methods aimed at improving work by dividing different types of operations into sections that could be measured. These analyses were used to standardize work and to check the efficiency of people and equipment.

Personnel selection is a process used by recruiting personnel within the company to recruit and select the best candidates for the job. Training may need to be conducted depending on what skills the hired candidate has. Often companies will hire someone with the personality that fits in with others but who may be lacking in skills. Skills can be taught, but personality cannot be easily changed.

One of the examples of Taylor’s theory in action involved workers handling heavy iron ingots. Taylor showed that the workers could be more productive by taking work rests. This method of rest increased worker productivity from \(12.5\) to \(47.0\) tons moved per day with less reported fatigue as well as increased wages for the workers who were paid by the ton. At the same time, the company’s cost was reduced from \(9.2\) cents to \(3.9\) cents per ton. Despite these increases in productivity, Taylor’s theory received a great deal of criticism at the time because it was believed that it would exploit workers and reduce the number of workers needed. Also controversial was the underlying concept that only a manager could determine the most efficient method of working, and that while at work, a worker was incapable of this. Taylor’s theory was underpinned by the notion that a worker was fundamentally lazy and the goal of Taylor’s scientific management approach was to maximize productivity without much concern for worker well-being. His approach was criticized by unions and those sympathetic to workers (Van De Water, 1997).

Gilbreth was another influential I-O psychologist who strove to find ways to increase productivity (See figure 13.7). Using time and motion studies, Gilbreth and her husband, Frank, worked to make workers more efficient by reducing the number of motions required to perform a task. She not only applied these methods to industry but also to the home, office, shops, and other areas. She investigated employee fatigue and time management stress and found many employees were motivated by money and job satisfaction. In 1914, Gilbreth wrote the book The Psychology of Management: The Function of the Mind in Determining, Teaching, and Installing Methods of Least Waste, and she is known as the mother of modern management. Some of Gilbreth’s contributions are still in use today: you can thank her for the idea of putting shelves on the inside of refrigerator doors, and she also came up with the concept of using a foot pedal to operate the lid of a trash can (Gilbreth, 1914, 1998; Koppes, 1997; Lancaster, 2004). Gilbreth was the first woman to join the American Society of Mechanical Engineers in 1926, and in 1966 she was awarded the Hoover Medal of the American Society of Civil Engineers.

Taylor and Gilbreth’s work improved productivity, but these innovations also improved the fit between technology and the human using it. The study of machine–human fit is known as ergonomics or human factors psychology.
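To put Taylor's iron-ingot figures in perspective, the improvement can be expressed as simple ratios (our arithmetic, based only on the numbers reported above):

\[ \frac{47.0}{12.5} \approx 3.8 \qquad \text{and} \qquad \frac{9.2 - 3.9}{9.2} \approx 0.58, \]

that is, roughly a 3.8-fold increase in tonnage moved per worker per day and about a \(58\%\) reduction in the company's handling cost per ton.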
From WWII to Today
World War II also drove the expansion of industrial psychology. Bingham was hired as the chief psychologist for the War Department (now the Department of Defense) and developed new systems for job selection, classification, training, and performance review, plus methods for team development, morale change, and attitude change (Katzell & Austin, 1992). Other countries, such as Canada and the United Kingdom, likewise saw growth in I-O psychology during World War II (McMillan, Stevens, & Kelloway, 2009). In the years after the war, both industrial psychology and organizational psychology became areas of significant research effort. Concerns about the fairness of employment tests arose, and the ethnic and gender biases in various tests were evaluated with mixed results. In addition, a great deal of research went into studying job satisfaction and employee motivation (Katzell & Austin, 1992).

The research and work of I-O psychologists in the areas of employee selection, placement, and performance appraisal became increasingly important in the 1960s. When Congress passed the 1964 Civil Rights Act, Title VII covered what is known as equal employment opportunity. This law protects employees against discrimination based on race, color, religion, sex, or national origin, as well as discrimination against an employee for associating with an individual in one of these categories. Organizations had to adjust to the social, political, and legal climate of the Civil Rights movement, and these issues needed to be addressed by I-O psychologists in research and practice.

Organizations have many reasons to be interested in I-O psychology: it helps them better understand the psychology of their workers, which in turn helps them understand how their organizations can become more productive and competitive. For example, most large organizations are now competing on a global level, and they need to understand how to motivate workers in order to achieve high productivity and efficiency. Most companies also have a diverse workforce and need to understand the psychological complexity of the people in these diverse backgrounds.

Today, I-O psychology is a diverse and deep field of research and practice, as you will learn about in the rest of this chapter. The Society for Industrial and Organizational Psychology (SIOP), a division of the APA, lists 8,000 members (SIOP, 2014) and the Bureau of Labor Statistics—U.S. Department of Labor (2013) has projected this profession will have the greatest growth of all job classifications in the 20 years following 2012. On average, a person with a master’s degree in industrial-organizational psychology will earn over \$80,000 a year, while someone with a doctorate will earn over \$110,000 a year (Khanna, Medsker, & Ginter, 2012).
Learning Objectives • Explain the aspects of employee selection • Describe the kinds of job training • Describe the approaches to and issues surrounding performance assessment The branch of I-O psychology known as industrial psychology focuses on identifying and matching persons to tasks within an organization. This involves job analysis, which means accurately describing the task or job. Then, organizations must identify the characteristics of applicants for a match to the job analysis. It also involves training employees from their first day on the job throughout their tenure within the organization, and appraising their performance along the way. Selecting Employees When you read job advertisements, do you ever wonder how the company comes up with the job description? Often, this is done with the help of I-O psychologists. There are two related but different approaches to job analysis—you may be familiar with the results of each as they often appear on the same job advertisement. The first approach is task-oriented and lists in detail the tasks that will be performed for the job. Each task is typically rated on scales for how frequently it is performed, how difficult it is, and how important it is to the job. The second approach is worker-oriented. This approach describes the characteristics required of the worker to successfully perform the job. This second approach has been called job specification (Dierdorff & Wilson, 2003). For job specification, the knowledge, skills, and abilities (KSAs) that the job requires are identified. Observation, surveys, and interviews are used to obtain the information required for both types of job analysis. It is possible to observe someone who is proficient in a position and analyze what skills are apparent. Another approach used is to interview people presently holding that position, their peers, and their supervisors to get a consensus of what they believe are the requirements of the job. How accurate and reliable is a job analysis? Research suggests that it can depend on the nature of the descriptions and the source for the job analysis. For example, Dierdorff & Wilson (2003) found that job analyses developed from descriptions provided by people holding the job themselves were the least reliable; however, they did not study or speculate why this was the case. The United States Department of Labor maintains a database of previously compiled job analyses for different jobs and occupations. This allows the I-O psychologist to access previous analyses for nearly any type of occupation. This system is called O*Net (accessible at www.onetonline.org). The site is open and you can see the KSAs that are listed for your own position or one you might be curious about (Figure 13.8). Each occupation lists the tasks, knowledge, skills, abilities, work context, work activities, education requirements, interests, personality requirements, and work styles that are deemed necessary for success in that position. You can also see data on average earnings and projected job growth in that industry. Link to Learning The O*Net database describes the skills, knowledge, and education required for occupations, as well as what personality types and work styles are best suited to the role. See what it has to say about being a food server in a restaurant or an elementary school teacher or an industrial-organizational psychologist to learn more about these career paths. 
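The two approaches to job analysis described above can be thought of as two kinds of records: a task-oriented entry that rates each task, and a worker-oriented (job specification) entry that lists required KSAs. The Python sketch below is only an illustration of that distinction; the class names, fields, rating scales, and example values are ours and are not drawn from O*Net or any standard instrument.

```python
# Illustrative sketch (hypothetical field names): the two job-analysis
# approaches from the text, represented as simple records.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TaskRating:
    """Task-oriented analysis: each task rated on frequency, difficulty, importance."""
    task: str
    frequency: int   # e.g., 1 (rarely) to 5 (constantly)
    difficulty: int  # e.g., 1 (easy) to 5 (very hard)
    importance: int  # e.g., 1 (minor) to 5 (critical)

@dataclass
class JobSpecification:
    """Worker-oriented analysis: the KSAs a worker needs to perform the job."""
    knowledge: List[str] = field(default_factory=list)
    skills: List[str] = field(default_factory=list)
    abilities: List[str] = field(default_factory=list)

# Hypothetical example for a customer-support position
tasks = [
    TaskRating("Respond to customer inquiries", frequency=5, difficulty=2, importance=5),
    TaskRating("Escalate unresolved complaints", frequency=2, difficulty=3, importance=4),
]
spec = JobSpecification(
    knowledge=["product catalog"],
    skills=["clear written communication"],
    abilities=["remain calm under pressure"],
)
```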
Candidate Analysis and Testing Once a company identifies potential candidates for a position, the candidates’ knowledge, skills, and other abilities must be evaluated and compared with the job description. These evaluations can involve testing, an interview, and work samples or exercises. You learned about personality tests in the chapter on personality; in the I-O context, they are used to identify the personality characteristics of the candidate in an effort to match those to personality characteristics that would ensure good performance on the job. For example, a high rating of agreeableness might be desirable in a customer support position. However, it is not always clear how best to correlate personality characteristics with predictions of job performance. It might be that too high of a score on agreeableness is actually a hindrance in the customer support position. For example, if a customer has a misperception about a product or service, agreeing with their misperception will not ultimately lead to resolution of their complaint. Any use of personality tests should be accompanied by a verified assessment of what scores on the test correlate with good performance (Arthur, Woehr, & Graziano, 2001). Other types of tests that may be given to candidates include IQ tests, integrity tests, and physical tests, such as drug tests or physical fitness tests. To better understand the hiring process, let’s consider an example case. A company determined it had an open position and advertised it. The human resources (HR) manager directed the hiring team to start the recruitment process. Imani saw the advertisement and submitted her résumé, which went into the collection of candidate résumés. The HR team reviewed the candidates’ credentials and provided a list of the best potential candidates to the department manager, who reached out to them (including Imani) to set up individual interviews. WHAT DO YOU THINK: Using Cutoff Scores to Determine Job Selection Many positions require applicants to take tests as part of the selection process. These can include IQ tests, job-specific skills tests, or personality tests. The organization may set cutoff scores (i.e., a score below which a candidate will not move forward) for each test to determine whether the applicant moves on to the next stage. For example, there was a case of Robert Jordan, a \(49\)-year-old college graduate who applied for a position with the police force in New London, Connecticut. As part of the selection process, Jordan took the Wonderlic Personnel Test (WPT), a test designed to measure cognitive ability. Jordan did not make it to the interview stage because his WPT score of \(33\), equivalent to an IQ score of \(125\) (\(100\) is the average IQ score), was too high. The New London Police department policy is to not interview anyone who has a WPT score over \(27\) (equivalent to an IQ score over \(104\)) because they believe anyone who scores higher would be bored with police work. The average score for police officers nationwide is the equivalent of an IQ score of \(104\) (Jordan v. New London, 2000; ABC News, 2000). Jordan sued the police department alleging that his rejection was discrimination and his civil rights were violated because he was denied equal protection under the law. The \(2^{nd}\) U.S. Circuit Court of Appeals upheld a lower court’s decision that the city of New London did not discriminate against him because the same standards were applied to everyone who took the exam (The New York Times, 1999). What do you think? 
When might universal cutoff points make sense in a hiring decision, and when might they eliminate otherwise potentially strong employees?

Interviews

Most jobs for mid-size to large-size businesses in the United States require a personal interview as a step in the selection process. Because interviews are commonly used, they have been the subject of considerable research by industrial psychologists. Information derived from job analysis usually forms the basis for the types of questions asked. Interviews can provide a more dynamic source of information about the candidate than standard testing measures. Importantly, social factors and body language can influence the outcome of the interview. These include influences such as the degree of similarity of the applicant to the interviewer, as well as nonverbal behaviors such as hand gestures, head nodding, and smiling (Bye, Horverak, Sandal, Sam, & Vijver, 2014; Rakić, Steffens, & Mummendey, 2011).

There are two types of interviews: unstructured and structured. In an unstructured interview, the interviewer may ask different questions of each candidate. One candidate might be asked about her career goals, and another might be asked about his previous work experience. The questions in an unstructured interview are often, though not always, unspecified beforehand, and the responses are generally not scored using a standard system. In a structured interview, the interviewer asks the same questions of every candidate, the questions are prepared in advance, and the interviewer uses a standardized rating system for each response. With this approach, the interviewer can accurately compare two candidates’ interviews. In a meta-analysis of studies examining the effectiveness of various types of job interviews, McDaniel, Whetzel, Schmidt & Maurer (1994) found that structured interviews were more effective at predicting the subsequent job performance of the candidate.

Let’s return to our example case. For her first interview, Imani was interviewed by a team of employees at the company. She was one of five candidates interviewed that day for the position. Each interviewee was asked the same list of questions, by the same people, so the interview experience was as consistent as possible across applicants. At the end of the interviews, the HR team met and reviewed the answers given by Imani and the other candidates. The HR team then identified a few new qualified candidates and conducted a new round of initial interviews, while the first group of potential hires waited to hear back from the company.

EVERYDAY CONNECTION: Preparing for the Job Interview

You might be wondering if psychology research can tell you how to succeed in a job interview. As you can imagine, most research is concerned with the employer’s interest in choosing the most appropriate candidate for the job, a goal that makes sense for the candidate too. But suppose you are not the only qualified candidate for the job; is there a way to increase your chances of being hired? A limited amount of research has addressed this question. As you might expect, nonverbal cues are important in an interview. Liden, Martin, & Parsons (1993) found that applicants who made little eye contact and smiled little received lower ratings. Studies of impression management on the part of an applicant have shown that self-promotion behaviors generally have a positive impact on interviewers (Gilmore & Ferris, 1989).
Different personality types use different forms of impression management; for example, extroverts use verbal self-promotion, and applicants high in agreeableness use nonverbal methods such as smiling and eye contact. Self-promotion was most consistently related to a positive outcome for the interview, particularly if it was related to the candidate’s person–job fit. However, it is possible to overdo self-promotion with experienced interviewers (Howard & Ferris, 1996). Barrick, Swider & Stewart (2010) examined the effect of first impressions during the rapport building that typically occurs before an interview begins. They found that initial judgments by interviewers during this period were related to job offers and that the judgments were about the candidate’s competence and not just likability. Levine and Feldman (2002) looked at the influence of several nonverbal behaviors in mock interviews on candidates’ likability and projections of competence. Likability was affected positively by greater smiling behavior. Interestingly, other behaviors affected likability differently depending on the gender of the applicant. Men who displayed higher eye contact were less likable; women were more likable when they made greater eye contact. However, in this study male applicants were interviewed by men and female applicants were interviewed by women. In a study carried out in a real setting, DeGroot & Gooty (2009) found that nonverbal cues affected interviewers’ assessments of candidates. They looked at visual cues, which can often be modified by the candidate, and vocal (nonverbal) cues, which are more difficult to modify. They found that interviewer judgment was positively affected by visual and vocal cues of conscientiousness, visual and vocal cues of openness to experience, and vocal cues of extroversion.

What is the take-home message from the limited research that has been done? Learn to be aware of your behavior during an interview. You can do this by practicing and soliciting feedback from mock interviews. Pay attention to any nonverbal cues you are projecting and work at presenting nonverbal cues that project confidence and positive personality traits. And finally, pay attention to the first impression you are making, as it may also have an impact on the interview.

Training

Training is an important element of success and performance in many jobs. Most jobs begin with an orientation period during which the new employee is provided information regarding the company history, policies, and administrative protocols such as time tracking, benefits, and reporting requirements. An important goal of orientation training is to educate the new employee about the organizational culture: the values, visions, hierarchies, norms, and ways the company’s employees interact. Essentially, organizational culture is how the organization is run, how it operates, and how it makes decisions. There will also be training that is specific to the job the individual was hired to do, or training during the individual’s period of employment that teaches aspects of new duties or how to use new physical or software tools. Much of this training will be formalized for the employee; for example, orientation training is often accomplished using software presentations, group presentations by members of the human resources department, or presentations by people in the new hire’s department (See figure 13.9 below). Mentoring is a form of informal training in which an experienced employee guides the work of a new employee.
In some situations, mentors will be formally assigned to a new employee, while in others a mentoring relationship may develop informally. Mentoring effects on the mentor and on the employee being mentored, the protégé, have been studied in recent years. In a review of mentoring studies, Eby, Allen, Evans, Ng, & DuBois (2008) found significant but small effects of mentoring on performance (i.e., behavioral outcomes), motivation and satisfaction, and actual career outcomes. In a more detailed review, Allen, Eby, Poteet, Lentz, & Lima (2004) found that mentoring positively affected a protégé’s compensation and number of promotions compared with non-mentored employees. In addition, protégés were more satisfied with their careers and had greater job satisfaction. All of the effects were small but significant. Eby, Durley, Evans, & Ragins (2006) examined mentoring effects on the mentor and found that mentoring was associated with greater job satisfaction and organizational commitment. Gentry, Weber, & Sadri (2008) found that mentoring was positively related with performance ratings by supervisors. Allen, Lentz, & Day (2006) found in a comparison of mentors and non-mentors that mentoring led to greater reported salaries and promotions.

Mentoring is recognized to be particularly important to the career success of women (McKeen & Bujaki, 2007) because it creates connections to informal networks, reduces feelings of isolation, and helps them overcome discrimination in job promotions. Researchers have worked to understand the impacts of gender, race, and ethnicity on mentoring relationships, especially whether outcomes differ based on the gender or racial/ethnic makeup of the mentor-mentee pair. Overall, women and members of underrepresented racial/ethnic populations do not lack access to mentors, but they do lack access to mentors of the same gender, race, and ethnicity, because senior roles in many organizations are held largely by White men (MLDC, 2010). Results from these studies are mixed, but they generally suggest that pairings made up of two people of the same gender or race/ethnicity have better psychosocial outcomes, such as a feeling of more support (Koberg, Boss, & Goodman, 1998; Allen et al., 2006). On the other hand, women and people of color typically experience greater career advancement or higher pay when mentored by people of a different gender or race/ethnicity (Parks-Yancy, 2012).

A 2003 study by Arthur, Bennett, Edens, and Bell examined multiple other studies to determine how effective organizational training is. Their results showed that training was effective, based on four types of measurement: (1) the immediate response of the employee to the training effort, (2) testing at the end of training to demonstrate that learning outcomes were met, (3) behavioral measurements of job activities by supervisors, and (4) results such as productivity and profits. The examined studies represented diverse forms of training, including self-instruction, lecture and discussion, and computer-assisted training.

Evaluating Employees

Industrial and organizational psychologists are typically involved in designing performance-appraisal systems for organizations. These systems are designed to evaluate whether each employee is performing her job satisfactorily. Industrial and organizational psychologists study, research, and implement ways to make work evaluations as fair and positive as possible; they also work to decrease the subjectivity involved with performance ratings.
Fairly evaluated work helps employees do their jobs better, improves the likelihood of people being in the right jobs for their talents, maintains fairness, and identifies company and individual training needs. Performance appraisals are typically documented several times a year, often with a formal process and a brief annual face-to-face meeting between an employee and his supervisor. It is important that the original job analysis play a role in performance appraisal, as should any goals that have been set by the employee or by the employee and supervisor. The meeting is often used for the supervisor to communicate specific concerns about the employee’s performance and to positively reinforce elements of good performance. It may also be used to discuss specific performance rewards, such as a pay increase, or consequences of poor performance, such as a probationary period. Part of the function of performance appraisals for the organization is to document poor performance to bolster decisions to terminate an employee.

Performance appraisals are becoming more complex processes within organizations and are often used to motivate employees to improve performance and expand their areas of competence, in addition to assessing their job performance. In this capacity, performance appraisals can be used to identify opportunities for training or to determine whether a particular training program has been successful. One approach to performance appraisal is called 360-degree feedback appraisal (See figure 13.10). In this system, the employee’s appraisal derives from a combination of ratings by supervisors, peers, employees supervised by the employee, and the employee herself. Occasionally, outside observers may be used as well, such as customers. The purpose of a \(360\)-degree system is to give the employee (who may be a manager) and supervisor different perspectives on the employee’s job performance; the system should help employees make improvements through their own efforts or through training. The system is also used in a traditional performance-appraisal context, providing the supervisor with more information with which to make decisions about the employee’s position and compensation (Tornow, 1993a). (A brief illustrative sketch of how such multi-source ratings might be combined appears at the end of this discussion.) Few studies have assessed the effectiveness of \(360\)-degree methods, but Atkins and Wood (2002) found that the self and peer ratings were unreliable as an assessment of an employee’s performance and that even supervisors tended to underrate employees who gave themselves modest ratings. However, a different perspective sees this variability in ratings as a positive, in that it provides for greater learning on the part of the employees as they and their supervisor discuss the reasons for the discrepancies (Tornow, 1993b).

In theory, performance appraisals should be an asset for an organization wishing to achieve its goals, and most employees will actually solicit feedback regarding their jobs if it is not offered (DeNisi & Kluger, 2000). However, in practice, many performance evaluations are disliked by organizations, employees, or both (Fletcher, 2001), and few of them have been adequately tested to see if they do in fact improve performance or motivate employees (DeNisi & Kluger, 2000). One of the reasons evaluations fail to accomplish their purpose in an organization is that performance appraisal systems are often used incorrectly or are of an inappropriate type for an organization’s particular culture (Schraeder, Becton, & Portis, 2007).
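As a concrete (and entirely hypothetical) illustration of the aggregation idea behind a \(360\)-degree appraisal, the sketch below combines mean ratings from several rater groups into a single weighted score. The rater groups, weights, and numbers are invented for this example; real systems often keep the sources separate precisely so that discrepancies can be discussed.

```python
# Illustrative only: names, weights, and ratings are hypothetical.
# A 360-degree appraisal draws on ratings from several rater groups.

ratings = {                 # mean rating from each source, on a 1-5 scale
    "self": 4.5,
    "supervisor": 3.8,
    "peers": 4.0,
    "direct_reports": 4.2,
}

weights = {                 # how much each perspective counts (sums to 1.0)
    "self": 0.10,
    "supervisor": 0.40,
    "peers": 0.25,
    "direct_reports": 0.25,
}

overall = sum(ratings[source] * weights[source] for source in ratings)
print(round(overall, 2))    # 4.02

# The gap between the self rating (4.5) and the other sources (3.8-4.2)
# is the kind of discrepancy an employee and supervisor might discuss.
```

Collapsing everything into one number is only one design choice; as noted above, some researchers argue that the differences between rating sources are themselves the most useful information the system produces.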
An organization’s culture is how the organization is run, how it operates, and how it makes decisions. It is based on the collective values, hierarchies, and ways individuals within the organization interact. Examining the effectiveness of performance appraisal systems in particular organizations, and the effectiveness of training for the implementation of the performance appraisal system, is an active area of research in industrial psychology (Fletcher, 2001).

Bias and Protections in Hiring

In an ideal hiring process, an organization would generate a job analysis that accurately reflects the requirements of the position, and it would accurately assess candidates’ KSAs to determine who the best individual is to carry out the job’s requirements. For many reasons, hiring decisions in the real world are often made based on factors other than matching a job analysis to KSAs. As mentioned earlier, interview rankings can be influenced by other factors: similarity to the interviewer (Bye, Horverak, Sandal, Sam, & Vijver, 2014) and the regional accent of the interviewee (Rakić, Steffens, & Mummendey, 2011). A study by Agerström & Rooth (2011) examined hiring managers’ decisions to invite equally qualified normal-weight and obese job applicants to an interview. The decisions of the hiring managers were based on photographs of the two applicants. The study found that hiring managers who scored high on a test of negative associations with overweight people displayed a bias in favor of inviting the normal-weight applicant rather than the equally qualified obese applicant. The association test measures automatic or subconscious associations between an individual’s negative or positive values and, in this case, the body-weight attribute. A meta-analysis of experimental studies found that physical attractiveness benefited individuals in various job-related outcomes such as hiring, promotion, and performance review (Hosoda, Stone-Romero, & Coats, 2003). They also found that the strength of the benefit appeared to be decreasing with time between the late 1970s and the late 1990s.

Some hiring criteria may be related to a particular group an applicant belongs to and not individual abilities. Unless membership in that group directly affects potential job performance, a decision based on group membership is discriminatory (See figure 13.11). To combat hiring discrimination, in the United States there are numerous city, state, and federal laws that prevent hiring based on various group-membership criteria. For example, did you know it is illegal for a potential employer to ask your age in an interview? Did you know that an employer cannot ask you whether you are married, a U.S. citizen, have disabilities, or what your race or religion is? They cannot even ask questions that might shed some light on these attributes, such as where you were born or who you live with. These are only a few of the restrictions that are in place to prevent discrimination in hiring. In the United States, federal anti-discrimination laws are administered by the U.S. Equal Employment Opportunity Commission (EEOC).

The U.S. Equal Employment Opportunity Commission (EEOC)

The U.S. Equal Employment Opportunity Commission (EEOC) is responsible for enforcing federal laws that make it illegal to discriminate against a job applicant or an employee because of the person’s race, color, religion, sex (including pregnancy), national origin, age (\(40\) or older), disability, or genetic information.
Figure 13.12 provides some of the legal language from laws that have been passed to prevent discrimination. The United States has several specific laws regarding fairness and avoidance of discrimination. The Equal Pay Act requires equal pay for men and women in the same workplace who are performing equal work. Despite the law, persistent inequities in earnings between men and women exist. Corbett & Hill (2012) studied one facet of the gender gap by looking at earnings in the first year after college in the United States. Comparing the earnings of women to those of men overall, women earn about \(82\) cents for every dollar a man earns in their first year out of college. However, some of this difference can be explained by education, career, and life choices, such as choosing majors with lower earning potential or specific jobs within a field that have less responsibility. When these factors were corrected for, the study found an unexplained seven-cents-on-the-dollar gap in the first year after college that can be attributed to gender discrimination in pay. This approach to analysis of the gender pay gap, called the human capital model, has been criticized. Lips (2013) argues that education, career, and life choices can, in fact, be constrained by necessities imposed by gender discrimination. This suggests that removing these factors entirely from the gender gap equation leads to an estimate of the size of the pay gap that is too small.

Title VII of the Civil Rights Act of 1964 makes it illegal to treat individuals unfavorably because of their race or the color of their skin: An employer cannot discriminate based on skin color, hair texture, or other immutable characteristics (traits of an individual that are fundamental to her identity) in hiring, benefits, promotions, or termination of employees. The Pregnancy Discrimination Act of 1978 amends the Civil Rights Act; it prohibits job discrimination (e.g., in employment, pay, and termination) against a woman because she is pregnant, as long as she can perform the work required. The Supreme Court ruling in Griggs v. Duke Power Co. made it illegal under Title VII of the Civil Rights Act to include educational requirements in a job description (e.g., a high school diploma) that negatively impact one race over another if the requirement cannot be shown to be directly related to job performance. The EEOC (2014) received more than \(94,000\) charges of various kinds of employment discrimination in 2013. Many of the filings are for multiple forms of discrimination and include charges of retaliation for making a claim, which itself is illegal. Only a small fraction of these claims become suits filed in a federal court, although the suits may represent the claims of more than one person. In 2013, there were 148 suits filed in federal courts.

Link to Learning

In 2011, the U.S. Supreme Court decided a case in which women plaintiffs were attempting to group together in a class-action suit against Walmart for gender discrimination in promotion and pay. The case was important because a class-action suit was the only practical way for individual women who felt they had been discriminated against to sustain a court battle for redress of their claims. The Court ultimately decided against the plaintiffs, and the right to a class-action suit was denied. However, the case itself effectively publicized the issue of gender discrimination in employment. Watch this video about the case history and issues and this PBS NewsHour video about the arguments to learn more.
Based on a 2020 Supreme Court ruling regarding the application of the Civil Rights Act, federal legislation now protects employees in the private sector from discrimination related to sexual orientation and gender identity. These groups include lesbian, gay, bisexual, and transgender individuals. There is evidence of discrimination derived from surveys of workers, studies of complaint filings, wage comparison studies, and controlled job-interview studies (Badgett, Sears, Lau, & Ho, 2009). Prior to the ruling, federal legislation protected federal employees from such discrimination; the District of Columbia and 20 states have laws protecting public and private employees from discrimination based on sexual orientation (American Civil Liberties Union, n.d.). Most of the states with these laws also protect against discrimination based on gender identity. Gender identity refers to one’s sense of being male, female, neither of these, both of these, or another gender. While the Supreme Court’s Civil Rights Act interpretation is regarded as a landmark outcome for LGBTQ people, it will be continually tested by organizations that for various reasons see a need to exclude LGBTQ people from employment or service. The First Amendment protects religious organizations from some aspects of anti-discrimination laws, and some recent court decisions have expanded these exceptions so that certain ministries, schools, or other organizations could avoid employing or serving LGBTQ people.

Americans with Disabilities Act (ADA)

The Americans with Disabilities Act (ADA) of 1990 states that people may not be discriminated against due to the nature of their disability. A disability is defined as a physical or mental impairment that limits one or more major life activities, such as hearing, walking, and breathing. An employer must make reasonable accommodations for the performance of a disabled employee’s job. This might include making the work facility accessible with ramps, providing readers for blind personnel, or allowing for more frequent breaks. The ADA has now been expanded to include individuals with alcoholism, former drug use, obesity, or psychiatric disabilities. The premise of the law is that disabled individuals can contribute to an organization and that they cannot be discriminated against because of their disabilities (O’Keefe & Bruyere, 1994).

The Civil Rights Act and the Age Discrimination in Employment Act make provisions for bona fide occupational qualifications (BFOQs), which are requirements of certain occupations for which denying an individual employment would otherwise violate the law. For example, there may be cases in which religion, national origin, age, and sex are bona fide occupational qualifications. There are no BFOQ exceptions that apply to race, although the First Amendment protects artistic expressions, such as films, in making race a requirement of a role. Clear-cut examples of BFOQs would be hiring someone of a specific religion for a leadership position in a worship facility, or for an executive position in a religiously affiliated institution, such as the president of a university with religious ties. Age has been determined to be a BFOQ for airline pilots; hence, there are mandatory retirement ages for safety reasons. Sex has been determined to be a BFOQ for guards in male prisons. Sex (gender) is the most common reason for invoking a BFOQ as a defense against accusing an employer of discrimination (Manley, 2009).
Courts have established a three-part test for sex-related BFOQs that is often used in other types of legal cases for determining whether a BFOQ exists. The first part is whether all or substantially all women would be unable to perform the job. This is the reason most physical requirements, such as “able to lift 30 pounds,” fail as reasons to discriminate: most women are able to lift this weight. The second test is the “essence of the business” test, in which having to choose the other gender would undermine the essence of the business operation. This test was the reason the now-defunct Pan American World Airways (i.e., Pan Am) was told it could not hire only female flight attendants. Hiring men would not have undermined the essence of this business. On a deeper level, this means that hiring decisions cannot be made purely on customers’ or others’ preferences. The third and final test is whether the employer cannot make reasonable alternative accommodations, such as reassigning staff so that a woman does not have to work in a male-only part of a jail or other gender-specific facility. Privacy concerns are a major reason why discrimination based on gender is upheld by the courts, for example in situations such as hires for nursing or custodial staff (Manley, 2009). Most cases of BFOQs are decided on a case-by-case basis, and these court decisions inform policy and future case decisions.

WHAT DO YOU THINK: Hooters and BFOQ Laws

The restaurant chain Hooters, which hires only female wait staff and has them dress in a sexually provocative manner, is commonly cited as a discriminatory employer. The chain would argue that the female employees are an essential part of its business in that it markets through sex appeal and the wait staff attract customers. Men have filed discrimination charges against Hooters in the past for not hiring them as wait staff simply because they are men. The chain has avoided a court decision on its hiring practices by settling out of court with the plaintiffs in each case. Do you think its practices violate the Civil Rights Act? See if you can apply the three court tests to this case and make a decision about whether a case that went to trial would find in favor of the plaintiff or the chain.
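If it helps to see the structure of the three-part test laid out step by step, here is a minimal, purely illustrative sketch that treats the three prongs as a simple checklist. It is not legal guidance, the function and its arguments are invented for this example, and courts weigh these factors on the specific facts of each case rather than mechanically.

```python
# Illustrative only: a simplified checklist version of the three-part
# test for a sex-based BFOQ described above. This is not legal guidance.

def bfoq_checklist(substantially_all_excluded_group_cannot_do_job: bool,
                   hiring_excluded_group_undermines_business_essence: bool,
                   reasonable_alternative_exists: bool) -> dict:
    """Record how a claimed sex-based BFOQ fares on each prong."""
    return {
        "prong 1 (substantially all unable to perform)":
            substantially_all_excluded_group_cannot_do_job,
        "prong 2 (essence of the business undermined)":
            hiring_excluded_group_undermines_business_essence,
        "prong 3 (no reasonable alternative accommodation)":
            not reasonable_alternative_exists,
    }

# Example: an "able to lift 30 pounds" rule used to exclude women fails
# on the first prong, since most women can lift that weight.
for prong, satisfied in bfoq_checklist(False, False, True).items():
    print(prong, "->", "satisfied" if satisfied else "not satisfied")
```

You can substitute your own judgments about the Hooters example above; the point of the sketch is only that each prong has to be considered, not to predict how a court would rule.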
Learning Objectives

• Define organizational psychology
• Explain the measurement and determinants of job satisfaction
• Describe key elements of management and leadership
• Explain the significance of organizational culture

Organizational psychology is the second major branch of study and practice within the discipline of industrial and organizational psychology. In organizational psychology, the focus is on social interactions and their effect on the individual and on the functioning of the organization. In this section, you will learn about the work organizational psychologists have done to understand job satisfaction, different styles of management, different styles of leadership, organizational culture, and teamwork.

Job Satisfaction

Some people love their jobs, some people tolerate their jobs, and some people cannot stand their jobs. Job satisfaction describes the degree to which individuals enjoy their job. It was described by Edwin Locke (1976) as the state of feeling resulting from appraising one’s job experiences. While job satisfaction results from both how we think about our work (our cognition) and how we feel about our work (our affect) (Saari & Judge, 2004), it is described in terms of affect. Job satisfaction is impacted by the work itself, our personality, and the culture we come from and live in (Saari & Judge, 2004). Job satisfaction is typically measured after a change in an organization, such as a shift in the management model, to assess how the change affects employees. It may also be routinely measured by an organization to assess one of many factors expected to affect the organization’s performance. In addition, polling companies like Gallup regularly measure job satisfaction on a national scale to gather broad information on the state of the economy and the workforce (Saad, 2012).

Job satisfaction is measured using questionnaires that employees complete. Sometimes a single question might be asked in a very straightforward way to which employees respond using a rating scale, such as a Likert scale, which was discussed in the chapter on personality. A Likert scale (typically) provides five possible answers to a statement or question that allows respondents to indicate their positive-to-negative strength of agreement or strength of feeling regarding the question or statement. Thus the possible responses to a question such as “How satisfied are you with your job today?” might be “Very satisfied,” “Somewhat satisfied,” “Neither satisfied, nor dissatisfied,” “Somewhat dissatisfied,” and “Very dissatisfied.” More commonly the survey will ask a number of questions about the employee’s satisfaction to determine more precisely why he is satisfied or dissatisfied. Sometimes these surveys are created for specific jobs; at other times, they are designed to apply to any job. Job satisfaction can be measured at a global level, meaning how satisfied in general the employee is with work, or at the level of specific factors intended to measure which aspects of the job lead to satisfaction (See Table 13.2 below).
Table 13.2 Factors Involved in Job Satisfaction–Dissatisfaction

Factor | Description
Autonomy | Individual responsibility, control over decisions
Work content | Variety, challenge, role clarity
Communication | Feedback
Financial rewards | Salary and benefits
Growth and development | Personal growth, training, education
Promotion | Career advancement opportunity
Coworkers | Professional relations or adequacy
Supervision and feedback | Support, recognition, fairness
Workload | Time pressure, tedium
Work demands | Extra work requirements, insecurity of position

Research has suggested that the work-content factor, which includes variety, difficulty level, and role clarity of the job, is the most strongly predictive factor of overall job satisfaction (Saari & Judge, 2004). In contrast, there is only a weak correlation between pay level and job satisfaction (Judge, Piccolo, Podsakoff, Shaw, & Rich, 2010). Judge et al. (2010) suggest that individuals adjust or adapt to higher pay levels: Higher pay no longer provides the satisfaction the individual may have initially felt when her salary increased.

Why should we care about job satisfaction? Or more specifically, why should an employer care about job satisfaction? Measures of job satisfaction are somewhat correlated with job performance; in particular, they appear to relate to organizational citizenship, or discretionary behaviors on the part of an employee that further the goals of the organization (Judge & Kammeyer-Mueller, 2012). Job satisfaction is related to general life satisfaction, although there has been limited research on how the two influence each other or whether personality and cultural factors affect both job and general life satisfaction. One carefully controlled study suggested that the relationship is reciprocal: Job satisfaction affects life satisfaction positively, and vice versa (Judge & Watanabe, 1993). Of course, organizations cannot control life satisfaction’s influence on job satisfaction. Job satisfaction, specifically low job satisfaction, is also related to withdrawal behaviors, such as leaving a job or absenteeism (Judge & Kammeyer-Mueller, 2012). The relationship with turnover itself, however, is weak (Judge & Kammeyer-Mueller, 2012). Finally, it appears that job satisfaction is related to organizational performance, which suggests that implementing organizational changes to improve employee job satisfaction will improve organizational performance (Judge & Kammeyer-Mueller, 2012).

There is opportunity for more research in the area of job satisfaction. For example, Weiss (2002) suggests that measures of job satisfaction have combined emotional and cognitive concepts, and that measurement would be more reliable and show better relationships with outcomes such as performance if the two elements were measured separately.

DIG DEEPER: Job Satisfaction in Federal Government Agencies

A 2013 study of job satisfaction in the U.S. federal government found indexes of job satisfaction plummeting compared to the private sector. The largest factor in the decline was satisfaction with pay, followed by training and development opportunities. The Partnership for Public Service, a nonprofit, nonpartisan organization, has conducted research on federal employee job satisfaction since 2003. Its primary goal is to improve the federal government’s management. However, the results also provide information to those interested in obtaining employment with the federal government.
Among large agencies, the highest job satisfaction ranking went to NASA, followed by the Department of Commerce and the intelligence community. The lowest scores went to the Department of Homeland Security. The data used to derive the job satisfaction score come from three questions on the Federal Employee Viewpoint Survey. The questions are:

1. I recommend my organization as a good place to work.
2. Considering everything, how satisfied are you with your job?
3. Considering everything, how satisfied are you with your organization?

The questions have six possible answers, spanning from strong agreement or satisfaction to strong disagreement or dissatisfaction. How would you answer these questions with regard to your own job? Would these questions adequately assess your job satisfaction? You can explore the Best Places To Work In The Federal Government study at their Web site: www.bestplacestowork.org. The Office of Personnel Management also produces a report based on their survey: www.fedview.opm.gov.

Job stress affects job satisfaction. Job stress, or job strain, is caused by specific stressors in an occupation. Stress can be an ambiguous term as it is used in common language. Stress is the perception and response of an individual to events judged as overwhelming or threatening to the individual’s well-being (Gyllensten & Palmer, 2005). The events themselves are the stressors. Stress is a result of an employee’s perception that the demands placed on them exceed their ability to meet them (Gyllensten & Palmer, 2005); examples include having to fill multiple roles in a job or life in general, workplace role ambiguity, lack of career progress, lack of job security, lack of control over work outcomes, isolation, work overload, discrimination, harassment, and bullying (Colligan & Higgins, 2005). The stressors are different for women than for men, and these differences are a significant area of research (Gyllensten & Palmer, 2005). Job stress leads to poor employee health, job performance, and family life (Colligan & Higgins, 2005).

As already mentioned, job insecurity contributes significantly to job stress. Two increasing threats to job security are downsizing events and corporate mergers. Businesses typically involve I-O psychologists in planning for, implementing, and managing these types of organizational change. Downsizing is an increasingly common response to a business’s pronounced failure to achieve profit goals, and it involves laying off a significant percentage of the company’s employees. Industrial-organizational psychologists may be involved in all aspects of downsizing: how the news is delivered to employees (both those being let go and those staying), how laid-off employees are supported (e.g., separation packages), and how retained employees are supported. The latter is important for the organization because downsizing events affect the retained employees’ intent to quit, organizational commitment, and job insecurity (Ugboro, 2006). In addition to downsizing as a way of responding to outside strains on a business, corporations often grow larger by combining with other businesses. This can be accomplished through a merger (i.e., the joining of two organizations of equal power and status) or an acquisition (i.e., one organization purchases the other). In an acquisition, the purchasing organization is usually the more powerful or dominant partner.
In both cases, there is usually a duplication of services between the two companies, such as two accounting departments and two sales forces. Both departments must be merged, which commonly involves a reduction of staff (See figure 13.14). This leads to organizational processes and stresses similar to those that occur in downsizing events. Mergers require determining how the organizational culture will change, to which employees also must adjust (van Knippenberg, van Knippenberg, Monden, & de Lima, 2002). There can be additional stress on workers as they lose their connection to the old organization and try to make connections with the new combined group (Amiot, Terry, Jimmieson, & Callan, 2006). Research in this area focuses on understanding employee reactions and making practical recommendations for managing these organizational changes.

Work-Family Balance

Many people juggle the demands of work life with the demands of their home life, whether it be caring for children or taking care of an elderly parent; this is known as work-family balance. We might commonly think about work interfering with family, but it is also the case that family responsibilities may conflict with work obligations (Carlson, Kacmar, & Williams, 2000). Greenhaus and Beutell (1985) first identified three sources of work–family conflict:

• time devoted to work makes it difficult to fulfill the requirements of family, or vice versa,
• strain from participation in work makes it difficult to fulfill the requirements of family, or vice versa, and
• specific behaviors required by work make it difficult to fulfill the requirements of family, or vice versa.

Women often have greater responsibility for family demands, including home care, child care, and caring for aging parents, yet men in the United States are increasingly assuming a greater share of domestic responsibilities. However, research has documented that women report greater levels of stress from work–family conflict (Gyllensten & Palmer, 2005). There are many ways to decrease work–family conflict and improve people’s job satisfaction (Posig & Kickul, 2004). These include support in the home, which can take various forms: emotional (listening) and practical (help with chores). Workplace support can include understanding supervisors, flextime, leave with pay, and telecommuting. Flextime usually involves a requirement of core hours spent in the workplace, around which the employee may schedule his arrival and departure from work to meet family demands. Telecommuting involves employees working at home and setting their own hours, which allows them to work during different parts of the day and to spend part of the day with their family. Recall that Yahoo! had a policy of allowing employees to telecommute and then rescinded the policy. There are also organizations that have onsite daycare centers, and some companies even have onsite fitness centers and health clinics. In a study of the effectiveness of different coping methods, Lapierre & Allen (2006) found practical support from home more important than emotional support. They also found that immediate-supervisor support for a worker significantly reduced work–family conflict through such mechanisms as allowing an employee the flexibility needed to fulfill family obligations.
In contrast, flextime did not help with coping, and telecommuting actually made things worse, perhaps reflecting the fact that being at home intensifies the conflict between work and family because, with the employee in the home, the demands of family are more evident. Posig & Kickul (2004) identify exemplar corporations with policies designed to reduce work–family conflict. Examples include IBM’s policy of three years of job-guaranteed leave after the birth of a child, Lucent Technologies’ offer of one year’s childbirth leave at half pay, and SC Johnson’s program of concierge services for daytime errands.

Link to Learning

The Glassdoor website posts job satisfaction reviews for different careers and organizations. Use this site to research possible careers and/or organizations that interest you.

Management and Organizational Structure

A significant portion of I-O research focuses on management and human relations. Douglas McGregor (1960) combined scientific management (a theory of management that analyzes and synthesizes workflows with the main objective of improving economic efficiency, especially labor productivity) and human relations into the notion of leadership behavior. His theory lays out two different styles called Theory X and Theory Y. In the Theory X approach to management, managers assume that most people dislike work and are not innately self-directed. Theory X managers perceive employees as people who prefer to be led and told which tasks to perform and when. Their employees have to be watched carefully to be sure that they work hard enough to fulfill the organization’s goals. Theory X workplaces will often have employees punch a clock when arriving and leaving the workplace: Tardiness is punished. Supervisors, not employees, determine whether an employee needs to stay late, and even this decision would require someone higher up in the command chain to approve the extra hours. Theory X supervisors will ignore employees’ suggestions for improved efficiency and reprimand employees for speaking out of order. These supervisors blame efficiency failures on individual employees rather than on the systems or policies in place. Managerial goals are achieved through a system of punishments and threats rather than enticements and rewards. Managers are suspicious of employees’ motivations and always suspect selfish motivations for their behavior at work (e.g., being paid is their sole motivation for working).

In the Theory Y approach, on the other hand, managers assume that most people seek inner satisfaction and fulfillment from their work. Employees function better under leadership that allows them to participate in, and provide input about, setting their personal and work goals. In Theory Y workplaces, employees participate in decisions about prioritizing tasks; they may belong to teams that, once given a goal, decide themselves how it will be accomplished. In such a workplace, employees are able to provide input on matters of efficiency and safety. One example of Theory Y in action is the policy of Toyota production lines that allows any employee to stop the entire line if a defect or other issue appears, so that the defect can be fixed and its cause remedied (Toyota Motor Manufacturing, 2013). A Theory Y workplace will also meaningfully consult employees on any changes to the work process or management system. In addition, the organization will encourage employees to contribute their own ideas.
McGregor (1960) characterized Theory X as the traditional method of management used in the United States. He argued that a Theory Y approach was needed to improve organizational output and the well-being of individuals. Table 13.3 below summarizes how these two management approaches differ.

Table 13.3 Theory X and Theory Y Management Styles

Theory X | Theory Y
People dislike work and avoid it. | People enjoy work and find it natural.
People avoid responsibility. | People are more satisfied when given responsibility.
People want to be told what to do. | People want to take part in setting their own work goals.
Goals are achieved through rules and punishments. | Goals are achieved through enticements and rewards.

Another management style was described by Donald Clifton, who focused his research on how an organization can best use an individual’s strengths, an approach he called strengths-based management. He and his colleagues interviewed 8,000 managers and concluded that it is important to focus on a person’s strengths, not their weaknesses. A strength is a particular enduring talent possessed by an individual that allows them to provide consistent, near-perfect performance in tasks involving that talent. Clifton argued that our strengths provide the greatest opportunity for growth (Buckingham & Clifton, 2001). An example of a strength is public speaking or the ability to plan a successful event. The strengths-based approach is very popular, although its effect on organizational performance is not well studied. However, Kaiser & Overfield (2011) found that managers often neglected improving their weaknesses and overused their strengths, both of which interfered with performance.

Leadership is an important element of management. Leadership styles have been of major interest within I-O research, and researchers have proposed numerous theories of leadership. Bass (1985) popularized and developed the concepts of transactional leadership versus transformational leadership styles. In transactional leadership, the focus is on supervision and organizational goals, which are achieved through a system of rewards and punishments (i.e., transactions). Transactional leaders maintain the status quo: They are managers. This is in contrast to the transformational leader. People who have transformational leadership possess four attributes to varying degrees: They are charismatic (highly liked role models), inspirational (optimistic about goal attainment), intellectually stimulating (encourage critical thinking and problem solving), and considerate (Bass, Avolio, & Atwater, 1996).

As women increasingly take on leadership roles in corporations, questions have arisen as to whether there are differences in leadership styles between men and women (Eagly, Johannesen-Schmidt, & van Engen, 2003). Eagly & Johnson (1990) conducted a meta-analysis to examine gender and leadership style. They found, to a slight but significant degree, that women tend to practice an interpersonal style of leadership (i.e., focusing on the morale and welfare of the employees) and men practice a task-oriented style (i.e., focusing on accomplishing tasks). However, the differences were less pronounced when one looked only at organizational studies and excluded laboratory experiments or surveys that did not involve actual organizational leaders. Larger gender-related differences were observed when leadership style was categorized as democratic or autocratic, and these differences were consistent across all types of studies.
The authors suggest that similarities between genders in leadership styles are attributable to both genders needing to conform to the organization’s culture; additionally, they propose that gender-related differences reflect inherent differences in the strengths each gender brings to bear on leadership practice. In another meta-analysis of leadership style, Eagly, Johannesen-Schmidt, & van Engen (2003) found that women tended to exhibit the characteristics of transformational leaders, while men were more likely to be transactional leaders. However, the differences are not absolute; for example, women were found to use methods of reward for performance more often than men, which is a component of transactional leadership. The differences they found were relatively small. As Eagly, Johannesen-Schmidt, & van Engen (2003) point out, research shows that transformational leadership approaches are more effective than transactional approaches, although individual leaders typically exhibit elements of both approaches.

A new and emerging area of research within psychology focuses on leadership and the relationship with leaders from the perspective of a follower. This “followership” research suggests that studies need to examine the leader-follower relationship in both directions, instead of focusing only on leadership, to better understand the dynamics of the relationship. Put differently, people are individuals, and because they are different, there probably is no single best leadership-follower dynamic between leaders and followers. For instance, think about the differences between yourself and someone you know well. Do you respond the same way to criticism? Maybe one of you likes a lot of structure and the other seems to work best with less structure. Perhaps one of you is ready to try a new restaurant at any time and the other prefers to go to the tried-and-true place that you’ve visited so many times the servers know your order before you place it. Some early research has discovered that the characteristics of individual followers will result in different types of relationships with a leader depending on the leadership style. It appears that not all leadership styles work well with all follower types.

One characteristic of followers, for example, is their degree of extroversion. Previous research suggests that individuals with a high degree of extroversion would need a larger amount of interaction with their leaders in order to function well; however, other research suggests this may not necessarily be the case and instead other factors may be at work (Phillips & Bedeian; Bauer et al., 2006). Another characteristic of followers is their individual need for growth. For followers who have a strong desire to learn and grow within their organization, a leader who provides developmental opportunities might be better received than one who does not. In addition, for followers who are low in growth need strength, leaders who push them to grow may make them less satisfied, as they feel forced into further development and training, possibly signaling a lower level of achievement from their supervisor. Training leaders to work both with employees who have a strong drive for growth and with those who do not appears to be helpful in improving the relationship between both types of followers and their leaders (Schyns, Kroon, & Moors, 2008). Finally, an employee’s need for leadership is an important component of the leader-follower relationship.
Some individuals are significantly more autonomous than others and as a result do not respond as well to leaders who provide a lot of structure and rigidity of process, which in turn reduces the quality of their relationship with their leader. Other employees, who are high in the need for leadership, have a better relationship with their leader if they are provided with a well-structured environment with clear responsibilities and little ambiguity in their work. These followers work best in situations where they feel they can comfortably perform the work with little requirement to think outside of the guidelines that have been provided. For these individuals, having a leader who is able to set a clear path forward with little need for deviation promotes a strong, positive leader-follower relationship (Felfe & Schyns, 2006).

Goals, Teamwork, and Work Teams

The workplace today is rapidly changing due to a variety of factors, such as shifts in technology, economics, foreign competition, globalization, and workplace demographics. Organizations need to respond quickly to changes in these factors. Many companies are responding to these changes by structuring their organizations so that work can be delegated to work teams, which bring together diverse skills, experience, and expertise. This is in contrast to organizational structures that have individuals at their base (Naquin & Tynan, 2003). In the team-based approach, teams are brought together and given a specific task or goal to accomplish. Despite their burgeoning popularity, team structures do not always deliver greater productivity, and the work of teams is an active area of research (Naquin & Tynan, 2003). Why do some teams work well while others do not? There are many contributing factors. For example, teams can mask team members who are not working (i.e., social loafing). Teams can be inefficient due to poor communication; they can have poor decision-making skills due to conformity effects; and they can have conflict within the group. The popularity of teams may in part result from the team halo effect: Teams are given credit for their successes, but individuals within a team are blamed for team failures (Naquin & Tynan, 2003).

One aspect of team diversity is their gender mix. Researchers have explored whether gender mix has an effect on team performance. On the one hand, diversity can introduce communication and interpersonal-relationship problems that hinder performance; on the other hand, diversity can also increase the team’s skill set, which may include skills that can actually improve team member interactions. Hoogendoorn, Oosterbeek, & van Praag (2013) studied project teams in a university business school in which the gender mix of the teams was manipulated. They found that gender-balanced teams (i.e., nearly equal numbers of men and women) performed better, as measured by sales and profits, than predominantly male teams. The study did not have enough data to determine the relative performance of female-dominated teams. The study was unsuccessful in identifying which mechanism (interpersonal relationships, learning, or skills mixes) accounted for the performance improvement.

There are three basic types of teams: problem resolution teams, creative teams, and tactical teams. Problem resolution teams are created for the purpose of solving a particular problem or issue; for example, the diagnostic teams at the Centers for Disease Control.
Creative teams are used to develop innovative possibilities or solutions; for example, design teams for car manufacturers create new vehicle models. Tactical teams are used to execute a well-defined plan or objective, such as a police or FBI SWAT team handling a hostage situation (Larson & LaFasto, 1989). One area of active research involves a fourth kind of team, the virtual team; these studies examine how groups of geographically disparate people function when brought together using digital communications technology (Powell, Piccoli, & Ives, 2004). Virtual teams are more common due to the growing globalization of organizations and the use of consulting and partnerships facilitated by digital communication.

Organizational Culture

Each company and organization has an organizational culture. Organizational culture encompasses the values, visions, hierarchies, norms, and interactions among its employees. It is how an organization is run, how it operates, and how it makes decisions; the industry in which the organization participates may have an influence on its culture. Different departments within one company can develop their own subculture within the organization’s culture. Ostroff, Kinicki, and Tamkins (2003) identify three layers in organizational culture: observable artifacts, espoused values, and basic assumptions. Observable artifacts are the symbols, language (jargon, slang, and humor), narratives (stories and legends), and practices (rituals) that represent the underlying cultural assumptions. Espoused values are concepts or beliefs that the management or the entire organization endorses; they are the rules that allow employees to know which actions they should take in different situations and which information they should adhere to. Basic assumptions, the third layer, generally are unobservable and unquestioned. Researchers have developed survey instruments to measure organizational culture.

With the workforce being a global marketplace, your company may have a supplier in Korea and another in Honduras and have employees in the United States, China, and South Africa. You may have coworkers of different religious, ethnic, or racial backgrounds than yourself. Your coworkers may be from different places around the globe. Many workplaces offer diversity training to help everyone involved bridge and understand cultural differences. Diversity training educates participants about cultural differences with the goal of improving teamwork. There is always the potential for prejudice between members of two groups, but the evidence suggests that simply working together, particularly when the conditions of work are set carefully, can reduce or eliminate such prejudice. Pettigrew and Tropp (2006) conducted a meta-analysis to examine the question of whether contact between groups reduced prejudice between those groups. They found that there was a moderate but significant effect. They also found that, as previously theorized, the effect was enhanced when the two groups met under conditions in which they have equal standing, common goals, cooperation between the groups, and especially support on the part of the institution or authorities for the contact.

DIG DEEPER: Managing Generational Differences

An important consideration in managing employees is age. Workers’ expectations and attitudes are developed in part by experience in particular cultural time periods.
Generational constructs are somewhat arbitrary, yet they may be helpful in setting broad directions for organizational management as one generation leaves the workforce and another enters it. The baby boomer generation (born between 1946 and 1964) is in the process of leaving the workforce and will continue to depart it for a decade or more. Members of Generation X (born between the early 1960s and the 1980s) are now in the middle of their careers. Millennials (born between 1979 and 1994) began to come of age at the turn of the century, and are early in their careers. Today, as these three different generations work side by side in the workplace, employers and managers need to be able to identify their unique characteristics. Each generation has distinctive expectations, habits, attitudes, and motivations (Elmore, 2010). One of the major differences among these generations is knowledge of the use of technology in the workplace. Millennials are technologically sophisticated and believe their use of technology sets them apart from other generations. They have also been characterized as self-centered and overly self-confident. Their attitudinal differences have raised concerns for managers about maintaining their motivation as employees and their ability to integrate into organizational culture created by baby boomers (Myers & Sadaghiani, 2010). For example, millennials may expect to hear that they need to pay their dues in their jobs from baby boomers who believe they paid their dues in their time. Yet millennials may resist doing so because they value life outside of work to a greater degree (Myers & Sadaghiani, 2010). Meister & Willyerd (2010) suggest alternative approaches to training and mentoring that will engage millennials and adapt to their need for feedback from supervisors: reverse mentoring, in which a younger employee educates a senior employee in social media or other digital resources. The senior employee then has the opportunity to provide useful guidance within a less demanding role. Recruiting and retaining millennials and Generation X employees poses challenges that did not exist in previous generations. The concept of building a career with the company is not relatable to most Generation X employees, who do not expect to stay with one employer for their career. This expectation arises from a reduced sense of loyalty because they do not expect their employer to be loyal to them (Gibson, Greenwood, & Murphy, 2009). Retaining Generation X workers thus relies on motivating them by making their work meaningful (Gibson, Greenwood, & Murphy, 2009). Since millennials lack an inherent loyalty to the company, retaining them also requires effort in the form of nurturing through frequent rewards, praise, and feedback. Millennials are also interested in having many choices, including options in work scheduling, choice of job duties, and so on. They also expect more training and education from their employers. Companies that offer the best benefit package and brand attract millennials (Myers & Sadaghiani, 2010). One well-recognized negative aspect of organizational culture is a culture of harassment, including sexual harassment. Most organizations of any size have developed sexual harassment policies that define sexual harassment (or harassment in general) and the procedures the organization has set in place to prevent and address it when it does occur.
Thus, in most jobs you have held, you were probably made aware of the company’s sexual harassment policy and procedures, and may have received training related to the policy. The U.S. Equal Employment Opportunity Commission (n.d.) provides the following description of sexual harassment: Unwelcome sexual advances, requests for sexual favors, and other verbal or physical conduct of a sexual nature constitute sexual harassment when this conduct explicitly or implicitly affects an individual's employment, unreasonably interferes with an individual's work performance, or creates an intimidating, hostile, or offensive work environment. (par. 2) One form of sexual harassment is called quid pro quo. Quid pro quo means you give something to get something, and it refers to a situation in which organizational rewards are offered in exchange for sexual favors. Quid pro quo harassment often occurs between an employee and a person with greater power in the organization. For example, a supervisor might request an action, such as a kiss or a touch, in exchange for a promotion, a positive performance review, or a pay raise. Another form of sexual harassment is the threat of withholding a reward if a sexual request is refused. Hostile environment sexual harassment is another type of workplace harassment. In this situation, an employee experiences conditions in the workplace that are considered hostile or intimidating. For example, a work environment might allow offensive language or jokes or display sexually explicit images. Isolated occurrences of these events do not constitute harassment, but a pattern of repeated occurrences does. In addition to violating organizational policies against sexual harassment, these forms of harassment are illegal. Harassment does not have to be sexual; it may be related to any of the protected classes in the statutes enforced by the EEOC, such as race, national origin, religion, or age. Violence in the Workplace Workplace violence is any act or threat of physical violence, harassment, intimidation, or other threatening, disruptive behavior that occurs at the workplace. It ranges from threats and verbal abuse to physical assaults and even homicide (Occupational Safety & Health Administration, 2014). There are different targets of workplace violence: a person could commit violence against coworkers, supervisors, or property. Warning signs often precede such actions: intimidating behavior, threats, sabotaging equipment, or radical changes in a coworker’s behavior. Often there is intimidation, then escalation, and then still further escalation. It is important for employees to involve their immediate supervisor if they ever feel intimidated or unsafe. Murder is the second leading cause of death in the workplace. It is also the primary cause of death for women in the workplace. Every year there are nearly two million workers who are physically assaulted or threatened with assault. Many are murdered in domestic violence situations by boyfriends or husbands who choose the woman’s workplace to commit their crimes. There are many triggers for workplace violence. A significant trigger is the feeling of being treated unfairly, unjustly, or disrespectfully. In a research experiment, Greenberg (1993) examined the reactions of students who were given pay for a task. In one group, the students were given extensive explanations for the pay rate. In the second group, the students were given a curt, uninformative explanation.
The students were led to believe that the supervisor would not know how much money they withdrew as payment. The rate of stealing (taking more pay than they were told they deserved) was higher in the group that had been given the limited explanation. This is a demonstration of the importance of procedural justice in organizations. Procedural justice refers to the fairness of the processes by which outcomes are determined in conflicts with or among employees. In another study, Greenberg and Barling (1999) found that a history of aggression and the amount of alcohol consumed were accurate predictors of workplace violence against a coworker. Aggression against a supervisor was predicted if a worker felt unfairly treated or untrusted. Job security and alcohol consumption predicted aggression against a subordinate. To understand and predict workplace violence, Greenberg & Barling (1999) emphasize the importance of considering the target of the aggression or violence as well as the characteristics of both the workplace and the aggressive or violent person.
textbooks/socialsci/Psychology/Introductory_Psychology/Introductory_Psychology_2e_(OpenStax)/13%3A_Industrial-Organizational_Psychology/13.04%3A_Organizational_Psychology_-_The_Social_Dimension_of_Work.txt
Learning Objectives • Describe the field of human factors psychology • Explain the role of human factors psychology in safety, productivity, and job satisfaction Human factors psychology (or ergonomics, a term that is favored in Europe) is the third subject area within industrial and organizational psychology. This field is concerned with the integration of the human-machine interface in the workplace, through design, and specifically with researching and designing machines that fit human requirements. The integration may be physical or cognitive, or a combination of both. Anyone who needs to be convinced that the field is necessary need only try to operate an unfamiliar television remote control or use a new piece of software for the first time. Whereas the two other areas of I-O psychology focus on the interface between the worker and team, group, or organization, human factors psychology focuses on the individual worker’s interaction with a machine, work station, information displays, and the local environment, such as lighting. In the United States, human factors psychology has origins in both psychology and engineering; this is reflected in the early contributions of Lillian Gilbreth (psychologist and engineer) and her husband Frank Gilbreth (engineer). Human factors professionals are involved in design from the beginning of a project, as is more common in software design projects, or toward the end in testing and evaluation, as is more common in traditional industries (Howell, 2003). Another important role of human factors professionals is in the development of regulations and principles of best design. These regulations and principles are often related to work safety. For example, the Three Mile Island nuclear accident led to Nuclear Regulatory Commission (NRC) requirements for additional instrumentation in nuclear facilities to provide operators with more critical information and increased operator training (United States Nuclear Regulatory Commission, 2013). The American National Standards Institute (ANSI, 2000), an independent developer of industrial standards, develops many standards related to ergonomic design, such as the design of control-center workstations that are used for transportation control or industrial process control. Many of the concerns of human factors psychology are related to workplace safety. These concerns can be studied to help prevent work-related injuries to individual workers or those around them. Safety protocols may also be related to activities, such as commercial driving or flying, medical procedures, and law enforcement, that have the potential to impact the public. One of the methods used to reduce accidents in the workplace is a checklist. The airline industry is one industry that uses checklists. Pilots are required to go through a detailed checklist of the different parts of the aircraft before takeoff to ensure that all essential equipment is working correctly. Astronauts also go through checklists before takeoff. The surgical safety checklist shown in figure 13.15 was developed by the World Health Organization (WHO) and serves as the basis for many checklists at medical facilities. Safety concerns also lead to limits on how long an operator, such as a pilot or truck driver, is allowed to operate the equipment. Recently, the Federal Aviation Administration (FAA) introduced limits on how long a pilot is allowed to fly without an overnight break. Howell (2003) outlines some important areas of research and practice in the field of human factors.
These are summarized in Table 13.4 below.
Table 13.4: Areas of Study in Human Factors Psychology
Attention: Includes vigilance and monitoring, recognizing signals in noise, mental resources, and divided attention. I-O questions: How is attention maintained? What about tasks maintains attention? How can systems be designed to support attention?
Cognitive engineering: Includes human-software interactions in complex automated systems, especially the decision-making processes of workers as they are supported by the software system. I-O questions: How do workers use and obtain information provided by software?
Task analysis: Breaking down the elements of a task. I-O questions: How can a task be performed more efficiently? How can a task be performed more safely?
Cognitive task analysis: Breaking down the elements of a cognitive task. I-O questions: How are decisions made?
As an example of research in human factors psychology, Bruno & Abrahão (2012) examined the impact of the volume of operator decisions on the accuracy of decisions made within an information security center at a banking institution in Brazil. The study examined a total of about \(45,000\) decisions made by \(35\) operators and \(4\) managers over a period of \(60\) days. It found that as the number of decisions made per day by the operators climbed, that is, as their cognitive effort increased, the operators made more mistakes in falsely identifying incidents as real security breaches (when, in reality, they were not). Interestingly, the opposite mistake of identifying real intrusions as false alarms did not increase with increased cognitive demand. This appears to be good news for the bank, since false alarms are not as costly as incorrectly rejecting a genuine threat. These kinds of studies combine research on attention, perception, teamwork, and human–computer interactions in a field of considerable societal and business significance. This is exactly the context of the events that led to the massive data breach for Target in the fall of 2013. Indications are that security personnel received signals of a security breach but did not interpret them correctly, thus allowing the breach to continue for two weeks until an outside agency, the FBI, informed the company (Riley, Elgin, Lawrence, & Matlack, 2014).
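To make the pattern in these security-center findings concrete, the short sketch below computes false-alarm and miss rates at different daily decision volumes. The numbers are invented for illustration and are not data from Bruno & Abrahão (2012); only the qualitative pattern (false alarms rising with cognitive load while misses stay roughly flat) mirrors the result described above.

```python
# Hypothetical operator logs: (decisions_per_day, false_alarms, missed_intrusions).
# These values are illustrative only and are NOT taken from Bruno & Abrahão (2012).
daily_logs = [
    (200, 4, 1),
    (450, 11, 1),
    (800, 26, 2),
    (1200, 54, 2),
]

for decisions, false_alarms, misses in daily_logs:
    # False-alarm rate: benign events wrongly flagged as security breaches.
    # Miss rate: genuine intrusions wrongly dismissed as false alarms.
    fa_rate = false_alarms / decisions
    miss_rate = misses / decisions
    print(f"{decisions:5d} decisions/day -> "
          f"false-alarm rate {fa_rate:.3f}, miss rate {miss_rate:.3f}")
```

Run on these made-up logs, the false-alarm rate climbs steadily with decision volume while the miss rate does not, which is the shape of the trade-off the study describes.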
textbooks/socialsci/Psychology/Introductory_Psychology/Introductory_Psychology_2e_(OpenStax)/13%3A_Industrial-Organizational_Psychology/13.05%3A_Human_Factors_Psychology_and_Workplace_Design.txt
15. What societal and management attitudes might have caused organizational psychology to develop later than industrial psychology?
16. Many of the examples of I-O psychology are applications to businesses. Name four different non-business contexts that I-O psychology could impact.
17. Construct a good interview question for a position of your choosing. The question should relate to a specific skill requirement for the position, and you will need to include the criteria for rating the applicant's answer.
18. What might be useful mechanisms for avoiding bias during employment interviews?
19. If you designed an assessment of job satisfaction, what elements would it include?
20. Downsizing has commonly been shown to result in a period of lowered productivity for the organizations experiencing it. What might be some of the reasons for this observation?
21. What role could a flight simulator play in the design of a new aircraft?
Key Terms
Americans with Disabilities Act: employers cannot discriminate against any individual based on a disability
bona fide occupational qualification (BFOQ): requirement of certain occupations for which denying an individual employment would otherwise violate the law, such as requirements concerning religion or sex
checklist: method used to reduce workplace accidents
diversity training: training employees about cultural differences with the goal of improving teamwork
downsizing: process in which an organization tries to achieve greater overall efficiency by reducing the number of employees
Hawthorne effect: increase in performance of individuals who are noticed, watched, and paid attention to by researchers or supervisors
human factors psychology: branch of psychology that studies how workers interact with the tools of work and how to design those tools to optimize workers’ productivity, safety, and health
immutable characteristic: traits that employers cannot use to discriminate in hiring, benefits, promotions, or termination; these traits are fundamental to one’s personal identity (e.g., skin color and hair texture)
industrial and organizational (I-O) psychology: field in psychology that applies scientific principles to the study of work and the workplace
industrial psychology: branch of psychology that studies job characteristics, applicant characteristics, and how to match them; also studies employee training and performance appraisal
job analysis: determining and listing tasks associated with a particular job
job satisfaction: degree of pleasure that employees derive from their job
organizational culture: values, visions, hierarchies, norms, and interactions among its employees; how an organization is run, how it operates, and how it makes decisions
organizational psychology: branch of psychology that studies the interactions between people working in organizations and the effects of those interactions on productivity
performance appraisal: evaluation of an employee’s success or lack of success at performing the duties of the job
procedural justice: fairness of the processes by which outcomes are determined in an organization
scientific management: theory of management that analyzed and synthesized workflows with the main objective of improving economic efficiency, especially labor productivity
sexual harassment: sexually based behavior that is knowingly unwanted and has an adverse effect on a person’s employment status, interferes with a person’s job performance, or creates a hostile or intimidating work environment
telecommuting: employees’ ability to set their own hours, allowing them to work from home at different parts of the day
Theory X: assumes workers are inherently lazy and unproductive; managers must have control and use punishments
Theory Y: assumes workers are people who seek to work hard and productively; managers and workers can find creative solutions to problems; workers do not need to be controlled and punished
transactional leadership style: characteristic of leaders who focus on supervision and organizational goals achieved through a system of rewards and punishments; maintenance of the organizational status quo
transformational leadership style: characteristic of leaders who are charismatic role models, inspirational, intellectually stimulating, and individually considerate and who seek to change the organization
U.S. Equal Employment Opportunity Commission (EEOC): responsible for enforcing federal laws that make it illegal to discriminate against a job applicant or an employee because of the person’s race, color, religion, sex (including pregnancy), national origin, age (40 or older), disability, or genetic information
work team: group of people within an organization or company given a specific task to achieve together
work–family balance: occurs when people juggle the demands of work life with the demands of family life
workplace violence: violence or the threat of violence against workers; can occur inside or outside the workplace
Personal Application Questions
22. Which of the broad areas of I-O psychology interests you the most and why?
23. What are some of the KSAs (knowledge, skills, and abilities) that are required for your current position or a position you wish to have in the future?
24. How would you handle the situation if you were being sexually harassed? What would you consider sexual harassment?
25. Describe an example of a technology or team and technology interaction that you have had in the context of school or work that could have benefited from better design. What were the effects of the poor design? Make one suggestion for its improvement.
textbooks/socialsci/Psychology/Introductory_Psychology/Introductory_Psychology_2e_(OpenStax)/13%3A_Industrial-Organizational_Psychology/Critical_Thinking_Questions.txt
1. Who was the first psychologist to use psychology in advertising? (1) Hugo Münsterberg (2) Elton Mayo (3) Walter Dill Scott (4) Walter Bingham
2. Which test designed for the Army was used for recruits who were not fluent in English? (1) Army Personality (2) Army Alpha (3) Army Beta (4) Army Intelligence
3. Which area of I-O psychology measures job satisfaction? (1) industrial psychology (2) organizational psychology (3) human factors psychology (4) advertising psychology
4. Which statement best describes the Hawthorne effect? (1) Giving workers rest periods seems like it should decrease productivity, but it actually increases productivity. (2) Social relations among workers have a greater effect on productivity than physical environment. (3) Changes in light levels improve working conditions and therefore increase productivity. (4) The attention of researchers on subjects causes the effect the experimenter is looking for.
5. Which of the following questions is illegal to ask in a job interview in the United States? (1) Which university did you attend? (2) Which state were you born in? (3) Do you have a commercial driver’s license? (4) What salary would you expect for this position?
6. Which of the following items is not a part of KSAs? (1) aspiration (2) knowledge (3) skill (4) other abilities
7. Who is responsible for enforcing federal laws that make it illegal to discriminate against a job applicant? (1) Americans with Disabilities Act (2) Supreme Court of the United States (3) U.S. Equal Employment Opportunity Commission (4) Society for Industrial and Organizational Psychology
8. A ________ is an example of a tactical team. (1) surgical team (2) car design team (3) budget committee (4) sports team
9. Which practice is an example of Theory X management? (1) telecommuting (2) flextime (3) keystroke monitoring (4) team meetings
10. Which is one effect of the team halo effect? (1) teams appear to work better than they do (2) teams never fail (3) teams lead to greater job satisfaction (4) teams boost productivity
11. Which of the following is the most strongly predictive factor of overall job satisfaction? (1) financial rewards (2) personality (3) autonomy (4) work content
12. What is the name for what occurs when a supervisor offers a work-related reward in exchange for a sexual favor? (1) hiring bias (2) quid pro quo (3) hostile work environment (4) immutable characteristics
13. What aspect of an office workstation would a human factors psychologist be concerned about? (1) height of the chair (2) closeness to the supervisor (3) frequency of coworker visits (4) presence of an offensive sign
14. A human factors psychologist who studied how a worker interacted with a search engine would be researching in the area of ________. (1) attention (2) cognitive engineering (3) job satisfaction (4) management
Summary
13.1 What Is Industrial and Organizational Psychology? The field of I-O psychology had its birth in industrial psychology and the use of psychological concepts to aid in personnel selection. However, with research such as the Hawthorne study, it was found that productivity was affected more by human interaction than by physical factors; the field of industrial psychology expanded to include organizational psychology. Both WWI and WWII had a strong influence on the development and expansion of industrial psychology in the United States and elsewhere: the tasks the psychologists were assigned led to the development of tests and research into how psychological concepts could assist industry and other areas.
This movement aided in expanding industrial psychology to include organizational psychology. 13.2 Industrial Psychology: Selecting and Evaluating Employees Industrial psychology studies the attributes of jobs, applicants for those jobs, and methods for assessing fit to a job. These procedures include job analysis, applicant testing, and interviews. It also studies and puts into place procedures for the orientation of new employees and ongoing training of employees. The process of hiring employees can be vulnerable to bias, which is illegal, and industrial psychologists must develop methods for adhering to the law in hiring. Performance appraisal systems are an active area of research and practice in industrial psychology. 13.3 Organizational Psychology: The Social Dimension of Work Organizational psychology is concerned with the effects of interactions among people in the workplace on the employees themselves and on organizational productivity. Job satisfaction and its determinants and outcomes are a major focus of organizational psychology research and practice. Organizational psychologists have also studied the effects of management styles and leadership styles on productivity. In addition to the employees and management, organizational psychology also looks at the organizational culture and how that might affect productivity. One aspect of organizational culture is the prevention and addressing of sexual and other forms of harassment in the workplace. Sexual harassment includes language, behavior, or displays that create a hostile environment; it also includes sexual favors requested in exchange for workplace rewards (i.e., quid pro quo). Industrial-organizational psychologists have conducted extensive research on the triggers and causes of workplace violence and safety. This research enables organizations to establish procedures that can identify these triggers before they become a problem. 13.4 Human Factors Psychology and Workplace Design Human factors psychology, or ergonomics, studies the interface between workers and their machines and physical environments. Human factors psychologists specifically seek to design machines to better support the workers using them. Psychologists may be involved in the design of work tools such as software, displays, or machines from the beginning of the design process or during the testing of an already developed product. Human factors psychologists are also involved in the development of best design recommendations and regulations. One important aspect of human factors psychology is enhancing worker safety. Human factors research involves efforts to understand and improve interactions between technology systems and their human operators. Human–software interactions are a large sector of this research.
textbooks/socialsci/Psychology/Introductory_Psychology/Introductory_Psychology_2e_(OpenStax)/13%3A_Industrial-Organizational_Psychology/Review_Questions.txt
Scientific interest in stress, including how we adapt and cope, has been longstanding in psychology; indeed, after nearly a century of research on the topic, much has been learned and many insights have been developed. This chapter examines stress and highlights our current understanding of the phenomenon, including its psychological and physiological natures, its causes and consequences, and the steps we can take to master stress rather than become its victim. • Introduction Few would deny that today’s college students are under a lot of pressure. In addition to many usual stresses and strains incidental to the college experience (e.g., exams, term papers, and the dreaded freshman 15), students today are faced with increased college tuitions, burdensome debt, and difficulty finding employment after graduation. A significant population of non-traditional college students may face additional stressors, such as raising children or holding down a full-time job. • 14.1: What Is Stress? The term stress as it relates to the human condition first emerged in scientific literature in the 1930s, but it did not enter the popular vernacular until the 1970s. Today, we often use the term loosely in describing a variety of unpleasant feeling states; for example, we often say we are stressed out when we feel frustrated, angry, conflicted, overwhelmed, or fatigued. Despite the widespread use of the term, stress is a fairly vague concept that is difficult to define with precision. • 14.2: Stressors For an individual to experience stress, he must first encounter a potential stressor. In general, stressors can be placed into one of two broad categories: chronic and acute. Chronic stressors include events that persist over an extended period of time, such as caring for a parent with dementia, long-term unemployment, or imprisonment. Acute stressors involve brief focal events that sometimes continue to be experienced as overwhelming well after the event has ended. • 14.3: Stress and Illness The stress response, as noted earlier, consists of a coordinated but complex system of physiological reactions that are called upon as needed. These reactions are beneficial at times because they prepare us to deal with potentially dangerous or threatening situations (for example, recall our old friend, the fearsome bear on the trail). However, health is affected when physiological reactions are sustained, as can happen in response to ongoing stress. • 14.4: Regulation of Stress As we learned in the previous section, stress—especially if it is chronic—takes a toll on our bodies and can have enormously negative health implications. When we experience events in our lives that we appraise as stressful, it is essential that we use effective coping strategies to manage our stress. Coping refers to mental and behavioral efforts that we use to deal with problems relating to stress, including its presumed cause and the unpleasant feelings and emotions it produces. • 14.5: The Pursuit of Happiness Although the study of stress and how it affects us physically and psychologically is fascinating, it is—admittedly—somewhat of a grim topic. Psychology is also interested in the study of a more upbeat and encouraging approach to human affairs—the quest for happiness. • Critical Thinking Questions • Key Terms • Personal Application Questions • Review Questions • Summary Thumbnail: Frustrated man at a desk. (CC BY-SA 3.0; LaurMG). 14: Stress Lifestyle and Health Chapter Outline 14.1 What Is Stress? 
14.2 Stressors 14.3 Stress and Illness 14.4 Regulation of Stress 14.5 The Pursuit of Happiness Few would deny that today’s college students are under a lot of pressure. In addition to many usual stresses and strains incidental to the college experience (e.g., exams, term papers, and the dreaded freshman \(15\)), students today are faced with increased college tuitions, burdensome debt, and difficulty finding employment after graduation. A significant population of non-traditional college students may face additional stressors, such as raising children or holding down a full-time job while working toward a degree. Of course, life is filled with many additional challenges beyond those incurred in college or the workplace. We might have concerns with financial security, difficulties with friends or neighbors, family responsibilities, and we may not have enough time to do the things we want to do. Even minor hassles—losing things, traffic jams, and loss of internet service—all involve pressure and demands that can make life seem like a struggle and that can compromise our sense of well-being. That is, all can be stressful in some way. Scientific interest in stress, including how we adapt and cope, has been longstanding in psychology; indeed, after nearly a century of research on the topic, much has been learned and many insights have been developed. This chapter examines stress and highlights our current understanding of the phenomenon, including its psychological and physiological natures, its causes and consequences, and the steps we can take to master stress rather than become its victim.
textbooks/socialsci/Psychology/Introductory_Psychology/Introductory_Psychology_2e_(OpenStax)/14%3A_Stress_Lifestyle_and_Health/14.01%3A_Prelude_to_Stress_Lifestyle_and_Health.txt
Learning Objectives • Differentiate between stimulus-based and response-based definitions of stress • Define stress as a process • Differentiate between good stress and bad stress • Describe the early contributions of Walter Cannon and Hans Selye to the stress research field • Understand the physiological basis of stress and describe the general adaptation syndrome The term stress as it relates to the human condition first emerged in scientific literature in the 1930s, but it did not enter the popular vernacular until the 1970s (Lyon, 2012). Today, we often use the term loosely in describing a variety of unpleasant feeling states; for example, we often say we are stressed out when we feel frustrated, angry, conflicted, overwhelmed, or fatigued. Despite the widespread use of the term, stress is a fairly vague concept that is difficult to define with precision. Researchers have had a difficult time agreeing on an acceptable definition of stress. Some have conceptualized stress as a demanding or threatening event or situation (e.g., a high-stress job, overcrowding, and long commutes to work). Such conceptualizations are known as stimulus-based definitions because they characterize stress as a stimulus that causes certain reactions. Stimulus-based definitions of stress are problematic, however, because they fail to recognize that people differ in how they view and react to challenging life events and situations. For example, a conscientious student who has studied diligently all semester would likely experience less stress during final exams week than would a less responsible, unprepared student. Others have conceptualized stress in ways that emphasize the physiological responses that occur when faced with demanding or threatening situations (e.g., increased arousal). These conceptualizations are referred to as response-based definitions because they describe stress as a response to environmental conditions. For example, the endocrinologist Hans Selye, a famous stress researcher, once defined stress as the “response of the body to any demand, whether it is caused by, or results in, pleasant or unpleasant conditions” (Selye, 1976, p. 74). Selye’s definition of stress is response-based in that it conceptualizes stress chiefly in terms of the body’s physiological reaction to any demand that is placed on it. Neither stimulus-based nor response-based definitions provide a complete definition of stress. Many of the physiological reactions that occur when faced with demanding situations (e.g., accelerated heart rate) can also occur in response to things that most people would not consider to be genuinely stressful, such as receiving unanticipated good news: an unexpected promotion or raise. A useful way to conceptualize stress is to view it as a process whereby an individual perceives and responds to events that he appraises as overwhelming or threatening to his well-being (Lazarus & Folkman, 1984). A critical element of this definition is that it emphasizes the importance of how we appraise—that is, judge—demanding or threatening events (often referred to as stressors); these appraisals, in turn, influence our reactions to such events. Two kinds of appraisals of a stressor are especially important in this regard: primary and secondary appraisals. A primary appraisal involves judgment about the degree of potential harm or threat to well-being that a stressor might entail. 
A stressor would likely be appraised as a threat if one anticipates that it could lead to some kind of harm, loss, or other negative consequence; conversely, a stressor would likely be appraised as a challenge if one believes that it carries the potential for gain or personal growth. For example, an employee who is promoted to a leadership position would likely perceive the promotion as a much greater threat if she believed the promotion would lead to excessive work demands than if she viewed it as an opportunity to gain new skills and grow professionally. Similarly, a college student on the cusp of graduation may face the change as a threat or a challenge (See figure 14.2 below). The perception of a threat triggers a secondary appraisal: judgment of the options available to cope with a stressor, as well as perceptions of how effective such options will be (Lyon, 2012) (See figure 14.3). As you may recall from what you learned about self-efficacy, an individual’s belief in his ability to complete a task is important (Bandura, 1994). A threat tends to be viewed as less catastrophic if one believes something can be done about it (Lazarus & Folkman, 1984). Imagine that two middle-aged women, Robin and Maria, perform breast self-examinations one morning and each woman notices a lump on the lower region of her left breast. Although both women view the breast lump as a potential threat (primary appraisal), their secondary appraisals differ considerably. In considering the breast lump, some of the thoughts racing through Robin’s mind are, “Oh my God, I could have breast cancer! What if the cancer has spread to the rest of my body and I cannot recover? What if I have to go through chemotherapy? I’ve heard that experience is awful! What if I have to quit my job? My husband and I won’t have enough money to pay the mortgage. Oh, this is just horrible…I can’t deal with it!” On the other hand, Maria thinks, “Hmm, this may not be good. Although most times these things turn out to be benign, I need to have it checked out. If it turns out to be breast cancer, there are doctors who can take care of it because the medical technology today is quite advanced. I’ll have a lot of different options, and I’ll be just fine.” Clearly, Robin and Maria have different outlooks on what might turn out to be a very serious situation: Robin seems to think that little could be done about it, whereas Maria believes that, worst case scenario, a number of options that are likely to be effective would be available. As such, Robin would clearly experience greater stress than would Maria. To be sure, some stressors are inherently more stressful than others in that they are more threatening and leave less potential for variation in cognitive appraisals (e.g., objective threats to one’s health or safety). Nevertheless, appraisal will still play a role in augmenting or diminishing our reactions to such events (Everly & Lating, 2002). If a person appraises an event as harmful and believes that the demands imposed by the event exceed the available resources to manage or adapt to it, the person will subjectively experience a state of stress. In contrast, if one does not appraise the same event as harmful or threatening, she is unlikely to experience stress. According to this definition, environmental events trigger stress reactions by the way they are interpreted and the meanings they are assigned. In short, stress is largely in the eye of the beholder: it’s not so much what happens to you as it is how you respond (Selye, 1976). 
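The appraisal process just described can be summarized in a small sketch. This is a toy illustration of the Lazarus and Folkman logic, not a quantitative model from the literature; the 0-10 scales, the threshold of 5, and the function names are arbitrary assumptions chosen only to make the two appraisal steps explicit.

```python
# Toy illustration of the transactional (appraisal-based) view of stress.
# Scales (0-10) and thresholds are arbitrary assumptions for demonstration.

def primary_appraisal(perceived_harm):
    """Primary appraisal: is the event judged a threat or a challenge?"""
    return "threat" if perceived_harm >= 5 else "challenge"

def secondary_appraisal(perceived_coping):
    """Secondary appraisal: are the available coping options seen as adequate?"""
    return "adequate" if perceived_coping >= 5 else "inadequate"

def experiences_stress(perceived_harm, perceived_coping):
    """Stress is experienced when demands are appraised as exceeding resources."""
    return (primary_appraisal(perceived_harm) == "threat"
            and secondary_appraisal(perceived_coping) == "inadequate")

# Robin and Maria appraise the same event (a breast lump) very differently.
print(experiences_stress(perceived_harm=9, perceived_coping=2))  # True  (Robin)
print(experiences_stress(perceived_harm=9, perceived_coping=8))  # False (Maria)
```

The point of the sketch is simply that the same stressor can yield very different stress experiences depending on how the primary and secondary appraisals come out.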
Good Stress? Although stress carries a negative connotation, at times it may be of some benefit. Stress can motivate us to do things in our best interests, such as study for exams, visit the doctor regularly, exercise, and perform to the best of our ability at work. Indeed, Selye (1974) pointed out that not all stress is harmful. He argued that stress can sometimes be a positive, motivating force that can improve the quality of our lives. This kind of stress, which Selye called eustress (from the Greek eu = “good”), is a good kind of stress associated with positive feelings, optimal health, and performance. A moderate amount of stress can be beneficial in challenging situations. For example, athletes may be motivated and energized by pregame stress, and students may experience similar beneficial stress before a major exam. Indeed, research shows that moderate stress can enhance both immediate and delayed recall of educational material. Male participants in one study who memorized a scientific text passage showed improved memory of the passage immediately after exposure to a mild stressor as well as one day following exposure to the stressor (Hupbach & Fieman, 2012). Increasing one’s level of stress will cause performance to change in a predictable way. As shown in figure 14.4, as stress increases, so do performance and general well-being (eustress); when stress levels reach an optimal level (the highest point of the curve), performance reaches its peak. A person at this stress level is colloquially at the top of his game, meaning he feels fully energized, focused, and can work with minimal effort and maximum efficiency. But when stress exceeds this optimal level, it is no longer a positive force—it becomes excessive and debilitating, or what Selye termed distress (from the Latin dis = “bad”). People who reach this level of stress feel burned out; they are fatigued, exhausted, and their performance begins to decline. If the stress remains excessive, health may begin to erode as well (Everly & Lating, 2002). The Prevalence of Stress Stress is everywhere and, as shown in figure 14.5, it has been on the rise over the last several years. Each of us is acquainted with stress—some are more familiar than others. In many ways, stress feels like a load you just can’t carry—a feeling you experience when, for example, you have to drive somewhere in a crippling blizzard, when you wake up late the morning of an important job interview, when you run out of money before the next pay period, and before taking an important exam for which you realize you are not fully prepared. Stress is an experience that evokes a variety of responses, including those that are physiological (e.g., accelerated heart rate, headaches, or gastrointestinal problems), cognitive (e.g., difficulty concentrating or making decisions), and behavioral (e.g., drinking alcohol, smoking, or taking actions directed at eliminating the cause of the stress). Although stress can be positive at times, it can have deleterious health implications, contributing to the onset and progression of a variety of physical illnesses and diseases (Cohen & Herbert, 1996). The scientific study of how stress and other psychological factors impact health falls within the realm of health psychology, a subfield of psychology devoted to understanding the importance of psychological influences on health, illness, and how people respond when they become ill (Taylor, 1999). 
Health psychology emerged as a discipline in the 1970s, a time during which there was increasing awareness of the role behavioral and lifestyle factors play in the development of illnesses and diseases (Straub, 2007). In addition to studying the connection between stress and illness, health psychologists investigate issues such as why people make certain lifestyle choices (e.g., smoking or eating unhealthy food despite knowing the potential adverse health implications of such behaviors). Health psychologists also design and investigate the effectiveness of interventions aimed at changing unhealthy behaviors. Perhaps one of the more fundamental tasks of health psychologists is to identify which groups of people are especially at risk for negative health outcomes, based on psychological or behavioral factors. For example, measuring differences in stress levels among demographic groups and how these levels change over time can help identify populations who may have an increased risk for illness or disease. Figure 14.6 depicts the results of three national surveys in which several thousand individuals from different demographic groups completed a brief stress questionnaire; the surveys were administered in 1983, 2006, and 2009 (Cohen & Janicki-Deverts, 2012). All three surveys demonstrated higher stress in women than in men. Unemployed individuals reported high levels of stress in all three surveys, as did those with less education and income; retired persons reported the lowest stress levels. However, from 2006 to 2009 the greatest increase in stress levels occurred among men, Whites, people aged \(45-64\), college graduates, and those with full-time employment. One interpretation of these findings is that concerns surrounding the 2008–2009 economic downturn (e.g., threat of or actual job loss and substantial loss of retirement savings) may have been especially stressful to White, college-educated, employed men with limited time remaining in their working careers. Early contributions to the Study of Stress As previously stated, scientific interest in stress goes back nearly a century. One of the early pioneers in the study of stress was Walter Cannon, an eminent American physiologist at Harvard Medical School (See figure 14.7). In the early part of the 20th century, Cannon was the first to identify the body’s physiological reactions to stress. Cannon and the Fight-or-Flight Response Imagine that you are hiking in the beautiful mountains of Colorado on a warm and sunny spring day. At one point during your hike, a large, frightening-looking black bear appears from behind a stand of trees and sits about 50 yards from you. The bear notices you, sits up, and begins to lumber in your direction. In addition to thinking, “This is definitely not good,” a constellation of physiological reactions begins to take place inside you. Prompted by a deluge of epinephrine (adrenaline) and norepinephrine (noradrenaline) from your adrenal glands, your pupils begin to dilate. Your heart starts to pound and speeds up, you begin to breathe heavily and perspire, you get butterflies in your stomach, and your muscles become tense, preparing you to take some kind of direct action. Cannon proposed that this reaction, which he called the fight-or-flight response, occurs when a person experiences very strong emotions—especially those associated with a perceived threat (Cannon, 1932). 
During the fight-or-flight response, the body is rapidly aroused by activation of both the sympathetic nervous system and the endocrine system (See figure 14.8). This arousal helps prepare the person to either fight or flee from a perceived threat. According to Cannon, the fight-or-flight response is a built-in mechanism that assists in maintaining homeostasis—an internal environment in which physiological variables such as blood pressure, respiration, digestion, and temperature are stabilized at levels optimal for survival. Thus, Cannon viewed the fight-or-flight response as adaptive because it enables us to adjust internally and externally to changes in our surroundings, which is helpful in species survival. Selye and the General Adaptation Syndrome Another important early contributor to the stress field was Hans Selye, mentioned earlier. He would eventually become one of the world’s foremost experts in the study of stress (See figure 14.9). As a young assistant in the biochemistry department at McGill University in the 1930s, Selye was engaged in research involving sex hormones in rats. Although he was unable to find an answer for what he was initially researching, he incidentally discovered that when exposed to prolonged negative stimulation (stressors)—such as extreme cold, surgical injury, excessive muscular exercise, and shock—the rats showed signs of adrenal enlargement, thymus and lymph node shrinkage, and stomach ulceration. Selye realized that these responses were triggered by a coordinated series of physiological reactions that unfold over time during continued exposure to a stressor. These physiological reactions were nonspecific, which means that regardless of the type of stressor, the same pattern of reactions would occur. What Selye discovered was the general adaptation syndrome, the body’s nonspecific physiological response to stress. The general adaptation syndrome, shown in figure 14.10, consists of three stages: 1. alarm reaction 2. stage of resistance 3. stage of exhaustion (Selye, 1936; 1976). Alarm reaction describes the body’s immediate reaction upon facing a threatening situation or emergency, and it is roughly analogous to the fight-or-flight response described by Cannon. During an alarm reaction, you are alerted to a stressor, and your body alarms you with a cascade of physiological reactions that provide you with the energy to manage the situation. A person who wakes up in the middle of the night to discover her house is on fire, for example, is experiencing an alarm reaction. If exposure to a stressor is prolonged, the organism will enter the stage of resistance. During this stage, the initial shock of alarm reaction has worn off and the body has adapted to the stressor. Nevertheless, the body also remains on alert and is prepared to respond as it did during the alarm reaction, although with less intensity. For example, suppose a child who went missing is still missing \(72\) hours later. Although the parents would obviously remain extremely disturbed, the magnitude of physiological reactions would likely have diminished over the \(72\) intervening hours due to some adaptation to this event. If exposure to a stressor continues over a longer period of time, the stage of exhaustion ensues. At this stage, the person is no longer able to adapt to the stressor: the body’s ability to resist becomes depleted as physical wear takes its toll on the body’s tissues and organs. As a result, illness, disease, and other permanent damage to the body—even death—may occur. 
If a missing child still remained missing after three months, the long-term stress associated with this situation may cause a parent to literally faint with exhaustion at some point or even to develop a serious and irreversible illness. In short, Selye’s general adaptation syndrome suggests that stressors tax the body via a three-phase process—an initial jolt, subsequent readjustment, and a later depletion of all physical resources—that ultimately lays the groundwork for serious health problems and even death. It should be pointed out, however, that this model is a response-based conceptualization of stress, focusing exclusively on the body’s physical responses while largely ignoring psychological factors such as appraisal and interpretation of threats. Nevertheless, Selye’s model has had an enormous impact on the field of stress because it offers a general explanation for how stress can lead to physical damage and, thus, disease. As we shall discuss later, prolonged or repeated stress has been implicated in development of a number of disorders such as hypertension and coronary artery disease. The Physiological Basis of Stress What goes on inside our bodies when we experience stress? The physiological mechanisms of stress are extremely complex, but they generally involve the work of two systems—the sympathetic nervous system and the hypothalamic-pituitary-adrenal (HPA) axis. When a person first perceives something as stressful (Selye’s alarm reaction), the sympathetic nervous system triggers arousal via the release of adrenaline from the adrenal glands. Release of these hormones activates the fight-or-flight responses to stress, such as accelerated heart rate and respiration. At the same time, the HPA axis, which is primarily endocrine in nature, becomes especially active, although it works much more slowly than the sympathetic nervous system. In response to stress, the hypothalamus (one of the limbic structures in the brain) releases corticotrophin-releasing factor, a hormone that causes the pituitary gland to release adrenocorticotropic hormone (ACTH) (See figure 14.11). The ACTH then activates the adrenal glands to secrete a number of hormones into the bloodstream; an important one is cortisol, which can affect virtually every organ within the body. Cortisol is commonly known as a stress hormone and helps provide that boost of energy when we first encounter a stressor, preparing us to run away or fight. However, sustained elevated levels of cortisol weaken the immune system. In short bursts, this process can have some favorable effects, such as providing extra energy, improving immune system functioning temporarily, and decreasing pain sensitivity. However, extended release of cortisol—as would happen with prolonged or chronic stress—often comes at a high price. High levels of cortisol have been shown to produce a number of harmful effects. For example, increases in cortisol can significantly weaken our immune system (Glaser & Kiecolt-Glaser, 2005), and high levels are frequently observed among depressed individuals (Geoffroy, Hertzman, Li, & Power, 2013). In summary, a stressful event causes a variety of physiological reactions that activate the adrenal glands, which in turn release epinephrine, norepinephrine, and cortisol. These hormones affect a number of bodily processes in ways that prepare the stressed person to take direct action, but also in ways that may heighten the potential for illness. When stress is extreme or chronic, it can have profoundly negative consequences. 
For example, stress often contributes to the development of certain psychological disorders, including post-traumatic stress disorder, major depressive disorder, and other serious psychiatric conditions. Additionally, we noted earlier that stress is linked to the development and progression of a variety of physical illnesses and diseases. For example, researchers in one study found that people injured during the September 11, 2001, World Trade Center disaster or who developed post-traumatic stress symptoms afterward later suffered significantly elevated rates of heart disease (Jordan, Miller-Archie, Cone, Morabia, & Stellman, 2011). Another investigation found that self-reported stress symptoms among aging and retired Finnish food industry workers were associated with morbidity 11 years later; in that study, stress symptoms also predicted the onset of musculoskeletal, nervous system, and endocrine and metabolic disorders (Salonen, Arola, Nygård, & Huhtala, 2008). Another study reported that male South Korean manufacturing employees who reported high levels of work-related stress were more likely to catch the common cold over the next several months than were those employees who reported lower work-related stress levels (Park et al., 2011). Later, you will explore the mechanisms through which stress can produce physical illness and disease.
textbooks/socialsci/Psychology/Introductory_Psychology/Introductory_Psychology_2e_(OpenStax)/14%3A_Stress_Lifestyle_and_Health/14.02%3A_What_Is_Stress.txt
Learning Objectives • Describe different types of possible stressors • Explain the importance of life changes as potential stressors • Describe the Social Readjustment Rating Scale • Understand the concepts of job strain and job burnout For an individual to experience stress, he must first encounter a potential stressor. In general, stressors can be placed into one of two broad categories: chronic and acute. Chronic stressors include events that persist over an extended period of time, such as caring for a parent with dementia, long-term unemployment, or imprisonment. Acute stressors involve brief focal events that sometimes continue to be experienced as overwhelming well after the event has ended, such as falling on an icy sidewalk and breaking your leg (Cohen, Janicki-Deverts, & Miller, 2007). Whether chronic or acute, potential stressors come in many shapes and sizes. They can include major traumatic events, significant life changes, daily hassles, as well as other situations in which a person is regularly exposed to threat, challenge, or danger. Traumatic Events Some stressors involve traumatic events or situations in which a person is exposed to actual or threatened death or serious injury. Stressors in this category include exposure to military combat, threatened or actual physical assaults (e.g., physical attacks, sexual assault, robbery, childhood abuse), terrorist attacks, natural disasters (e.g., earthquakes, floods, hurricanes), and automobile accidents. Men, non-Whites, and individuals in lower socioeconomic status (SES) groups report experiencing a greater number of traumatic events than do women, Whites, and individuals in higher SES groups (Hatch & Dohrenwend, 2007). Some individuals who are exposed to stressors of extreme magnitude develop post-traumatic stress disorder (PTSD): a chronic stress reaction characterized by experiences and behaviors that may include intrusive and painful memories of the stressor event, jumpiness, persistent negative emotional states, detachment from others, angry outbursts, and avoidance of reminders of the event (American Psychiatric Association [APA], 2013). Life Changes Most stressors that we encounter are not nearly as intense as the ones described above. Many potential stressors we face involve events or situations that require us to make changes in our ongoing lives and require time as we adjust to those changes. Examples include death of a close family member, marriage, divorce, and moving. See figure below: In the 1960s, psychiatrists Thomas Holmes and Richard Rahe wanted to examine the link between life stressors and physical illness, based on the hypothesis that life events requiring significant changes in a person’s normal life routines are stressful, whether these events are desirable or undesirable. They developed the Social Readjustment Rating Scale (SRRS), consisting of \(43\) life events that require varying degrees of personal readjustment (Holmes & Rahe, 1967). Many life events that most people would consider pleasant (e.g., holidays, retirement, marriage) are among those listed on the SRRS; these are examples of eustress. Holmes and Rahe also proposed that life events can add up over time, and that experiencing a cluster of stressful events increases one’s risk of developing physical illnesses. In developing their scale, Holmes and Rahe asked \(394\) participants to provide a numerical estimate for each of the \(43\) items; each estimate corresponded to how much readjustment participants felt each event would require. 
These estimates resulted in mean value scores for each event—often called life change units (LCUs) (Rahe, McKean, & Arthur, 1967). The numerical scores ranged from \(11\) to \(100\), representing the perceived magnitude of life change each event entails. Death of a spouse ranked highest on the scale with \(100\) LCUs, and divorce ranked second highest with \(73\) LCUs. In addition, personal injury or illness, marriage, and job termination also ranked highly on the scale with \(53\), \(50\), and \(47\) LCUs, respectively. Conversely, change in residence (\(20\) LCUs), change in eating habits (\(15\) LCUs), and vacation (\(13\) LCUs) ranked low on the scale (see Table 14.1 below). Minor violations of the law ranked the lowest with \(11\) LCUs. To complete the scale, participants check yes for each event experienced within the last \(12\) months; the LCUs for the checked items are then totaled, yielding a score that quantifies the amount of life change. Agreement on the amount of adjustment required by the various life events on the SRRS is highly consistent, even cross-culturally (Holmes & Masuda, 1974).

Table 14.1 Some Stressors on the Social Readjustment Rating Scale (Holmes & Rahe, 1967); values are life change units
• Death of a close family member: 63
• Personal injury or illness: 53
• Dismissal from work: 47
• Change in financial state: 38
• Change to different line of work: 36
• Outstanding personal achievement: 28
• Beginning or ending school: 26
• Change in living conditions: 25
• Change in working hours or conditions: 20
• Change in residence: 20
• Change in schools: 20
• Change in social activities: 18
• Change in sleeping habits: 16
• Change in eating habits: 15
• Minor violation of the law: 11

Extensive research has demonstrated that accumulating a high number of life change units within a brief period of time (one or two years) is related to a wide range of physical illnesses (even accidents and athletic injuries) and mental health problems (Monat & Lazarus, 1991; Scully, Tosi, & Banning, 2000). In an early demonstration, researchers obtained LCU scores for U.S. and Norwegian Navy personnel who were about to embark on a six-month voyage. A later examination of medical records revealed positive (but small) correlations between LCU scores prior to the voyage and subsequent illness symptoms during the ensuing six-month journey (Rahe, 1974). In addition, people tend to experience more physical symptoms, such as backache, upset stomach, diarrhea, and acne, on specific days in which self-reported LCU values are considerably higher than normal, such as the day of a family member’s wedding (Holmes & Holmes, 1970).

The Social Readjustment Rating Scale (SRRS) provides researchers a simple, easy-to-administer way of assessing the amount of stress in people’s lives, and it has been used in hundreds of studies (Thoits, 2010). Despite its widespread use, the scale has been subject to criticism. First, many of the items on the SRRS are vague; for example, death of a close friend could involve the death of a long-absent childhood friend that requires little social readjustment (Dohrenwend, 2006). In addition, some have challenged its assumption that undesirable life events are no more stressful than desirable ones (Derogatis & Coons, 1993). However, most of the available evidence suggests that, at least as far as mental health is concerned, undesirable or negative events are more strongly associated with poor outcomes (such as depression) than are desirable, positive events (Hatch & Dohrenwend, 2007).
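Before turning to the scale's most serious limitation, it may help to see the SRRS scoring procedure in concrete form. The Python sketch below is illustrative only: the event values come from Table 14.1, but the respondents, the events they check, and their symptom counts are hypothetical, made up to show how an LCU total is computed and how such totals could then be correlated with self-reported symptoms, as in the Navy studies described above and the correlational-research questions raised below.

from statistics import correlation  # Pearson's r; available in Python 3.10+

# Life change units (LCUs) for a subset of SRRS events, taken from Table 14.1.
LCU = {
    "death of a close family member": 63,
    "personal injury or illness": 53,
    "dismissal from work": 47,
    "change in residence": 20,
    "change in eating habits": 15,
    "minor violation of the law": 11,
}

def srrs_total(checked_events):
    """Sum the LCUs of every event checked 'yes' for the past 12 months."""
    return sum(LCU[event] for event in checked_events)

# Hypothetical respondents: (events experienced, number of illness symptoms).
respondents = [
    (["change in residence", "change in eating habits"], 1),
    (["personal injury or illness", "dismissal from work"], 4),
    (["death of a close family member", "dismissal from work",
      "change in residence"], 6),
    (["minor violation of the law"], 0),
]

lcu_scores = [srrs_total(events) for events, _ in respondents]
symptom_counts = [count for _, count in respondents]

print(lcu_scores)                                          # [35, 100, 130, 11]
print(round(correlation(lcu_scores, symptom_counts), 2))   # strongly positive for these made-up values

With real SRRS data the correlations between LCU totals and illness symptoms are positive but far more modest, consistent with the small correlations reported in the Navy research described above.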
Perhaps the most serious criticism is that the scale does not take into consideration respondents’ appraisals of the life events it contains. As you recall, appraisal of a stressor is a key element in the conceptualization and overall experience of stress. Being fired from work may be devastating to some but a welcome opportunity to obtain a better job for others. The SRRS remains one of the most well-known instruments in the study of stress, and it is a useful tool for identifying potential stress-related health outcomes (Scully et al., 2000). Link to Learning Go to this site and complete the SRRS scale to determine the total number of LCUs you have experienced over the last year. CONNECT THE CONCEPTS: Correlational Research The Holmes and Rahe Social Readjustment Rating Scale (SRRS) uses the correlational research method to identify the connection between stress and health. That is, respondents’ LCU scores are correlated with the number or frequency of self-reported symptoms indicating health problems. These correlations are typically positive—as LCU scores increase, the number of symptoms increase. Consider all the thousands of studies that have used this scale to correlate stress and illness symptoms: If you were to assign an average correlation coefficient to this body of research, what would be your best guess? How strong do you think the correlation coefficient would be? Why can’t the SRRS show a causal relationship between stress and illness? If it were possible to show causation, do you think stress causes illness or illness causes stress? Hassles Potential stressors do not always involve major life events. Daily hassles—the minor irritations and annoyances that are part of our everyday lives (e.g., rush hour traffic, lost keys, obnoxious coworkers, inclement weather, arguments with friends or family)—can build on one another and leave us just as stressed as life change events (See figure 14.13) (Kanner, Coyne, Schaefer, & Lazarus, 1981). Researchers have demonstrated that the frequency of daily hassles is actually a better predictor of both physical and psychological health than are life change units. In a well-known study of San Francisco residents, the frequency of daily hassles was found to be more strongly associated with physical health problems than were life change events (DeLongis, Coyne, Dakof, Folkman, & Lazarus, 1982). In addition, daily minor hassles, especially interpersonal conflicts, often lead to negative and distressed mood states (Bolger, DeLongis, Kessler, & Schilling, 1989). Cyber hassles that occur on social media may represent a new source of stress. In one investigation, undergraduates who, over a 10-week period, reported greater Facebook-induced stress (e.g., guilt or discomfort over rejecting friend requests and anger or sadness over being unfriended by another) experienced increased rates of upper respiratory infections, especially if they had larger social networks (Campisi et al., 2012). Clearly, daily hassles can add up and take a toll on us both emotionally and physically. Other Stressors Stressors can include situations in which one is frequently exposed to challenging and unpleasant events, such as difficult, demanding, or unsafe working conditions. Although most jobs and occupations can at times be demanding, some are clearly more stressful than others (See figure 14.14). For example, most people would likely agree that a firefighter’s work is inherently more stressful than that of a florist. 
Equally likely, most would agree that jobs containing various unpleasant elements, such as those requiring exposure to loud noise (heavy equipment operator), constant harassment and threats of physical violence (prison guard), perpetual frustration (bus driver in a major city), or those mandating that an employee work alternating day and night shifts (hotel desk clerk), are much more demanding—and thus, more stressful—than those that do not contain such elements. Table 14.2 below lists several occupations and some of the specific stressors associated with those occupations (Sulsky & Smith, 2005).

Table 14.2 Occupations and Their Related Stressors (Sulsky & Smith, 2005)
• Police officer: physical dangers, excessive paperwork, red tape, dealing with the court system, coworker and supervisor conflict, lack of support from the public
• Firefighter: uncertainty over whether a serious fire or hazard awaits after an alarm
• Social worker: little positive feedback from the job or from the public, unsafe work environments, frustration in dealing with bureaucracy, excessive paperwork, sense of personal responsibility for clients, work overload
• Teacher: excessive paperwork, lack of adequate supplies or facilities, work overload, lack of positive feedback, vandalism, threat of physical violence
• Nurse: work overload, heavy physical work, patient concerns (dealing with death and medical concerns), interpersonal problems with other medical staff (especially physicians)
• Emergency medical worker: unpredictable and extreme nature of the job, inexperience
• Air traffic controller: little control over potential crisis situations and workload, fear of causing an accident, peak traffic situations, general work environment
• Clerical and secretarial work: little control over job mobility, unsupportive supervisors, work overload, lack of perceived control
• Managerial work: work overload, conflict and ambiguity in defining the managerial role, difficult work relationships

Although the specific stressors for these occupations are diverse, they seem to share two common denominators: heavy workload and uncertainty about and lack of control over certain aspects of a job. Both of these factors contribute to job strain, a work situation that combines excessive job demands and workload with little discretion in decision making or job control (Karasek & Theorell, 1990). Clearly, many occupations other than the ones listed in Table 14.2 involve at least a moderate amount of job strain in that they often involve heavy workloads and little job control (e.g., inability to decide when to take breaks). Such jobs are often low-status and include those of factory workers, postal clerks, supermarket cashiers, taxi drivers, and short-order cooks. Job strain can have adverse consequences on both physical and mental health; it has been shown to be associated with increased risk of hypertension (Schnall & Landsbergis, 1994), heart attacks (Theorell et al., 1998), recurrence of heart disease after a first heart attack (Aboa-Éboulé et al., 2007), significant weight loss or gain (Kivimäki et al., 2006), and major depressive disorder (Stansfeld, Shipley, Head, & Fuhrer, 2012). A longitudinal study of over 10,000 British civil servants reported that workers under 50 years old who earlier had reported high job strain were 68% more likely to later develop heart disease than were those workers under 50 years old who reported little job strain (Chandola et al., 2008).
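Job strain, as defined above, follows the demand-control logic of Karasek and Theorell (1990): it is the combination of high demands and low control, not either factor alone, that marks a high-strain job. The short Python sketch below is a hedged illustration of that two-by-two classification; the boolean inputs and the simple cutoffs are assumptions made for clarity, not a validated measure of any occupation.

def classify_job(high_demands: bool, high_control: bool) -> str:
    """Return the demand-control quadrant for a job (after Karasek & Theorell, 1990)."""
    if high_demands and not high_control:
        return "high strain"   # heavy workload with little decision latitude
    if high_demands and high_control:
        return "active"        # demanding, but the worker decides how to meet demands
    if not high_demands and not high_control:
        return "passive"
    return "low strain"

# The low-status jobs mentioned above combine heavy workloads with little control:
print(classify_job(high_demands=True, high_control=False))   # high strain
# A demanding job with substantial decision latitude falls in a different quadrant:
print(classify_job(high_demands=True, high_control=True))    # active

Framing job strain this way makes clear why the studies cited above measure both workload and decision latitude rather than workload alone.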
Some people who are exposed to chronically stressful work conditions can experience job burnout, which is a general sense of emotional exhaustion and cynicism in relation to one’s job (Maslach & Jackson, 1981). Job burnout occurs frequently among those in human service jobs (e.g., social workers, teachers, therapists, and police officers). Job burnout consists of three dimensions. The first dimension is exhaustion—a sense that one’s emotional resources are drained or that one is at the end of her rope and has nothing more to give at a psychological level. Second, job burnout is characterized by depersonalization: a sense of emotional detachment between the worker and the recipients of his services, often resulting in callous, cynical, or indifferent attitudes toward these individuals. Third, job burnout is characterized by diminished personal accomplishment, which is the tendency to evaluate one’s work negatively by, for example, experiencing dissatisfaction with one’s job-related accomplishments or feeling as though one has categorically failed to influence others’ lives through one’s work. Job strain appears to be one of the greatest risk factors leading to job burnout, which is most commonly observed in workers who are older (ages \(55-64\)), unmarried, and whose jobs involve manual labor. Heavy alcohol consumption, physical inactivity, being overweight, and having a physical or lifetime mental disorder are also associated with job burnout (Ahola, et al., 2006). In addition, depression often co-occurs with job burnout. One large-scale study of over \(3,000\) Finnish employees reported that half of the participants with severe job burnout had some form of depressive disorder (Ahola et al., 2005). Job burnout is often precipitated by feelings of having invested considerable energy, effort, and time into one’s work while receiving little in return (e.g., little respect or support from others or low pay) (Tatris, Peeters, Le Blanc, Schreurs, & Schaufeli, 2001). As an illustration, consider CharlieAnn, a nursing assistant who worked in a nursing home. CharlieAnn worked long hours for little pay in a difficult facility. Her supervisor was domineering, unpleasant, and unsupportive; he was disrespectful of CharlieAnn’s personal time, frequently informing her at the last minute she must work several additional hours after her shift ended or that she must report to work on weekends. CharlieAnn had very little autonomy at her job. She had little say in her day-to-day duties and how to perform them, and she was not permitted to take breaks unless her supervisor explicitly told her that she could. CharlieAnn did not feel as though her hard work was appreciated, either by supervisory staff or by the residents of the home. She was very unhappy over her low pay, and she felt that many of the residents treated her disrespectfully. After several years, CharlieAnn began to hate her job. She dreaded going to work in the morning, and she gradually developed a callous, hostile attitude toward many of the residents. Eventually, she began to feel as though she could no longer help the nursing home residents. CharlieAnn’s absenteeism from work increased, and one day she decided that she had had enough and quit. She now has a job in sales, vowing never to work in nursing again. 
Link to Learning Watch this clip from the 1999 comedy Office Space for a humorous illustration of lack of supervisory support in which a sympathetic character’s insufferable boss makes a last-minute demand that he “go ahead and come in” to the office on both Saturday and Sunday. Finally, our close relationships with friends and family—particularly the negative aspects of these relationships—can be a potent source of stress. Negative aspects of close relationships can include adverse exchanges and conflicts, lack of emotional support or confiding, and lack of reciprocity. All of these can be overwhelming, threatening to the relationship, and thus stressful. Such stressors can take a toll both emotionally and physically. A longitudinal investigation of over \(9,000\) British civil servants found that those who at one point had reported the highest levels of negative interactions in their closest relationship were \(34\%\) more likely to experience serious heart problems (fatal or nonfatal heart attacks) over a \(13-15\) year period, compared to those who experienced the lowest levels of negative interaction (De Vogli, Chandola & Marmot, 2007).
textbooks/socialsci/Psychology/Introductory_Psychology/Introductory_Psychology_2e_(OpenStax)/14%3A_Stress_Lifestyle_and_Health/14.03%3A_Stressors.txt
Learning Objectives
• Explain the nature of psychophysiological disorders
• Describe the immune system and how stress impacts its functioning
• Describe how stress and emotional factors can lead to the development and exacerbation of cardiovascular disorders, asthma, and tension headaches

In this section, we will discuss stress and illness. As stress researcher Robert Sapolsky (1998) describes, "stress-related disease emerges, predominantly, out of the fact that we so often activate a physiological system that has evolved for responding to acute physical emergencies, but we turn it on for months on end, worrying about mortgages, relationships, and promotions" (p. 6). The stress response, as noted earlier, consists of a coordinated but complex system of physiological reactions that are called upon as needed. These reactions are beneficial at times because they prepare us to deal with potentially dangerous or threatening situations (for example, recall our old friend, the fearsome bear on the trail). However, health is affected when physiological reactions are sustained, as can happen in response to ongoing stress.

Psychophysiological Disorders
If the reactions that compose the stress response are chronic or if they frequently exceed normal ranges, they can lead to cumulative wear and tear on the body, in much the same way that running your air conditioner on full blast all summer will eventually cause wear and tear on it. For example, the high blood pressure that a person under considerable job strain experiences might eventually take a toll on his heart and set the stage for a heart attack or heart failure. Also, someone exposed to high levels of the stress hormone cortisol might become vulnerable to infection or disease because of weakened immune system functioning (McEwen, 1998).

Link to Learning
Neuroscientists Robert Sapolsky and Carol Shively have conducted extensive research on stress in non-human primates for over 30 years. Both have shown that position in the social hierarchy predicts stress, mental health status, and disease. Their research sheds light on how stress may lead to negative health outcomes for stigmatized or ostracized people. Here are two videos featuring Dr. Sapolsky: one is regarding killer stress and the other is an excellent in-depth documentary from National Geographic.

Physical disorders or diseases whose symptoms are brought about or worsened by stress and emotional factors are called psychophysiological disorders. The physical symptoms of psychophysiological disorders are real and they can be produced or exacerbated by psychological factors (hence the psycho and physiological in psychophysiological). A list of frequently encountered psychophysiological disorders is provided in Table 14.3 below.

Table 14.3 Types of Psychophysiological Disorders (adapted from Everly & Lating, 2002)
• Cardiovascular: hypertension, coronary heart disease
• Gastrointestinal: irritable bowel syndrome
• Respiratory: asthma, allergy
• Musculoskeletal: low back pain, tension headaches
• Skin: acne, eczema, psoriasis

Friedman and Booth-Kewley (1987) statistically reviewed 101 studies to examine the link between personality and illness. They proposed the existence of disease-prone personality characteristics, including depression, anger/hostility, and anxiety. Indeed, a study of over 61,000 Norwegians identified depression as a risk factor for all major disease-related causes of death (Mykletun et al., 2007).
In addition, neuroticism—a personality trait that reflects how anxious, moody, and sad one is—has been identified as a risk factor for chronic health problems and mortality (Ploubidis & Grundy, 2009). Below, we discuss two kinds of psychophysiological disorders about which a great deal is known: cardiovascular disorders and asthma. First, however, it is necessary to turn our attention to a discussion of the immune system—one of the major pathways through which stress and emotional factors can lead to illness and disease.

Everyday Connection: Social Status, Stress, and Health Care
Psychologists have long been aware that social status (e.g., wealth, privilege) is intimately tied to stress, health, and well-being. Some factors that contribute to high stress and poor health among people with lower social status include lack of control and predictability (e.g., greater unemployment) and resource inequality (e.g., less access to health care and other community resources) (Marmot & Sapolsky, 2014). In the United States, resource inequalities tied to social status often create race and gender differences in health care. For example, African American women have the highest rates of emergency room visits and unmet health care needs compared to any other group, and this disparity increased significantly from 2006 to 2014 (Manuel, 2018). Lesbian, gay, bisexual, and transgender youth often experience poor quality of care as a result of stigma, lack of understanding, and insensitivity among health care professionals (Hafeez, Zeshan, Tahir, Jahan, & Naveed, 2017). One goal of the U.S. government’s Healthy People 2020 initiative is to eliminate gender and race disparities in health care. Their interactive dataset provides an updated snapshot of health disparities: https://www.healthypeople.gov/2020/d...sparities-data.

Stress and the Immune System
In a sense, the immune system is the body’s surveillance system. It consists of a variety of structures, cells, and mechanisms that serve to protect the body from invading toxins and microorganisms that can harm or damage the body’s tissues and organs. When the immune system is working as it should, it keeps us healthy and disease free by eliminating bacteria, viruses, and other foreign substances that have entered the body (Everly & Lating, 2002).

Immune System Errors
Sometimes, the immune system will function erroneously. For example, sometimes it can go awry by mistaking your body’s own healthy cells for invaders and repeatedly attacking them. When this happens, the person is said to have an autoimmune disease, which can affect almost any part of the body. How an autoimmune disease affects a person depends on what part of the body is targeted. For instance, rheumatoid arthritis, an autoimmune disease that affects the joints, results in joint pain, stiffness, and loss of function. Systemic lupus erythematosus, an autoimmune disease that affects the skin, can result in rashes and swelling of the skin. Graves’ disease, an autoimmune disease that affects the thyroid gland, can result in fatigue, weight gain, and muscle aches (National Institute of Arthritis and Musculoskeletal and Skin Diseases [NIAMS], 2012). In addition, the immune system may sometimes break down and be unable to do its job. This situation is referred to as immunosuppression, the decreased effectiveness of the immune system. When people experience immunosuppression, they become susceptible to any number of infections, illness, and diseases.
For example, acquired immune deficiency syndrome (AIDS) is a serious and lethal disease that is caused by human immunodeficiency virus (HIV), which greatly weakens the immune system by infecting and destroying antibody-producing cells, thus rendering a person vulnerable to any of a number of opportunistic infections (Powell, 1996). Stressors and Immune Function The question of whether stress and negative emotional states can influence immune function has captivated researchers for over three decades, and discoveries made over that time have dramatically changed the face of health psychology (Kiecolt-Glaser, 2009). Psychoneuroimmunology is the field that studies how psychological factors such as stress influence the immune system and immune functioning. The term psychoneuroimmunology was first coined in 1981, when it appeared as the title of a book that reviewed available evidence for associations between the brain, endocrine system, and immune system (Zacharie, 2009). To a large extent, this field evolved from the discovery that there is a connection between the central nervous system and the immune system. Some of the most compelling evidence for a connection between the brain and the immune system comes from studies in which researchers demonstrated that immune responses in animals could be classically conditioned (Everly & Lating, 2002). For example, Ader and Cohen (1975) paired flavored water (the conditioned stimulus) with the presentation of an immunosuppressive drug (the unconditioned stimulus), causing sickness (an unconditioned response). Not surprisingly, rats exposed to this pairing developed a conditioned aversion to the flavored water. However, the taste of the water itself later produced immunosuppression (a conditioned response), indicating that the immune system itself had been conditioned. Many subsequent studies over the years have further demonstrated that immune responses can be classically conditioned in both animals and humans (Ader & Cohen, 2001). Thus, if classical conditioning can alter immunity, other psychological factors should be capable of altering it as well. Hundreds of studies involving tens of thousands of participants have tested many kinds of brief and chronic stressors and their effect on the immune system (e.g., public speaking, medical school examinations, unemployment, marital discord, divorce, death of spouse, burnout and job strain, caring for a relative with Alzheimer’s disease, and exposure to the harsh climate of Antarctica). It has been repeatedly demonstrated that many kinds of stressors are associated with poor or weakened immune functioning (Glaser & Kiecolt-Glaser, 2005; Kiecolt-Glaser, McGuire, Robles, & Glaser, 2002; Segerstrom & Miller, 2004). When evaluating these findings, it is important to remember that there is a tangible physiological connection between the brain and the immune system. For example, the sympathetic nervous system innervates immune organs such as the thymus, bone marrow, spleen, and even lymph nodes (Maier, Watkins, & Fleshner, 1994). Also, we noted earlier that stress hormones released during hypothalamic-pituitary-adrenal (HPA) axis activation can adversely impact immune function. One way they do this is by inhibiting the production of lymphocytes, white blood cells that circulate in the body’s fluids that are important in the immune response (Everly & Lating, 2002). 
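The Ader and Cohen finding described above follows the standard logic of classical conditioning: repeated pairings of the flavored water (the conditioned stimulus) with the immunosuppressive drug (the unconditioned stimulus) build an association strong enough that the taste alone later evokes a conditioned response. The Python sketch below uses a simplified Rescorla-Wagner learning rule to make that acquisition process concrete; the learning rate, asymptote, and trial counts are arbitrary assumptions for illustration, not parameters from the original study.

def conditioned_strength(trials: int, alpha: float = 0.3, lam: float = 1.0) -> float:
    """Associative strength V after a number of CS-US pairings (Rescorla-Wagner rule)."""
    v = 0.0
    for _ in range(trials):
        v += alpha * (lam - v)   # change in V = alpha * (lambda - V)
    return v

for n in (1, 3, 10):
    print(n, round(conditioned_strength(n), 3))   # 0.3, 0.657, 0.972

# As V approaches its asymptote, presenting the flavored water alone is expected
# to elicit the conditioned response (in Ader and Cohen's rats, conditioned
# immunosuppression), even though the drug is no longer given.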
Some of the more dramatic examples demonstrating the link between stress and impaired immune function involve studies in which volunteers were exposed to viruses. The rationale behind this research is that because stress weakens the immune system, people with high stress levels should be more likely to develop an illness compared to those under little stress. In one memorable experiment using this method, researchers interviewed \(276\) healthy volunteers about recent stressful experiences (Cohen et al., 1998). Following the interview, these participants were given nasal drops containing the cold virus (in case you are wondering why anybody would ever want to participate in a study in which they are subjected to such treatment, the participants were paid \$800 for their trouble). When examined later, participants who reported experiencing chronic stressors for more than one month—especially enduring difficulties involving work or relationships—were considerably more likely to have developed colds than were participants who reported no chronic stressors (See figure below). In another study, older volunteers were given an influenza virus vaccination. Compared to controls, those who were caring for a spouse with Alzheimer’s disease (and thus were under chronic stress) showed poorer antibody response following the vaccination (Kiecolt-Glaser, Glaser, Gravenstein, Malarkey, & Sheridan, 1996). Other studies have demonstrated that stress slows down wound healing by impairing immune responses important to wound repair (Glaser & Kiecolt-Glaser, 2005). In one study, for example, skin blisters were induced on the forearm. Subjects who reported higher levels of stress produced lower levels of immune proteins necessary for wound healing (Glaser et al., 1999). Stress, then, is not so much the sword that kills the knight, so to speak; rather, it’s the sword that breaks the knight’s shield, and your immune system is that shield. DIG DEEPER: Stress and Aging: A Tale of Telomeres Have you ever wondered why people who are stressed often seem to have a haggard look about them? A pioneering study from 2004 suggests that the reason is because stress can actually accelerate the cell biology of aging. Stress, it seems, can shorten telomeres, which are segments of DNA that protect the ends of chromosomes. Shortened telomeres can inhibit or block cell division, which includes growth and proliferation of new cells, thereby leading to more rapid aging (Sapolsky, 2004). In the study, researchers compared telomere lengths in the white blood cells in mothers of chronically ill children to those of mothers of healthy children (Epel et al., 2004). Mothers of chronically ill children would be expected to experience more stress than would mothers of healthy children. The longer a mother had spent caring for her ill child, the shorter her telomeres (the correlation between years of caregiving and telomere length was \(r = -0.40\)). In addition, higher levels of perceived stress were negatively correlated with telomere size (\(r = -0.31\)). These researchers also found that the average telomere length of the most stressed mothers, compared to the least stressed, was similar to what you would find in people who were \(9-17\) years older than they were on average. Numerous other studies since have continued to find associations between stress and eroded telomeres (Blackburn & Epel, 2012). Some studies have even demonstrated that stress can begin to erode telomeres in childhood and perhaps even before children are born. 
For example, childhood exposure to violence (e.g., maternal domestic violence, bullying victimization, and physical maltreatment) was found in one study to accelerate telomere erosion from ages \(5\) to \(10\) (Shalev et al., 2013). Another study reported that young adults whose mothers had experienced severe stress during their pregnancy had shorter telomeres than did those whose mothers had stress-free and uneventful pregnancies (Entringer et al., 2011). Further, the corrosive effects of childhood stress on telomeres can extend into young adulthood. In an investigation of over \(4,000\) U.K. women ages \(41-80\), adverse experiences during childhood (e.g., physical abuse, being sent away from home, and parent divorce) were associated with shortened telomere length (Surtees et al., 2010), and telomere size decreased as the amount of experienced adversity increased (See figure 14.16 below). Efforts to dissect the precise cellular and physiological mechanisms linking short telomeres to stress and disease are currently underway. For the time being, telomeres provide us with yet another reminder that stress, especially during early life, can be just as harmful to our health as smoking or fast food (Blackburn & Epel, 2012). Cardiovascular Disorders The cardiovascular system is composed of the heart and blood circulation system. For many years, disorders that involve the cardiovascular system—known as cardiovascular disorders—have been a major focal point in the study of psychophysiological disorders because of the cardiovascular system’s centrality in the stress response (Everly & Lating, 2002). Heart disease is one such condition. Each year, heart disease causes approximately one in three deaths in the United States, and it is the leading cause of death in the developed world (Centers for Disease Control and Prevention [CDC], 2011; Shapiro, 2005). The symptoms of heart disease vary somewhat depending on the specific kind of heart disease one has, but they generally involve angina—chest pains or discomfort that occur when the heart does not receive enough blood (Office on Women’s Health, 2009). The pain often feels like the chest is being pressed or squeezed; burning sensations in the chest and shortness of breath are also commonly reported. Such pain and discomfort can spread to the arms, neck, jaws, stomach (as nausea), and back (American Heart Association [AHA], 2012a) (See figure 14.17 below). A major risk factor for heart disease is hypertension, which is high blood pressure. Hypertension forces a person’s heart to pump harder, thus putting more physical strain on the heart. If left unchecked, hypertension can lead to a heart attack, stroke, or heart failure; it can also lead to kidney failure and blindness. Hypertension is a serious cardiovascular disorder, and it is sometimes called the silent killer because it has no symptoms—one who has high blood pressure may not even be aware of it (AHA, 2012b). Many risk factors contributing to cardiovascular disorders have been identified. These risk factors include social determinants such as aging, income, education, and employment status, as well as behavioral risk factors that include unhealthy diet, tobacco use, physical inactivity, and excessive alcohol consumption; obesity and diabetes are additional risk factors (World Health Organization [WHO], 2013). 
Over the past few decades, there has been much greater recognition and awareness of the importance of stress and other psychological factors in cardiovascular health (Nusair, Al-dadah, & Kumar, 2012). Indeed, exposure to stressors of many kinds has also been linked to cardiovascular problems; in the case of hypertension, some of these stressors include job strain (Trudel, Brisson, & Milot, 2010), natural disasters (Saito, Kim, Maekawa, Ikeda, & Yokoyama, 1997), marital conflict (Nealey-Moore, Smith, Uchino, Hawkins, & Olson-Cerny, 2007), and exposure to high traffic noise levels at one’s home (de Kluizenaar, Gansevoort, Miedema, & de Jong, 2007). Perceived discrimination appears to be associated with hypertension among African Americans (Sims et al., 2012). In addition, laboratory-based stress tasks, such as performing mental arithmetic under time pressure, immersing one’s hand into ice water (known as the cold pressor test), mirror tracing, and public speaking have all been shown to elevate blood pressure (Phillips, 2011). Are you Type A or Type B? Sometimes research ideas and theories emerge from seemingly trivial observations. In the 1950s, cardiologist Meyer Friedman was looking over his waiting room furniture, which consisted of upholstered chairs with armrests. Friedman decided to have these chairs reupholstered. When the man doing the reupholstering came to the office to do the work, he commented on how the chairs were worn in a unique manner—the front edges of the cushions were worn down, as were the front tips of the arm rests. It seemed like the cardiology patients were tapping or squeezing the front of the armrests, as well as literally sitting on the edge of their seats (Friedman & Rosenman, 1974). Were cardiology patients somehow different than other types of patients? If so, how? After researching this matter, Friedman and his colleague, Ray Rosenman, came to understand that people who are prone to heart disease tend to think, feel, and act differently than those who are not. These individuals tend to be intensively driven workaholics who are preoccupied with deadlines and always seem to be in a rush. According to Friedman and Rosenman, these individuals exhibit Type A behavior pattern; those who are more relaxed and laid-back were characterized as Type B (See figure 14.18). In a sample of Type As and Type Bs, Friedman and Rosenman were startled to discover that heart disease was over seven times more frequent among the Type As than the Type Bs (Friedman & Rosenman, 1959). The major components of the Type A pattern include an aggressive and chronic struggle to achieve more and more in less and less time (Friedman & Rosenman, 1974). Specific characteristics of the Type A pattern include an excessive competitive drive, chronic sense of time urgency, impatience, and hostility toward others (particularly those who get in the person’s way). An example of a person who exhibits Type A behavior pattern is Jeffrey. Even as a child, Jeffrey was intense and driven. He excelled at school, was captain of the swim team, and graduated with honors from an Ivy League college. Jeffrey never seems able to relax; he is always working on something, even on the weekends. However, Jeffrey always seems to feel as though there are not enough hours in the day to accomplish all he feels he should. He volunteers to take on extra tasks at work and often brings his work home with him; he often goes to bed angry late at night because he feels that he has not done enough. 
Jeffrey is quick tempered with his coworkers; he often becomes noticeably agitated when dealing with those coworkers he feels work too slowly or whose work does not meet his standards. He typically reacts with hostility when interrupted at work. He has experienced problems in his marriage over his lack of time spent with family. When caught in traffic during his commute to and from work, Jeffrey incessantly pounds on his horn and swears loudly at other drivers. When Jeffrey was 52, he suffered his first heart attack. By the 1970s, a majority of practicing cardiologists believed that Type A behavior pattern was a significant risk factor for heart disease (Friedman, 1977). Indeed, a number of early longitudinal investigations demonstrated a link between Type A behavior pattern and later development of heart disease (Rosenman et al., 1975; Haynes, Feinleib, & Kannel, 1980). Subsequent research examining the association between Type A and heart disease, however, failed to replicate these earlier findings (Glassman, 2007; Myrtek, 2001). Because Type A theory did not pan out as well as they had hoped, researchers shifted their attention toward determining if any of the specific elements of Type A predict heart disease. Extensive research clearly suggests that the anger/hostility dimension of Type A behavior pattern may be one of the most important factors in the development of heart disease. This relationship was initially described in the Haynes et al. (1980) study mentioned above: Suppressed hostility was found to substantially elevate the risk of heart disease for both men and women. Also, one investigation followed over \(1,000\) male medical students from \(32\) to \(48\) years. At the beginning of the study, these men completed a questionnaire assessing how they react to pressure; some indicated that they respond with high levels of anger, whereas others indicated that they respond with less anger. Decades later, researchers found that those who earlier had indicated the highest levels of anger were over 6 times more likely than those who indicated less anger to have had a heart attack by age \(55\), and they were \(3.5\) times more likely to have experienced heart disease by the same age (Chang, Ford, Meoni, Wang, & Klag, 2002). From a health standpoint, it clearly does not pay to be an angry young person. After reviewing and statistically summarizing \(35\) studies from 1983 to 2006, Chida and Steptoe (2009) concluded that the bulk of the evidence suggests that anger and hostility constitute serious long-term risk factors for adverse cardiovascular outcomes among both healthy individuals and those already suffering from heart disease. One reason angry and hostile moods might contribute to cardiovascular diseases is that such moods can create social strain, mainly in the form of antagonistic social encounters with others. This strain could then lay the foundation for disease-promoting cardiovascular responses among hostile individuals (Vella, Kamarck, Flory, & Manuck, 2012). In this transactional model, hostility and social strain form a cycle (See figure below). For example, suppose Kaitlin has a hostile disposition; she has a cynical, distrustful attitude toward others and often thinks that other people are out to get her. She is very defensive around people, even those she has known for years, and she is always looking for signs that others are either disrespecting or belittling her. 
In the shower each morning before work, she often mentally rehearses what she would say to someone who said or did something that angered her, such as making a political statement that was counter to her own ideology. As Kaitlin goes through these mental rehearsals, she often grins and thinks about the retaliation on anyone who will irk her that day. Socially, she is confrontational and tends to use a harsh tone with people, which often leads to very disagreeable and sometimes argumentative social interactions. As you might imagine, Kaitlin is not especially popular with others, including coworkers, neighbors, and even members of her own family. They either avoid her at all costs or snap back at her, which causes Kaitlin to become even more cynical and distrustful of others, making her disposition even more hostile. Kaitlin’s hostility—through her own doing—has created an antagonistic environment that cyclically causes her to become even more hostile and angry, thereby potentially setting the stage for cardiovascular problems. In addition to anger and hostility, a number of other negative emotional states have been linked with heart disease, including negative affectivity and depression (Suls & Bunde, 2005). Negative affectivity is a tendency to experience distressed emotional states involving anger, contempt, disgust, guilt, fear, and nervousness (Watson, Clark, & Tellegen, 1988). It has been linked with the development of both hypertension and heart disease. For example, over \(3,000\) initially healthy participants in one study were tracked longitudinally, up to \(22\) years. Those with higher levels of negative affectivity at the time the study began were substantially more likely to develop and be treated for hypertension during the ensuing years than were those with lower levels of negative affectivity (Jonas & Lando, 2000). In addition, a study of over \(10,000\) middle-aged London-based civil servants who were followed an average of \(12.5\) years revealed that those who earlier had scored in the upper third on a test of negative affectivity were \(32\%\) more likely to have experienced heart disease, heart attack, or angina over a period of years than were those who scored in the lowest third (Nabi, Kivimaki, De Vogli, Marmot, & Singh-Manoux, 2008). Hence, negative affectivity appears to be a potentially vital risk factor for the development of cardiovascular disorders. Depression and the Heart For centuries, poets and folklore have asserted that there is a connection between moods and the heart (Glassman & Shapiro, 1998). You are no doubt familiar with the notion of a broken heart following a disappointing or depressing event and have encountered that notion in songs, films, and literature. Perhaps the first to recognize the link between depression and heart disease was Benjamin Malzberg (1937), who found that the death rate among institutionalized patients with melancholia (an archaic term for depression) was six times higher than that of the population. A classic study in the late 1970s looked at over \(8,000\) manic-depressive persons in Denmark, finding a nearly \(50\%\) increase in deaths from heart disease among these patients compared with the general Danish population (Weeke, 1979). By the early 1990s, evidence began to accumulate showing that depressed individuals who were followed for long periods of time were at increased risk for heart disease and cardiac death (Glassman, 2007). 
In one investigation of over \(700\) Danish residents, those with the highest depression scores were \(71\%\) more likely to have experienced a heart attack than were those with lower depression scores (Barefoot & Schroll, 1996). Figure 14.20 illustrates the gradation in risk of heart attacks for both men and women. After more than two decades of research, it is now clear that a relationship exists: Patients with heart disease have more depression than the general population, and people with depression are more likely to eventually develop heart disease and experience higher mortality than those who do not have depression (Hare, Toukhsati, Johansson, & Jaarsma, 2013); the more severe the depression, the higher the risk (Glassman, 2007). Consider the following:
• In one study, death rates from cardiovascular problems were substantially higher in depressed people; depressed men were \(50\%\) more likely to have died from cardiovascular problems, and depressed women were \(70\%\) more likely (Ösby, Brandt, Correia, Ekbom, & Sparén, 2001).
• A statistical review of \(10\) longitudinal studies involving initially healthy individuals revealed that those with elevated depressive symptoms have, on average, a \(64\%\) greater risk of developing heart disease than do those with fewer symptoms (Wulsin & Singal, 2003).
• A study of over \(63,000\) registered nurses found that those with more depressive symptoms when the study began were \(49\%\) more likely to experience fatal heart disease over a \(12\)-year period (Whang et al., 2009).
The American Heart Association, fully aware of the established importance of depression in cardiovascular diseases, several years ago recommended routine depression screening for all heart disease patients (Lichtman et al., 2008). More recently, it has recommended that depression be considered a risk factor for poor outcomes among heart disease patients (AHA, 2014). Although the exact mechanisms through which depression might produce heart problems have not been fully clarified, a recent investigation examining this connection in early life has shed some light. In an ongoing study of childhood depression, adolescents who had been diagnosed with depression as children were more likely to be obese, smoke, and be physically inactive than were those who had not received this diagnosis (Rottenberg et al., 2014). One implication of this study is that depression, especially if it occurs early in life, may increase the likelihood of living an unhealthy lifestyle, thereby predisposing people to an unfavorable cardiovascular disease risk profile. It is important to point out that depression may be just one piece of the emotional puzzle in elevating the risk for heart disease, and that chronically experiencing several negative emotional states may be especially important. A longitudinal investigation of Vietnam War veterans found that depression, anxiety, hostility, and trait anger each independently predicted the onset of heart disease (Boyle, Michalek, & Suarez, 2006). However, when each of these negative psychological attributes was combined into a single variable, this new variable (which researchers called psychological risk factor) predicted heart disease more strongly than any of the individual variables. Thus, rather than examining the predictive power of isolated psychological risk factors, it seems crucial for future researchers to examine the effects of combined and more general negative emotional and psychological traits in the development of cardiovascular illnesses.
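The bulleted findings above are stated as relative risks ("X% more likely"). The short worked example below shows the arithmetic behind such statements; the incidence figures in it are hypothetical and chosen only so that the result matches the \(64\%\) figure cited above.

\[
\text{relative risk} = \frac{\text{incidence among those with elevated depressive symptoms}}{\text{incidence among those with fewer symptoms}}
\]
\[
\text{Example (hypothetical): } \frac{8.2\%}{5.0\%} = 1.64, \qquad (1.64 - 1) \times 100\% = 64\% \text{ greater risk.}
\]

Reading the percentages this way also makes clear that "more likely" statements compare groups; they say nothing by themselves about any individual's absolute risk.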
Asthma Asthma is a chronic and serious disease in which the airways of the respiratory system become obstructed, leading to great difficulty expelling air from the lungs. The airway obstruction is caused by inflammation of the airways (leading to thickening of the airway walls) and a tightening of the muscles around them, resulting in a narrowing of the airways (See figure 14.21) (American Lung Association, 2010). Because airways become obstructed, a person with asthma will sometimes have great difficulty breathing and will experience repeated episodes of wheezing, chest tightness, shortness of breath, and coughing, the latter occurring mostly during the morning and night (CDC, 2006). According to the Centers for Disease Control and Prevention (CDC), around \(4,000\) people die each year from asthma-related causes, and asthma is a contributing factor to another \(7,000\) deaths each year (CDC, 2013a). The CDC has revealed that asthma affects \(18.7\) million U.S. adults and is more common among people with lower education and income levels (CDC, 2013b). Especially concerning is that asthma is on the rise, with rates of asthma increasing \(157\%\) between 2000 and 2010 (CDC, 2013b). Asthma attacks are acute episodes in which an asthma sufferer experiences the full range of symptoms. Asthma exacerbation is often triggered by environmental factors, such as air pollution, allergens (e.g., pollen, mold, and pet hairs), cigarette smoke, airway infections, cold air or a sudden change in temperature, and exercise (CDC, 2013b). Psychological factors appear to play an important role in asthma (Wright, Rodriguez, & Cohen, 1998), although some believe that psychological factors serve as potential triggers in only a subset of asthma patients (Ritz, Steptoe, Bobb, Harris, & Edwards, 2006). Many studies over the years have demonstrated that some people with asthma will experience asthma-like symptoms if they expect to experience such symptoms, such as when breathing an inert substance that they (falsely) believe will lead to airway obstruction (Sodergren & Hyland, 1999). As stress and emotions directly affect immune and respiratory functions, psychological factors likely serve as one of the most common triggers of asthma exacerbation (Trueba & Ritz, 2013). People with asthma tend to report and display a high level of negative emotions such as anxiety, and asthma attacks have been linked to periods of high emotionality (Lehrer, Isenberg, & Hochron, 1993). In addition, high levels of emotional distress during both laboratory tasks and daily life have been found to negatively affect airway function and can produce asthma-like symptoms in people with asthma (von Leupoldt, Ehnes, & Dahme, 2006). In one investigation, \(20\) adults with asthma wore preprogrammed wristwatches that signaled them to breathe into a portable device that measures airway function. Results showed that higher levels of negative emotions and stress were associated with increased airway obstruction and self-reported asthma symptoms (Smyth, Soefer, Hurewitz, Kliment, & Stone, 1999). In addition, D’Amato, Liccardi, Cecchi, Pellegrino, & D’Amato (2010) described a case study of an 18-year-old man with asthma whose girlfriend had broken up with him, leaving him in a depressed state. She had also unfriended him on Facebook , while friending other young males. Eventually, the young man was able to “friend” her once again and could monitor her activity through Facebook. 
Subsequently, he would experience asthma symptoms whenever he logged on and accessed her profile. When he later resolved not to use Facebook any longer, the asthma attacks stopped. This case suggests that the use of Facebook and other forms of social media may represent a new source of stress—it may be a triggering factor for asthma attacks, especially in depressed asthmatic individuals. Exposure to stressful experiences, particularly those that involve parental or interpersonal conflicts, has been linked to the development of asthma throughout the lifespan. A longitudinal study of \(145\) children found that parenting difficulties during the first year of life increased the chances that the child developed asthma by \(107\%\) (Klinnert et al., 2001). In addition, a cross-sectional study of over \(10,000\) Finnish college students found that high rates of parent or personal conflicts (e.g., parental divorce, separation from spouse, or severe conflicts in other long-term relationships) increased the risk of asthma onset (Kilpeläinen, Koskenvuo, Helenius, & Terho, 2002). Further, a study of over \(4,000\) middle-aged men who were interviewed in the early 1990s and again a decade later found that breaking off an important life partnership (e.g., divorce or breaking off a relationship with parents) increased the risk of developing asthma by \(124\%\) over the time of the study (Loerbroks, Apfelbacher, Thayer, Debling, & Stürmer, 2009).

Tension Headaches
A headache is a continuous pain anywhere in the head and neck region. Migraine headaches are a type of headache thought to be caused by blood vessel swelling and increased blood flow (McIntosh, 2013). Migraines are characterized by severe pain on one or both sides of the head, an upset stomach, and disturbed vision. They are more frequently experienced by women than by men (American Academy of Neurology, 2014). Tension headaches are triggered by tightening/tensing of facial and neck muscles; they are the most commonly experienced kind of headache, accounting for about \(42\%\) of all headaches worldwide (Stovner et al., 2007). In the United States, well over one-third of the population experiences tension headaches each year, and \(2-3\%\) of the population suffers from chronic tension headaches (Schwartz, Stewart, Simon, & Lipton, 1998). A number of factors can contribute to tension headaches, including sleep deprivation, skipping meals, eye strain, overexertion, muscular tension caused by poor posture, and stress (MedicineNet, 2013). Although there is uncertainty regarding the exact mechanisms through which stress can produce tension headaches, stress has been demonstrated to increase sensitivity to pain (Caceres & Burns, 1997; Logan et al., 2001). In general, tension headache sufferers, compared to non-sufferers, have a lower threshold for and greater sensitivity to pain (Ukestad & Wittrock, 1996), and they report greater levels of subjective stress when faced with a stressor (Myers, Wittrock, & Foreman, 1998). Thus, stress may contribute to tension headaches by increasing pain sensitivity in already-sensitive pain pathways in tension headache sufferers (Cathcart, Petkov, & Pritchard, 2008).
textbooks/socialsci/Psychology/Introductory_Psychology/Introductory_Psychology_2e_(OpenStax)/14%3A_Stress_Lifestyle_and_Health/14.04%3A_Stress_and_Illness.txt
Learning Objectives
• Define coping and differentiate between problem-focused and emotion-focused coping
• Describe the importance of perceived control in our reactions to stress
• Explain how social support is vital in health and longevity

As we learned in the previous section, stress—especially if it is chronic—takes a toll on our bodies and can have enormously negative health implications. When we experience events in our lives that we appraise as stressful, it is essential that we use effective coping strategies to manage our stress. Coping refers to mental and behavioral efforts that we use to deal with problems relating to stress, including its presumed cause and the unpleasant feelings and emotions it produces.

Coping Styles
Lazarus and Folkman (1984) distinguished two fundamental kinds of coping: problem-focused coping and emotion-focused coping. In problem-focused coping, one attempts to manage or alter the problem that is causing one to experience stress (i.e., the stressor). Problem-focused coping strategies are similar to strategies used in everyday problem-solving: they typically involve identifying the problem, considering possible solutions, weighing the costs and benefits of these solutions, and then selecting an alternative (Lazarus & Folkman, 1984). As an example, suppose Bradford receives a midterm notice that he is failing statistics class. If Bradford adopts a problem-focused coping approach to managing his stress, he would be proactive in trying to alleviate the source of the stress. He might contact his professor to discuss what must be done to raise his grade; he might also decide to set aside two hours daily to study statistics assignments, and he may seek tutoring assistance. A problem-focused approach to managing stress means we actively try to do things to address the problem.

Emotion-focused coping, in contrast, consists of efforts to change or reduce the negative emotions associated with stress. These efforts may include avoiding, minimizing, or distancing oneself from the problem, or positive comparisons with others (“I’m not as bad off as she is”), or seeking something positive in a negative event (“Now that I’ve been fired, I can sleep in for a few days”). In some cases, emotion-focused coping strategies involve reappraisal, whereby the stressor is construed differently (and somewhat self-deceptively) without changing its objective level of threat (Lazarus & Folkman, 1984). For example, a person sentenced to federal prison who thinks, “This will give me a great chance to network with others,” is using reappraisal. If Bradford adopted an emotion-focused approach to managing his midterm deficiency stress, he might watch a comedy movie, play video games, or spend hours on Twitter to take his mind off the situation. In a certain sense, emotion-focused coping can be thought of as treating the symptoms rather than the actual cause. While many stressors elicit both kinds of coping strategies, problem-focused coping is more likely to occur when encountering stressors we perceive as controllable, while emotion-focused coping is more likely to predominate when faced with stressors that we believe we are powerless to change (Folkman & Lazarus, 1980). Clearly, emotion-focused coping is more effective in dealing with uncontrollable stressors.
For example, if at midnight you are stressing over a \(40\)-page paper due in the morning that you have not yet started, you are probably better off recognizing the hopelessness of the situation and doing something to take your mind off it; taking a problem-focused approach by trying to accomplish this task would only lead to frustration, anxiety, and even more stress. Fortunately, most stressors we encounter can be modified and are, to varying degrees, controllable. A person who cannot stand her job can quit and look for work elsewhere; a middle-aged divorcee can find another potential partner; the freshman who fails an exam can study harder next time, and a breast lump does not necessarily mean that one is fated to die of breast cancer. Control and Stress The desire and ability to predict events, make decisions, and affect outcomes—that is, to enact control in our lives—is a basic tenet of human behavior (Everly & Lating, 2002). Albert Bandura (1997) stated that “the intensity and chronicity of human stress is governed largely by perceived control over the demands of one’s life” (p. 262). As cogently described in his statement, our reaction to potential stressors depends to a large extent on how much control we feel we have over such things. Perceived control is our beliefs about our personal capacity to exert influence over and shape outcomes, and it has major implications for our health and happiness (Infurna & Gerstorf, 2014). Extensive research has demonstrated that perceptions of personal control are associated with a variety of favorable outcomes, such as better physical and mental health and greater psychological well-being (Diehl & Hay, 2010). Greater personal control is also associated with lower reactivity to stressors in daily life. For example, researchers in one investigation found that higher levels of perceived control at one point in time were later associated with lower emotional and physical reactivity to interpersonal stressors (Neupert, Almeida, & Charles, 2007). Further, a daily diary study with \(34\) older widows found that their stress and anxiety levels were significantly reduced on days during which the widows felt greater perceived control (Ong, Bergeman, & Bisconti, 2005). DIG DEEPER: Learned Helplessness When we lack a sense of control over the events in our lives, particularly when those events are threatening, harmful, or noxious, the psychological consequences can be profound. In one of the better illustrations of this concept, psychologist Martin Seligman conducted a series of classic experiments in the 1960s (Seligman & Maier, 1967) in which dogs were placed in a chamber where they received electric shocks from which they could not escape. Later, when these dogs were given the opportunity to escape the shocks by jumping across a partition, most failed to even try; they seemed to just give up and passively accept any shocks the experimenters chose to administer. In comparison, dogs who were previously allowed to escape the shocks tended to jump the partition and escape the pain (See figure 14.22 below). Figure 14.22 Seligman’s learned helplessness experiments with dogs used an apparatus that measured when the animals would move from a floor delivering shocks to one without. Seligman believed that the dogs who failed to try to escape the later shocks were demonstrating learned helplessness: They had acquired a belief that they were powerless to do anything about the noxious stimulation they were receiving. 
Seligman also believed that the passivity and lack of initiative these dogs demonstrated was similar to that observed in human depression. Therefore, Seligman speculated that acquiring a sense of learned helplessness might be an important cause of depression in humans: Humans who experience negative life events that they believe they are unable to control may become helpless. As a result, they give up trying to control or change the situation and some may become depressed and show lack of initiative in future situations in which they can control the outcomes (Seligman, Maier, & Geer, 1968). Seligman and colleagues later reformulated the original learned helplessness model of depression (Abramson, Seligman, & Teasdale, 1978). In their reformulation, they emphasized that attributions (i.e., mental explanations for why something occurred) that lead to the perception that one lacks control over negative outcomes are important in fostering a sense of learned helplessness. For example, suppose a coworker shows up late to work; your belief as to what caused the coworker’s tardiness would be an attribution (e.g., too much traffic, slept too late, or just doesn’t care about being on time). The reformulated version of Seligman’s theory holds that the attributions made for negative life events contribute to depression. Consider the example of a student who performs poorly on a midterm exam. This model suggests that the student will make attributions along three dimensions for this outcome: internal vs. external (believing the outcome was caused by his own personal inadequacies or by environmental factors), stable vs. unstable (believing the cause is permanent or can be changed), and global vs. specific (believing the outcome is a sign of inadequacy in most everything versus just this area). Assume that the student makes an internal (“I’m just not smart”), stable (“Nothing can be done to change the fact that I’m not smart”), and global (“This is another example of how lousy I am at everything”) attribution for the poor performance. The reformulated theory predicts that the student would perceive a lack of control over this stressful event and thus be especially prone to developing depression. Indeed, research has demonstrated that people who have a tendency to make internal, global, and stable attributions for bad outcomes tend to develop symptoms of depression when faced with negative life experiences (Peterson & Seligman, 1984). Seligman’s learned helplessness model has emerged over the years as a leading theoretical explanation for the onset of major depressive disorder. When you study psychological disorders, you will learn more about the latest reformulation of this model—now called hopelessness theory. People who report higher levels of perceived control view their health as controllable, thereby making it more likely that they will better manage their health and engage in behaviors conducive to good health (Bandura, 2004). Not surprisingly, greater perceived control has been linked to lower risk of physical health problems, including declines in physical functioning (Infurna, Gerstorf, Ram, Schupp, & Wagner, 2011), heart attacks (Rosengren et al., 2004), and both cardiovascular disease incidence (Stürmer, Hasselbach, & Amelang, 2006) and mortality from cardiac disease (Surtees et al., 2010). 
In addition, longitudinal studies of British civil servants have found that those in low-status jobs (e.g., clerical and office support staff) in which the degree of control over the job is minimal are considerably more likely to develop heart disease than those with high-status jobs or considerable control over their jobs (Marmot, Bosma, Hemingway, & Stansfeld, 1997). The link between perceived control and health may provide an explanation for the frequently observed relationship between social class and health outcomes (Kraus, Piff, Mendoza-Denton, Rheinschmidt, & Keltner, 2012). In general, research has found that more affluent individuals experience better health mainly because they tend to believe that they can personally control and manage their reactions to life’s stressors (Johnson & Krueger, 2006). Perhaps buoyed by the perceived level of control, individuals of higher social class may be prone to overestimating the degree of influence they have over particular outcomes. For example, those of higher social class tend to believe that their votes have greater sway on election outcomes than do those of lower social class, which may explain higher rates of voting in more affluent communities (Krosnick, 1990). Other research has found that a sense of perceived control can protect less affluent individuals from poorer health, depression, and reduced life-satisfaction—all of which tend to accompany lower social standing (Lachman & Weaver, 1998). Taken together, findings from these and many other studies clearly suggest that perceptions of control and coping abilities are important in managing and coping with the stressors we encounter throughout life. Social Support The need to form and maintain strong, stable relationships with others is a powerful, pervasive, and fundamental human motive (Baumeister & Leary, 1995). Building strong interpersonal relationships with others helps us establish a network of close, caring individuals who can provide social support in times of distress, sorrow, and fear. Social support can be thought of as the soothing impact of friends, family, and acquaintances (Baron & Kerr, 2003). Social support can take many forms, including advice, guidance, encouragement, acceptance, emotional comfort, and tangible assistance (such as financial help). Thus, other people can be very comforting to us when we are faced with a wide range of life stressors, and they can be extremely helpful in our efforts to manage these challenges. Even in nonhuman animals, species mates can offer social support during times of stress. For example, elephants seem to be able to sense when other elephants are stressed and will often comfort them with physical contact—such as a trunk touch—or an empathetic vocal response (Krumboltz, 2014). Scientific interest in the importance of social support first emerged in the 1970s when health researchers developed an interest in the health consequences of being socially integrated (Stroebe & Stroebe, 1996). Interest was further fueled by longitudinal studies showing that social connectedness reduced mortality. In one classic study, nearly \(7,000\) Alameda County, California, residents were followed over \(9\) years. Those who had previously indicated that they lacked social and community ties were more likely to die during the follow-up period than those with more extensive social networks. Compared to those with the most social contacts, isolated men and women were, respectively, \(2.3\) and \(2.8\) times more likely to die. 
These trends persisted even after controlling for a variety of health-related variables, such as smoking, alcohol consumption, self-reported health at the beginning of the study, and physical activity (Berkman & Syme, 1979). Since the time of that study, social support has emerged as one of the best-documented psychosocial factors affecting health outcomes (Uchino, 2009). A statistical review of \(148\) studies conducted between 1982 and 2007 involving over \(300,000\) participants concluded that individuals with stronger social relationships have a \(50\%\) greater likelihood of survival compared to those with weak or insufficient social relationships (Holt-Lunstad, Smith, & Layton, 2010). According to the researchers, the magnitude of the effect of social support observed in this study was comparable to that of quitting smoking and exceeded that of many well-known risk factors for mortality, such as obesity and physical inactivity (See figure 14.23). A number of large-scale studies have found that individuals with low levels of social support are at greater risk of mortality, especially from cardiovascular disorders (Brummett et al., 2001). Further, higher levels of social support have been linked to better survival rates following breast cancer (Falagas et al., 2007) and infectious diseases, especially HIV infection (Lee & Rotheram-Borus, 2001). In fact, a person with high levels of social support is less likely to contract a common cold. In one study, \(334\) participants completed questionnaires assessing their sociability; these individuals were subsequently exposed to a virus that causes a common cold and monitored for several weeks to see who became ill. Results showed that increased sociability was linearly associated with a decreased probability of developing a cold (Cohen, Doyle, Turner, Alper, & Skoner, 2003). For many of us, friends are a vital source of social support. But what if you found yourself in a situation in which you lacked friends or companions? For example, suppose a popular high school student attends a far-away college, does not know anyone, and has trouble making friends and meaningful connections with others during the first semester. What can be done? If real-life social support is lacking, access to distant friends via social media may help compensate. In a study of college freshmen, those with few face-to-face friends on campus but who communicated electronically with distant friends were less distressed than those who did not (Raney & Troop-Gordon, 2012). Also, for some of us, our families—especially our parents—are a major source of social support. Social support appears to work by boosting the immune system, especially among people who are experiencing stress (Uchino, Vaughn, Carlisle, & Birmingham, 2012). In a pioneering study, spouses of cancer patients who reported high levels of social support showed indications of better immune functioning on two out of three immune functioning measures, compared to spouses who were below the median on reported social support (Baron, Cutrona, Hicklin, Russell, & Lubaroff, 1990). Studies of other populations have produced similar results, including those of spousal caregivers of dementia sufferers, medical students, elderly adults, and cancer patients (Cohen & Herbert, 1996; Kiecolt-Glaser, McGuire, Robles, & Glaser, 2002). In addition, social support has been shown to reduce blood pressure for people performing stressful tasks, such as giving a speech or performing mental arithmetic (Lepore, 1998). 
In these kinds of studies, participants are usually asked to perform a stressful task either alone, with a stranger present (who may be either supportive or unsupportive), or with a friend present. Those tested with a friend present generally exhibit lower blood pressure than those tested alone or with a stranger (Fontana, Diegnan, Villeneuve, & Lepore, 1999). In one study, \(112\) female participants who performed stressful mental arithmetic exhibited lower blood pressure when they received support from a friend rather than a stranger, but only if the friend was a male (Phillips, Gallagher, & Carroll, 2009). Although these findings are somewhat difficult to interpret, the authors mention that it is possible that females feel less supported and more evaluated by other females, particularly females whose opinions they value. Taken together, the findings above suggest that one reason social support is connected to favorable health outcomes is that it has several beneficial physiological effects in stressful situations. However, it is also important to consider the possibility that social support may lead to better health behaviors, such as a healthy diet, exercising, smoking cessation, and cooperation with medical regimens (Uchino, 2009). DIG DEEPER: Stress and Discrimination Being the recipient of prejudice and discrimination is associated with a number of negative outcomes. Many studies have shown that discrimination is a significant stressor for marginalized groups (Pascoe & Smart Richman, 2009). Discrimination negatively impacts both physical and mental health for individuals in stigmatized groups. As you’ll learn when you study social psychology, various social identities (such as gender, age, religion, sexuality, ethnicity) often lead people to simultaneously be exposed to multiple forms of discrimination, which can have even stronger negative effects on mental and physical health (Vines, Ward, Cordoba, & Black, 2017). For example, the amplified levels of discrimination faced by Latinx transgender women may have compounding effects, leading to high stress levels and poor mental and physical health outcomes. Perceived control and the general adaptation syndrome help explain the process by which discrimination affects mental and physical health. Discrimination can be conceptualized as an uncontrollable, persistent, and unpredictable stressor. When a discriminatory event occurs, the target of the event initially experiences an acute stress response (alarm stage). This acute reaction alone does not typically have a great impact on health. However, discrimination tends to be a chronic stressor. As people in marginalized groups experience repeated discrimination, they develop a heightened reactivity as their bodies prepare to act quickly (resistance stage). This long-term accumulation of stress responses can eventually lead to increases in negative emotion and wear on physical health (exhaustion stage). This explains why a history of perceived discrimination is associated with a host of mental and physical health problems, including depression, cardiovascular disease, and cancer (Pascoe & Smart Richman, 2009). Protecting stigmatized groups from the negative impact of discrimination-induced stress may involve reducing the incidence of discriminatory behaviors in conjunction with protective strategies that reduce the impact of discriminatory events when they occur. 
Civil rights legislation has protected some stigmatized groups by making discrimination a prosecutable offense in many social contexts. However, some groups (e.g., transgender people) often lack important legal recourse when discrimination occurs. Moreover, most modern discrimination comes in subtle forms that fall below the radar of the law. For example, discrimination may be experienced as selective inhospitality toward people of specific races or ethnicities, but little is done in response since it would be easy to attribute the behavior to other causes. Although some cultural changes are increasingly helping people to recognize and control subtle discrimination, such shifts may take a long time. Similar to other stressors, buffers like social support and healthy coping strategies appear to be effective in lowering the impact of perceived discrimination. For example, one study (Ajrouch, Reisine, Lim, Sohn, & Ismail, 2010) showed that discrimination predicted high psychological distress among African American mothers living in Detroit. However, the women who had readily available emotional support from friends and family experienced less distress than those with fewer social resources. While coping strategies and social support may buffer the effects of discrimination, they fail to erase all of the negative impacts. Vigilant antidiscrimination efforts, including the development of legal protections for vulnerable groups, are needed to reduce discrimination, stress, and the resulting physical and mental health effects. Stress Reduction Techniques Beyond having a sense of control and establishing social support networks, there are numerous other means by which we can manage stress (See figure 14.24). A common technique people use to combat stress is exercise (Salmon, 2001). It is well established that exercise, both of long (aerobic) and short (anaerobic) duration, is beneficial for both physical and mental health (Everly & Lating, 2002). There is considerable evidence that physically fit individuals are more resistant to the adverse effects of stress and recover more quickly from stress than less physically fit individuals (Cotton, 1990). In a study of more than \(500\) Swiss police officers and emergency service personnel, increased physical fitness was associated with reduced stress, and regular exercise was reported to protect against stress-related health problems (Gerber, Kellman, Hartman, & Pühse, 2010). One reason exercise may be beneficial is that it might buffer some of the deleterious physiological mechanisms of stress. One study found that rats that exercised for six weeks showed a decrease in hypothalamic-pituitary-adrenal responsiveness to mild stressors (Campeau et al., 2010). In high-stress humans, exercise has been shown to prevent telomere shortening, which may explain the common observation of a youthful appearance among those who exercise regularly (Puterman et al., 2010). Further, exercise in later adulthood appears to minimize the detrimental effects of stress on the hippocampus and memory (Head, Singh, & Bugg, 2012). Among cancer survivors, exercise has been shown to reduce anxiety (Speck, Courneya, Masse, Duval, & Schmitz, 2010) and depressive symptoms (Craft, VanIterson, Helenowski, Rademaker, & Courneya, 2012). Clearly, exercise is a highly effective tool for regulating stress. In the 1970s, Herbert Benson, a cardiologist, developed a stress reduction method called the relaxation response technique (Greenberg, 2006). 
The relaxation response technique combines relaxation with transcendental meditation, and consists of four components (Stein, 2001): 1. sitting upright on a comfortable chair with feet on the ground and body in a relaxed position, 2. a quiet environment with eyes closed, 3. repeating a word or a phrase—a mantra—to oneself, such as “alert mind, calm body,” 4. passively allowing the mind to focus on pleasant thoughts, such as nature or the warmth of your blood nourishing your body. The relaxation response approach is conceptualized as a general approach to stress reduction that reduces sympathetic arousal, and it has been used effectively to treat people with high blood pressure (Benson & Proctor, 1994). Another technique to combat stress, biofeedback, was developed by Gary Schwartz at Harvard University in the early 1970s. Biofeedback is a technique that uses electronic equipment to accurately measure a person’s neuromuscular and autonomic activity—feedback is provided in the form of visual or auditory signals. The main assumption of this approach is that providing somebody with biofeedback will enable the individual to develop strategies that help them gain some level of voluntary control over what are normally involuntary bodily processes (Schwartz & Schwartz, 1995). A number of different bodily measures have been used in biofeedback research, including facial muscle movement, brain activity, and skin temperature, and it has been applied successfully with individuals experiencing tension headaches, high blood pressure, asthma, and phobias (Stein, 2001).
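To make the closed-loop idea behind biofeedback concrete, here is a minimal sketch in Python of the cycle described above: an instrument samples a physiological signal, converts the reading into a simple cue, and the person uses that cue to nudge the signal toward a target. This is purely an illustration, not any actual device or protocol; the heart-rate values, the TARGET_HR threshold, and the RELAX_EFFECT step size are invented numbers chosen only to show the loop.

    import random

    # Toy biofeedback loop. All numbers are invented for illustration only;
    # they do not come from the biofeedback literature or any real device.
    TARGET_HR = 70.0      # hypothetical "relaxed" heart rate (beats per minute)
    RELAX_EFFECT = 0.6    # assumed per-cycle reduction when the user acts on the cue
    NOISE = 1.5           # simulated beat-to-beat variability


    def read_sensor(true_hr: float) -> float:
        """Simulate the electronic measurement step."""
        return true_hr + random.uniform(-NOISE, NOISE)


    def feedback_cue(measured_hr: float) -> str:
        """Convert the measurement into a simple visual/auditory-style cue."""
        if measured_hr > TARGET_HR + 5:
            return "fast tone (well above target)"
        if measured_hr > TARGET_HR:
            return "slow tone (slightly above target)"
        return "steady tone (at or below target)"


    def biofeedback_session(start_hr: float = 88.0, cycles: int = 12) -> None:
        """Run a simulated session: measure, present the cue, let the user adjust."""
        hr = start_hr
        for cycle in range(1, cycles + 1):
            measured = read_sensor(hr)
            print(f"cycle {cycle:2d}: measured {measured:5.1f} bpm -> {feedback_cue(measured)}")
            # The user hears the cue and applies a relaxation strategy,
            # modeled here as a small step toward the target.
            if measured > TARGET_HR:
                hr -= RELAX_EFFECT
        print(f"final simulated heart rate: {hr:.1f} bpm")


    if __name__ == "__main__":
        random.seed(0)  # reproducible illustration
        biofeedback_session()

The point of the sketch is simply that the equipment closes a loop the person cannot close unaided: without the cue, the simulated user would have no way of knowing whether a given relaxation strategy was actually moving the signal in the right direction, which is the main assumption behind biofeedback described above.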
textbooks/socialsci/Psychology/Introductory_Psychology/Introductory_Psychology_2e_(OpenStax)/14%3A_Stress_Lifestyle_and_Health/14.05%3A_Regulation_of_Stress.txt
Learning Objectives • Define and discuss happiness, including its determinants • Describe the field of positive psychology and identify the kinds of problems it addresses • Explain the meaning of positive affect and discuss its importance in health outcomes • Describe the concept of flow and its relationship to happiness and fulfillment Although the study of stress and how it affects us physically and psychologically is fascinating, it is—admittedly—somewhat of a grim topic. Psychology is also interested in the study of a more upbeat and encouraging approach to human affairs—the quest for happiness. Happiness America’s founders declared that its citizens have an unalienable right to pursue happiness. But what is happiness? When asked to define the term, people emphasize different aspects of this elusive state. Indeed, happiness is somewhat ambiguous and can be defined from different perspectives (Martin, 2012). Some people, especially those who are highly committed to their religious faith, view happiness in ways that emphasize virtuosity, reverence, and enlightened spirituality. Others see happiness as primarily contentment—the inner peace and joy that come from deep satisfaction with one’s surroundings, relationships with others, accomplishments, and oneself. Still others view happiness mainly as pleasurable engagement with their personal environment—having a career and hobbies that are engaging, meaningful, rewarding, and exciting. These differences, of course, are merely differences in emphasis. Most people would probably agree that each of these views, in some respects, captures the essence of happiness. Elements of Happiness Some psychologists have suggested that happiness consists of three distinct elements: the pleasant life, the good life, and the meaningful life, as shown in figure 14.25 (Seligman, 2002; Seligman, Steen, Park, & Peterson, 2005). The pleasant life is realized through the attainment of day-to-day pleasures that add fun, joy, and excitement to our lives. For example, evening walks along the beach and a fulfilling sex life can enhance our daily pleasure and contribute to the pleasant life. The good life is achieved through identifying our unique skills and abilities and engaging these talents to enrich our lives; those who achieve the good life often find themselves absorbed in their work or their recreational pursuits. The meaningful life involves a deep sense of fulfillment that comes from using our talents in the service of the greater good: in ways that benefit the lives of others or that make the world a better place. In general, the happiest people tend to be those who pursue the full life—they orient their pursuits toward all three elements (Seligman et al., 2005). For practical purposes, a precise definition of happiness might incorporate each of these elements: an enduring state of mind consisting of joy, contentment, and other positive emotions, plus the sense that one’s life has meaning and value (Lyubomirsky, 2001). The definition implies that happiness is a long-term state—what is often characterized as subjective well-being—rather than merely a transient positive mood we all experience from time to time. It is this enduring happiness that has captured the interests of psychologists and other social scientists. The study of happiness has grown dramatically in the last three decades (Diener, 2013). One of the most basic questions that happiness investigators routinely examine is this: How happy are people in general? 
The average person in the world tends to be relatively happy and tends to indicate experiencing more positive feelings than negative feelings (Diener, Ng, Harter, & Arora, 2010). When asked to evaluate their current lives on a scale ranging from \(0\) to \(10\) (with \(0\) representing “worst possible life” and \(10\) representing “best possible life”), people in more than \(150\) countries surveyed from 2010–2012 reported an average score of \(5.2\). People who live in North America, Australia, and New Zealand reported the highest average score at \(7.1\), whereas those living in Sub-Saharan Africa reported the lowest average score at \(4.6\) (Helliwell, Layard, & Sachs, 2013). Worldwide, the five happiest countries are Denmark, Norway, Switzerland, the Netherlands, and Sweden; the United States is ranked 17th happiest (See figure 14.26) (Helliwell et al., 2013). Several years ago, a Gallup survey of more than \(1,000\) U.S. adults found that \(52\%\) reported that they were “very happy.” In addition, more than \(8\) in \(10\) indicated that they were “very satisfied” with their lives (Carroll, 2007). However, a recent poll of \(2,345\) U.S. adults surprisingly revealed that only one-third reported they are “very happy.” The poll also revealed that the happiness levels of certain groups, including minorities, recent college graduates, and the disabled, have trended downward in recent years (Gregoire, 2013). Although it is difficult to explain this apparent decline in happiness, it may be connected to the challenging economic conditions the United States has endured over the last several years. Of course, this presumption would imply that happiness is closely tied to one’s finances. But is it? This question brings us to the next important issue: What factors influence happiness? Factors Connected to Happiness What really makes people happy? What factors contribute to sustained joy and contentment? Is it money, attractiveness, material possessions, a rewarding occupation, a satisfying relationship? Extensive research over the years has examined this question. One finding is that age is related to happiness: life satisfaction usually increases the older people get. Gender, in contrast, does not appear to be related to happiness (Diener, Suh, Lucas, & Smith, 1999). Although it is important to point out that much of this work has been correlational, many of the key findings (some of which may surprise you) are summarized below. Family and other social relationships appear to be key factors correlated with happiness. Studies show that married people report being happier than those who are single, divorced, or widowed (Diener et al., 1999). Happy individuals also report that their marriages are fulfilling (Lyubomirsky, King, & Diener, 2005). In fact, some have suggested that satisfaction with marriage and family life is the strongest predictor of happiness (Myers, 2000). Happy people tend to have more friends, more high-quality social relationships, and stronger social support networks than less happy people (Lyubomirsky et al., 2005). Happy people also have a high frequency of contact with friends (Pinquart & Sörensen, 2000). Can money buy happiness? In general, extensive research suggests that the answer is yes, but with several caveats. While a nation’s per capita gross domestic product (GDP) is associated with happiness levels (Helliwell et al., 2013), changes in GDP (which is a less certain index of household income) bear little relationship to changes in happiness (Diener, Tay, & Oishi, 2013). 
On the whole, residents of affluent countries tend to be happier than residents of poor countries; within countries, wealthy individuals are happier than poor individuals, but the association is much weaker (Diener & Biswas-Diener, 2002). To the extent that it leads to increases in purchasing power, increases in income are associated with increases in happiness (Diener, Oishi, & Ryan, 2013). However, income within societies appears to correlate with happiness only up to a point. In a study of over \(450,000\) U.S. residents surveyed by the Gallup Organization, Kahneman and Deaton (2010) found that well-being rises with annual income, but only up to \(\$75,000\). For people with incomes greater than \(\$75,000\), additional income was associated with no further increase in reported well-being. As implausible as these findings might seem—after all, higher incomes would enable people to indulge in Hawaiian vacations, prime seats at sporting events, expensive automobiles, and expansive new homes—higher incomes may impair people’s ability to savor and enjoy the small pleasures of life (Kahneman, 2011). Indeed, researchers in one study found that participants exposed to a subliminal reminder of wealth spent less time savoring a chocolate candy bar and exhibited less enjoyment of this experience than did participants who were not reminded of wealth (Quoidbach, Dunn, Petrides, & Mikolajczak, 2010). What about education and employment? Happy people, compared to those who are less happy, are more likely to graduate from college and secure more meaningful and engaging jobs. Once they obtain a job, they are also more likely to succeed (Lyubomirsky et al., 2005). While education shows a positive (but weak) correlation with happiness, intelligence is not appreciably related to happiness (Diener et al., 1999). Does religiosity correlate with happiness? In general, the answer is yes (Hackney & Sanders, 2003). However, the relationship between religiosity and happiness depends on societal circumstances. Nations and states with more difficult living conditions (e.g., widespread hunger and low life expectancy) tend to be more highly religious than societies with more favorable living conditions. Among those who live in nations with difficult living conditions, religiosity is associated with greater well-being; in nations with more favorable living conditions, religious and nonreligious individuals report similar levels of well-being (Diener, Tay, & Myers, 2011). Clearly, the living conditions of one’s nation can influence factors related to happiness. What about the influence of one’s culture? To the extent that people possess characteristics that are highly valued by their culture, they tend to be happier (Diener, 2012). For example, self-esteem is a stronger predictor of life satisfaction in individualistic cultures than in collectivistic cultures (Diener, Diener, & Diener, 1995), and extraverted people tend to be happier in extraverted cultures than in introverted cultures (Fulmer et al., 2010). So we’ve identified many factors that exhibit some correlation with happiness. What factors don’t show a correlation? Researchers have studied both parenthood and physical attractiveness as potential contributors to happiness, but no link has been identified. Although people tend to believe that parenthood is central to a meaningful and fulfilling life, aggregate findings from a range of countries indicate that people who do not have children are generally happier than those who do (Hansen, 2012). 
And although one’s perceived level of attractiveness seems to predict happiness, a person’s objective physical attractiveness is only weakly correlated with her happiness (Diener, Wolsic, & Fujita, 1995). Life Events and Happiness An important point should be considered regarding happiness. People are often poor at affective forecasting: predicting the intensity and duration of their future emotions (Wilson & Gilbert, 2003). In one study, nearly all newlywed spouses predicted their marital satisfaction would remain stable or improve over the following four years; despite this high level of initial optimism, their marital satisfaction actually declined during this period (Lavner, Karney, & Bradbury, 2013). In addition, we are often incorrect when estimating how our long-term happiness would change for the better or worse in response to certain life events. For example, it is easy for many of us to imagine how euphoric we would feel if we won the lottery, were asked on a date by an attractive celebrity, or were offered our dream job. It is also easy to understand how long-suffering fans of the Chicago Cubs baseball team, which has not won a World Series championship since 1908, think they would feel permanently elated if their team finally won another World Series. Likewise, it is easy to predict that we would feel permanently miserable if we suffered a crippling accident or if a romantic relationship ended. However, something similar to sensory adaptation often occurs when people experience emotional reactions to life events. In much the same way our senses adapt to changes in stimulation (e.g., our eyes adapting to bright light after walking out of the darkness of a movie theater into the bright afternoon sun), we eventually adapt to changing emotional circumstances in our lives (Brickman & Campbell, 1971; Helson, 1964). When an event that provokes positive or negative emotions occurs, at first we tend to experience its emotional impact at full intensity. We feel a burst of pleasure following such things as a marriage proposal, birth of a child, acceptance to law school, an inheritance, and the like; as you might imagine, lottery winners experience a surge of happiness after hitting the jackpot (Lutter, 2007). Likewise, we experience a surge of misery following widowhood, a divorce, or a layoff from work. In the long run, however, we eventually adjust to the emotional new normal; the emotional impact of the event tends to erode, and we eventually revert to our original baseline happiness levels. Thus, what was at first a thrilling lottery windfall or World Series championship eventually loses its luster and becomes the status quo (See figure 14.27). Indeed, dramatic life events have much less long-lasting impact on happiness than might be expected (Brickman, Coats, & Janoff-Bulman, 1978). Recently, some have raised questions concerning the extent to which important life events can permanently alter people’s happiness set points (Diener, Lucas, & Scollon, 2006). Evidence from a number of investigations suggests that, in some circumstances, happiness levels do not revert to their original positions. For example, although people generally tend to adapt to marriage so that it no longer makes them happier or unhappier than before, they often do not fully adapt to unemployment or severe disabilities (Diener, 2012). 
Figure 14.28, which is based on longitudinal data from a sample of over \(3,000\) German respondents, shows life satisfaction scores several years before, during, and after various life events, and it illustrates how people adapt (or fail to adapt) to these events. German respondents did not get lasting emotional boosts from marriage; instead, they reported brief increases in happiness, followed by quick adaptation. In contrast, widows and those who had been laid off experienced sizeable decreases in happiness that appeared to result in long-term changes in life satisfaction (Diener et al., 2006). Further, longitudinal data from the same sample showed that happiness levels changed significantly over time for nearly a quarter of respondents, with 9% showing major changes (Fujita & Diener, 2005). Thus, long-term happiness levels can and do change for some people. Increasing Happiness Some recent findings about happiness provide an optimistic picture, suggesting that real changes in happiness are possible. For example, thoughtfully developed well-being interventions designed to augment people’s baseline levels of happiness may increase happiness in ways that are permanent and long-lasting, not just temporary. These changes in happiness may be targeted at individual, organizational, and societal levels (Diener et al., 2006). Researchers in one study found that a series of happiness interventions involving such exercises as writing down three good things that occurred each day led to increases in happiness that lasted over six months (Seligman et al., 2005). Measuring happiness and well-being at the societal level over time may assist policy makers in determining if people are generally happy or miserable, as well as when and why they might feel the way they do. Studies show that average national happiness scores (over time and across countries) relate strongly to six key variables: per capita gross domestic product (GDP, which reflects a nation’s economic standard of living), social support, freedom to make important life choices, healthy life expectancy, freedom from perceived corruption in government and business, and generosity (Helliwell et al., 2013). Investigating why people are happy or unhappy might help policymakers develop programs that increase happiness and well-being within a society (Diener et al., 2006). Resolutions about contemporary political and social issues that are frequent topics of debate—such as poverty, taxation, affordable health care and housing, clean air and water, and income inequality—might be best considered with people’s happiness in mind. Positive Psychology In 1998, Seligman (the same person who conducted the learned helplessness experiments mentioned earlier), who was then president of the American Psychological Association, urged psychologists to focus more on understanding how to build human strength and psychological well-being. In deliberately setting out to create a new direction and new orientation for psychology, Seligman helped establish a growing movement and field of research called positive psychology (Compton, 2005). In a very general sense, positive psychology can be thought of as the science of happiness; it is an area of study that seeks to identify and promote those qualities that lead to greater fulfillment in our lives. This field looks at people’s strengths and what helps individuals to lead happy, contented lives, and it moves away from focusing on people’s pathology, faults, and problems. 
According to Seligman and Csikszentmihalyi (2000), positive psychology, "at the subjective level is about valued subjective experiences: well-being, contentment, and satisfaction (in the past); hope and optimism (for the future); and… happiness (in the present). At the individual level, it is about positive individual traits: the capacity for love and vocation, courage, interpersonal skill, aesthetic sensibility, perseverance, forgiveness, originality, future mindedness, spirituality, high talent, and wisdom." (p. 5) Some of the topics studied by positive psychologists include altruism and empathy, creativity, forgiveness and compassion, the importance of positive emotions, enhancement of immune system functioning, savoring the fleeting moments of life, and strengthening virtues as a way to increase authentic happiness (Compton, 2005). Recent efforts in the field of positive psychology have focused on extending its principles toward peace and well-being at the level of the global community. In a war-torn world in which conflict, hatred, and distrust are common, such an extended “positive peace psychology” could have important implications for understanding how to overcome oppression and work toward global peace (Cohrs, Christie, White, & Das, 2013). DIG DEEPER: The Center for Investigating Healthy Minds On the campus of the University of Wisconsin–Madison, the Center for Investigating Healthy Minds at the Waisman Center conducts rigorous scientific research on healthy aspects of the mind, such as kindness, forgiveness, compassion, and mindfulness. Established in 2008 and led by renowned neuroscientist Dr. Richard J. Davidson, the Center examines a wide range of ideas, including such things as a kindness curriculum in schools, neural correlates of prosocial behavior, psychological effects of Tai Chi training, digital games to foster prosocial behavior in children, and the effectiveness of yoga and breathing exercises in reducing symptoms of post-traumatic stress disorder. According to its website, the Center was founded after Dr. Davidson was challenged by His Holiness, the 14th Dalai Lama, “to apply the rigors of science to study positive qualities of mind” (Center for Investigating Healthy Minds, 2013). The Center continues to conduct scientific research with the aim of developing mental health training approaches that help people to live happier, healthier lives. Positive Affect and Optimism Taking a cue from positive psychology, extensive research over the last \(10-15\) years has examined the importance of positive psychological attributes in physical well-being. Qualities that help promote psychological well-being (e.g., having meaning and purpose in life, a sense of autonomy, positive emotions, and satisfaction with life) are linked with a range of favorable health outcomes (especially improved cardiovascular health) mainly through their relationships with biological functions and health behaviors (such as diet, physical activity, and sleep quality) (Boehm & Kubzansky, 2012). One quality that has received attention is positive affect, which refers to pleasurable engagement with the environment, such as happiness, joy, enthusiasm, alertness, and excitement (Watson, Clark, & Tellegen, 1988). The characteristics of positive affect, as with negative affect (discussed earlier), can be brief, long-lasting, or trait-like (Pressman & Cohen, 2005). 
Independent of age, gender, and income, positive affect is associated with greater social connectedness, emotional and practical support, adaptive coping efforts, and lower depression; it is also associated with longevity and favorable physiological functioning (Steptoe, O’Donnell, Marmot, & Wardle, 2008). Positive affect also serves as a protective factor against heart disease. In a \(10\)-year study of Nova Scotians, the rate of heart disease was \(22\%\) lower for each one-point increase on the measure of positive affect, from \(1\) (no positive affect expressed) to \(5\) (extreme positive affect) (Davidson, Mostofsky, & Whang, 2010). In terms of our health, the expression “don’t worry, be happy” is helpful advice indeed. There has also been much work suggesting that optimism—the general tendency to look on the bright side of things—is a significant predictor of positive health outcomes. Although positive affect and optimism are related in some ways, they are not the same (Pressman & Cohen, 2005). Whereas positive affect is mostly concerned with positive feeling states, optimism has been regarded as a generalized tendency to expect that good things will happen (Chang, 2001). It has also been conceptualized as a tendency to view life’s stressors and difficulties as temporary and external to oneself (Peterson & Steen, 2002). Numerous studies over the years have consistently shown that optimism is linked to longevity, healthier behaviors, fewer postsurgical complications, better immune functioning among men with prostate cancer, and better treatment adherence (Rasmussen & Wallio, 2008). Further, optimistic people report fewer physical symptoms, less pain, better physical functioning, and are less likely to be rehospitalized following heart surgery (Rasmussen, Scheier, & Greenhouse, 2009). Flow Another factor that seems to be important in fostering a deep sense of well-being is the ability to derive flow from the things we do in life. Flow is described as a particular experience that is so engaging and engrossing that it becomes worth doing for its own sake (Csikszentmihalyi, 1997). It is usually related to creative endeavors and leisure activities, but it can also be experienced by workers who like their jobs or students who love studying (Csikszentmihalyi, 1999). Many of us instantly recognize the notion of flow. In fact, the term derived from respondents’ spontaneous use of the word when asked to describe how it felt when what they were doing was going well. When people experience flow, they become involved in an activity to the point where they feel they lose themselves in the activity. They effortlessly maintain their concentration and focus, they feel as though they have complete control of their actions, and time seems to pass more quickly than usual (Csikszentmihalyi, 1997). Flow is considered a pleasurable experience, and it typically occurs when people are engaged in challenging activities that require skills and knowledge they know they possess. For example, people would be more likely to report flow experiences in relation to their work or hobbies than in relation to eating. When asked the question, “Do you ever get involved in something so deeply that nothing else seems to matter, and you lose track of time?” about \(20\%\) of Americans and Europeans report having these flow-like experiences regularly (Csikszentmihalyi, 1997). Although wealth and material possessions are nice to have, the notion of flow suggests that neither is a prerequisite for a happy and fulfilling life. 
Finding an activity that you are truly enthusiastic about, something so absorbing that doing it is a reward in itself (whether it be playing tennis, studying Arabic, writing children’s novels, or cooking lavish meals) is perhaps the real key. According to Csikszentmihalyi (1999), creating conditions that make flow experiences possible should be a top social and political priority. How might this goal be achieved? How might flow be promoted in school systems? In the workplace? What potential benefits might accrue from such efforts? In an ideal world, scientific research endeavors should inform us about how to bring about a better world for all people. The field of positive psychology promises to be instrumental in helping us understand what truly builds hope, optimism, happiness, healthy relationships, flow, and genuine personal fulfillment.
textbooks/socialsci/Psychology/Introductory_Psychology/Introductory_Psychology_2e_(OpenStax)/14%3A_Stress_Lifestyle_and_Health/14.06%3A_The_Pursuit_of_Happiness.txt
21. Provide an example (other than the one described earlier) of a situation or event that could be appraised as either threatening or challenging. 22. Provide an example of a stressful situation that may cause a person to become seriously ill. How would Selye’s general adaptation syndrome explain this occurrence? 23. Review the items on the Social Readjustment Rating Scale. Select one of the items and discuss how it might bring about distress and eustress. 24. Job burnout tends to be high in people who work in human service jobs. Considering the three dimensions of job burnout, explain how various job aspects unique to being a police officer might lead to job burnout in that line of work. 25. Discuss the concept of Type A behavior pattern, its history, and what we now know concerning its role in heart disease. 26. Consider the study in which volunteers were given nasal drops containing the cold virus to examine the relationship between stress and immune function (Cohen et al., 1998). How might this finding explain how people seem to become sick during stressful times in their lives (e.g., final exam week)? 27. Although problem-focused coping seems to be a more effective strategy when dealing with stressors, do you think there are any kinds of stressful situations in which emotion-focused coping might be a better strategy? 28. Describe how social support can affect health both directly and indirectly. 29. In considering the three dimensions of happiness discussed in this section (the pleasant life, the good life, and the meaningful life), what are some steps you could take to improve your personal level of happiness? 30. The day before the drawing of a \$300 million Powerball lottery, you notice that a line of people waiting to buy their Powerball tickets is stretched outside the door of a nearby convenience store. Based on what you’ve learned, provide some perspective on why these people are doing this, and what would likely happen if one of these individuals happened to pick the right numbers. 
Key Terms alarm reaction first stage of the general adaptation syndrome; characterized as the body’s immediate physiological reaction to a threatening situation or some other emergency; analogous to the fight-or-flight response asthma psychophysiological disorder in which the airways of the respiratory system become obstructed, leading to great difficulty expelling air from the lungs biofeedback stress-reduction technique using electronic equipment to measure a person’s involuntary (neuromuscular and autonomic) activity and provide feedback to help the person gain a level of voluntary control over these processes cardiovascular disorders disorders that involve the heart and blood circulation system coping mental or behavioral efforts used to manage problems relating to stress, including its cause and the unpleasant feelings and emotions it produces cortisol stress hormone released by the adrenal glands when encountering a stressor; helps to provide a boost of energy, thereby preparing the individual to take action daily hassles minor irritations and annoyances that are part of our everyday lives and are capable of producing stress distress bad form of stress; usually high in intensity; often leads to exhaustion, fatigue, feeling burned out; associated with erosions in performance and health eustress good form of stress; low to moderate in intensity; associated with positive feelings, as well as optimal health and performance fight-or-flight response set of physiological reactions (increases in blood pressure, heart rate, respiration rate, and sweat) that occur when an individual encounters a perceived threat; these reactions are produced by activation of the sympathetic nervous system and the endocrine system flow state involving intense engagement in an activity; usually is experienced when participating in creative, work, and leisure endeavors general adaptation syndrome Hans Selye’s three-stage model of the body’s physiological reactions to stress and the process of stress adaptation: alarm reaction, stage of resistance, and stage of exhaustion happiness enduring state of mind consisting of joy, contentment, and other positive emotions; the sense that one’s life has meaning and value health psychology subfield of psychology devoted to studying psychological influences on health, illness, and how people respond when they become ill heart disease several types of adverse heart conditions, including those that involve the heart’s arteries or valves or those involving the inability of the heart to pump enough blood to meet the body’s needs; can include heart attack and stroke hypertension high blood pressure hypothalamic-pituitary-adrenal (HPA) axis set of structures found in both the limbic system (hypothalamus) and the endocrine system (pituitary gland and adrenal glands) that regulate many of the body’s physiological reactions to stress through the release of hormones immune system various structures, cells, and mechanisms that protect the body from foreign substances that can damage the body’s tissues and organs immunosuppression decreased effectiveness of the immune system job burnout general sense of emotional exhaustion and cynicism in relation to one’s job; consists of three dimensions: exhaustion, depersonalization, and sense of diminished personal accomplishment job strain work situation involving the combination of excessive job demands and workload with little decision making latitude or job control lymphocytes white blood cells that circulate in the body’s fluids and are 
especially important in the body’s immune response negative affectivity tendency to experience distressed emotional states involving anger, contempt, disgust, guilt, fear, and nervousness optimism tendency toward a positive outlook and positive expectations perceived control people’s beliefs concerning their capacity to influence and shape outcomes in their lives positive affect state or a trait that involves pleasurable engagement with the environment, the dimensions of which include happiness, joy, enthusiasm, alertness, and excitement positive psychology scientific area of study seeking to identify and promote those qualities that lead to happy, fulfilled, and contented lives primary appraisal judgment about the degree of potential harm or threat to well-being that a stressor might entail psychoneuroimmunology field that studies how psychological factors (such as stress) influence the immune system and immune functioning psychophysiological disorders physical disorders or diseases in which symptoms are brought about or worsened by stress and emotional factors relaxation response technique stress reduction technique combining elements of relaxation and meditation secondary appraisal judgment of options available to cope with a stressor and their potential effectiveness Social Readjustment Rating Scale (SRRS) popular scale designed to measure stress; consists of 43 potentially stressful events, each of which has a numerical value quantifying how much readjustment is associated with the event social support soothing and often beneficial support of others; can take different forms, such as advice, guidance, encouragement, acceptance, emotional comfort, and tangible assistance stage of exhaustion third stage of the general adaptation syndrome; the body’s ability to resist stress becomes depleted; illness, disease, and even death may occur stage of resistance second stage of the general adaptation syndrome; the body adapts to a stressor for a period of time stress process whereby an individual perceives and responds to events that one appraises as overwhelming or threatening to one’s well-being stressors environmental events that may be judged as threatening or demanding; stimuli that initiate the stress process Type A psychological and behavior pattern exhibited by individuals who tend to be extremely competitive, impatient, rushed, and hostile toward others Type B psychological and behavior pattern exhibited by a person who is relaxed and laid back
textbooks/socialsci/Psychology/Introductory_Psychology/Introductory_Psychology_2e_(OpenStax)/14%3A_Stress_Lifestyle_and_Health/Critical_Thinking_Questions.txt
31. Think of a time in which you and others you know (family members, friends, and classmates) experienced an event that some viewed as threatening and others viewed as challenging. What were some of the differences in the reactions of those who experienced the event as threatening compared to those who viewed the event as challenging? Why do you think there were differences in how these individuals judged the same event? 32. Suppose you want to design a study to examine the relationship between stress and illness, but you cannot use the Social Readjustment Rating Scale. How would you go about measuring stress? How would you measure illness? What would you need to do in order to tell if there is a cause-effect relationship between stress and illness? 33. If a family member or friend of yours has asthma, talk to that person (if they are willing) about their symptom triggers. Does this person mention stress or emotional states? If so, are there any commonalities in these asthma triggers? 34. Try to think of an example in which you coped with a particular stressor by using problem-focused coping. What was the stressor? What did your problem-focused efforts involve? Were they effective? 35. Think of an activity you participate in that you find engaging and absorbing. For example, this might be something like playing video games, reading, or a hobby. What are your experiences typically like while engaging in this activity? Do your experiences conform to the notion of flow? If so, how? Do you think these experiences have enriched your life? Why or why not? Review Questions 1. Negative effects of stress are most likely to be experienced when an event is perceived as ________. 1. negative, but it is likely to affect one’s friends rather than oneself 2. challenging 3. confusing 4. threatening, and no clear options for dealing with it are apparent 2. Between 2006 and 2009, the greatest increases in stress levels were found to occur among ________. 1. Black people 2. those aged 45–64 3. the unemployed 4. those without college degrees 3. At which stage of Selye’s general adaptation syndrome is a person especially vulnerable to illness? 1. exhaustion 2. alarm reaction 3. fight-or-flight 4. resistance 4. During an encounter judged as stressful, cortisol is released by the ________. 1. sympathetic nervous system 2. hypothalamus 3. pituitary gland 4. adrenal glands 5. According to the Holmes and Rahe scale, which life event requires the greatest amount of readjustment? 1. marriage 2. personal illness 3. divorce 4. death of spouse 6. While waiting to pay for his weekly groceries at the supermarket, Paul had to wait about 20 minutes in a long line at the checkout because only one cashier was on duty. When he was finally ready to pay, his debit card was declined because he did not have enough money left in his checking account. Because he had left his credit cards at home, he had to place the groceries back into the cart and head home to retrieve a credit card. While driving back to his home, traffic was backed up two miles due to an accident. These events that Paul had to endure are best characterized as ________. 1. chronic stressors 2. acute stressors 3. daily hassles 4. readjustment occurrences 7. What is one of the major criticisms of the Social Readjustment Rating Scale? 1. It has too few items. 2. It was developed using only people from the New England region of the United States. 3. It does not take into consideration how a person appraises an event. 4. None of the items included are positive. 8. 
Which of the following is not a dimension of job burnout? 1. depersonalization 2. hostility 3. exhaustion 4. diminished personal accomplishment 9. The white blood cells that attack foreign invaders to the body are called ________. 1. antibodies 2. telomeres 3. lymphocytes 4. immune cells 10. The risk of heart disease is especially high among individuals with ________. 1. depression 2. asthma 3. telomeres 4. lymphocytes 11. The most lethal dimension of Type A behavior pattern seems to be ________. 1. hostility 2. impatience 3. time urgency 4. competitive drive 12. Which of the following statements pertaining to asthma is false? 1. Parental and interpersonal conflicts have been tied to the development of asthma. 2. Asthma sufferers can experience asthma-like symptoms simply by believing that an inert substance they breathe will lead to airway obstruction. 3. Asthma has been shown to be linked to periods of depression. 4. Rates of asthma have decreased considerably since 2000. 13. Emotion-focused coping would likely be a better method than problem-focused coping for dealing with which of the following stressors? 1. terminal cancer 2. poor grades in school 3. unemployment 4. divorce 14. Studies of British civil servants have found that those in the lowest status jobs are much more likely to develop heart disease than those who have high status jobs. These findings attest to the importance of ________ in dealing with stress. 1. biofeedback 2. social support 3. perceived control 4. emotion-focused coping 15. Relative to those with low levels of social support, individuals with high levels of social support ________. 1. are more likely to develop asthma 2. tend to have less perceived control 3. are more likely to develop cardiovascular disorders 4. tend to tolerate stress well 16. The concept of learned helplessness was formulated by Seligman to explain the ________. 1. inability of dogs to attempt to escape avoidable shocks after having received inescapable shocks 2. failure of dogs to learn from prior mistakes 3. ability of dogs to learn to help other dogs escape situations in which they are receiving uncontrollable shocks 4. inability of dogs to learn to help other dogs escape situations in which they are receiving uncontrollable electric shocks 17. Which of the following is not one of the presumed components of happiness? 1. using our talents to help improve the lives of others 2. learning new skills 3. regular pleasurable experiences 4. identifying and using our talents to enrich our lives 18. Researchers have identified a number of factors that are related to happiness. Which of the following is not one of them? 1. age 2. annual income up to \$75,000 3. physical attractiveness 4. marriage 19. How does positive affect differ from optimism? 1. Optimism is more scientific than positive affect. 2. Positive affect is more scientific than optimism. 3. Positive affect involves feeling states, whereas optimism involves expectations. 4. Optimism involves feeling states, whereas positive affect involves expectations. 20. Carson enjoys writing mystery novels, and has even managed to publish some of his work. When he’s writing, Carson becomes extremely focused on his work; in fact, he becomes so absorbed that he often loses track of time, staying up well past 3 a.m. Carson’s experience best illustrates the concept of ________. 1. happiness set point 2. adaptation 3. positive affect 4. flow
textbooks/socialsci/Psychology/Introductory_Psychology/Introductory_Psychology_2e_(OpenStax)/14%3A_Stress_Lifestyle_and_Health/Personal_Application_Questions.txt
14.1 What Is Stress?
Stress is a process whereby an individual perceives and responds to events appraised as overwhelming or threatening to one’s well-being. The scientific study of how stress and emotional factors impact health and well-being is called health psychology, a field devoted to studying the general impact of psychological factors on health. The body’s primary physiological response during stress, the fight-or-flight response, was first identified in the early 20th century by Walter Cannon. The fight-or-flight response involves the coordinated activity of both the sympathetic nervous system and the hypothalamic-pituitary-adrenal (HPA) axis. Hans Selye, a noted endocrinologist, referred to these physiological reactions to stress as part of general adaptation syndrome, which occurs in three stages: alarm reaction (fight-or-flight reactions begin), resistance (the body begins to adapt to continuing stress), and exhaustion (adaptive energy is depleted, and stress begins to take a physical toll).

14.2 Stressors
Stressors can be chronic (long term) or acute (short term), and can include traumatic events, significant life changes, daily hassles, and situations in which people are frequently exposed to challenging and unpleasant events. Many potential stressors include events or situations that require us to make changes in our lives, such as a divorce or moving to a new residence. Thomas Holmes and Richard Rahe developed the Social Readjustment Rating Scale (SRRS) to measure stress by assigning a number of life change units (LCUs) to life events that typically require some adjustment, including positive events. Although the SRRS has been criticized on a number of grounds, extensive research has shown that the accumulation of many LCUs is associated with increased risk of illness. Potential stressors also include daily hassles, which are minor irritations and annoyances that can build up over time. In addition, jobs that are especially demanding, offer little control over one’s working environment, or involve unfavorable working conditions can lead to job strain, thereby setting the stage for job burnout.

14.3 Stress and Illness
Psychophysiological disorders are physical diseases that are either brought about or worsened by stress and other emotional factors. One of the mechanisms through which stress and emotional factors can influence the development of these diseases is by adversely affecting the body’s immune system. A number of studies have demonstrated that stress weakens the functioning of the immune system. Cardiovascular disorders are serious medical conditions that have been consistently shown to be influenced by stress and negative emotions, such as anger, negative affectivity, and depression. Other psychophysiological disorders that are known to be influenced by stress and emotional factors include asthma and tension headaches.

14.4 Regulation of Stress
When faced with stress, people must attempt to manage or cope with it. In general, there are two basic forms of coping: problem-focused coping and emotion-focused coping. Those who use problem-focused coping strategies tend to cope better with stress because these strategies address the source of stress rather than the resulting symptoms. To a large extent, perceived control greatly impacts reaction to stressors and is associated with greater physical and mental well-being. Social support has been demonstrated to be a highly effective buffer against the adverse effects of stress.
Extensive research has shown that social support has beneficial physiological effects for people, and it seems to influence immune functioning. However, the beneficial effects of social support may be related to its influence on promoting healthy behaviors.

14.5 The Pursuit of Happiness
Happiness is conceptualized as an enduring state of mind that consists of the capacity to experience pleasure in daily life, as well as the ability to engage one’s skills and talents to enrich one’s life and the lives of others. Although people around the world generally report that they are happy, there are differences in average happiness levels across nations. Although people have a tendency to overestimate the extent to which their happiness set points would change for the better or for the worse following certain life events, researchers have identified a number of factors that are consistently related to happiness. In recent years, positive psychology has emerged as an area of study seeking to identify and promote qualities that lead to greater happiness and fulfillment in our lives. These qualities include positive affect, optimism, and flow.
textbooks/socialsci/Psychology/Introductory_Psychology/Introductory_Psychology_2e_(OpenStax)/14%3A_Stress_Lifestyle_and_Health/Summary.txt
A psychological disorder is a behavioral or mental pattern that causes significant distress or impairment of personal functioning. Such features may be persistent, relapsing and remitting, or occur as a single episode. Many disorders have been described, with signs and symptoms that vary widely between specific disorders. Such disorders may be diagnosed by a mental health professional, although the causes of mental disorders are often unclear. Mental disorders are usually defined by a combination of how a person behaves, feels, perceives, or thinks. This may be associated with particular regions or functions of the brain, often in a social context. A mental disorder is one aspect of mental health, although cultural and religious beliefs, as well as social norms, should be taken into account when making a diagnosis.

• Introduction
Mental illness is not necessarily a cause of violence; it is far more likely that the mentally ill will be victims rather than perpetrators of violence.

• 15.1: What Are Psychological Disorders?
A psychological disorder is a condition characterized by abnormal thoughts, feelings, and behaviors. Psychopathology is the study of psychological disorders, including their symptoms, etiology (i.e., their causes), and treatment. The term psychopathology can also refer to the manifestation of a psychological disorder. Although consensus can be difficult, it is extremely important for mental health professionals to agree on what kinds of thoughts, feelings, and behaviors are truly abnormal.

• 15.2: Diagnosing and Classifying Psychological Disorders
A first step in the study of psychological disorders is carefully and systematically discerning significant signs and symptoms. How do mental health professionals ascertain whether or not a person’s inner states and behaviors truly represent a psychological disorder? Arriving at a proper diagnosis—that is, appropriately identifying and labeling a set of defined symptoms—is absolutely crucial.

• 15.3: Perspectives on Psychological Disorders
Scientists and mental health professionals may adopt different perspectives in attempting to understand or explain the underlying mechanisms that contribute to the development of a psychological disorder. The perspective used in explaining a psychological disorder is extremely important, in that it will consist of explicit assumptions regarding how best to study the disorder, its etiology, and what kinds of therapies or treatments are most beneficial.

• 15.4: Anxiety Disorders
Anxiety disorders are characterized by excessive and persistent fear and anxiety, and by related disturbances in behavior (APA, 2013). Although anxiety is universally experienced, anxiety disorders cause considerable distress. As a group, anxiety disorders are common: approximately 25%–30% of the U.S. population meets the criteria for at least one anxiety disorder during their lifetime. Also, these disorders appear to be much more common in women than they are in men.

• 15.5: Obsessive-Compulsive and Related Disorders
Obsessive-compulsive and related disorders are a group of overlapping disorders that generally involve intrusive, unpleasant thoughts and repetitive behaviors. Many of us experience unwanted thoughts from time to time and many of us engage in repetitive behaviors on occasion. However, obsessive-compulsive and related disorders elevate the unwanted thoughts and repetitive behaviors to a status so intense that these cognitions and activities disrupt daily life.
• 15.6: Posttraumatic Stress Disorder
Extremely stressful or traumatic events, such as combat, natural disasters, and terrorist attacks, place the people who experience them at an increased risk for developing psychological disorders such as posttraumatic stress disorder (PTSD). Throughout much of the 20th century, this disorder was called shell shock and combat neurosis because its symptoms were observed in soldiers who had engaged in wartime combat.

• 15.7: Mood and Related Disorders
All of us experience fluctuations in our moods and emotional states, and often these fluctuations are caused by events in our lives. We become elated if our favorite team wins the World Series and dejected if a romantic relationship ends or if we lose our job. At times, we feel fantastic or miserable for no clear reason. People with mood disorders also experience mood fluctuations, but their fluctuations are extreme, distort their outlook on life, and impair their ability to function.

• 15.8: Schizophrenia
Schizophrenia is a devastating psychological disorder that is characterized by major disturbances in thought, perception, emotion, and behavior. About \(1\%\) of the population experiences schizophrenia in their lifetime, and usually the disorder is first diagnosed during early adulthood (early to mid-\(20s\)). Most people with schizophrenia experience significant difficulties in many day-to-day activities, such as holding a job, paying bills, caring for oneself, and maintaining relationships with others.

• 15.9: Dissociative Disorders
Dissociative disorders are characterized by an individual becoming split off, or dissociated, from her core sense of self. Memory and identity become disturbed; these disturbances have a psychological rather than physical cause. Dissociative disorders listed in the DSM-5 include dissociative amnesia, depersonalization/derealization disorder, and dissociative identity disorder.

• 15.10: Disorders in Childhood

• 15.11: Personality Disorders
The term personality refers loosely to one’s stable, consistent, and distinctive way of thinking about, feeling, acting, and relating to the world. People with personality disorders exhibit a personality style that differs markedly from the expectations of their culture, is pervasive and inflexible, begins in adolescence or early adulthood, and causes distress or impairment (APA, 2013).

• Critical Thinking Questions
• Key Terms
• Personal Application Questions
• Review Questions
• Summary

Thumbnail: Hoarding is a psychological disorder. (CC BY-SA 3.0; Grap).

15: Psychological Disorders

Chapter Outline
15.1 What Are Psychological Disorders?
15.2 Diagnosing and Classifying Psychological Disorders
15.3 Perspectives on Psychological Disorders
15.4 Anxiety Disorders
15.5 Obsessive-Compulsive and Related Disorders
15.6 Posttraumatic Stress Disorder
15.7 Mood and Related Disorders
15.8 Schizophrenia
15.9 Dissociative Disorders
15.10 Disorders in Childhood
15.11 Personality Disorders

Figure 15.1 A wreath is laid in memoriam to victims of the Washington Navy Yard shooting. (credit: modification of work by D. Myles Cullen, US Department of Defense)

On Monday, September 16, 2013, a gunman killed \(12\) people as the workday began at the Washington Navy Yard in Washington, DC. Aaron Alexis, \(34\), had a troubled history: he thought that he was being controlled by radio waves. He called the police to complain about voices in his head and being under surveillance by “shadowy forces” (Thomas, Levine, Date, & Cloherty, 2013).
While Alexis’s actions cannot be excused, it is clear that he had some form of mental illness. Mental illness is not necessarily a cause of violence; it is far more likely that the mentally ill will be victims rather than perpetrators of violence (Stuart, 2003). If, however, Alexis had received the help he needed, this tragedy might have been averted.
textbooks/socialsci/Psychology/Introductory_Psychology/Introductory_Psychology_2e_(OpenStax)/15%3A__Psychological_Disorders/15.01%3A_Prelude_to_Psychological_Disorders.txt
Learning Objectives
• Understand the problems inherent in defining the concept of psychological disorder
• Describe what is meant by harmful dysfunction
• Identify the formal criteria that thoughts, feelings, and behaviors must meet to be considered abnormal and, thus, symptomatic of a psychological disorder

According to the American Psychiatric Association, a psychological disorder, or mental disorder, is “a syndrome characterized by clinically significant disturbance in an individual's cognition, emotion regulation, or behavior that reflects a dysfunction in the psychological, biological, or developmental processes underlying mental functioning. Mental disorders are usually associated with significant distress in social, occupational, or other important activities” (2013). Psychopathology is the study of psychological disorders, including their symptoms, etiology (i.e., their causes), and treatment. The term psychopathology can also refer to the manifestation of a psychological disorder. Although consensus can be difficult, it is extremely important for mental health professionals to agree on what kinds of thoughts, feelings, and behaviors are truly abnormal in the sense that they genuinely indicate the presence of psychopathology. Certain patterns of behavior and inner experience can easily be labeled as abnormal and clearly signify some kind of psychological disturbance. The person who washes their hands 40 times per day and the person who claims to hear the voices of demons exhibit behaviors and inner experiences that most would regard as abnormal: beliefs and behaviors that suggest the existence of a psychological disorder. But, consider the nervousness a young man feels when talking to an attractive person or the loneliness and longing for home a first-year student experiences during her first semester of college—these feelings may not be regularly present, but they fall in the range of normal. So, what kinds of thoughts, feelings, and behaviors represent a true psychological disorder? Psychologists work to distinguish psychological disorders from inner experiences and behaviors that are merely situational, idiosyncratic, or unconventional.

Mental health issues are often incorrectly viewed as less important than physical illnesses, and sometimes people are blamed or otherwise stigmatized for their condition. People with mental illnesses did not choose or create their illness, and cannot simply manage it through positive thinking or other attitudinal changes. Diagnosis, treatment, and support are all necessary, and all must be considered with respect and sensitivity to the extremely challenging nature of mental illness. While not everyone experiencing difficulty has a psychological disorder, mental health is critical to our ability to function in our relationships, education, and work. It is important that people talk with qualified professionals if they are having persistent feelings or experiences in line with the descriptions below; the discussion may or may not lead to a diagnosis, but as with physical illnesses, people have a better chance at success if they raise the issues with doctors or other experts.

Definition of a Psychological Disorder
Perhaps the simplest approach to conceptualizing psychological disorders is to label behaviors, thoughts, and inner experiences that are atypical, distressful, dysfunctional, and sometimes even dangerous, as signs of a disorder. For example, if you ask a classmate for a date and you are rejected, you probably would feel a little dejected.
Such feelings would be normal. If you felt extremely depressed—so much so that you lost interest in activities, had difficulty eating or sleeping, felt utterly worthless, and contemplated suicide—your feelings would be atypical, would deviate from the norm, and could signify the presence of a psychological disorder. Just because something is atypical, however, does not necessarily mean it is disordered. For example, only about 4% of people in the United States have red hair, so red hair is considered an atypical characteristic (See figure 15.2), but it is not considered disordered; it’s just unusual. And it is less unusual in Scotland, where approximately 13% of the population has red hair (“DNA Project Aims,” 2012). As you will learn, some disorders, although not exactly typical, are far from atypical, and the rates at which they appear in the population are surprisingly high. If we can agree that merely being atypical is an insufficient criterion for having a psychological disorder, is it reasonable to consider behavior or inner experiences that differ from widely expected cultural values or expectations as disordered? Using this criterion, a woman who walks around a subway platform wearing a heavy winter coat in July while screaming obscenities at strangers may be considered as exhibiting symptoms of a psychological disorder. Her actions and clothes violate socially accepted rules governing appropriate dress and behavior; these characteristics are atypical.

Cultural Expectations
Violating cultural expectations is not, in and of itself, a satisfactory means of identifying the presence of a psychological disorder. Since behavior varies from one culture to another, what may be expected and considered appropriate in one culture may not be viewed as such in other cultures. For example, returning a stranger’s smile is expected in the United States because a pervasive social norm dictates that we reciprocate friendly gestures. A person who refuses to acknowledge such gestures might be considered socially awkward—perhaps even disordered—for violating this expectation. However, such expectations are not universally shared. Cultural expectations in Japan involve showing reserve, restraint, and a concern for maintaining privacy around strangers. Japanese people are generally unresponsive to smiles from strangers (Patterson et al., 2007). Eye contact provides another example. In the United States and Europe, eye contact with others typically signifies honesty and attention. However, most Latin American, Asian, and African cultures interpret direct eye contact as rude, confrontational, and aggressive (Pazain, 2010). Thus, someone who makes eye contact with you could be considered appropriate and respectful or brazen and offensive, depending on your culture (See figure 15.3 below). In Western societies, hallucinations (seeing or hearing things that are not physically present) violate cultural expectations, and a person who reports such inner experiences is readily labeled as psychologically disordered. In other cultures, visions that, for example, pertain to future events may be regarded as normal experiences that are positively valued (Bourguignon, 1970).
Finally, it is important to recognize that cultural norms change over time: what might be considered typical in a society at one time may no longer be viewed this way later, similar to how fashion trends from one era may elicit quizzical looks decades later—imagine how a headband, legwarmers, and the big hair of the 1980s would go over on your campus today.

DIG DEEPER: The Myth of Mental Illness
In the 1950s and 1960s, the concept of mental illness was widely criticized. One of the major criticisms focused on the notion that mental illness was a “myth that justifies psychiatric intervention in socially disapproved behavior” (Wakefield, 1992). Thomas Szasz (1960), a noted psychiatrist, was perhaps the biggest proponent of this view. Szasz argued that the notion of mental illness was invented by society (and the mental health establishment) to stigmatize and subjugate people whose behavior violates accepted social and legal norms. Indeed, Szasz suggested that what appear to be symptoms of mental illness are more appropriately characterized as “problems in living” (Szasz, 1960). In his 1961 book, The Myth of Mental Illness: Foundations of a Theory of Personal Conduct, Szasz expressed his disdain for the concept of mental illness and for the field of psychiatry in general (Oliver, 2006). The basis for Szasz’s attack was his contention that detectable abnormalities in bodily structures and functions (e.g., infections and organ damage or dysfunction) represent the defining features of genuine illness or disease, and because symptoms of purported mental illness are not accompanied by such detectable abnormalities, so-called psychological disorders are not disorders at all. Szasz (1961/2010) proclaimed that “disease or illness can only affect the body; hence, there can be no mental illness” (p. 267).

Today, we recognize the extreme level of psychological suffering experienced by people with psychological disorders: the painful thoughts and feelings they experience, the disordered behavior they demonstrate, and the levels of distress and impairment they exhibit. This makes it very difficult to deny the reality of mental illness. However controversial Szasz’s views and those of his supporters might have been, they have influenced the mental health community and society in several ways. First, lay people, politicians, and professionals now often refer to mental illness as mental health “problems,” implicitly acknowledging the “problems in living” perspective Szasz described (Buchanan-Barker & Barker, 2009). Also influential was Szasz’s view of homosexuality. Szasz was perhaps the first psychiatrist to openly challenge the idea that homosexuality represented a form of mental illness or disease (Szasz, 1965). By challenging the idea that homosexuality represented a form of mental illness, Szasz helped pave the way for the social and civil rights that gay and lesbian people now have (Barker, 2010). His work also inspired legal changes that protect the rights of people in psychiatric institutions and allow such individuals a greater degree of influence and responsibility over their lives (Buchanan-Barker & Barker, 2009).

Harmful Dysfunction
If none of the criteria discussed so far is adequate by itself to define the presence of a psychological disorder, how can a disorder be conceptualized? Many efforts have been made to identify the specific dimensions of psychological disorders, yet none is entirely satisfactory.
No universal definition of psychological disorder exists that can apply to all situations in which a disorder is thought to be present (Zachar & Kendler, 2007). However, one of the more influential conceptualizations was proposed by Wakefield (1992), who defined psychological disorder as a harmful dysfunction. Wakefield argued that natural internal mechanisms—that is, psychological processes honed by evolution, such as cognition, perception, and learning—have important functions, such as enabling us to experience the world the way others do and to engage in rational thought, problem solving, and communication. For example, learning allows us to associate a fear with a potential danger in such a way that the intensity of fear is roughly equal to the degree of actual danger. Dysfunction occurs when an internal mechanism breaks down and can no longer perform its normal function. But, the presence of a dysfunction by itself does not determine a disorder. The dysfunction must be harmful in that it leads to negative consequences for the individual or for others, as judged by the standards of the individual’s culture. The harm may include significant internal anguish (e.g., high levels of anxiety or depression) or problems in day-to-day living (e.g., in one’s social or work life).

To illustrate, Janet has an extreme fear of spiders. Janet’s fear might be considered a dysfunction in that it signals that the internal mechanism of learning is not working correctly (i.e., a faulty process prevents Janet from appropriately associating the magnitude of fear with the actual threat posed by spiders). Janet’s fear of spiders has a significant negative influence on daily life: she avoids all situations in which she suspects spiders to be present (e.g., the basement or a friend’s home), and she quit her job last month because she saw a spider in the restroom at work and is now unemployed. According to the harmful dysfunction model, Janet’s condition would signify a disorder because (a) there is a dysfunction in an internal mechanism, and (b) the dysfunction has resulted in harmful consequences. Similar to how the symptoms of physical illness reflect dysfunctions in biological processes, the symptoms of psychological disorders presumably reflect dysfunctions in mental processes. The internal mechanism component of this model is especially appealing because it implies that disorders may occur through a breakdown of biological functions that govern various psychological processes, thus supporting contemporary neurobiological models of psychological disorders (Fabrega, 2007).

The American Psychiatric Association (APA) Definition
Many of the features of the harmful dysfunction model are incorporated in a formal definition of psychological disorder developed by the American Psychiatric Association (APA). According to the APA (2013), a psychological disorder is a condition that is said to consist of the following:

• There are significant disturbances in thoughts, feelings, and behaviors. A person must experience inner states (e.g., thoughts and/or feelings) and exhibit behaviors that are clearly disturbed—that is, unusual, but in a negative, self-defeating way. Often, such disturbances are troubling to those around the individual who experiences them.
For example, an individual who is uncontrollably preoccupied by thoughts of germs and spends hours each day bathing has inner experiences and displays behaviors that most would consider atypical and negative (disturbed) and that would likely be troubling to family members.

• The disturbances reflect some kind of biological, psychological, or developmental dysfunction. Disturbed patterns of inner experiences and behaviors should reflect some flaw (dysfunction) in the internal biological, psychological, and developmental mechanisms that lead to normal, healthy psychological functioning. For example, the hallucinations observed in schizophrenia could be a sign of brain abnormalities.

• The disturbances lead to significant distress or disability in one’s life. A person’s inner experiences and behaviors are considered to reflect a psychological disorder if they cause the person considerable distress, or greatly impair his ability to function as a normal individual (often referred to as functional impairment, or occupational and social impairment). As an illustration, a person’s fear of social situations might be so distressing that it causes the person to avoid all social situations (e.g., preventing that person from being able to attend class or apply for a job).

• The disturbances do not reflect expected or culturally approved responses to certain events. Disturbances in thoughts, feelings, and behaviors must be socially unacceptable responses to certain events that often happen in life. For example, it is perfectly natural (and expected) that a person would experience great sadness and might wish to be left alone following the death of a close family member. Because such reactions are in some ways culturally expected, they would not be taken to signify a mental disorder.

Some believe that there is no essential criterion or set of criteria that can definitively distinguish all cases of disorder from nondisorder (Lilienfeld & Marino, 1999). In truth, no single approach to defining a psychological disorder is adequate by itself, nor is there universal agreement on where the boundary is between disordered and not disordered. From time to time we all experience anxiety, unwanted thoughts, and moments of sadness; our behavior at other times may not make much sense to ourselves or to others. These inner experiences and behaviors can vary in their intensity, but are only considered disordered when they are highly disturbing to us and/or others, suggest a dysfunction in normal mental functioning, and are associated with significant distress or disability in social or occupational activities.
textbooks/socialsci/Psychology/Introductory_Psychology/Introductory_Psychology_2e_(OpenStax)/15%3A__Psychological_Disorders/15.02%3A_What_Are_Psychological_Disorders.txt
Learning Objectives
• Explain why classification systems are necessary in the study of psychopathology
• Describe the basic features of the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5)
• Discuss changes in the DSM over time, including criticisms of the current edition
• Identify which disorders are generally the most common

A first step in the study of psychological disorders is carefully and systematically discerning significant signs and symptoms. How do mental health professionals ascertain whether or not a person’s inner states and behaviors truly represent a psychological disorder? Arriving at a proper diagnosis—that is, appropriately identifying and labeling a set of defined symptoms—is absolutely crucial. This process enables professionals to use a common language with others in the field and aids in communication about the disorder with the patient, colleagues, and the public. A proper diagnosis is essential to guide appropriate and successful treatment. For these reasons, classification systems that organize psychological disorders systematically are necessary.

The Diagnostic and Statistical Manual of Mental Disorders (DSM)
Although a number of classification systems have been developed over time, the one that is used by most mental health professionals in the United States is the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), published by the American Psychiatric Association (2013). (Note that the American Psychiatric Association differs from the American Psychological Association; both are abbreviated APA.) The first edition of the DSM, published in 1952, classified psychological disorders according to a format developed by the U.S. Army during World War II (Clegg, 2012). In the years since, the DSM has undergone numerous revisions and editions. The most recent edition, published in 2013, is the DSM-5 (APA, 2013). The DSM-5 includes many categories of disorders (e.g., anxiety disorders, depressive disorders, and dissociative disorders). Each disorder is described in detail, including an overview of the disorder (diagnostic features), specific symptoms required for diagnosis (diagnostic criteria), prevalence information (what percent of the population is thought to be afflicted with the disorder), and risk factors associated with the disorder. Figure 15.4 shows lifetime prevalence rates—the percentage of people in a population who develop a disorder in their lifetime—of various psychological disorders among U.S. adults. These data were based on a national sample of \(9,282\) U.S. residents (National Comorbidity Survey, 2007).

The DSM-5 also provides information about comorbidity, the co-occurrence of two disorders. For example, the DSM-5 mentions that \(41\%\) of people with obsessive-compulsive disorder (OCD) also meet the diagnostic criteria for major depressive disorder (See figure 15.5). Drug use is highly comorbid with other mental illnesses; \(6\) out of \(10\) people who have a substance use disorder also suffer from another form of mental illness (National Institute on Drug Abuse [NIDA], 2007).

Connect the Concepts: Comorbidity
As you’ve learned in the text, comorbidity refers to situations in which an individual suffers from more than one disorder, and often the symptoms of each can interact in negative ways. Co-occurrence and comorbidity of psychological disorders are quite common, and some of the most pervasive comorbidities involve substance use disorders that co-occur with psychological disorders.
Indeed, some estimates suggest that around a quarter of people who suffer from the most severe cases of mental illness exhibit substance use disorder as well. Conversely, around 10 percent of individuals seeking treatment for substance use disorder have serious mental illnesses. Observations such as these have important implications for treatment options that are available. When people with a mental illness are also habitual drug users, their symptoms can be exacerbated and resistant to treatment. Furthermore, it is not always clear whether the symptoms are due to drug use, the mental illness, or a combination of the two. Therefore, to make the most accurate diagnosis, it is recommended that behavior be observed in situations in which the individual has ceased using drugs and is no longer experiencing withdrawal from the drug (NIDA, 2018). Obviously, substance use disorders are not the only possible comorbidities. In fact, some of the most common psychological disorders tend to co-occur. For instance, more than half of individuals who have a primary diagnosis of depressive disorder are estimated to exhibit some sort of anxiety disorder. The reverse is also true for those with a primary diagnosis of an anxiety disorder. Further, anxiety disorders and major depression have a high rate of comorbidity with several other psychological disorders (Al-Asadi, Klein, & Meyer, 2015).

The DSM has changed considerably in the half-century since it was originally published. The first two editions of the DSM, for example, listed homosexuality as a disorder; however, in 1973, the APA voted to remove it from the manual (Silverstein, 2009). Additionally, beginning with the DSM-III in 1980, mental disorders have been described in much greater detail, and the number of diagnosable conditions has grown steadily, as has the size of the manual itself. DSM-I included \(106\) diagnoses and was \(130\) total pages, whereas DSM-III included more than \(2\) times as many diagnoses (\(265\)) and was nearly seven times its size (\(886\) total pages) (Mayes & Horowitz, 2005). Although DSM-5 is longer than DSM-IV, the volume includes only \(237\) disorders, a decrease from the \(297\) disorders that were listed in DSM-IV. The latest edition, DSM-5, includes revisions in the organization and naming of categories and in the diagnostic criteria for various disorders (Regier, Kuhl, & Kupfer, 2012), while emphasizing careful consideration of the importance of gender and cultural differences in the expression of various symptoms (Fisher, 2010). Some believe that establishing new diagnoses might overpathologize the human condition by turning common human problems into mental illnesses (The Associated Press, 2013). Indeed, the finding that nearly half of all Americans will meet the criteria for a DSM disorder at some point in their life (Kessler et al., 2005) likely fuels much of this skepticism. The DSM-5 is also criticized on the grounds that its diagnostic criteria have been loosened, thereby threatening to “turn our current diagnostic inflation into diagnostic hyperinflation” (Frances, 2012, para. 22). For example, DSM-IV specified that the symptoms of major depressive disorder must not be attributable to normal bereavement (loss of a loved one). The DSM-5, however, has removed this bereavement exclusion, essentially meaning that grief and sadness after a loved one’s death can constitute major depressive disorder.
The International Classification of Diseases
A second classification system, the International Classification of Diseases (ICD), is also widely recognized. Published by the World Health Organization (WHO), the ICD was developed in Europe shortly after World War II and, like the DSM, has been revised several times. The categories of psychological disorders in both the DSM and ICD are similar, as are the criteria for specific disorders; however, some differences exist. Although the ICD is used for clinical purposes, this tool is also used to examine the general health of populations and to monitor the prevalence of diseases and other health problems internationally (WHO, 2013). The ICD is in its \(10^{th}\) edition (ICD-10); however, efforts are now underway to develop a new edition (ICD-11) that, in conjunction with the changes in DSM-5, will help harmonize the two classification systems as much as possible (APA, 2013).

A study that compared the use of the two classification systems found that worldwide the ICD is more frequently used for clinical diagnosis, whereas the DSM is more valued for research (Mezzich, 2002). Most research findings concerning the etiology and treatment of psychological disorders are based on criteria set forth in the DSM (Oltmanns & Castonguay, 2013). The DSM also includes more explicit disorder criteria, along with an extensive and helpful explanatory text (Regier et al., 2012). The DSM is the classification system of choice among U.S. mental health professionals, and this chapter is based on the DSM paradigm.

The Compassionate View of Psychological Disorders
As these disorders are outlined, please bear two things in mind. First, remember that psychological disorders represent extremes of inner experience and behavior. If, while reading about these disorders, you feel that these descriptions begin to personally characterize you, do not worry—this moment of enlightenment probably means nothing more than that you are normal. Each of us experiences episodes of sadness, anxiety, and preoccupation with certain thoughts—times when we do not quite feel ourselves. These episodes should not be considered problematic unless the accompanying thoughts and behaviors become extreme and have a disruptive effect on one’s life. Second, understand that people with psychological disorders are far more than just embodiments of their disorders. We do not use terms such as schizophrenics, depressives, or phobics because they are labels that objectify people who suffer from these conditions, thus promoting biased and disparaging assumptions about them. It is important to remember that a psychological disorder is not what a person is; it is something that a person has—through no fault of his or her own. As is the case with cancer or diabetes, those with psychological disorders suffer debilitating, often painful conditions that are not of their own choosing. These individuals deserve to be viewed and treated with compassion, understanding, and dignity.
textbooks/socialsci/Psychology/Introductory_Psychology/Introductory_Psychology_2e_(OpenStax)/15%3A__Psychological_Disorders/15.03%3A_Diagnosing_and_Classifying_Psychological_Disorders.txt
Learning Objectives
• Discuss supernatural perspectives on the origin of psychological disorders, in their historical context
• Describe modern biological and psychological perspectives on the origin of psychological disorders
• Identify which disorders generally show the highest degree of heritability
• Describe the diathesis-stress model and its importance to the study of psychopathology

Scientists and mental health professionals may adopt different perspectives in attempting to understand or explain the underlying mechanisms that contribute to the development of a psychological disorder. The perspective used in explaining a psychological disorder is extremely important, in that it will consist of explicit assumptions regarding how best to study the disorder, its etiology, and what kinds of therapies or treatments are most beneficial. Different perspectives provide alternate ways to think about the nature of psychopathology.

Supernatural Perspectives of Psychological Disorders
For centuries, psychological disorders were viewed from a supernatural perspective: attributed to a force beyond scientific understanding. Those afflicted were thought to be practitioners of black magic or possessed by spirits (See figure 15.6) (Maher & Maher, 1985). For example, convents throughout Europe in the \(16^{th}\) and \(17^{th}\) centuries reported hundreds of nuns falling into a state of frenzy in which the afflicted foamed at the mouth, screamed and convulsed, sexually propositioned priests, and confessed to having carnal relations with devils or Christ. Although these cases would suggest serious mental illness today, at the time these events were routinely explained as possession by devilish forces (Waller, 2009a). Similarly, grievous fits by young girls are believed to have precipitated the witch panic in New England late in the \(17^{th}\) century (Demos, 1983). Such beliefs in supernatural causes of mental illness are still held in some societies today; for example, beliefs that supernatural forces cause mental illness are common in some cultures in modern-day Nigeria (Aghukwa, 2012).

DIG DEEPER: Dancing Mania
Between the \(11^{th}\) and \(17^{th}\) centuries, a curious epidemic swept across Western Europe. Groups of people would suddenly begin to dance with wild abandon. This compulsion to dance—referred to as dancing mania—sometimes gripped thousands of people at a time (See figure 15.7). Historical accounts indicate that those afflicted would sometimes dance with bruised and bloody feet for days or weeks, screaming of terrible visions and begging priests and monks to save their souls (Waller, 2009b). What caused dancing mania is not known, but several explanations have been proposed, including spider venom and ergot poisoning (“Dancing Mania,” 2011). Historian John Waller (2009a, 2009b) has provided a comprehensive and convincing explanation of dancing mania that suggests the phenomenon was attributable to a combination of three factors: psychological distress, social contagion, and belief in supernatural forces. Waller argued that various disasters of the time (such as famine, plagues, and floods) produced high levels of psychological distress that could increase the likelihood of succumbing to an involuntary trance state.
Waller indicated that anthropological studies and accounts of possession rituals show that people are more likely to enter a trance state if they expect it to happen, and that entranced individuals behave in a ritualistic manner, their thoughts and behavior shaped by the spiritual beliefs of their culture. Thus, during periods of extreme physical and mental distress, all it took were a few people—believing themselves to have been afflicted with a dancing curse—to slip into a spontaneous trance and then act out the part of one who is cursed by dancing for days on end.

Biological Perspectives of Psychological Disorders
The biological perspective views psychological disorders as linked to biological phenomena, such as genetic factors, chemical imbalances, and brain abnormalities; it has gained considerable attention and acceptance in recent decades (Wyatt & Midkiff, 2006). Evidence from many sources indicates that most psychological disorders have a genetic component; in fact, there is little dispute that some disorders are largely due to genetic factors. The graph in figure 15.8 shows heritability estimates for schizophrenia. Findings such as these have led many of today’s researchers to search for specific genes and genetic mutations that contribute to mental disorders. Also, sophisticated neural imaging technology in recent decades has revealed how abnormalities in brain structure and function might be directly involved in many disorders, and advances in our understanding of neurotransmitters and hormones have yielded insights into their possible connections to mental disorders. The biological perspective is currently thriving in the study of psychological disorders.

The Diathesis-Stress Model of Psychological Disorders
Despite advances in understanding the biological basis of psychological disorders, the psychosocial perspective is still very important. This perspective emphasizes the importance of learning, stress, faulty and self-defeating thinking patterns, and environmental factors. Perhaps the best way to think about psychological disorders, then, is to view them as originating from a combination of biological and psychological processes. Many develop not from a single cause, but from a delicate fusion of partly biological and partly psychosocial factors. The diathesis-stress model (Zuckerman, 1999) integrates biological and psychosocial factors to predict the likelihood of a disorder. This diathesis-stress model suggests that people with an underlying predisposition for a disorder (i.e., a diathesis) are more likely than others to develop a disorder when faced with adverse environmental or psychological events (i.e., stress), such as childhood maltreatment, negative life events, trauma, and so on. A diathesis is not always a biological vulnerability to an illness; some diatheses may be psychological (e.g., a tendency to think about life events in a pessimistic, self-defeating way). The key assumption of the diathesis-stress model is that both factors, diathesis and stress, are necessary in the development of a disorder. Different models explore the relationship between the two factors: the level of stress needed to produce the disorder is inversely proportional to the level of diathesis.
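As a purely illustrative sketch (the symbols here are introduced for this example and do not come from Zuckerman’s model or the chapter), the inverse relationship can be written as \( S_{\text{needed}} \propto \frac{1}{D} \), where \(D\) denotes the strength of a person’s diathesis and \(S_{\text{needed}}\) the minimum level of stress required to trigger the disorder. On this reading, someone with a strong predisposition may develop the disorder after relatively mild stress, whereas someone with a weak predisposition would require severe or prolonged stress before the disorder appears.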
textbooks/socialsci/Psychology/Introductory_Psychology/Introductory_Psychology_2e_(OpenStax)/15%3A__Psychological_Disorders/15.04%3A_Perspectives_on_Psychological_Disorders.txt
Learning Objectives
• Distinguish normal anxiety from pathological anxiety
• List and describe the major anxiety disorders, including their main features and prevalence
• Describe basic psychological and biological factors that are suspected to be important in the etiology of anxiety disorders

Everybody experiences anxiety from time to time. Although anxiety is closely related to fear, the two states possess important differences. Fear involves an instantaneous reaction to an imminent threat, whereas anxiety involves apprehension, avoidance, and cautiousness regarding a potential threat, danger, or other negative event (Craske, 1999). While anxiety is unpleasant to most people, it is important to our health, safety, and well-being. Anxiety motivates us to take actions—such as preparing for exams, watching our weight, showing up to work on time—that enable us to avert potential future problems. Anxiety also motivates us to avoid certain things—such as running up debts and engaging in illegal activities—that could lead to future trouble.

For most individuals, the level and duration of anxiety approximate the magnitude of the potential threat they face. For example, suppose a single woman in her late \(30s\) who wishes to marry is concerned about the possibility of having to settle for a spouse who is less attractive and educated than desired. This woman likely would experience anxiety of greater intensity and duration than would a \(21\)-year-old college junior who is having trouble finding a date for the annual social. Some people, however, experience anxiety that is excessive, persistent, and greatly out of proportion to the actual threat; if one’s anxiety has a disruptive influence on one’s life, this is a strong indicator that the individual is experiencing an anxiety disorder. Anxiety disorders are characterized by excessive and persistent fear and anxiety, and by related disturbances in behavior (APA, 2013). Although anxiety is universally experienced, anxiety disorders cause considerable distress. As a group, anxiety disorders are common: approximately \(25\%-30\%\) of the U.S. population meets the criteria for at least one anxiety disorder during their lifetime (Kessler et al., 2005). Also, these disorders appear to be much more common in women than they are in men; within a \(12\)-month period, around \(23\%\) of women and \(14\%\) of men will experience at least one anxiety disorder (National Comorbidity Survey, 2007). Anxiety disorders are the most frequently occurring class of mental disorders and are often comorbid with each other and with other mental disorders (Kessler, Ruscio, Shear, & Wittchen, 2009).

Specific Phobia
Phobia is a Greek word that means fear. A person diagnosed with a specific phobia (formerly known as simple phobia) experiences excessive, distressing, and persistent fear or anxiety about a specific object or situation (such as animals, enclosed spaces, elevators, or flying) (APA, 2013). Even though people realize their level of fear and anxiety in relation to the phobic stimulus is irrational, some people with a specific phobia may go to great lengths to avoid the phobic stimulus (the object or situation that triggers the fear and anxiety). Typically, the fear and anxiety a phobic stimulus elicits is disruptive to the person’s life. For example, a man with a phobia of flying might refuse to accept a job that requires frequent air travel, thus negatively affecting his career.
Clinicians who have worked with people who have specific phobias have encountered many kinds of phobias, some of which are shown in Table 15.1.

Table 15.1 Specific Phobias
Phobia: Feared Object or Situation
Acrophobia: heights
Aerophobia: flying
Arachnophobia: spiders
Claustrophobia: enclosed spaces
Cynophobia: dogs
Hematophobia: blood
Ophidiophobia: snakes
Taphophobia: being buried alive
Trypanophobia: injections
Xenophobia: strangers

Specific phobias are common; in the United States, around \(12.5\%\) of the population will meet the criteria for a specific phobia at some point in their lifetime (Kessler et al., 2005). One type of phobia, agoraphobia, is listed in the DSM-5 as a separate anxiety disorder. Agoraphobia, which literally means “fear of the marketplace,” is characterized by intense fear, anxiety, and avoidance of situations in which it might be difficult to escape or receive help if one experiences symptoms of a panic attack (a state of extreme anxiety that we will discuss shortly). These situations include public transportation, open spaces (parking lots), enclosed spaces (stores), crowds, or being outside the home alone (APA, 2013). About \(1.4\%\) of Americans experience agoraphobia during their lifetime (Kessler et al., 2005).

Acquisition of Phobias through Learning
Many theories suggest that phobias develop through learning. Rachman (1977) proposed that phobias can be acquired through three major learning pathways. The first pathway is through classical conditioning. As you may recall, classical conditioning is a form of learning in which a previously neutral stimulus is paired with an unconditioned stimulus (UCS) that reflexively elicits an unconditioned response (UCR); through its association with the UCS, the previously neutral stimulus comes to elicit the same response and is then called a conditioned stimulus (CS). The response it elicits is called a conditioned response (CR). For example, a child who has been bitten by a dog may come to fear dogs because of her past association with pain. In this case, the dog bite is the UCS and the fear it elicits is the UCR. Because a dog was associated with the bite, any dog may come to serve as a conditioned stimulus, thereby eliciting fear; the fear the child experiences around dogs, then, becomes a CR.

The second pathway of phobia acquisition is through vicarious learning, such as modeling. For example, a child who observes his cousin react fearfully to spiders may later express the same fears, even though spiders have never presented any danger to him. This phenomenon has been observed in both humans and nonhuman primates (Olsson & Phelps, 2007). In one study, laboratory-reared monkeys readily acquired a fear of snakes after observing wild-reared monkeys react fearfully to snakes (Mineka & Cook, 1993).

The third pathway is through verbal transmission or information. For example, a child whose parents, siblings, friends, and classmates constantly tell her how disgusting and dangerous snakes are may come to acquire a fear of snakes.

Interestingly, people are more likely to develop phobias of things that do not represent much actual danger to themselves, such as animals and heights, and are less likely to develop phobias toward things that present legitimate danger in contemporary society, such as motorcycles and weapons (Öhman & Mineka, 2001). Why might this be so? One theory suggests that the human brain is evolutionarily predisposed to more readily associate certain objects or situations with fear (Seligman, 1971).
This theory argues that throughout our evolutionary history, our ancestors associated certain stimuli (e.g., snakes, spiders, heights, and thunder) with potential danger. Over time, the mind became adapted to more readily develop fears of these things than of others. Experimental evidence has consistently demonstrated that conditioned fears develop more readily to fear-relevant stimuli (images of snakes and spiders) than to fear-irrelevant stimuli (images of flowers and berries) (Öhman & Mineka, 2001). Such prepared learning has also been shown to occur in monkeys. In one study (Cook & Mineka, 1989), monkeys watched videotapes of model monkeys reacting fearfully to either fear-relevant stimuli (toy snakes or a toy crocodile) or fear-irrelevant stimuli (flowers or a toy rabbit). The observer monkeys developed fears of the fear-relevant stimuli but not the fear-irrelevant stimuli.

Social Anxiety Disorder
Social anxiety disorder (formerly called social phobia) is characterized by extreme and persistent fear or anxiety and avoidance of social situations in which the person could potentially be evaluated negatively by others (APA, 2013). As with specific phobias, social anxiety disorder is common in the United States; a little over 12% of all Americans experience social anxiety disorder during their lifetime (Kessler et al., 2005). The heart of the fear and anxiety in social anxiety disorder is the person’s concern that he may act in a humiliating or embarrassing way, such as appearing foolish, showing symptoms of anxiety (blushing), or doing or saying something that might lead to rejection (such as offending others). The kinds of social situations in which individuals with social anxiety disorder usually have problems include public speaking, having a conversation, meeting strangers, eating in restaurants, and, in some cases, using public restrooms. Although many people become anxious in social situations like public speaking, the fear, anxiety, and avoidance experienced in social anxiety disorder are highly distressing and lead to serious impairments in life. Adults with this disorder are more likely to experience lower educational attainment and lower earnings (Katzelnick et al., 2001), perform more poorly at work and are more likely to be unemployed (Moitra, Beard, Weisberg, & Keller, 2011), and report greater dissatisfaction with their family lives, friends, leisure activities, and income (Stein & Kean, 2000).

When people with social anxiety disorder are unable to avoid situations that provoke anxiety, they typically perform safety behaviors: mental or behavioral acts that reduce anxiety in social situations by reducing the chance of negative social outcomes. Safety behaviors include avoiding eye contact, rehearsing sentences before speaking, talking only briefly, and not talking about oneself (Alden & Bieling, 1998).
Other examples of safety behaviors include the following (Marker, 2013):
• assuming roles in social situations that minimize interaction with others (e.g., taking pictures, setting up equipment, or helping prepare food)
• asking people many questions to keep the focus off of oneself
• selecting a position to avoid scrutiny or contact with others (sitting in the back of the room)
• wearing bland, neutral clothes to avoid drawing attention to oneself
• avoiding substances or activities that might cause anxiety symptoms (such as caffeine, warm clothing, and physical exercise)

Although these behaviors are intended to prevent the person with social anxiety disorder from doing something awkward that might draw criticism, these actions usually exacerbate the problem because they do not allow the individual to disconfirm his negative beliefs, often eliciting rejection and other negative reactions from others (Alden & Bieling, 1998).

People with social anxiety disorder may resort to self-medication, such as drinking alcohol, as a means to avert the anxiety symptoms they experience in social situations (Battista & Kocovski, 2010). The use of alcohol when faced with such situations may become negatively reinforcing: encouraging individuals with social anxiety disorder to turn to the substance whenever they experience anxiety symptoms. The tendency to use alcohol as a coping mechanism for social anxiety, however, can come with a hefty price tag: a number of large-scale studies have reported a high rate of comorbidity between social anxiety disorder and alcohol use disorder (Morris, Stewart, & Ham, 2005).

As with specific phobias, it is highly probable that the fears inherent to social anxiety disorder can develop through conditioning experiences. For example, a child who is subjected to early unpleasant social experiences (e.g., bullying at school) may develop negative social images of herself that become activated later in anxiety-provoking situations (Hackmann, Clark, & McManus, 2000). Indeed, one study reported that \(92\%\) of a sample of adults with social anxiety disorder reported a history of severe teasing in childhood, compared to only \(35\%\) of a sample of adults with panic disorder (McCabe, Antony, Summerfeldt, Liss, & Swinson, 2003).

One of the most well-established risk factors for developing social anxiety disorder is behavioral inhibition (Clauss & Blackford, 2012). Behavioral inhibition is thought to be an inherited trait, and it is characterized by a consistent tendency to show fear and restraint when presented with unfamiliar people or situations (Kagan, Reznick, & Snidman, 1988). Behavioral inhibition is displayed very early in life; behaviorally inhibited toddlers and children respond with great caution and restraint in unfamiliar situations, and they are often timid, fearful, and shy around unfamiliar people (Fox, Henderson, Marshall, Nichols, & Ghera, 2005). A recent statistical review of studies demonstrated that behavioral inhibition was associated with more than a sevenfold increase in the risk of development of social anxiety disorder, indicating that behavioral inhibition is a major risk factor for the disorder (Clauss & Blackford, 2012).

Panic Disorder
Imagine that you are at the mall one day with your friends and—suddenly and inexplicably—you begin sweating and trembling, your heart starts pounding, you have trouble breathing, and you start to feel dizzy and nauseous.
This episode lasts for \(10\) minutes and is terrifying because you start to think that you are going to die. When you visit your doctor the following morning and describe what happened, she tells you that you have experienced a panic attack (See figure 15.9). If you experience another one of these episodes two weeks later and worry for a month or more that similar episodes will occur in the future, it is likely that you have developed panic disorder. People with panic disorder experience recurrent (more than one) and unexpected panic attacks, along with at least one month of persistent concern about additional panic attacks, worry over the consequences of the attacks, or self-defeating changes in behavior related to the attacks (e.g., avoidance of exercise or unfamiliar situations) (APA, 2013). As is the case with other anxiety disorders, the panic attacks cannot result from the physiological effects of drugs and other substances, a medical condition, or another mental disorder. A panic attack is defined as a period of extreme fear or discomfort that develops abruptly and reaches a peak within \(10\) minutes. Its symptoms include accelerated heart rate, sweating, trembling, choking sensations, hot flashes or chills, dizziness or lightheadedness, fears of losing control or going crazy, and fears of dying (APA, 2013). Sometimes panic attacks are expected, occurring in response to specific environmental triggers (such as being in a tunnel); other times, these episodes are unexpected and emerge randomly (such as when relaxing). According to the DSM-5, the person must experience unexpected panic attacks to qualify for a diagnosis of panic disorder. Experiencing a panic attack is often terrifying. Rather than recognizing the symptoms of a panic attack merely as signs of intense anxiety, individuals with panic disorder often misinterpret them as a sign that something is intensely wrong internally (thinking, for example, that the pounding heart represents an impending heart attack). Panic attacks can occasionally precipitate trips to the emergency room because several symptoms of panic attacks are, in fact, similar to those associated with heart problems (e.g., palpitations, racing pulse, and a pounding sensation in the chest) (Root, 2000). Unsurprisingly, those with panic disorder fear future attacks and may become preoccupied with modifying their behavior in an effort to avoid future panic attacks. For this reason, panic disorder is often characterized as fear of fear (Goldstein & Chambless, 1978). Panic attacks themselves are not mental disorders. Indeed, around \(23\%\) of Americans experience isolated panic attacks in their lives without meeting the criteria for panic disorder (Kessler et al., 2006), indicating that panic attacks are fairly common. Panic disorder is, of course, much less common, afflicting \(4.7\%\) of Americans during their lifetime (Kessler et al., 2005). Many people with panic disorder develop agoraphobia, which is marked by fear and avoidance of situations in which escape might be difficult or help might not be available if one were to develop symptoms of a panic attack. People with panic disorder often experience a comorbid disorder, such as other anxiety disorders or major depressive disorder (APA, 2013). Researchers are not entirely sure what causes panic disorder. 
Children are at a higher risk of developing panic disorder if their parents have the disorder (Biederman et al., 2001), and family and twin studies indicate that the heritability of panic disorder is around \(43\%\) (Hettema, Neale, & Kendler, 2001). The exact genes and gene functions involved in this disorder, however, are not well-understood (APA, 2013). Neurobiological theories of panic disorder suggest that a region of the brain called the locus coeruleus may play a role in this disorder. Located in the brainstem, the locus coeruleus is the brain’s major source of norepinephrine, a neurotransmitter that triggers the body’s fight-or-flight response. Activation of the locus coeruleus is associated with anxiety and fear, and research with nonhuman primates has shown that stimulating the locus coeruleus either electrically or through drugs produces panic-like symptoms (Charney et al., 1990). Such findings have led to the theory that panic disorder may be caused by abnormal norepinephrine activity in the locus coeruleus (Bremner, Krystal, Southwick, & Charney, 1996). Conditioning theories of panic disorder propose that panic attacks are classical conditioning responses to subtle bodily sensations resembling those normally occurring when one is anxious or frightened (Bouton, Mineka, & Barlow, 2001). For example, consider a child who has asthma. An acute asthma attack produces sensations, such as shortness of breath, coughing, and chest tightness, that typically elicit fear and anxiety. Later, when the child experiences subtle symptoms that resemble the frightening symptoms of earlier asthma attacks (such as shortness of breath after climbing stairs), he may become anxious, fearful, and then experience a panic attack. In this situation, the subtle symptoms would represent a conditioned stimulus, and the panic attack would be a conditioned response. The finding that panic disorder is nearly three times as frequent among people with asthma as it is among people without asthma (Weiser, 2007) supports the possibility that panic disorder can develop through classical conditioning. Cognitive factors may play an integral part in panic disorder. Generally, cognitive theories (Clark, 1996) argue that those with panic disorder are prone to interpret ordinary bodily sensations catastrophically, and these fearful interpretations set the stage for panic attacks. For example, a person might detect bodily changes that are routinely triggered by innocuous events such as getting up from a seated position (dizziness), exercising (increased heart rate, shortness of breath), or drinking a large cup of coffee (increased heart rate, trembling). The individual interprets these subtle bodily changes catastrophically (“Maybe I’m having a heart attack!”). Such interpretations create fear and anxiety, which trigger additional physical symptoms; subsequently, the person experiences a panic attack. Support for this contention comes from findings that people with more severe catastrophic thoughts about sensations have more frequent and severe panic attacks, and among those with panic disorder, reducing catastrophic cognitions about their sensations is as effective as medication in reducing panic attacks (Good & Hinton, 2009). Generalized Anxiety Disorder Alex was always worried about many things. He worried that his children would drown when they played at the beach. Each time he left the house, he worried that an electrical short circuit would start a fire in his home.
He worried that his wife would lose her job at the prestigious law firm. He worried that his daughter’s minor staph infection could turn into a massive life-threatening condition. These and other worries constantly weighed heavily on Alex’s mind, so much so that they made it difficult for him to make decisions and often left him feeling tense, irritable, and worn out. One night, Alex’s wife was to drive their son home from a soccer game. However, his wife stayed after the game and talked with some of the other parents, resulting in her arriving home \(45\) minutes late. Alex had tried to call his wife’s cell phone three or four times, but he could not get through because the soccer field did not have a signal. Extremely worried, Alex eventually called the police, convinced that his wife and son had not arrived home because they had been in a terrible car accident. Alex suffers from generalized anxiety disorder: a relatively continuous state of excessive, uncontrollable, and pointless worry and apprehension. People with generalized anxiety disorder often worry about routine, everyday things, even though their concerns are unjustified (See figure 15.10). For example, an individual may worry about her health and finances, the health of family members, the safety of her children, or minor matters (e.g., being late for an appointment) without having any legitimate reason for doing so (APA, 2013). A diagnosis of generalized anxiety disorder requires that the diffuse worrying and apprehension characteristic of this disorder—what Sigmund Freud referred to as free-floating anxiety—is not part of another disorder, occurs more days than not for at least six months, and is accompanied by any three of the following symptoms: restlessness, difficulty concentrating, being easily fatigued, muscle tension, irritability, and sleep difficulties. About \(5.7\%\) of the U.S. population will develop symptoms of generalized anxiety disorder during their lifetime (Kessler et al., 2005), and females are \(2\) times as likely as males to experience the disorder (APA, 2013). Generalized anxiety disorder is highly comorbid with mood disorders and other anxiety disorders (Noyes, 2001), and it tends to be chronic. Also, generalized anxiety disorder appears to increase the risk for heart attacks and strokes, especially in people with preexisting heart conditions (Martens et al., 2010). Although there have been few investigations aimed at determining the heritability of generalized anxiety disorder, a summary of available family and twin studies suggests that genetic factors play a modest role in the disorder (Hettema et al., 2001). Cognitive theories of generalized anxiety disorder suggest that worry represents a mental strategy to avoid more powerful negative emotions (Aikins & Craske, 2001), perhaps stemming from earlier unpleasant or traumatic experiences. Indeed, one longitudinal study found that childhood maltreatment was strongly related to the development of this disorder during adulthood (Moffitt et al., 2007); worrying might distract people from remembering painful childhood experiences.
Learning Objectives • Describe the main features and prevalence of obsessive-compulsive disorder, body dysmorphic disorder, and hoarding disorder • Understand some of the factors in the development of obsessive-compulsive disorder Obsessive-compulsive and related disorders are a group of overlapping disorders that generally involve intrusive, unpleasant thoughts and repetitive behaviors. Many of us experience unwanted thoughts from time to time (e.g., craving double cheeseburgers when dieting), and many of us engage in repetitive behaviors on occasion (e.g., pacing when nervous). However, obsessive-compulsive and related disorders elevate the unwanted thoughts and repetitive behaviors to a status so intense that these cognitions and activities disrupt daily life. Included in this category are obsessive-compulsive disorder (OCD), body dysmorphic disorder, and hoarding disorder. Obsessive-Compulsive Disorder People with obsessive-compulsive disorder (OCD) experience thoughts and urges that are intrusive and unwanted (obsessions) and/or the need to engage in repetitive behaviors or mental acts (compulsions). A person with this disorder might, for example, spend hours each day washing his hands or constantly checking and rechecking to make sure that a stove, faucet, or light has been turned off. Obsessions are more than just unwanted thoughts that seem to randomly jump into our head from time to time, such as recalling an insensitive remark a coworker made recently, and they are more significant than day-to-day worries we might have, such as justifiable concerns about being laid off from a job. Rather, obsessions are characterized as persistent, unintentional, and unwanted thoughts and urges that are highly intrusive, unpleasant, and distressing (APA, 2013). Common obsessions include concerns about germs and contamination, doubts (“Did I turn the water off?”), order and symmetry (“I need all the spoons in the tray to be arranged a certain way”), and urges that are aggressive or lustful. Usually, the person knows that such thoughts and urges are irrational and thus tries to suppress or ignore them, but has an extremely difficult time doing so. These obsessive symptoms sometimes overlap, such that someone might have both contamination and aggressive obsessions (Abramowitz & Siqueland, 2013). Compulsions are repetitive and ritualistic acts that are typically carried out primarily as a means to minimize the distress that obsessions trigger or to reduce the likelihood of a feared event (APA, 2013). Compulsions often include such behaviors as repeated and extensive hand washing, cleaning, checking (e.g., that a door is locked), and ordering (e.g., lining up all the pencils in a particular way), and they also include such mental acts as counting, praying, or reciting something to oneself (See figure 15.11). Compulsions characteristic of OCD are not performed out of pleasure, nor are they connected in a realistic way to the source of the distress or feared event. Approximately \(2.3\%\) of the U.S. population will experience OCD in their lifetime (Ruscio, Stein, Chiu, & Kessler, 2010) and, if left untreated, OCD tends to be a chronic condition creating lifelong interpersonal and psychological problems (Norberg, Calamari, Cohen, & Riemann, 2008). Body Dysmorphic Disorder An individual with body dysmorphic disorder is preoccupied with a perceived flaw in her physical appearance that is either nonexistent or barely noticeable to other people (APA, 2013). 
These perceived physical defects cause the person to think she is unattractive, ugly, hideous, or deformed. These preoccupations can focus on any bodily area, but they typically involve the skin, face, or hair. The preoccupation with imagined physical flaws drives the person to engage in repetitive and ritualistic behavioral and mental acts, such as constantly looking in the mirror, trying to hide the offending body part, comparisons with others, and, in some extreme cases, cosmetic surgery (Phillips, 2005). An estimated \(2.4\%\) of the adults in the United States meet the criteria for body dysmorphic disorder, with slightly higher rates in women than in men (APA, 2013). Hoarding Disorder Although hoarding was traditionally considered to be a symptom of OCD, considerable evidence suggests that hoarding represents an entirely different disorder (Mataix-Cols et al., 2010). People with hoarding disorder cannot bear to part with personal possessions, regardless of how valueless or useless these possessions are. As a result, these individuals accumulate excessive amounts of usually worthless items that clutter their living areas (See figure 15.12). Often, the quantity of cluttered items is so excessive that the person is unable to use his kitchen or sleep in his bed. People who suffer from this disorder have great difficulty parting with items because they believe the items might be of some later use, or because they form a sentimental attachment to the items (APA, 2013). Importantly, a diagnosis of hoarding disorder is made only if the hoarding is not caused by another medical condition and if the hoarding is not a symptom of another disorder (e.g., schizophrenia) (APA, 2013). Causes of OCD The results of family and twin studies suggest that OCD has a moderate genetic component. The disorder is five times more frequent in the first-degree relatives of people with OCD than in people without the disorder (Nestadt et al., 2000). Additionally, the concordance rate of OCD among identical twins is around \(57\%\); however, the concordance rate for fraternal twins is \(22\%\) (Bolton, Rijsdijk, O’Connor, Perrin, & Eley, 2007). Studies have implicated about two dozen potential genes that may be involved in OCD; these genes regulate the function of three neurotransmitters: serotonin, dopamine, and glutamate (Pauls, 2010). Many of these studies included small sample sizes and have yet to be replicated. Thus, additional research needs to be done in this area. A brain region that is believed to play a critical role in OCD is the orbitofrontal cortex (Kopell & Greenberg, 2008), an area of the frontal lobe involved in learning and decision-making (Rushworth, Noonan, Boorman, Walton, & Behrens, 2011) (See figure 15.13). In people with OCD, the orbitofrontal cortex becomes especially hyperactive when they are provoked with tasks in which, for example, they are asked to look at a photo of a toilet or of pictures hanging crookedly on a wall (Simon, Kaufmann, Müsch, Kischkel, & Kathmann, 2010). The orbitofrontal cortex is part of a series of brain regions that, collectively, is called the OCD circuit; this circuit consists of several interconnected regions that influence the perceived emotional value of stimuli and the selection of both behavioral and cognitive responses (Graybiel & Rauch, 2000).
As with the orbitofrontal cortex, other regions of the OCD circuit show heightened activity during symptom provocation (Rotge et al., 2008), which suggests that abnormalities in these regions may produce the symptoms of OCD (Saxena, Bota, & Brody, 2001). Consistent with this explanation, people with OCD show a substantially higher degree of connectivity of the orbitofrontal cortex and other regions of the OCD circuit than do those without OCD (Beucke et al., 2013). The findings discussed above were based on imaging studies, and they highlight the potential importance of brain dysfunction in OCD. However, one important limitation of these findings is the inability to explain differences in obsessions and compulsions. Another limitation is that the correlational relationship between neurological abnormalities and OCD symptoms cannot imply causation (Abramowitz & Siqueland, 2013). CONNECT THE CONCEPTS: Conditioning and OCD The symptoms of OCD have been theorized to be learned responses, acquired and sustained as the result of a combination of two forms of learning: classical conditioning and operant conditioning (Mowrer, 1960; Steinmetz, Tracy, & Green, 2001). Specifically, the acquisition of OCD may occur first as the result of classical conditioning, whereby a neutral stimulus becomes associated with an unconditioned stimulus that provokes anxiety or distress. When an individual has acquired this association, subsequent encounters with the neutral stimulus trigger anxiety, including obsessive thoughts; the anxiety and obsessive thoughts (which are now a conditioned response) may persist until she identifies some strategy to relieve it. Relief may take the form of a ritualistic behavior or mental activity that, when enacted repeatedly, reduces the anxiety. Such efforts to relieve anxiety constitute an example of negative reinforcement (a form of operant conditioning). Recall from the chapter on learning that negative reinforcement involves the strengthening of behavior through its ability to remove something unpleasant or aversive. Hence, compulsive acts observed in OCD may be sustained because they are negatively reinforcing, in the sense that they reduce anxiety triggered by a conditioned stimulus. Suppose an individual with OCD experiences obsessive thoughts about germs, contamination, and disease whenever she encounters a doorknob. What might have constituted a viable unconditioned stimulus? Also, what would constitute the conditioned stimulus, unconditioned response, and conditioned response? What kinds of compulsive behaviors might we expect, and how do they reinforce themselves? What is decreased? Additionally, and from the standpoint of learning theory, how might the symptoms of OCD be treated successfully?
Learning Objectives • Describe the nature and symptoms of posttraumatic stress disorder • Identify the risk factors associated with this disorder • Understand the role of learning and cognitive factors in its development Extremely stressful or traumatic events, such as combat, natural disasters, and terrorist attacks, place the people who experience them at an increased risk for developing psychological disorders such as posttraumatic stress disorder (PTSD). Throughout much of the \(20^{th}\) century, this disorder was called shell shock and combat neurosis because its symptoms were observed in soldiers who had engaged in wartime combat. By the late 1970s it had become clear that women who had experienced sexual traumas (e.g., rape, domestic battery, and incest) often experienced the same set of symptoms as did soldiers (Herman, 1997). The term posttraumatic stress disorder was developed given that these symptoms could happen to anyone who experienced psychological trauma. A Broader Definition of PTSD PTSD was listed among the anxiety disorders in previous DSM editions. In DSM-5, it is now listed among a group called Trauma-and-Stressor-Related Disorders. For a person to be diagnosed with PTSD, she must be exposed to, witness, or experience the details of a traumatic experience (e.g., as a first responder), one that involves “actual or threatened death, serious injury, or sexual violence” (APA, 2013, p. 271). These experiences can include such events as combat, threatened or actual physical attack, sexual assault, natural disasters, terrorist attacks, and automobile accidents. This criterion makes PTSD the only disorder listed in the DSM in which a cause (extreme trauma) is explicitly specified. Symptoms of PTSD include intrusive and distressing memories of the event, flashbacks (states that can last from a few seconds to several days, during which the individual relives the event and behaves as if the event were occurring at that moment [APA, 2013]), avoidance of stimuli connected to the event, persistently negative emotional states (e.g., fear, anger, guilt, and shame), feelings of detachment from others, irritability, proneness toward outbursts, and an exaggerated startle response (jumpiness). For PTSD to be diagnosed, these symptoms must occur for at least one month. Roughly \(7\%\) of adults in the United States, including \(9.7\%\) of women and \(3.6\%\) of men, experience PTSD in their lifetime (National Comorbidity Survey, 2007), with higher rates among people exposed to mass trauma and people whose jobs involve duty-related trauma exposure (e.g., police officers, firefighters, and emergency medical personnel) (APA, 2013). Nearly \(21\%\) of residents of areas affected by Hurricane Katrina suffered from PTSD one year following the hurricane (Kessler et al., 2008), and \(12.6\%\) of Manhattan residents were observed as having PTSD \(2-3\) years after the \(9/11\) terrorist attacks (DiGrande et al., 2008). Risk Factors for PTSD Of course, not everyone who experiences a traumatic event will go on to develop PTSD; several factors strongly predict the development of PTSD: trauma experience, greater trauma severity, lack of immediate social support, and more subsequent life stress (Brewin, Andrews, & Valentine, 2000). Traumatic events that involve harm by others (e.g., combat, rape, and sexual molestation) carry greater risk than do other traumas (e.g., natural disasters) (Kessler, Sonnega, Bromet, Hughes, & Nelson, 1995).
Factors that increase the risk of PTSD include female gender, low socioeconomic status, low intelligence, personal history of mental disorders, history of childhood adversity (abuse or other trauma during childhood), and family history of mental disorders (Brewin et al., 2000). Personality characteristics such as neuroticism and somatization (the tendency to experience physical symptoms when one encounters stress) have been shown to elevate the risk of PTSD (Bramsen, Dirkzwager, & van der Ploeg, 2000). People who experience childhood adversity and/or traumatic experiences during adulthood are at significantly higher risk of developing PTSD if they possess one or two short versions of a gene that regulates the neurotransmitter serotonin (Xie et al., 2009). This suggests a possible diathesis-stress interpretation of PTSD: its development is influenced by the interaction of psychosocial and biological factors. Support for Sufferers of PTSD Research has shown that social support following a traumatic event can reduce the likelihood of PTSD (Ozer, Best, Lipsey, & Weiss, 2003). Social support is often defined as the comfort, advice, and assistance received from relatives, friends, and neighbors. Social support can help individuals cope during difficult times by allowing them to discuss feelings and experiences and providing a sense of being loved and appreciated. A \(14\)-year study of \(1,377\) American Legionnaires who had served in the Vietnam War found that those who perceived less social support when they came home were more likely to develop PTSD than were those who perceived greater support (See figure 15.14). In addition, those who became involved in the community were less likely to develop PTSD, and they were more likely to experience a remission of PTSD than were those who were less involved (Koenen, Stellman, Stellman, & Sommer, 2003). Learning and the Development of PTSD PTSD learning models suggest that some symptoms are developed and maintained through classical conditioning. The traumatic event may act as an unconditioned stimulus that elicits an unconditioned response characterized by extreme fear and anxiety. Cognitive, emotional, physiological, and environmental cues accompanying or related to the event are conditioned stimuli. These traumatic reminders evoke conditioned responses (extreme fear and anxiety) similar to those caused by the event itself (Nader, 2001). A person who was in the vicinity of the Twin Towers during the 9/11 terrorist attacks and who developed PTSD may display excessive hypervigilance and distress when planes fly overhead; this behavior constitutes a conditioned response to the traumatic reminder (conditioned stimulus of the sight and sound of an airplane). Differences in how conditionable individuals are help to explain differences in the development and maintenance of PTSD symptoms (Pittman, 1988). Conditioning studies demonstrate facilitated acquisition of conditioned responses and delayed extinction of conditioned responses in people with PTSD (Orr et al., 2000). Cognitive factors are important in the development and maintenance of PTSD. One model suggests that two key processes are crucial: disturbances in memory for the event, and negative appraisals of the trauma and its aftermath (Ehlers & Clark, 2000). According to this theory, some people who experience traumas do not form coherent memories of the trauma; memories of the traumatic event are poorly encoded and, thus, are fragmented, disorganized, and lacking in detail. 
Therefore, these individuals are unable to remember the event in a way that gives it meaning and context. A rape victim who cannot coherently remember the event may remember only bits and pieces (e.g., the attacker repeatedly telling her she is stupid); because she was unable to develop a fully integrated memory, the fragmentary memory tends to stand out. Although unable to retrieve a complete memory of the event, she may be haunted by intrusive fragments involuntarily triggered by stimuli associated with the event (e.g., memories of the attacker’s comments when encountering a person who resembles the attacker). This interpretation fits previously discussed material concerning PTSD and conditioning. The model also proposes that negative appraisals of the event (“I deserved to be raped because I’m stupid”) may lead to dysfunctional behavioral strategies (e.g., avoiding social activities where men are likely to be present) that maintain PTSD symptoms by preventing both a change in the nature of the memory and a change in the problematic appraisals.
Learning Objectives • Distinguish normal states of sadness and euphoria from states of depression and mania • Describe the symptoms of major depressive disorder and bipolar disorder • Understand the differences between major depressive disorder and persistent depressive disorder, and identify two subtypes of depression • Define the criteria for a manic episode • Understand genetic, biological, and psychological explanations of major depressive disorder • Discuss the relationship between mood disorders and suicidal ideation, as well as factors associated with suicide Blake cries all day and, feeling that he is worthless and that his life is hopeless, he cannot get out of bed. Crystal stays up all night, talks very rapidly, and recently went on a shopping spree in which she spent \(\$3,000\) on furniture, although she cannot afford it. Maria recently had a baby, and she feels overwhelmed, teary, anxious, and panicked, and believes she is a terrible mother—practically every day since the baby was born. All these individuals demonstrate symptoms of a potential mood disorder. Mood disorders (See figure 15.15) are characterized by severe disturbances in mood and emotions—most often depression, but also mania and elation (Rothschild, 1999). All of us experience fluctuations in our moods and emotional states, and often these fluctuations are caused by events in our lives. We become elated if our favorite team wins the World Series and dejected if a romantic relationship ends or if we lose our job. At times, we feel fantastic or miserable for no clear reason. People with mood disorders also experience mood fluctuations, but their fluctuations are extreme, distort their outlook on life, and impair their ability to function. The DSM-5 lists two general categories of mood disorders. Depressive disorders are a group of disorders in which depression is the main feature. Depression is a vague term that, in everyday language, refers to an intense and persistent sadness. Depression is a heterogeneous mood state—it consists of a broad spectrum of symptoms that range in severity. Depressed people feel sad, discouraged, and hopeless. These individuals lose interest in activities once enjoyed, often experience a decrease in drives such as hunger and sex, and frequently doubt personal worth. Depressive disorders vary by degree, but this chapter highlights the most well-known: major depressive disorder (sometimes called unipolar depression). Bipolar and related disorders are a group of disorders in which mania is the defining feature. Mania is a state of extreme elation and agitation. When people experience mania, they may become extremely talkative, behave recklessly, or attempt to take on many tasks simultaneously. The most recognized of these disorders is bipolar disorder. Major Depressive Disorder According to the DSM-5, the defining symptoms of major depressive disorder include “depressed mood most of the day, nearly every day” (feeling sad, empty, hopeless, or appearing tearful to others), and loss of interest and pleasure in usual activities (APA, 2013). In addition to feeling overwhelmingly sad most of each day, people with depression will no longer show interest or enjoyment in activities that previously were gratifying, such as hobbies, sports, sex, social events, time spent with family, and so on.
Friends and family members may notice that the person has completely abandoned previously enjoyed hobbies; for example, an avid tennis player who develops major depressive disorder no longer plays tennis (Rothschild, 1999). To receive a diagnosis of major depressive disorder, a person must, for at least two weeks, have a depressed mood and/or a loss of interest or pleasure in most activities. In addition, the person will show signs and symptoms of several of the following: significant weight loss or weight gain, insomnia or hypersomnia, psychomotor agitation (such as fidgeting, inability to sit, pacing, hand-wringing) or psychomotor retardation (such as talking and moving slowly), fatigue, feelings of worthlessness or guilt, difficulty concentrating or indecisiveness, and suicidal ideation. Major depressive disorder is considered episodic: its symptoms are typically present at their full magnitude for a certain period of time and then gradually abate. Approximately 50%–60% of people who experience an episode of major depressive disorder will have a second episode at some point in the future; those who have had two episodes have a 70% chance of having a third episode, and those who have had three episodes have a 90% chance of having a fourth episode (Rothschild, 1999). Although the episodes can last for months, a majority of people diagnosed with this condition (around 70%) recover within a year. However, a substantial number do not recover; around 12% show serious signs of impairment associated with major depressive disorder after 5 years (Boland & Keller, 2009). In the long-term, many who do recover will still show minor symptoms that fluctuate in their severity (Judd, 2012). Results of Major Depressive Disorder Major depressive disorder is a serious and incapacitating condition that can have a devastating effect on the quality of one’s life. The person suffering from this disorder lives a profoundly miserable existence that often results in unavailability for work or education, abandonment of promising careers, and lost wages; occasionally, the condition requires hospitalization. The majority of those with major depressive disorder report having faced some kind of discrimination, and many report that having received such treatment has stopped them from initiating close relationships, applying for jobs for which they are qualified, and applying for education or training (Lasalvia et al., 2013). Major depressive disorder also takes a toll on health. Depression is a risk factor for the development of heart disease in healthy patients, as well as adverse cardiovascular outcomes in patients with preexisting heart disease (Whooley, 2006). Risk Factors for Major Depressive Disorder Major depressive disorder is often referred to as the common cold of psychiatric disorders. Around \(6.6\%\) of the U.S. population experiences major depressive disorder each year; \(16.9\%\) will experience the disorder during their lifetime (Kessler & Wang, 2009). It is more common among women than among men, affecting approximately \(20\%\) of women and \(13\%\) of men at some point in their life (National Comorbidity Survey, 2007). The greater risk among women is not accounted for by a tendency to report symptoms or to seek help more readily, suggesting that gender differences in the rates of major depressive disorder may reflect biological and gender-related environmental experiences (Kessler, 2003). 
Lifetime rates of major depressive disorder tend to be highest in North and South America, Europe, and Australia; they are considerably lower in Asian countries (Hasin, Fenton, & Weissman, 2011). The rates of major depressive disorder are higher among younger age cohorts than among older cohorts, perhaps because people in younger age cohorts are more willing to admit depression (Kessler & Wang, 2009). A number of risk factors are associated with major depressive disorder: unemployment (including homemakers); earning less than \(\$20,000\) per year; living in urban areas; or being separated, divorced, or widowed (Hasin et al., 2011). Comorbid disorders include anxiety disorders and substance abuse disorders (Kessler & Wang, 2009). Subtypes of Depression The DSM-5 lists several different subtypes of depression. These subtypes—what the DSM-5 refers to as specifiers—are not specific disorders; rather, they are labels used to indicate specific patterns of symptoms or to specify certain periods of time in which the symptoms may be present. One subtype, seasonal pattern, applies to situations in which a person experiences the symptoms of major depressive disorder only during a particular time of year (e.g., fall or winter). In everyday language, people often refer to this subtype as the winter blues. Another subtype, peripartum onset (commonly referred to as postpartum depression), applies to women who experience major depression during pregnancy or in the four weeks following the birth of their child (APA, 2013). These women often feel very anxious and may even have panic attacks. They may feel guilty and agitated and may be weepy. They may not want to hold or care for their newborn, even in cases in which the pregnancy was desired and intended. In extreme cases, the mother may have feelings of wanting to harm her child or herself. In a horrific illustration, a woman named Andrea Yates, who suffered from extreme peripartum-onset depression (as well as other mental illnesses), drowned her five children in a bathtub (Roche, 2002). Most women with peripartum-onset depression do not physically harm their children, but most do have difficulty being adequate caregivers (Fields, 2010). A surprisingly high number of women experience symptoms of peripartum-onset depression. A study of \(10,000\) women who had recently given birth found that \(14\%\) screened positive for peripartum-onset depression, and that nearly \(20\%\) reported having thoughts of wanting to harm themselves (Wisner et al., 2013). People with persistent depressive disorder (previously known as dysthymia) experience depressed moods most of the day nearly every day for at least two years, as well as at least two of the other symptoms of major depressive disorder. People with persistent depressive disorder are chronically sad and melancholy, but do not meet all the criteria for major depression. However, episodes of full-blown major depressive disorder can occur during persistent depressive disorder (APA, 2013). Bipolar Disorder A person with bipolar disorder (commonly known as manic depression) often experiences mood states that vacillate between depression and mania; that is, the person’s mood is said to alternate from one emotional extreme to the other (in contrast to unipolar, which indicates a persistently sad mood).
To be diagnosed with bipolar disorder, a person must have experienced a manic episode at least once in his life; although major depressive episodes are common in bipolar disorder, they are not required for a diagnosis (APA, 2013). According to the DSM-5, a manic episode is characterized as a “distinct period of abnormally and persistently elevated, expansive, or irritable mood and abnormally and persistently increased activity or energy lasting at least one week,” that lasts most of the time each day (APA, 2013, p. 124). During a manic episode, some experience a mood that is almost euphoric and become excessively talkative, sometimes spontaneously starting conversations with strangers; others become excessively irritable and complain or make hostile comments. The person may talk loudly and rapidly, exhibiting flight of ideas, abruptly switching from one topic to another. These individuals are easily distracted, which can make a conversation very difficult. They may exhibit grandiosity, in which they experience inflated but unjustified self-esteem and self-confidence. For example, they might quit a job in order to “strike it rich” in the stock market, despite lacking the knowledge, experience, and capital for such an endeavor. They may take on several tasks at the same time (e.g., several time-consuming projects at work) and yet show little, if any, need for sleep; some may go for days without sleep. Patients may also recklessly engage in pleasurable activities that could have harmful consequences, including spending sprees, reckless driving, making foolish investments, excessive gambling, or engaging in sexual encounters with strangers (APA, 2013). During a manic episode, individuals usually feel as though they are not ill and do not need treatment. However, the reckless behaviors that often accompany these episodes—which can be antisocial, illegal, or physically threatening to others—may require involuntary hospitalization (APA, 2013). Some patients with bipolar disorder will experience a rapid-cycling subtype, which is characterized by at least four manic episodes (or some combination of at least four manic and major depressive episodes) within one year. Link to Learning In the 1997 independent film Sweetheart, actress Janeane Garofalo plays the part of Jasmine, a young woman with bipolar disorder. Watch this firsthand account from a person living with bipolar disorder to learn more. Risk Factors for Bipolar Disorder Bipolar disorder is considerably less frequent than major depressive disorder. In the United States, \(1\) out of every \(167\) people meets the criteria for bipolar disorder each year, and \(1\) out of \(100\) meet the criteria within their lifetime (Merikangas et al., 2011). The rates are higher in men than in women, and about half of those with this disorder report onset before the age of \(25\) (Merikangas et al., 2011). Around \(90\%\) of those with bipolar disorder have a comorbid disorder, most often an anxiety disorder or a substance abuse problem. Unfortunately, close to half of the people suffering from bipolar disorder do not receive treatment (Merikangas & Tohen, 2011). Suicide rates are extremely high among those with bipolar disorder: around \(36\%\) of individuals with this disorder attempt suicide at least once in their lifetime (Novick, Swartz, & Frank, 2010), and between \(15\%-19\%\) complete suicide (Newman, 2004). The Biological Basis of Mood Disorders Mood disorders have been shown to have a strong genetic and biological basis. 
Relatives of those with major depressive disorder have double the risk of developing major depressive disorder, whereas relatives of patients with bipolar disorder have over nine times the risk (Merikangas et al., 2011). The rate of concordance for major depressive disorder is higher among identical twins than fraternal twins (\(50\%\) vs. \(38\%\), respectively), as is that of bipolar disorder (\(67\%\) vs. \(16\%\), respectively), suggesting that genetic factors play a stronger role in bipolar disorder than in major depressive disorder (Merikangas et al., 2011). People with mood disorders often have imbalances in certain neurotransmitters, particularly norepinephrine and serotonin (Thase, 2009). These neurotransmitters are important regulators of the bodily functions that are disrupted in mood disorders, including appetite, sex drive, sleep, arousal, and mood. Medications that are used to treat major depressive disorder typically boost serotonin and norepinephrine activity, whereas lithium—used in the treatment of bipolar disorder—blocks norepinephrine activity at the synapses (See figure 15.16 below). Depression is linked to abnormal activity in several regions of the brain (Fitzgerald, Laird, Maller, & Daskalakis, 2008) including those important in assessing the emotional significance of stimuli and experiencing emotions (amygdala), and in regulating and controlling emotions (like the prefrontal cortex, or PFC) (LeMoult, Castonguay, Joormann, & McAleavey, 2013). Depressed individuals show elevated amygdala activity (Drevets, Bogers, & Raichle, 2002), especially when presented with negative emotional stimuli, such as photos of sad faces (See figure 15.17) (Surguladze et al., 2005). Interestingly, heightened amygdala activation to negative emotional stimuli among depressed persons occurs even when stimuli are presented outside of conscious awareness (Victor, Furey, Fromm, Öhman, & Drevets, 2010), and it persists even after the negative emotional stimuli are no longer present (Siegle, Thompson, Carter, Steinhauer, & Thase, 2007). Additionally, depressed individuals exhibit less activation in the prefrontal cortex, particularly on the left side (Davidson, Pizzagalli, & Nitschke, 2009). Because the PFC can dampen amygdala activation, thereby enabling one to suppress negative emotions (Phan et al., 2005), decreased activation in certain regions of the PFC may inhibit its ability to override negative emotions, which might then lead to more negative mood states (Davidson et al., 2009). These findings suggest that depressed persons are more prone to react to emotionally negative stimuli, yet have greater difficulty controlling these reactions. Since the 1950s, researchers have noted that depressed individuals have abnormal levels of cortisol, a stress hormone released into the blood by the neuroendocrine system during times of stress (Mackin & Young, 2004). When cortisol is released, the body initiates a fight-or-flight response in reaction to a threat or danger. Many people with depression show elevated cortisol levels (Holsboer & Ising, 2010), especially those reporting a history of early life trauma such as the loss of a parent or abuse during childhood (Baes, Tofoli, Martins, & Juruena, 2012). Such findings raise the question of whether high cortisol levels are a cause or a consequence of depression.
High levels of cortisol are a risk factor for future depression (Halligan, Herbert, Goodyer, & Murray, 2007), and cortisol activates activity in the amygdala while deactivating activity in the PFC (McEwen, 2005)—both brain disturbances are connected to depression. Thus, high cortisol levels may have a causal effect on depression, as well as on its brain function abnormalities (van Praag, 2005). Also, because stress results in increased cortisol release (Michaud, Matheson, Kelly, Anisman, 2008), it is equally reasonable to assume that stress may precipitate depression. A Diathesis-Stress Model and Major Depressive Disorders Indeed, it has long been believed that stressful life events can trigger depression, and research has consistently supported this conclusion (Mazure, 1998). Stressful life events include significant losses, such as death of a loved one, divorce or separation, and serious health and money problems; life events such as these often precede the onset of depressive episodes (Brown & Harris, 1989). In particular, exit events—instances in which an important person departs (e.g., a death, divorce or separation, or a family member leaving home)—often occur prior to an episode (Paykel, 2003). Exit events are especially likely to trigger depression if these happenings occur in a way that humiliates or devalues the individual. For example, people who experience the breakup of a relationship initiated by the other person develop major depressive disorder at a rate more than \(2\) times that of people who experience the death of a loved one (Kendler, Hettema, Butera, Gardner, & Prescott, 2003). Likewise, individuals who are exposed to traumatic stress during childhood—such as separation from a parent, family turmoil, and maltreatment (physical or sexual abuse)—are at a heightened risk of developing depression at any point in their lives (Kessler, 1997). A recent review of \(16\) studies involving over \(23,000\) subjects concluded that those who experience childhood maltreatment are more than \(2\) times as likely to develop recurring and persistent depression (Nanni, Uher, & Danese, 2012). Of course, not everyone who experiences stressful life events or childhood adversities succumbs to depression—indeed, most do not. Clearly, a diathesis-stress interpretation of major depressive disorder, in which certain predispositions or vulnerability factors influence one’s reaction to stress, would seem logical. If so, what might such predispositions be? A study by Caspi and others (2003) suggests that an alteration in a specific gene that regulates serotonin (the 5-HTTLPR gene) might be one culprit. These investigators found that people who experienced several stressful life events were significantly more likely to experience episodes of major depression if they carried one or two short versions of this gene than if they carried two long versions. Those who carried one or two short versions of the 5-HTTLPR gene were unlikely to experience an episode, however, if they had experienced few or no stressful life events. Numerous studies have replicated these findings, including studies of people who experienced maltreatment during childhood (Goodman & Brand, 2009). 
In a recent investigation conducted in the United Kingdom (Brown & Harris, 2013), researchers found that childhood maltreatment before age 9 elevated the risk of chronic adult depression (a depression episode lasting for at least \(12\) months) among those individuals having one (LS) or two (SS) short versions of the 5-HTTLPR gene (See figure 15.18). Childhood maltreatment did not increase the risk for chronic depression for those having two long (LL) versions of this gene. Thus, genetic vulnerability may be one mechanism through which stress potentially leads to depression. Cognitive Theories of Depression Cognitive theories of depression take the view that depression is triggered by negative thoughts, interpretations, self-evaluations, and expectations (Joormann, 2009). These diathesis-stress models propose that depression is triggered by a “cognitive vulnerability” (negative and maladaptive thinking) and by precipitating stressful life events (Gotlib & Joormann, 2010). Perhaps the most well-known cognitive theory of depression was developed in the 1960s by psychiatrist Aaron Beck, based on clinical observations and supported by research (Beck, 2008). Beck theorized that depression-prone people possess depressive schemas, or mental predispositions to think about most things in a negative way (Beck, 1976). Depressive schemas contain themes of loss, failure, rejection, worthlessness, and inadequacy, and may develop early in childhood in response to adverse experiences, then remain dormant until they are activated by stressful or negative life events. Depressive schemas prompt dysfunctional and pessimistic thoughts about the self, the world, and the future. Beck believed that this dysfunctional style of thinking is maintained by cognitive biases, or errors in how we process information about ourselves, which lead us to focus on negative aspects of experiences, interpret things negatively, and block positive memories (Beck, 2008). A person whose depressive schema consists of a theme of rejection might be overly attentive to social cues of rejection (more likely to notice another’s frown), and he might interpret this cue as a sign of rejection and automatically remember past incidents of rejection. Longitudinal studies have supported Beck’s theory by showing that a preexisting tendency to engage in this negative, self-defeating style of thinking—when combined with life stress—over time predicts the onset of depression (Dozois & Beck, 2008). Cognitive therapies for depression, aimed at changing a depressed person’s negative thinking, were developed as an expansion of this theory (Beck, 1976). Another cognitive theory of depression, hopelessness theory, postulates that a particular style of negative thinking leads to a sense of hopelessness, which then leads to depression (Abramson, Metalsky, & Alloy, 1989). According to this theory, hopelessness is an expectation that unpleasant outcomes will occur or that desired outcomes will not occur, and that there is nothing one can do to prevent such outcomes. A key assumption of this theory is that hopelessness stems from a tendency to perceive negative life events as having stable (“It’s never going to change”) and global (“It’s going to affect my whole life”) causes, in contrast to unstable (“It’s fixable”) and specific (“It applies only to this particular situation”) causes, especially if these negative life events occur in important life realms, such as relationships, academic achievement, and the like.
Suppose a student who wishes to go to law school does poorly on an admissions test. If the student infers that negative life events have stable and global causes, she may believe that her poor performance has a stable and global cause (“I lack intelligence, and it’s going to prevent me from ever finding a meaningful career”), as opposed to an unstable and specific cause (“I was sick the day of the exam, so my low score was a fluke”). Hopelessness theory predicts that people who exhibit this cognitive style in response to undesirable life events will view such events as having negative implications for their future and self-worth, thereby increasing the likelihood of hopelessness—the primary cause of depression (Abramson et al., 1989). One study testing hopelessness theory measured the tendency to make negative inferences about bad life events in participants who were experiencing uncontrollable stressors. Over the ensuing six months, those with scores reflecting high cognitive vulnerability were 7 times more likely to develop depression compared to those with lower scores (Kleim, Gonzalo, & Ehlers, 2011). A third cognitive theory of depression focuses on how people’s thoughts about their distressed moods—depressed symptoms in particular—can increase the risk and duration of depression. This theory, which focuses on rumination in the development of depression, was first described in the late 1980s to explain the higher rates of depression in women than in men (Nolen-Hoeksema, 1987). Rumination is the repetitive and passive focus on the fact that one is depressed and dwelling on depressed symptoms, rather than distracting oneself from the symptoms or attempting to address them in an active, problem-solving manner (Nolen-Hoeksema, 1991). When people ruminate, they have thoughts such as “Why am I so unmotivated? I just can’t get going. I’m never going to get my work done feeling this way” (Nolen-Hoeksema & Hilt, 2009, p. 393). Women are more likely than men to ruminate when they are sad or depressed (Butler & Nolen-Hoeksema, 1994), and the tendency to ruminate is associated with increases in depression symptoms (Nolen-Hoeksema, Larson, & Grayson, 1999), heightened risk of major depressive episodes (Abela & Hankin, 2011), and chronicity of such episodes (Robinson & Alloy, 2003). Suicide For some people with mood disorders, the extreme emotional pain they experience becomes unendurable. Overwhelmed by hopelessness, devastated by incapacitating feelings of worthlessness, and burdened with the inability to adequately cope with such feelings, they may consider suicide to be a reasonable way out. Suicide, defined by the CDC as “death caused by self-directed injurious behavior with any intent to die as the result of the behavior” (CDC, 2013a), in a sense represents an outcome of several things going wrong all at the same time (Crosby, Ortega, & Melanson, 2011). Not only must the person be biologically or psychologically vulnerable, but he must also have the means to perform the suicidal act, and he must lack the necessary protective factors (e.g., social support from friends and family, religion, coping skills, and problem-solving skills) that provide comfort and enable one to cope during times of crisis or great psychological pain (Berman, 2009). Suicide is not listed as a disorder in the DSM-5; however, suffering from a mental disorder—especially a mood disorder—poses the greatest risk for suicide.
Around 90% of those who complete suicide have a diagnosis of at least one mental disorder, with mood disorders being the most frequent (Fleischman, Bertolote, Belfer, & Beautrais, 2005). In fact, the association between major depressive disorder and suicide is so strong that one of the criteria for the disorder is thoughts of suicide, as discussed above (APA, 2013). Suicide rates can be difficult to interpret because some deaths that appear to be accidental may in fact be acts of suicide (e.g., automobile crash). Nevertheless, investigations into U.S. suicide rates have uncovered these facts: • Suicide was the \(10^{th}\) leading cause of death for all ages in 2010 (Centers for Disease Control and Prevention [CDC], 2012). • There were \(38,364\) suicides in 2010 in the United States—an average of \(105\) each day (CDC, 2012). • Suicide among males is \(4\) times higher than among females and accounts for \(79\%\) of all suicides; firearms are the most commonly used method of suicide for males, whereas poisoning is the most commonly used method for females (CDC, 2012). • From 1991 to 2003, suicide rates were consistently higher among those \(65\) years and older. Since 2001, however, suicide rates among those ages \(25-64\) have risen consistently, and, since 2006, suicide rates have been greater for those ages \(45-64\) than for those ages \(65\) and older (CDC, 2013b). This increase in suicide rates among middle-aged Americans has prompted concern in some quarters that baby boomers (individuals born between 1946 and 1964) who face economic worry and easy access to prescription medication may be particularly vulnerable to suicide (Parker-Pope, 2013). • The highest rates of suicide within the United States are among American Indians/Alaskan natives and Non-Hispanic Whites (CDC, 2013b). • Suicide rates vary across the United States, with the highest rates consistently found in the mountain states of the west (Alaska, Montana, Nevada, Wyoming, Colorado, and Idaho) (Berman, 2009). Contrary to popular belief, suicide rates peak during the springtime (April and May), not during the holiday season or winter. In fact, suicide rates are generally lowest during the winter months (Postolache et al., 2010). Risk Factors for Suicide Suicidal risk is especially high among people with substance abuse problems. Individuals with alcohol dependence are at \(10\) times greater risk for suicide than the general population (Wilcox, Conner, & Caine, 2004). The risk of suicidal behavior is especially high among those who have made a prior suicide attempt. Among those who attempt suicide, \(16\%\) make another attempt within a year and over \(21\%\) make another attempt within four years (Owens, Horrocks, & House, 2002). Suicidal individuals may be at high risk for terminating their life if they have a lethal means with which to act, such as a firearm in the home (Brent & Bridge, 2003). Withdrawal from social relationships, feeling as though one is a burden to others, and engaging in reckless and risk-taking behaviors may be precursors to suicidal behavior (Berman, 2009). A sense of entrapment or feeling unable to escape one’s miserable feelings or external circumstances (e.g., an abusive relationship with no perceived way out) predicts suicidal behavior (O’Connor, Smyth, Ferguson, Ryan, & Williams, 2013). Tragically, reports of suicides among adolescents following instances of cyberbullying have emerged in recent years.
In one widely publicized case a few years ago, Phoebe Prince, a \(15\)-year-old Massachusetts high school student, committed suicide following incessant harassment and taunting from her classmates via texting and Facebook (McCabe, 2010). Suicides can have a contagious effect on people. For example, another’s suicide, especially that of a family member, heightens one’s risk of suicide (Agerbo, Nordentoft, & Mortensen, 2002). Additionally, widely publicized suicides tend to trigger copycat suicides in some individuals. One study examining suicide statistics in the United States from 1947–1967 found that the rates of suicide skyrocketed for the first month after a suicide story was printed on the front page of the New York Times (Phillips, 1974). Austrian researchers found a significant increase in the number of suicides by firearms in the three weeks following extensive reports in Austria’s largest newspaper of a celebrity suicide by gun (Etzersdorfer, Voracek, & Sonneck, 2004). A review of \(42\) studies concluded that media coverage of celebrity suicides is more than \(14\) times more likely to trigger copycat suicides than is coverage of non-celebrity suicides (Stack, 2000). This review also demonstrated that the medium of coverage is important: televised stories are considerably less likely to prompt a surge in suicides than are newspaper stories. Research suggests an emerging trend whereby people use online social media to leave suicide notes, although it is not clear to what extent suicide notes on such media might induce copycat suicides (Ruder, Hatch, Ampanozi, Thali, & Fischer, 2011). Nevertheless, it is reasonable to conjecture that suicide notes left by individuals on social media may influence the decisions of other vulnerable people who encounter them (Luxton, June, & Fairall, 2012). One possible contributing factor in suicide is brain chemistry. Contemporary neurological research shows that disturbances in the functioning of serotonin are linked to suicidal behavior (Pompili et al., 2010). Low levels of serotonin predict future suicide attempts and suicide completions, and low levels have been observed post-mortem among suicide victims (Mann, 2003). Serotonin dysfunction, as noted earlier, is also known to play an important role in depression; low levels of serotonin have also been linked to aggression and impulsivity (Stanley et al., 2000). The combination of these three characteristics constitutes a potential formula for suicide—especially violent suicide. A classic study conducted during the 1970s found that patients with major depressive disorder who had very low levels of serotonin attempted suicide more frequently and more violently than did patients with higher levels (Asberg, Thorén, Träskman, Bertilsson, & Ringberger, 1976; Mann, 2003). Suicidal thoughts, plans, and even off-hand remarks (“I might kill myself this afternoon”) should always be taken extremely seriously. People who contemplate terminating their life need immediate help. Below are links to two excellent websites that contain resources (including hotlines) for people who are struggling with suicidal ideation, have loved ones who may be suicidal, or who have lost loved ones to suicide: http://www.afsp.org and http://suicidology.org.
Learning Objectives
• Recognize the essential nature of schizophrenia, avoiding the misconception that it involves a split personality
• Categorize and describe the major symptoms of schizophrenia
• Understand the interplay between genetic, biological, and environmental factors that are associated with the development of schizophrenia
• Discuss the importance of research examining prodromal symptoms of schizophrenia
Schizophrenia is a devastating psychological disorder that is characterized by major disturbances in thought, perception, emotion, and behavior. About \(1\%\) of the population experiences schizophrenia in their lifetime, and usually the disorder is first diagnosed during early adulthood (early to mid-\(20s\)). Most people with schizophrenia experience significant difficulties in many day-to-day activities, such as holding a job, paying bills, caring for oneself (grooming and hygiene), and maintaining relationships with others. Frequent hospitalizations are the rule rather than the exception with schizophrenia. Even when they receive the best treatments available, many with schizophrenia will continue to experience serious social and occupational impairment throughout their lives. What is schizophrenia? First, schizophrenia is not a condition involving a split personality; that is, schizophrenia is not the same thing as dissociative identity disorder (better known as multiple personality disorder). These disorders are sometimes confused because the word schizophrenia, first coined by the Swiss psychiatrist Eugen Bleuler in 1911, derives from Greek words that refer to a “splitting” (schizo) of psychic functions (phrene) (Green, 2001). Schizophrenia is considered a psychotic disorder, or one in which the person’s thoughts, perceptions, and behaviors are impaired to the point where she is not able to function normally in life. In informal terms, one who suffers from a psychotic disorder (that is, has a psychosis) is disconnected from the world in which most of us live. Symptoms of Schizophrenia The main symptoms of schizophrenia include hallucinations, delusions, disorganized thinking, disorganized or abnormal motor behavior, and negative symptoms (APA, 2013). A hallucination is a perceptual experience that occurs in the absence of external stimulation. Auditory hallucinations (hearing voices) occur in roughly two-thirds of patients with schizophrenia and are by far the most common form of hallucination (Andreasen, 1987). The voices may be familiar or unfamiliar, they may have a conversation or argue, or the voices may provide a running commentary on the person’s behavior (Tsuang, Farone, & Green, 1999). Less common are visual hallucinations (seeing things that are not there) and olfactory hallucinations (smelling odors that are not actually present). Delusions are beliefs that are contrary to reality and are firmly held even in the face of contradictory evidence. Many of us hold beliefs that some would consider odd, but a delusion is easily identified because it is clearly absurd. A person with schizophrenia may believe that his mother is plotting with the FBI to poison his coffee, or that his neighbor is an enemy spy who wants to kill him. These kinds of delusions are known as paranoid delusions, which involve the (false) belief that other people or agencies are plotting to harm the person. People with schizophrenia also may hold grandiose delusions, beliefs that one holds special power, unique knowledge, or is extremely important.
For example, the person who claims to be Jesus Christ, or who claims to have knowledge going back 5,000 years, or who claims to be a great philosopher is experiencing grandiose delusions. Other delusions include the belief that one’s thoughts are being removed (thought withdrawal) or thoughts have been placed inside one’s head (thought insertion). Another type of delusion is somatic delusion, which is the belief that something highly abnormal is happening to one’s body (e.g., that one’s kidneys are being eaten by cockroaches). Disorganized thinking refers to disjointed and incoherent thought processes—usually detected by what a person says. The person might ramble, exhibit loose associations (jump from topic to topic), or talk in a way that is so disorganized and incomprehensible that it seems as though the person is randomly combining words. Disorganized thinking is also exhibited by blatantly illogical remarks (e.g., “Fenway Park is in Boston. I live in Boston. Therefore, I live at Fenway Park.”) and by tangentiality: responding to others’ statements or questions by remarks that are either barely related or unrelated to what was said or asked. For example, if a person diagnosed with schizophrenia is asked if she is interested in receiving special job training, she might state that she once rode on a train somewhere. To a person with schizophrenia, the tangential (slightly related) connection between job training and riding a train is sufficient to prompt such a response. Disorganized or abnormal motor behavior refers to unusual behaviors and movements: becoming unusually active, exhibiting silly child-like behaviors (giggling and self-absorbed smiling), engaging in repeated and purposeless movements, or displaying odd facial expressions and gestures. In some cases, the person will exhibit catatonic behaviors, which show decreased reactivity to the environment, such as posturing, in which the person maintains a rigid and bizarre posture for long periods of time, or catatonic stupor, a complete lack of movement and verbal behavior. Negative symptoms are those that reflect noticeable decreases and absences in certain behaviors, emotions, or drives (Green, 2001). A person who exhibits diminished emotional expression shows no emotion in his facial expressions, speech, or movements, even when such expressions are normal or expected. Avolition is characterized by a lack of motivation to engage in self-initiated and meaningful activity, including the most basic of tasks, such as bathing and grooming. Alogia refers to reduced speech output; in simple terms, patients do not say much. Another negative symptom is asociality, or social withdrawal and lack of interest in engaging in social interactions with others. A final negative symptom, anhedonia, refers to an inability to experience pleasure. One who exhibits anhedonia expresses little interest in what most people consider to be pleasurable activities, such as hobbies, recreation, or sexual activity. Link to Learning Watch this video of schizophrenia case studies and try to identify which classic symptoms of schizophrenia are shown. Causes of Schizophrenia There is considerable evidence suggesting that schizophrenia has a genetic basis. The risk of developing schizophrenia is nearly \(6\) times greater if one has a parent with schizophrenia than if one does not (Goldstein, Buka, Seidman, & Tsuang, 2010).
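To put that figure in perspective, here is a rough back-of-the-envelope illustration that simply combines two numbers already given in this section (the roughly \(1\%\) lifetime prevalence in the general population and the nearly sixfold increase in risk associated with having an affected parent); it is not a value reported by the cited studies. Multiplying the two gives \(0.01 \times 6 = 0.06\), or about a \(6\%\) lifetime risk for a person with one parent who has schizophrenia. Even at this elevated level of risk, the large majority of such children never develop the disorder, which is consistent with the point developed below that genes alone do not tell the complete picture.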
Additionally, one’s risk of developing schizophrenia increases as genetic relatedness to family members diagnosed with schizophrenia increases (Gottesman, 2001). Genes When considering the role of genetics in schizophrenia, as in any disorder, conclusions based on family and twin studies are subject to criticism. This is because family members who are closely related (such as siblings) are more likely to share similar environments than are family members who are less closely related (such as cousins); further, identical twins may be more likely to be treated similarly by others than might fraternal twins. Thus, family and twin studies cannot completely rule out the possible effects of shared environments and experiences. Such problems can be corrected by using adoption studies, in which children are separated from their parents at an early age. One of the first adoption studies of schizophrenia, conducted by Heston (1966), followed \(97\) adoptees, including \(47\) who were born to mothers with schizophrenia, over a \(36\)-year period. Five of the \(47\) adoptees (\(11\%\)) whose mothers had schizophrenia were later diagnosed with schizophrenia, compared to none of the \(50\) control adoptees. Other adoption studies have consistently reported that for adoptees who are later diagnosed with schizophrenia, their biological relatives have a higher risk of schizophrenia than do adoptive relatives (Shih, Belmonte, & Zandi, 2004). Although adoption studies have supported the hypothesis that genetic factors contribute to schizophrenia, they have also demonstrated that the disorder most likely arises from a combination of genetic and environmental factors, rather than from genes alone. For example, investigators in one study examined the rates of schizophrenia among \(303\) adoptees (Tienari et al., 2004). A total of \(145\) of the adoptees had biological mothers with schizophrenia; these adoptees constituted the high genetic risk group. The other \(158\) adoptees had mothers with no psychiatric history; these adoptees composed the low genetic risk group. The researchers also assessed whether the adoptees were raised in healthy or disturbed family environments. For example, the adoptees were considered to be raised in a disturbed family environment if the family exhibited a lot of criticism, conflict, and a lack of problem-solving skills. The findings revealed that adoptees whose mothers had schizophrenia (high genetic risk) and who had been raised in a disturbed family environment were much more likely to develop schizophrenia or another psychotic disorder (\(36.8\%\)) than were adoptees whose biological mothers had schizophrenia but who had been raised in a healthy environment (\(5.8\%\)), or than adoptees with a low genetic risk who were raised in either a disturbed (\(5.3\%\)) or healthy (\(4.8\%\)) environment. Because the adoptees who were at high genetic risk were likely to develop schizophrenia only if they were raised in a disturbed home environment, this study supports a diathesis-stress interpretation of schizophrenia: both genetic vulnerability and environmental stress are necessary for schizophrenia to develop; genes alone do not tell the complete picture. Neurotransmitters If we accept that schizophrenia is at least partly genetic in origin, as it seems to be, it makes sense that the next step should be to identify biological abnormalities commonly found in people with the disorder.
Perhaps not surprisingly, a number of neurobiological factors have indeed been found to be related to schizophrenia. One such factor that has received considerable attention for many years is the neurotransmitter dopamine. Interest in the role of dopamine in schizophrenia was stimulated by two sets of findings: drugs that increase dopamine levels can produce schizophrenia-like symptoms, and medications that block dopamine activity reduce the symptoms (Howes & Kapur, 2009). The dopamine hypothesis of schizophrenia proposed that an overabundance of dopamine or too many dopamine receptors are responsible for the onset and maintenance of schizophrenia (Snyder, 1976). More recent work in this area suggests that abnormalities in dopamine vary by brain region and thus contribute to symptoms in unique ways. In general, this research has suggested that an overabundance of dopamine in the limbic system may be responsible for some symptoms, such as hallucinations and delusions, whereas low levels of dopamine in the prefrontal cortex might be responsible primarily for the negative symptoms (avolition, alogia, asociality, and anhedonia) (Davis, Kahn, Ko, & Davidson, 1991). In recent years, serotonin has received attention, and newer antipsychotic medications used to treat the disorder work by blocking serotonin receptors (Baumeister & Hawkins, 2004). Brain Anatomy Brain imaging studies reveal that people with schizophrenia have enlarged ventricles, the cavities within the brain that contain cerebrospinal fluid (Green, 2001). This finding is important because larger-than-normal ventricles suggest that various brain regions are reduced in size, thus implying that schizophrenia is associated with a loss of brain tissue. In addition, many people with schizophrenia display a reduction in gray matter (cell bodies of neurons) in the frontal lobes (Lawrie & Abukmeil, 1998), and many show less frontal lobe activity when performing cognitive tasks (Buchsbaum et al., 1990). The frontal lobes are important in a variety of complex cognitive functions, such as planning and executing behavior, attention, speech, movement, and problem solving. Hence, abnormalities in this region help explain why people with schizophrenia experience deficits in these areas. Events During Pregnancy Why do people with schizophrenia have these brain abnormalities? A number of environmental factors that could impact normal brain development might be at fault. High rates of obstetric complications in the births of children who later developed schizophrenia have been reported (Cannon, Jones, & Murray, 2002). In addition, people are at an increased risk for developing schizophrenia if their mother was exposed to influenza during the first trimester of pregnancy (Brown et al., 2004). Research has also suggested that a mother’s emotional stress during pregnancy may increase the risk of schizophrenia in offspring. One study reported that the risk of schizophrenia is elevated substantially in offspring whose mothers experienced the death of a relative during the first trimester of pregnancy (Khashan et al., 2008). Marijuana Another variable that is linked to schizophrenia is marijuana use. Although a number of reports have shown that individuals with schizophrenia are more likely to use marijuana than are individuals without schizophrenia (Thornicroft, 1990), such investigations cannot determine if marijuana use leads to schizophrenia, or vice versa.
However, a number of longitudinal studies have suggested that marijuana use is, in fact, a risk factor for schizophrenia. A classic investigation of over \(45,000\) Swedish conscripts who were followed up after \(15\) years found that those individuals who had reported using marijuana at least once by the time of conscription were more than \(2\) times as likely to develop schizophrenia during the ensuing \(15\) years as were those who reported never using marijuana; those who had indicated using marijuana \(50\) or more times were \(6\) times as likely to develop schizophrenia (Andréasson, Allbeck, Engström, & Rydberg, 1987). More recently, a review of \(35\) longitudinal studies found a substantially increased risk of schizophrenia and other psychotic disorders in people who had used marijuana, with the greatest risk in the most frequent users (Moore et al., 2007). Other work has found that marijuana use is associated with an onset of psychotic disorders at an earlier age (Large, Sharma, Compton, Slade, & Nielssen, 2011). Overall, the available evidence seems to indicate that marijuana use plays a causal role in the development of schizophrenia, although it is important to point out that marijuana use is not an essential or sufficient risk factor, as not all people with schizophrenia have used marijuana and the majority of marijuana users do not develop schizophrenia (Casadio, Fernandes, Murray, & Di Forti, 2011). One plausible interpretation of the data is that early marijuana use may disrupt normal brain development during important early maturation periods in adolescence (Trezza, Cuomo, & Vanderschuren, 2008). Thus, early marijuana use may set the stage for the development of schizophrenia and other psychotic disorders, especially among individuals with an established vulnerability (Casadio et al., 2011). Schizophrenia: Early Warning Signs Early detection and treatment of conditions such as heart disease and cancer have improved survival rates and quality of life for people who suffer from these conditions. A new approach involves identifying people who show minor symptoms of psychosis, such as unusual thought content, paranoia, odd communication, delusions, problems at school or work, and a decline in social functioning—collectively termed prodromal symptoms—and following these individuals over time to determine which of them develop a psychotic disorder and which factors best predict such a disorder. A number of factors have been identified that predict a greater likelihood that prodromal individuals will develop a psychotic disorder: genetic risk (a family history of psychosis), recent deterioration in functioning, high levels of unusual thought content, high levels of suspicion or paranoia, poor social functioning, and a history of substance abuse (Fusar-Poli et al., 2013). Further research will enable a more accurate prediction of those at greatest risk for developing schizophrenia and thus indicate to whom early intervention efforts should be directed. Dig Deeper: Forensic Psychology In August 2013, 17-year-old Cody Metzker-Madsen attacked 5-year-old Dominic Elkins on his foster parents’ property. Believing that he was fighting goblins and that Dominic was the goblin commander, Metzker-Madsen beat Dominic with a brick and then held him face down in a creek. Dr. Alan Goldstein, a clinical and forensic psychologist, testified that Metzker-Madsen believed that the goblins he saw were real and was not aware that it was Dominic at the time.
He was found not guilty by reason of insanity and was not held legally responsible for Dominic's death (Nelson, 2014). Cody was also found to be a danger to himself or others. He will be held in a psychiatric facility until he is judged to be no longer dangerous. This does not mean that he "got away with" anything. In fact, according to the American Psychiatric Association, individuals who are found not guilty by reason of insanity are often confined to psychiatric hospitals for as long as or longer than they would have spent in prison for a conviction. Most people with mental illness are not violent. Only 3–5% of violent acts are committed by individuals diagnosed with severe mental illness, whereas individuals with severe mental illnesses are more than 10 times as likely as members of the general population to be victims of crime (MentalHealth.gov, 2017). The psychologists who work with individuals such as Metzker-Madsen are part of the subdiscipline of forensic psychology. Forensic psychologists are involved in psychological assessment and treatment of individuals involved with the legal system. They use their knowledge of human behavior and mental illness to assist the judicial and legal system in making decisions in cases involving such issues as personal injury suits, workers' compensation, competency to stand trial, and pleas of not guilty by reason of insanity.
Learning Objectives
• Describe the nature and symptoms of attention deficit/hyperactivity disorder and autism spectrum disorder
• Discuss the prevalence and factors that contribute to the development of these disorders
Most of the disorders we have discussed so far are typically diagnosed in adulthood, although they can and sometimes do occur during childhood. However, there is a group of conditions that, when present, are diagnosed early in childhood, often before the time a child enters school. These conditions are listed in the DSM-5 as neurodevelopmental disorders, and they involve developmental problems in personal, social, academic, and intellectual functioning (APA, 2013). While they are often diagnosed in childhood, many people live with them throughout adulthood. In this section, we will discuss two such disorders: attention deficit/hyperactivity disorder and autism. Attention Deficit/Hyperactivity Disorder Diego is always active, from the time he wakes up in the morning until the time he goes to bed at night. His mother reports that he came out of the womb kicking and screaming, and he has not stopped moving since. He has a sweet disposition, but always seems to be in trouble with his teachers, parents, and after-school program counselors. He seems to accidentally break things; he lost his jacket three times last winter, and he never seems to sit still. His teachers believe he is a smart child, but he never finishes anything he starts and is so impulsive that he does not seem to learn much in school. Diego likely has attention deficit/hyperactivity disorder (ADHD). The symptoms of this disorder were first described by Hans Hoffman in the 1920s. While taking care of his son while his wife was in the hospital giving birth to a second child, Hoffman noticed that the boy had trouble concentrating on his homework, had a short attention span, and had to repeatedly go over easy homework to learn the material (Jellinek & Herzog, 1999). Later, it was discovered that many hyperactive children—those who are fidgety, restless, socially disruptive, and have trouble with impulse control—also display short attention spans, problems with concentration, and distractibility. By the 1970s, it had become clear that many children who display attention problems often also exhibit signs of hyperactivity. In recognition of such findings, the DSM-III (published in 1980) included a new disorder: attention deficit disorder with and without hyperactivity, now known as attention deficit/hyperactivity disorder (ADHD). A child with ADHD shows a constant pattern of inattention and/or hyperactive and impulsive behavior that interferes with normal functioning (APA, 2013). Some of the signs of inattention include great difficulty with and avoidance of tasks that require sustained attention (such as conversations or reading), failure to follow instructions (often resulting in failure to complete school work and other duties), disorganization (difficulty keeping things in order, poor time management, sloppy and messy work), lack of attention to detail, becoming easily distracted, and forgetfulness. Hyperactivity is characterized by excessive movement, and includes fidgeting or squirming, leaving one’s seat in situations when remaining seated is expected, having trouble sitting still (e.g., in a restaurant), running about and climbing on things, blurting out responses before another person’s question or statement has been completed, difficulty waiting one’s turn for something, and interrupting and intruding on others.
Frequently, the hyperactive child comes across as noisy and boisterous. The child’s behavior is hasty, impulsive, and seems to occur without much forethought; these characteristics may explain why adolescents and young adults diagnosed with ADHD receive more traffic tickets and have more automobile accidents than do others (Thompson, Molina, Pelham, & Gnagy, 2007). ADHD occurs in about 8% of children (Danielson et al., 2016), and studies estimate that for about 60% of these people, ADHD continues into adulthood (Sibley et al., 2016). On average, boys are 3 times more likely to have ADHD than are girls; however, such findings might reflect the greater propensity of boys to engage in aggressive and antisocial behavior and thus incur a greater likelihood of being referred to psychological clinics (Barkley, 2006). Children with ADHD face severe academic and social challenges. Compared to their non-ADHD counterparts, children with ADHD have lower grades and standardized test scores and higher rates of expulsion, grade retention, and dropping out (Loe & Feldman, 2007). They also are less well-liked and more often rejected by their peers (Hoza et al., 2005). A recent study found that nearly 81% of those whose ADHD persisted into adulthood had experienced at least one other comorbid disorder, compared to 47% of those whose ADHD did not persist (Barbaresi et al., 2013). Life Problems from ADHD Children with ADHD face considerably worse long-term outcomes than do those children who do not have ADHD. Adults diagnosed with ADHD in childhood, but not treated for ADHD, have been reported to have poor outcomes in a wide range of areas of life, including social function, education, criminality, alcohol use, substance use, and occupational outcomes (Arnold et al., 2015). In one investigation, 135 adults who had been identified as having ADHD symptoms in the 1970s were contacted decades later and interviewed (Klein et al., 2012). Compared to a control sample of 136 participants who had never been diagnosed with ADHD, those who were diagnosed as children:
• had worse educational attainment (more likely to have dropped out of high school and less likely to have earned a bachelor’s degree);
• had lower socioeconomic status;
• held less prestigious occupational positions;
• were more likely to be unemployed;
• made considerably less in salary;
• scored worse on a measure of occupational functioning (indicating, for example, lower job satisfaction, poorer work relationships, and more firings);
• scored worse on a measure of social functioning (indicating, for example, fewer friendships and less involvement in social activities);
• were more likely to be divorced; and
• were more likely to have non-alcohol-related substance abuse problems. (Klein et al., 2012)
Longitudinal studies also show that children diagnosed with ADHD are at higher risk for substance abuse problems. One study reported that childhood ADHD predicted later drinking problems, daily smoking, and use of marijuana and other illicit drugs (Molina & Pelham, 2003). The risk of substance abuse problems appears to be even greater for those with ADHD who also exhibit antisocial tendencies (Marshal & Molina, 2006). Diagnosis, treatment, and general awareness of ADHD have certainly improved in the decades since the people in the above studies were diagnosed. Studies that include more recent outcomes show positive effects of treatment as opposed to non-treatment (Harpin, 2013; Arnold et al., 2015).
In most cases, the same studies indicate that more research and work need to be undertaken to understand the most effective treatments and their impacts. Causes of ADHD Family and twin studies indicate that genetics play a significant role in the development of ADHD. Burt (2009), in a review of 26 studies, reported that the median rate of concordance for identical twins was .66 (one study reported a rate of .90), whereas the median concordance rate for fraternal twins was .20. This study also found that the median concordance rate for unrelated (adoptive) siblings was .09; although this number is small, it is greater than 0, thus suggesting that the environment may have at least some influence. Another review of studies concluded that the heritability of inattention and hyperactivity were 71% and 73%, respectively (Nikolas & Burt, 2010). The specific genes involved in ADHD are thought to include at least two that are important in the regulation of the neurotransmitter dopamine (Gizer, Ficks, & Waldman, 2009), suggesting that dopamine may be important in ADHD. Indeed, medications used in the treatment of ADHD, such as methylphenidate (Ritalin) and amphetamine with dextroamphetamine (Adderall), have stimulant qualities and elevate dopamine activity. People with ADHD show less dopamine activity in key regions of the brain, especially those associated with motivation and reward (Volkow et al., 2009), which supports the theory that dopamine deficits may be a vital factor in the development of this disorder (Swanson et al., 2007). Brain imaging studies have shown that children with ADHD exhibit abnormalities in their frontal lobes, an area in which dopamine is in abundance. Compared to children without ADHD, those with ADHD appear to have smaller frontal lobe volume, and they show less frontal lobe activation when performing mental tasks. Recall that one of the functions of the frontal lobes is to inhibit our behavior. Thus, abnormalities in this region may go a long way toward explaining the hyperactive, uncontrolled behavior of ADHD. By the 1970s, many had become aware of the connection between nutritional factors and childhood behavior. At the time, much of the public believed that hyperactivity was caused by sugar and food additives, such as artificial coloring and flavoring. Undoubtedly, part of the appeal of this hypothesis was that it provided a simple explanation of (and treatment for) behavioral problems in children. A statistical review of 16 studies, however, concluded that sugar consumption has no effect at all on the behavioral and cognitive performance of children (Wolraich, Wilson, & White, 1995). Additionally, although food additives have been shown to increase hyperactivity in non-ADHD children, the effect is rather small (McCann et al., 2007). Numerous studies, however, have shown a significant relationship between exposure to nicotine in cigarette smoke during the prenatal period and ADHD (Linnet et al., 2003). Maternal smoking during pregnancy is associated with the development of more severe symptoms of the disorder (Thakur et al., 2013). Is ADHD caused by poor parenting? No. Remember, the genetics studies discussed above suggested that the family environment does not seem to play much of a role in the development of this disorder; if it did, we would expect the concordance rates to be higher for fraternal twins and adoptive siblings than has been demonstrated.
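As a rough illustration of how twin data of this kind feed into heritability estimates, consider Falconer’s classic approximation, which estimates heritability as twice the difference between identical-twin and fraternal-twin resemblance. Applied naively to the median concordance values above (this is only a back-of-the-envelope sketch; the published estimates of roughly 71% and 73% cited above come from more sophisticated modeling), it gives \(h^2 \approx 2(0.66 - 0.20) = 0.92\). The exact number matters less than the logic: the more identical twins outstrip fraternal twins in similarity, the larger the estimated genetic contribution, while the very low resemblance among adoptive siblings (.09) is what suggests only a modest role for the shared family environment.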
All things considered, the evidence seems to point to the conclusion that ADHD is triggered more by genetic and neurological factors and less by social or environmental ones. Dig Deeper: Why Is the Prevalence Rate of ADHD Increasing? Many people believe that the rates of ADHD have increased in recent years, and there is evidence to support this contention. In a recent study, investigators found that the parent-reported prevalence of ADHD among children (4–17 years old) in the United States increased by 22% during a 4-year period, from 7.8% in 2003 to 9.5% in 2007 (CDC, 2010). Over time this increase in parent-reported ADHD was observed in all sociodemographic groups and was reflected by substantial increases in 12 states (Indiana, North Carolina, and Colorado were the top three). The increases were greatest for older teens (ages 15–17), multiracial and Hispanic children, and children with a primary language other than English. Another investigation found that from 1998–2000 through 2007–2009 the parent-reported prevalence of ADHD increased among U.S. children between the ages of 5–17 years old, from 6.9% to 9.0% (Akinbami, Liu, Pastor, & Reuben, 2011). A major weakness of both studies was that children were not actually given a formal diagnosis. Instead, parents were simply asked whether or not a doctor or other health-care provider had ever told them their child had ADHD; the reported prevalence rates thus may have been affected by the accuracy of parental memory. Nevertheless, the findings from these studies raise important questions concerning what appears to be a demonstrable rise in the prevalence of ADHD. Although the reasons underlying this apparent increase in the rates of ADHD over time are poorly understood and, at best, speculative, several explanations are viable:
• ADHD may be over-diagnosed by doctors who are too quick to medicate children as a behavior treatment.
• There is greater awareness of ADHD now than in the past. Nearly everyone has heard of ADHD, and most parents and teachers are aware of its key symptoms. Thus, parents may be quick to take their children to a doctor if they believe their child possesses these symptoms, or teachers may be more likely now than in the past to notice the symptoms and refer the child for evaluation.
• The use of computers, video games, iPhones, and other electronic devices has become pervasive among children in the early 21st century, and these devices could potentially shorten children’s attention spans. Thus, what might seem like inattention to some parents and teachers could simply reflect exposure to too much technology.
• ADHD diagnostic criteria have changed over time.
Autism Spectrum Disorder A seminal paper published in 1943 by psychiatrist Leo Kanner described an unusual neurodevelopmental condition he observed in a group of children. He called this condition early infantile autism, and it was characterized mainly by an inability to form close emotional ties with others, speech and language abnormalities, repetitive behaviors, and an intolerance of minor changes in the environment and in normal routines (Bregman, 2005). What the DSM-5 today refers to as autism spectrum disorder is a direct extension of Kanner’s work. Autism spectrum disorder is probably the most misunderstood of the neurodevelopmental disorders. Children with this disorder show signs of significant disturbances in three main areas: (a) deficits in social interaction, (b) deficits in communication, and (c) repetitive patterns of behavior or interests.
These disturbances appear early in life and cause serious impairments in functioning (APA, 2013). The child with autism spectrum disorder might exhibit deficits in social interaction by not initiating conversations with other children or turning their head away when spoken to. Typically, these children do not make eye contact with others and seem to prefer playing alone rather than with others. In some cases, it is almost as though these individuals live in a personal and isolated social world that others are simply not privy to or able to penetrate. Communication deficits can range from a complete lack of speech, to one-word responses (e.g., saying “Yes” or “No” when replying to questions or statements that require additional elaboration), to echoed speech (e.g., parroting what another person says, either immediately or several hours or even days later), to difficulty maintaining a conversation because of an inability to reciprocate others’ comments. These deficits can also include problems in using and understanding nonverbal cues (e.g., facial expressions, gestures, and postures) that facilitate normal communication. Repetitive patterns of behavior or interests can be exhibited in a number of ways. The child might engage in stereotyped, repetitive movements (rocking, head-banging, or repeatedly dropping an object and then picking it up), or they might show great distress at small changes in routine or the environment. In some cases, the person with autism spectrum disorder might show highly restricted and fixated interests that appear to be abnormal in their intensity. For instance, the person might learn and memorize every detail about something even though doing so serves no apparent purpose. Importantly, autism spectrum disorder is not the same thing as intellectual disability, although these two conditions are often comorbid. The DSM-5 specifies that the symptoms of autism spectrum disorder are not caused or explained by intellectual disability. Life Problems from Autism Spectrum Disorder Autism spectrum disorder is referred to in everyday language as autism; in fact, the disorder was termed “autistic disorder” in earlier editions of the DSM, and its diagnostic criteria were much narrower than those of autism spectrum disorder. The qualifier “spectrum” in autism spectrum disorder is used to indicate that individuals with the disorder can show a range, or spectrum, of symptoms that vary in their magnitude and severity: some severe, others less severe. The previous edition of the DSM included a diagnosis of Asperger’s disorder, generally recognized as a less severe form of autistic disorder; individuals diagnosed with Asperger’s disorder were described as having average or high intelligence and a strong vocabulary, but exhibiting impairments in social interaction and social communication, such as talking only about their special interests (Wing, Gould, & Gillberg, 2011). However, because research has failed to demonstrate that Asperger’s disorder differs qualitatively from autistic disorder, the DSM-5 does not include it, which is prompting concerns among some parents that their children may no longer be eligible for special services (“Asperger’s Syndrome Dropped,” 2012). Some individuals with autism spectrum disorder, particularly those with better language and intellectual skills, can live and work independently as adults. However, most do not because the symptoms remain sufficient to cause serious impairment in many realms of life (APA, 2013).
Link to Learning Watch this video about early signs of autism to learn more. Current estimates from the Centers for Disease Control and Prevention’s Autism and Developmental Disabilities Monitoring Network indicate that 1 in 59 children in the United States has autism spectrum disorder; the disorder is 4 times more common among boys (1 in 38) than in girls (1 in 152) (Baio et al., 2018). Rates of autism spectrum disorder have increased dramatically since the 1980s. For example, California saw an increase of 273% in reported cases from 1987 through 1998 (Byrd, 2002); between 2000 and 2008, the rate of autism diagnoses in the United States increased 78% (CDC, 2012). Although it is difficult to interpret this increase, it is possible that the rise in prevalence is the result of the broadening of the diagnosis, increased efforts to identify cases in the community, and greater awareness and acceptance of the diagnosis. In addition, mental health professionals are now more knowledgeable about autism spectrum disorder and are better equipped to make the diagnosis, even in subtle cases (Novella, 2008). Causes of Autism Spectrum Disorder The exact causes of autism spectrum disorder remain unknown despite massive research efforts over the last two decades (Meek, Lemery-Chalfant, Jahromi, & Valiente, 2013). Autism appears to be strongly influenced by genetics, as identical twins show concordance rates of 60%–90%, whereas concordance rates for fraternal twins and siblings are 5%–10% (Autism Genome Project Consortium, 2007). Many different genes and gene mutations have been implicated in autism (Meek et al., 2013). Among the genes involved are those important in the formation of synaptic circuits that facilitate communication between different areas of the brain (Gauthier et al., 2011). A number of environmental factors are also thought to be associated with increased risk for autism spectrum disorder, at least in part, because they contribute to new mutations. These factors include exposure to pollutants, such as plant emissions and mercury, urban versus rural residence, and vitamin D deficiency (Kinney, Barch, Chayka, Napoleon, & Munir, 2009). Child Vaccinations and Autism Spectrum Disorder In the late 1990s, a prestigious medical journal published an article purportedly showing that autism is triggered by the MMR (measles, mumps, and rubella) vaccine. These findings were very controversial and drew a great deal of attention, sparking an international forum on whether children should be vaccinated. In a shocking turn of events, some years later the article was retracted by the journal that had published it after evidence of fraud and unethical practices on the part of the lead researcher. Despite the retraction, popular media reporting allowed concerns about a possible link between vaccines and autism to persist. A recent survey of parents, for example, found that roughly a third of respondents expressed such a concern (Kennedy, LaVail, Nowak, Basket, & Landry, 2011); and perhaps fearing that their children would develop autism, more than 10% of parents of young children refuse or delay vaccinations (Dempsey et al., 2011). Some parents of children with autism mounted a campaign against scientists who refuted the vaccine-autism link. Even politicians and several well-known celebrities weighed in; for example, actress Jenny McCarthy (who believed that a vaccination caused her son’s autism) co-authored a book on the matter.
However, there is no scientific evidence that a link exists between autism and vaccinations (Hughes, 2007). Indeed, a recent study compared the vaccination histories of 256 children with autism spectrum disorder with those of 752 control children across three time periods during their first two years of life (birth to 3 months, birth to 7 months, and birth to 2 years) (DeStefano, Price, & Weintraub, 2013). At the time of the study, the children were between 6 and 13 years old, and their prior vaccination records were obtained. Because vaccines contain immunogens (substances that fight infections), the investigators examined medical records to see how many immunogens children received to determine if those children who received more immunogens were at greater risk for developing autism spectrum disorder. The results of this study, a portion of which are shown in Figure 15.19, clearly demonstrate that the quantity of immunogens from vaccines received during the first two years of life was not at all related to the development of autism spectrum disorder. There is not a relationship between vaccinations and autism spectrum disorders. Why does concern over vaccines and autism spectrum disorder persist? Since the proliferation of the Internet in the 1990s, parents have been constantly bombarded with online information that can become magnified and take on a life of its own. The enormous volume of electronic information pertaining to autism spectrum disorder, combined with how difficult it can be to grasp complex scientific concepts, can make separating good research from bad challenging (Downs, 2008). Notably, the study that fueled the controversy reported that 8 out of 12 children—according to their parents—developed symptoms consistent with autism spectrum disorder shortly after receiving a vaccination. To conclude that vaccines cause autism spectrum disorder on this basis, as many did, is clearly incorrect for a number of reasons, not the least of which is that correlation does not imply causation, as you’ve learned. Additionally, as was the case with diet and ADHD in the 1970s, the notion that autism spectrum disorder is caused by vaccinations is appealing to some because it provides a simple explanation for this condition. Like all disorders, however, there are no simple explanations for autism spectrum disorder. Although the research discussed above has shed some light on its causes, science is still a long way from complete understanding of the disorder.
Learning Objectives
• Describe the nature of personality disorders and how they differ from other disorders
• List and distinguish between the three clusters of personality disorders
• Identify the basic features of borderline personality disorder and antisocial personality disorder, and the factors that are important in the etiology of both
The term personality refers loosely to one’s stable, consistent, and distinctive way of thinking about, feeling, acting, and relating to the world. People with personality disorders exhibit a personality style that differs markedly from the expectations of their culture, is pervasive and inflexible, begins in adolescence or early adulthood, and causes distress or impairment (APA, 2013). Generally, individuals with these disorders exhibit enduring personality styles that are extremely troubling and often create problems for them and those with whom they come into contact. Their maladaptive personality styles frequently bring them into conflict with others, disrupt their ability to develop and maintain social relationships, and prevent them from accomplishing realistic life goals. The DSM-5 recognizes \(10\) personality disorders, organized into \(3\) different clusters. Cluster A disorders include paranoid personality disorder, schizoid personality disorder, and schizotypal personality disorder. People with these disorders display a personality style that is odd or eccentric. Cluster B disorders include antisocial personality disorder, histrionic personality disorder, narcissistic personality disorder, and borderline personality disorder. People with these disorders usually are impulsive, overly dramatic, highly emotional, and erratic. Cluster C disorders include avoidant personality disorder, dependent personality disorder, and obsessive-compulsive personality disorder (which is not the same thing as obsessive-compulsive disorder). People with these disorders often appear to be nervous and fearful.
Table 15.2 provides a description of each of the DSM-5 personality disorders:
Table 15.2 DSM-5 Personality Disorders
• Paranoid (Cluster A): harbors a pervasive and unjustifiable suspiciousness and mistrust of others; reluctant to confide in or become close to others; reads hidden demeaning or threatening meaning into benign remarks or events; takes offense easily and bears grudges; not due to schizophrenia or other psychotic disorders
• Schizoid (Cluster A): lacks interest and desire to form relationships with others; aloof and shows emotional coldness and detachment; indifferent to approval or criticism of others; lacks close friends or confidants; not due to schizophrenia or other psychotic disorders, not an autism spectrum disorder
• Schizotypal (Cluster A): exhibits eccentricities in thought, perception, emotion, speech, and behavior; shows suspiciousness or paranoia; has unusual perceptual experiences; speech is often idiosyncratic; displays inappropriate emotions; lacks friends or confidants; not due to schizophrenia or other psychotic disorder, or to autism spectrum disorder
• Antisocial (Cluster B): continuously violates the rights of others; history of antisocial tendencies prior to age 15; often lies, fights, and has problems with the law; impulsive and fails to think ahead; can be deceitful and manipulative in order to gain profit or pleasure; irresponsible and often fails to hold down a job or pay financial debts; lacks feelings for others and remorse over misdeeds
• Histrionic (Cluster B): excessively overdramatic, emotional, and theatrical; feels uncomfortable when not the center of others’ attention; behavior is often inappropriately seductive or provocative; speech is highly emotional but often vague and diffuse; emotions are shallow and often shift rapidly; may alienate friends with demands for constant attention
• Narcissistic (Cluster B): overinflated and unjustified sense of self-importance and preoccupied with fantasies of success; believes he is entitled to special treatment from others; shows arrogant attitudes and behaviors; takes advantage of others; lacks empathy
• Borderline (Cluster B): unstable in self-image, mood, and behavior; cannot tolerate being alone and experiences chronic feelings of emptiness; unstable and intense relationships with others; behavior is impulsive, unpredictable, and sometimes self-damaging; shows inappropriate and intense anger; makes suicidal gestures
• Avoidant (Cluster C): socially inhibited and oversensitive to negative evaluation; avoids occupations that involve interpersonal contact because of fears of criticism or rejection; avoids relationships with others unless guaranteed to be accepted unconditionally; feels inadequate and views self as socially inept and unappealing; unwilling to take risks or engage in new activities if they may prove embarrassing
• Dependent (Cluster C): allows others to take over and run her life; is submissive, clingy, and fears separation; cannot make decisions without advice and reassurance from others; lacks self-confidence; cannot do things on her own; feels uncomfortable or helpless when alone
• Obsessive-Compulsive (Cluster C): pervasive need for perfectionism that interferes with the ability to complete tasks; preoccupied with details, rules, order, and schedules; excessively devoted to work at the expense of leisure and friendships; rigid, inflexible, and stubborn; insists things be done his way; miserly with money
Slightly over \(9\%\) of the U.S.
population suffers from a personality disorder, with avoidant and schizoid personality disorders the most frequent (Lezenweger, Lane, Loranger, & Kessler, 2007). Two of these personality disorders, borderline personality disorder and antisocial personality disorder, are regarded by many as especially problematic. Borderline Personality Disorder The “borderline” in borderline personality disorder was originally coined in the late 1930s in an effort to describe patients who appeared anxious, but were prone to brief psychotic experiences—that is, patients who were thought to be literally on the borderline between anxiety and psychosis (Freeman, Stone, Martin, & Reinecke, 2005). Today, borderline personality disorder has a completely different meaning. Borderline personality disorder is characterized chiefly by instability in interpersonal relationships, self-image, and mood, as well as marked impulsivity (APA, 2013). People with borderline personality disorder cannot tolerate the thought of being alone and will make frantic efforts (including making suicidal gestures and engaging in self-mutilation) to avoid abandonment or separation (whether real or imagined). Their relationships are intense and unstable; for example, a lover may be idealized early in a relationship, but then later vilified at the slightest sign that she no longer seems interested. These individuals have an unstable view of self and, thus, might suddenly display a shift in personal attitudes, interests, career plans, and choice of friends. For example, a law school student may, despite having invested tens of thousands of dollars toward earning a law degree and despite having performed well in the program, consider dropping out and pursuing a career in another field. People with borderline personality disorder may be highly impulsive and may engage in reckless and self-destructive behaviors such as excessive gambling, spending money irresponsibly, substance abuse, engaging in unsafe sex, and reckless driving. They sometimes show intense and inappropriate anger that they have difficulty controlling, and they can be moody, sarcastic, bitter, and verbally abusive. The prevalence of borderline personality disorder in the U.S. population is estimated to be around \(1.4\%\) (Lezenweger et al., 2007), but the rates are higher among those who use mental health services; approximately \(10\%\) of mental health outpatients and \(20\%\) of psychiatric inpatients meet the criteria for diagnosis (APA, 2013). Additionally, borderline personality disorder is comorbid with anxiety, mood, and substance use disorders (Lezenweger et al., 2007). Biological Basis for Borderline Personality Disorder Genetic factors appear to be important in the development of borderline personality disorder. For example, core personality traits that characterize this disorder, such as impulsivity and emotional instability, show a high degree of heritability (Livesley, 2008). Also, the rates of borderline personality disorder among relatives of people with this disorder have been found to be as high as \(24.9\%\) (White, Gunderson, Zanarani, & Hudson, 2003). Individuals with borderline personality disorder report experiencing childhood physical, sexual, and/or emotional abuse at rates far greater than those observed in the general population (Afifi et al., 2010), indicating that environmental factors are also crucial.
These findings suggest that borderline personality disorder may be determined by an interaction between genetic factors and adverse environmental experiences. Consistent with this hypothesis, one study found that the highest rates of borderline personality disorder were among individuals with a borderline temperament (characterized by high novelty seeking and high harm-avoidance) and those who experienced childhood abuse and/or neglect (Joyce et al., 2003). Antisocial Personality Disorder Most human beings live in accordance with a moral compass, a sense of right and wrong. Most individuals learn at a very young age that there are certain things that should not be done. We learn that we should not lie or cheat. We are taught that it is wrong to take things that do not belong to us, and that it is wrong to exploit others for personal gain. We also learn the importance of living up to our responsibilities, of doing what we say we will do. People with antisocial personality disorder, however, do not seem to have a moral compass. These individuals act as though they neither have a sense of nor care about right or wrong. Not surprisingly, these people represent a serious problem for others and for society in general. According to the DSM-5, the individual with antisocial personality disorder shows no regard at all for other people’s rights or feelings. This lack of regard is exhibited in a number of ways and can include repeatedly performing illegal acts, lying to or conning others, impulsivity and recklessness, irritability and aggressiveness toward others, and failure to act in a responsible way (e.g., leaving debts unpaid) (APA, 2013). People with this disorder have no remorse over their misdeeds; these people will hurt, manipulate, exploit, and abuse others and not feel any guilt. Signs of this disorder can emerge early in life; however, a person must be at least 18 years old to be diagnosed with antisocial personality disorder. People with antisocial personality disorder seem to view the world as self-serving and unkind. They seem to think that they should use whatever means necessary to get by in life. They tend to view others not as living, thinking, feeling beings, but rather as pawns to be used or abused for a specific purpose. They often have an over-inflated sense of themselves and can appear extremely arrogant. They frequently display superficial charm; for example, without really meaning it they might say exactly what they think another person wants to hear. They lack empathy: they are incapable of understanding the emotional point-of-view of others. People with this disorder may become involved in illegal enterprises, show cruelty toward others, leave their jobs with no plans to obtain another job, have multiple sexual partners, repeatedly get into fights with others, and show reckless disregard for themselves and others (e.g., repeated arrests for driving while intoxicated) (APA, 2013). The DSM-5 has included an alternative model for conceptualizing personality disorders based on the traits identified in the Five Factor Model of personality. This model addresses the level of personality functioning, such as impairments in self (identity or self-direction) and interpersonal (empathy or intimacy) functioning.
In the case of antisocial personality disorder, the DSM-5 identifies the predominant traits of antagonism (such as disregard for others’ needs, manipulative or deceitful behavior) and disinhibition (characterized by impulsivity, irresponsibility, and risk-taking) (Harwood, Schade, Krueger, Wright, & Markon, 2012). A psychopathy specifier is also included that emphasizes traits such as attention seeking and low anxiousness (lack of concern about negative consequences for risky or harmful behavior) (Crego & Widiger, 2014).
Risk Factors for Antisocial Personality Disorder
Antisocial personality disorder is observed in about \(3.6\%\) of the population; the disorder is much more common among males, with a \(3\) to \(1\) ratio of men to women, and it is more likely to occur in men who are younger, widowed, separated, divorced, of lower socioeconomic status, who live in urban areas, and who live in the western United States (Compton, Conway, Stinson, Colliver, & Grant, 2005). Compared to men with antisocial personality disorder, women with the disorder are more likely to have experienced emotional neglect and sexual abuse during childhood, and they are more likely to have had parents who abused substances and who engaged in antisocial behaviors themselves (Alegria et al., 2013). Table 15.3 below shows some of the differences in the specific types of antisocial behaviors that men and women with antisocial personality disorder exhibit (Alegria et al., 2013).
Table 15.3 Gender Differences in Antisocial Personality Disorder
Men with antisocial personality disorder are more likely than women with antisocial personality disorder to:
• do things that could easily hurt themselves or others
• receive three or more traffic tickets for reckless driving
• have their driver’s license suspended
• destroy others’ property
• start a fire on purpose
• make money illegally
• do anything that could lead to arrest
• hit someone hard enough to injure them
• hurt an animal on purpose
Women with antisocial personality disorder are more likely than men with antisocial personality disorder to:
• run away from home overnight
• frequently miss school or work
• lie frequently
• forge someone’s signature
• get into a fight that comes to blows with an intimate partner
• live with others besides the family for at least one month
• harass, threaten, or blackmail someone
Family, twin, and adoption studies suggest that both genetic and environmental factors influence the development of antisocial personality disorder, as well as general antisocial behavior (criminality, violence, aggressiveness) (Baker, Bezdjian, & Raine, 2006). Personality and temperament dimensions that are related to this disorder, including fearlessness, impulsive antisociality, and callousness, have a substantial genetic influence (Livesley & Jang, 2008). Adoption studies clearly demonstrate that the development of antisocial behavior is determined by the interaction of genetic factors and adverse environmental circumstances (Rhee & Waldman, 2002). For example, one investigation found that adoptees of biological parents with antisocial personality disorder were more likely to exhibit adolescent and adult antisocial behaviors if they were raised in adverse adoptive family environments (e.g., adoptive parents had marital problems, were divorced, used drugs, and had legal problems) than if they were raised in a more normal adoptive environment (Cadoret, Yates, Ed, Woodworth, & Stewart, 1995).
Researchers who are interested in the importance of environment in the development of antisocial personality disorder have directed their attention to such factors as the community, the structure and functioning of the family, and peer groups. Each of these factors influences the likelihood of antisocial behavior. One longitudinal investigation of more than \(800\) Seattle-area youth measured risk factors for violence at \(10\), \(14\), \(16\), and \(18\) years of age (Herrenkohl et al., 2000). The risk factors examined included those involving the family, peers, and community.
Those with antisocial tendencies do not seem to experience emotions the way most other people do. These individuals fail to show fear in response to environmental cues that signal punishment, pain, or noxious stimulation. For instance, they show less skin conductance (sweatiness on the hands) in anticipation of electric shock than do people without antisocial tendencies (Hare, 1965). Skin conductance is controlled by the sympathetic nervous system and is used to assess autonomic nervous system functioning. When the sympathetic nervous system is active, people become aroused and anxious, and sweat gland activity increases. Thus, increased sweat gland activity, as assessed through skin conductance, is taken as a sign of arousal or anxiety. For those with antisocial personality disorder, a lack of skin conductance may indicate the presence of characteristics such as emotional deficits and impulsivity that underlie the propensity for antisocial behavior and negative social relationships (Fung et al., 2005). Another example showing that those with antisocial personality disorder fail to respond to environmental cues comes from a recent study by Stuppy-Sullivan and Baskin-Sommers (2019). The researchers studied cognitive and reward factors associated with dysfunction in antisocial personality disorder in \(119\) incarcerated males. Each subject was administered three tasks targeting different aspects of cognition and reward. High-magnitude rewards tended to impair perception in those with antisocial personality disorder, to worsen executive function when the participants were consciously aware of the high rewards, and to worsen inhibition when the tasks placed high demands on working memory.
Learning Objectives • Describe the essential nature of dissociative disorders • Identify and differentiate the symptoms of dissociative amnesia, depersonalization/ derealization disorder, and dissociative identity disorder • Discuss the potential role of both social and psychological factors in dissociative identity disorder Dissociative disorders are characterized by an individual becoming split off, or dissociated, from her core sense of self. Memory and identity become disturbed; these disturbances have a psychological rather than physical cause. Dissociative disorders listed in the DSM-5 include dissociative amnesia, depersonalization/derealization disorder, and dissociative identity disorder. Dissociative Amnesia Amnesia refers to the partial or total forgetting of some experience or event. An individual with dissociative amnesia is unable to recall important personal information, usually following an extremely stressful or traumatic experience such as combat, natural disasters, or being the victim of violence. The memory impairments are not caused by ordinary forgetting. Some individuals with dissociative amnesia will also experience dissociative fugue (from the word “to flee” in French), whereby they suddenly wander away from their home, experience confusion about their identity, and sometimes even adopt a new identity (Cardeña & Gleaves, 2006). Most fugue episodes last only a few hours or days, but some can last longer. One study of residents in communities in upstate New York reported that about 1.8% experienced dissociative amnesia in the previous year (Johnson, Cohen, Kasen, & Brook, 2006). Some have questioned the validity of dissociative amnesia (Pope, Hudson, Bodkin, & Oliva, 1998); it has even been characterized as a “piece of psychiatric folklore devoid of convincing empirical support” (McNally, 2003, p. 275). Notably, scientific publications regarding dissociative amnesia rose during the 1980s and reached a peak in the mid-1990s, followed by an equally sharp decline by 2003; in fact, only 13 cases of individuals with dissociative amnesia worldwide could be found in the literature that same year (Pope, Barry, Bodkin, & Hudson, 2006). Further, no description of individuals showing dissociative amnesia following a trauma exists in any fictional or nonfictional work prior to 1800 (Pope, Poliakoff, Parker, Boynes, & Hudson, 2006). However, a study of 82 individuals who enrolled for treatment at a psychiatric outpatient hospital found that nearly \(10\%\) met the criteria for dissociative amnesia, perhaps suggesting that the condition is underdiagnosed, especially in psychiatric populations (Foote, Smolin, Kaplan, Legatt, & Lipschitz, 2006). Depersonalization/ Derealization Disorder Depersonalization/derealization disorder is characterized by recurring episodes of depersonalization, derealization, or both. Depersonalization is defined as feelings of “unreality or detachment from, or unfamiliarity with, one’s whole self or from aspects of the self” (APA, 2013, p. 302). Individuals who experience depersonalization might believe their thoughts and feelings are not their own; they may feel robotic as though they lack control over their movements and speech; they may experience a distorted sense of time and, in extreme cases, they may sense an “out-of-body” experience in which they see themselves from the vantage point of another person. 
Derealization is conceptualized as a sense of “unreality or detachment from, or unfamiliarity with, the world, be it individuals, inanimate objects, or all surroundings” (APA, 2013, p. 303). A person who experiences derealization might feel as though he is in a fog or a dream, or that the surrounding world is somehow artificial and unreal. Individuals with depersonalization/derealization disorder often have difficulty describing their symptoms and may think they are going crazy (APA, 2013). Dissociative Identity Disorder By far, the most well-known dissociative disorder is dissociative identity disorder (formerly called multiple personality disorder). People with dissociative identity disorder exhibit two or more separate personalities or identities, each well-defined and distinct from one another. They also experience memory gaps for the time during which another identity is in charge (e.g., one might find unfamiliar items in her shopping bags or among her possessions), and in some cases may report hearing voices, such as a child’s voice or the sound of somebody crying (APA, 2013). The study of upstate New York residents mentioned above (Johnson et al., 2006) reported that \(1.5\%\) of their sample experienced symptoms consistent with dissociative identity disorder in the previous year. Dissociative identity disorder (DID) is highly controversial. Some believe that people fake symptoms to avoid the consequences of illegal actions (e.g., “I am not responsible for shoplifting because it was my other personality”). In fact, it has been demonstrated that people are generally skilled at adopting the role of a person with different personalities when they believe it might be advantageous to do so. As an example, Kenneth Bianchi was an infamous serial killer who, along with his cousin, murdered over a dozen females around Los Angeles in the late 1970s. Eventually, he and his cousin were apprehended. At Bianchi’s trial, he pled not guilty by reason of insanity, presenting himself as though he had DID and claiming that a different personality (“Steve Walker”) committed the murders. When these claims were scrutinized, he admitted faking the symptoms and was found guilty (Schwartz, 1981). A second reason DID is controversial is because rates of the disorder suddenly skyrocketed in the 1980s. More cases of DID were identified during the five years prior to 1986 than in the preceding two centuries (Putnam, Guroff, Silberman, Barban, & Post, 1986). Although this increase may be due to the development of more sophisticated diagnostic techniques, it is also possible that the popularization of DID—helped in part by Sybil, a popular 1970s book (and later film) about a woman with \(16\) different personalities—may have prompted clinicians to overdiagnose the disorder (Piper & Merskey, 2004). Casting further scrutiny on the existence of multiple personalities or identities is the recent suggestion that the story of Sybil was largely fabricated, and the idea for the book might have been exaggerated (Nathan, 2011). Despite its controversial nature, DID is clearly a legitimate and serious disorder, and although some people may fake symptoms, others suffer their entire lives with it. People with this disorder tend to report a history of childhood trauma, some cases having been corroborated through medical or legal records (Cardeña & Gleaves, 2006). Research by Ross et al. (1990) suggests that in one study about \(95\%\) of people with DID were physically and/or sexually abused as children. 
Of course, not all reports of childhood abuse can be expected to be valid or accurate. However, there is strong evidence that traumatic experiences can cause people to experience states of dissociation, suggesting that dissociative states—including the adoption of multiple personalities—may serve as a psychologically important coping mechanism for threat and danger (Dalenberg et al., 2012).
Critical Thinking Questions
23. Discuss why thoughts, feelings, or behaviors that are merely atypical or unusual would not necessarily signify the presence of a psychological disorder. Provide an example.
24. Describe the DSM-5. What is it, what kind of information does it contain, and why is it important to the study and treatment of psychological disorders?
25. The International Classification of Diseases (ICD) and the DSM differ in various ways. What are some of the differences in these two classification systems?
26. Why is the perspective one uses in explaining a psychological disorder important?
27. Describe how cognitive theories of the etiology of anxiety disorders differ from learning theories.
28. Discuss the common elements of each of the three disorders covered in this section: obsessive-compulsive disorder, body dysmorphic disorder, and hoarding disorder.
29. List some of the risk factors associated with the development of PTSD following a traumatic event.
30. Describe several of the factors associated with suicide.
31. Why is research following individuals who show prodromal symptoms of schizophrenia so important?
32. The prevalence of most psychological disorders has increased since the 1980s. However, as discussed in this section, scientific publications regarding dissociative amnesia peaked in the mid-1990s but then declined steeply through 2003. In addition, no fictional or nonfictional description of individuals showing dissociative amnesia following a trauma exists prior to 1800. How would you explain this phenomenon?
33. Compare the factors that are important in the development of ADHD with those that are important in the development of autism spectrum disorder.
34. Imagine that a child has a genetic vulnerability to antisocial personality disorder. How might this child’s environment shape the likelihood of developing this personality disorder?
Key Terms
agoraphobia: anxiety disorder characterized by intense fear, anxiety, and avoidance of situations in which it might be difficult to escape if one experiences symptoms of a panic attack
antisocial personality disorder: characterized by a lack of regard for others’ rights, impulsivity, deceitfulness, irresponsibility, and lack of remorse over misdeeds
anxiety disorder: characterized by excessive and persistent fear and anxiety, and by related disturbances in behavior
attention deficit/hyperactivity disorder: childhood disorder characterized by inattentiveness and/or hyperactive, impulsive behavior
atypical: describes behaviors or feelings that deviate from the norm
autism spectrum disorder: childhood disorder characterized by deficits in social interaction and communication, and repetitive patterns of behavior or interests
bipolar and related disorders: group of mood disorders in which mania is the defining feature
bipolar disorder: mood disorder characterized by mood states that vacillate between depression and mania
body dysmorphic disorder: involves excessive preoccupation with an imagined defect in physical appearance
borderline personality disorder: instability in interpersonal relationships, self-image, and mood, as well as impulsivity; key features include intolerance of being alone and fear of abandonment, unstable relationships, unpredictable behavior and moods, and intense and inappropriate anger
catatonic behavior: decreased reactivity to the environment; includes posturing and catatonic stupor
comorbidity: co-occurrence of two disorders in the same individual
delusion: belief that is contrary to reality and is firmly held, despite contradictory evidence
depersonalization/derealization disorder: dissociative disorder in which people feel detached from the self (depersonalization), and the world feels artificial and unreal (derealization)
depressive disorder: one of a group of mood disorders in which depression is the defining feature
diagnosis: determination of which disorder a set of symptoms represents
Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5): authoritative index of mental disorders and the criteria for their diagnosis; published by the American Psychiatric Association (APA)
diathesis-stress model: model of psychopathology suggesting that people with a predisposition for a disorder (a diathesis) are more likely to develop the disorder when faced with stress
disorganized thinking: disjointed and incoherent thought processes, usually detected by what a person says
disorganized/abnormal motor behavior: highly unusual behaviors and movements (such as child-like behaviors), repeated and purposeless movements, and displaying odd facial expressions and gestures
dissociative amnesia: dissociative disorder characterized by an inability to recall important personal information, usually following an extremely stressful or traumatic experience
dissociative disorders: group of DSM-5 disorders in which the primary feature is that a person becomes dissociated, or split off, from their core sense of self, resulting in disturbances in identity and memory
dissociative fugue: symptom of dissociative amnesia in which a person suddenly wanders away from their home and experiences confusion about their identity
dissociative identity disorder: dissociative disorder (formerly known as multiple personality disorder) in which a person exhibits two or more distinct, well-defined personalities or identities and experiences memory gaps for the time during which another identity emerged
dopamine hypothesis: theory of schizophrenia that proposes that an overabundance of dopamine or dopamine receptors is responsible for the onset and maintenance of schizophrenia
etiology: cause or causes of a psychological disorder
flashback: psychological state lasting from a few seconds to several days, during which one relives a traumatic event and behaves as though the event were occurring at that moment
flight of ideas: symptom of mania that involves abruptly switching in conversation from one topic to another
generalized anxiety disorder: characterized by a continuous state of excessive, uncontrollable, and pointless worry and apprehension
grandiose delusion: characterized by beliefs that one holds special power, unique knowledge, or is extremely important
hallucination: perceptual experience that occurs in the absence of external stimulation, such as the auditory hallucinations (hearing voices) common to schizophrenia
harmful dysfunction: model of psychological disorders resulting from the inability of an internal mechanism to perform its natural function
hoarding disorder: characterized by persistent difficulty in parting with possessions, regardless of their actual value or usefulness
hopelessness theory: cognitive theory of depression proposing that a style of thinking that perceives negative life events as having stable and global causes leads to a sense of hopelessness and then to depression
International Classification of Diseases (ICD): authoritative index of mental and physical diseases, including infectious diseases, and the criteria for their diagnosis; published by the World Health Organization (WHO)
locus coeruleus: area of the brainstem that contains norepinephrine, a neurotransmitter that triggers the body’s fight-or-flight response; has been implicated in panic disorder
major depressive disorder: commonly referred to as “depression” or “major depression,” characterized by sadness or loss of pleasure in usual activities, as well as other symptoms
mania: state of extreme elation and agitation
manic episode: period in which an individual experiences mania, characterized by extremely cheerful and euphoric mood, excessive talkativeness, irritability, increased activity levels, and other symptoms
mood disorder: one of a group of disorders characterized by severe disturbances in mood and emotions; the categories of mood disorders listed in the DSM-5 are bipolar and related disorders and depressive disorders
negative symptom: characterized by decreases and absences in certain normal behaviors, emotions, or drives, such as an expressionless face, lack of motivation to engage in activities, reduced speech, lack of social engagement, and inability to experience pleasure
neurodevelopmental disorder: one of the disorders that are first diagnosed in childhood and involve developmental problems in academic, intellectual, and social functioning
obsessive-compulsive and related disorders: group of overlapping disorders listed in the DSM-5 that involves intrusive, unpleasant thoughts and/or repetitive behaviors
obsessive-compulsive disorder: characterized by the tendency to experience intrusive and unwanted thoughts and urges (obsessions) and/or the need to engage in repetitive behaviors or mental acts (compulsions) in response to the unwanted thoughts and urges
orbitofrontal cortex: area of the frontal lobe involved in learning and decision-making
panic attack: period of extreme fear or discomfort that develops abruptly; symptoms of panic attacks are both physiological and psychological
panic disorder: anxiety disorder characterized by unexpected panic attacks, along with at least one month of worry about panic attacks or self-defeating behavior related to the attacks
paranoid delusion: characterized by beliefs that others are out to harm them
peripartum onset: subtype of depression that applies to women who experience an episode of major depression either during pregnancy or in the four weeks following childbirth
persistent depressive disorder: depressive disorder characterized by a chronically sad and melancholy mood
personality disorder: group of DSM-5 disorders characterized by an inflexible and pervasive personality style that differs markedly from the expectations of one’s culture and causes distress and impairment; people with these disorders have a personality style that frequently brings them into conflict with others and disrupts their ability to develop and maintain social relationships
posttraumatic stress disorder (PTSD): experiencing a profoundly traumatic event leads to a constellation of symptoms that include intrusive and distressing memories of the event, avoidance of stimuli connected to the event, negative emotional states, feelings of detachment from others, irritability, proneness toward outbursts, hypervigilance, and a tendency to startle easily; these symptoms must occur for at least one month
prodromal symptom: in schizophrenia, one of the early minor symptoms of psychosis
psychological disorder: condition characterized by abnormal thoughts, feelings, and behaviors
psychopathology: study of psychological disorders, including their symptoms, causes, and treatment; manifestation of a psychological disorder
rumination: in depression, tendency to repetitively and passively dwell on one’s depressed symptoms, their meanings, and their consequences
safety behavior: mental and behavioral acts designed to reduce anxiety in social situations by reducing the chance of negative social outcomes; common in social anxiety disorder
schizophrenia: severe disorder characterized by major disturbances in thought, perception, emotion, and behavior with symptoms that include hallucinations, delusions, disorganized thinking and behavior, and negative symptoms
seasonal pattern: subtype of depression in which a person experiences the symptoms of major depressive disorder only during a particular time of year
social anxiety disorder: characterized by extreme and persistent fear or anxiety and avoidance of social situations in which one could potentially be evaluated negatively by others
somatic delusion: belief that something highly unusual is happening to one’s body or internal organs
specific phobia: anxiety disorder characterized by excessive, distressing, and persistent fear or anxiety about a specific object or situation
suicidal ideation: thoughts of death by suicide, thinking about or planning suicide, or making a suicide attempt
suicide: death caused by intentional, self-directed injurious behavior
supernatural: describes a force beyond scientific understanding
ventricle: one of the fluid-filled cavities within the brain
Personal Application Questions
35. Identify a behavior that is considered unusual or abnormal in your own culture; however, it would be considered normal and expected in another culture.
36. Even today, some believe that certain occurrences have supernatural causes. Think of an event, recent or historical, for which others have provided supernatural explanation.
37. Think of someone you know who seems to have a tendency to make negative, self-defeating explanations for negative life events. How might this tendency lead to future problems? What steps do you think could be taken to change this thinking style?
38. Try to find an example (via a search engine) of a past instance in which a person committed a horrible crime, was apprehended, and later claimed to have dissociative identity disorder during the trial. What was the outcome? Was the person revealed to be faking? If so, how was this determined?
39. Discuss the characteristics of autism spectrum disorder with a few of your friends or members of your family (choose friends or family members who know little about the disorder) and ask them if they think the cause is due to vaccinations. If they indicate that they believe this to be true, why do you think this might be the case? What would be your response?
Review Questions
1. In the harmful dysfunction definition of psychological disorders, dysfunction involves ________.
   1. the inability of a psychological mechanism to perform its function
   2. the breakdown of social order in one’s community
   3. communication problems in one’s immediate family
   4. all the above
2. Patterns of inner experience and behavior are thought to reflect the presence of a psychological disorder if they ________.
   1. are highly atypical
   2. lead to significant distress and impairment in one’s life
   3. embarrass one’s friends and/or family
   4. violate the norms of one’s culture
3. The letters in the abbreviation DSM-5 stand for ________.
   1. Diseases and Statistics Manual of Medicine
   2. Diagnosable Standards Manual of Mental Disorders
   3. Diseases and Symptoms Manual of Mental Disorders
   4. Diagnostic and Statistical Manual of Mental Disorders
4. A study based on over 9,000 U.S. residents found that the most prevalent disorder was ________.
   1. major depressive disorder
   2. social anxiety disorder
   3. obsessive-compulsive disorder
   4. specific phobia
5. The diathesis-stress model presumes that psychopathology results from ________.
   1. vulnerability and adverse experiences
   2. biochemical factors
   3. chemical imbalances and structural abnormalities in the brain
   4. adverse childhood experiences
6. Dr. Anastasia believes that major depressive disorder is caused by an over-secretion of cortisol. His view on the cause of major depressive disorder reflects a ________ perspective.
   1. psychological
   2. supernatural
   3. biological
   4. diathesis-stress
7. In which of the following anxiety disorders is the person in a continuous state of excessive, pointless worry and apprehension?
   1. panic disorder
   2. generalized anxiety disorder
   3. agoraphobia
   4. social anxiety disorder
8. Which of the following would constitute a safety behavior?
   1. encountering a phobic stimulus in the company of other people
   2. avoiding a field where snakes are likely to be present
   3. avoiding eye contact
   4. worrying as a distraction from painful memories
9. Which of the following best illustrates a compulsion?
   1. mentally counting backward from 1,000
   2. persistent fear of germs
   3. thoughts of harming a neighbor
   4. falsely believing that a spouse has been cheating
10. Research indicates that the symptoms of OCD ________.
   1. are similar to the symptoms of panic disorder
   2. are triggered by low levels of stress hormones
   3. are related to hyperactivity in the orbitofrontal cortex
   4. are reduced if people are asked to view photos of stimuli that trigger the symptoms
11. Symptoms of PTSD include all of the following except ________.
   1. intrusive thoughts or memories of a traumatic event
   2. avoidance of things that remind one of a traumatic event
   3. jumpiness
   4. physical complaints that cannot be explained medically
12. Which of the following elevates the risk for developing PTSD?
   1. severity of the trauma
   2. frequency of the trauma
   3. high levels of intelligence
   4. social support
13. Common symptoms of major depressive disorder include all of the following except ________.
   1. periods of extreme elation and euphoria
   2. difficulty concentrating and making decisions
   3. loss of interest or pleasure in usual activities
   4. psychomotor agitation and retardation
14. Suicide rates are ________ among men than among women, and they are ________ during the winter holiday season than during the spring months.
   1. higher; higher
   2. lower; lower
   3. higher; lower
   4. lower; higher
15. Clifford falsely believes that the police have planted secret cameras in his home to monitor his every movement. Clifford’s belief is an example of ________.
   1. a delusion
   2. a hallucination
   3. tangentiality
   4. a negative symptom
16. A study of adoptees whose biological mothers had schizophrenia found that the adoptees were most likely to develop schizophrenia ________.
   1. if their childhood friends later developed schizophrenia
   2. if they abused drugs during adolescence
   3. if they were raised in a disturbed adoptive home environment
   4. regardless of whether they were raised in a healthy or disturbed home environment
17. Dissociative amnesia involves ________.
   1. memory loss following head trauma
   2. memory loss following stress
   3. feeling detached from the self
   4. feeling detached from the world
18. Dissociative identity disorder mainly involves ________.
   1. depersonalization
   2. derealization
   3. schizophrenia
   4. different personalities
19. Which of the following is not a primary characteristic of ADHD?
   1. short attention span
   2. difficulty concentrating and distractibility
   3. restricted and fixated interest
   4. excessive fidgeting and squirming
20. One of the primary characteristics of autism spectrum disorder is ________.
   1. bed-wetting
   2. difficulty relating to others
   3. short attention span
   4. intense and inappropriate interest in others
21. People with borderline personality disorder often ________.
   1. try to be the center of attention
   2. are shy and withdrawn
   3. are impulsive and unpredictable
   4. tend to accomplish goals through cruelty
22. Antisocial personality disorder is associated with ________.
   1. emotional deficits
   2. memory deficits
   3. parental overprotection
   4. increased empathy
15.1 What Are Psychological Disorders? Psychological disorders are conditions characterized by abnormal thoughts, feelings, and behaviors. Although challenging, it is essential for psychologists and mental health professionals to agree on what kinds of inner experiences and behaviors constitute the presence of a psychological disorder. Inner experiences and behaviors that are atypical or violate social norms could signify the presence of a disorder; however, each of these criteria alone is inadequate. Harmful dysfunction describes the view that psychological disorders result from the inability of an internal mechanism to perform its natural function. Many of the features of harmful dysfunction conceptualization have been incorporated in the APA’s formal definition of psychological disorders. According to this definition, the presence of a psychological disorder is signaled by significant disturbances in thoughts, feelings, and behaviors; these disturbances must reflect some kind of dysfunction (biological, psychological, or developmental), must cause significant impairment in one’s life, and must not reflect culturally expected reactions to certain life events. 15.2 Diagnosing and Classifying Psychological Disorders The diagnosis and classification of psychological disorders is essential in studying and treating psychopathology. The classification system used by most U.S. professionals is the DSM-5. The first edition of the DSM was published in 1952, and has undergone numerous revisions. The 5th and most recent edition, the DSM-5, was published in 2013. The diagnostic manual includes a total of 237 specific diagnosable disorders, each described in detail, including its symptoms, prevalence, risk factors, and comorbidity. Over time, the number of diagnosable conditions listed in the DSM has grown steadily, prompting criticism from some. Nevertheless, the diagnostic criteria in the DSM are more explicit than that of any other system, which makes the DSM system highly desirable for both clinical diagnosis and research. 15.3 Perspectives on Psychological Disorders Psychopathology is very complex, involving a plethora of etiological theories and perspectives. For centuries, psychological disorders were viewed primarily from a supernatural perspective and thought to arise from divine forces or possession from spirits. Some cultures continue to hold this supernatural belief. Today, many who study psychopathology view mental illness from a biological perspective, whereby psychological disorders are thought to result largely from faulty biological processes. Indeed, scientific advances over the last several decades have provided a better understanding of the genetic, neurological, hormonal, and biochemical bases of psychopathology. The psychological perspective, in contrast, emphasizes the importance of psychological factors (e.g., stress and thoughts) and environmental factors in the development of psychological disorders. A contemporary, promising approach is to view disorders as originating from an integration of biological and psychosocial factors. The diathesis-stress model suggests that people with an underlying diathesis, or vulnerability, for a psychological disorder are more likely than those without the diathesis to develop the disorder when faced with stressful events. 15.4 Anxiety Disorders Anxiety disorders are a group of disorders in which a person experiences excessive, persistent, and distressing fear and anxiety that interferes with normal functioning. 
Anxiety disorders include specific phobia: a specific unrealistic fear; social anxiety disorder: extreme fear and avoidance of social situations; panic disorder: suddenly overwhelmed by panic even though there is no apparent reason to be frightened; agoraphobia: an intense fear and avoidance of situations in which it might be difficult to escape; and generalized anxiety disorder: a relatively continuous state of tension, apprehension, and dread. 15.5 Obsessive-Compulsive and Related Disorders Obsessive-compulsive and related disorders are a group of DSM-5 disorders that overlap somewhat in that they each involve intrusive thoughts and/or repetitive behaviors. Perhaps the most recognized of these disorders is obsessive-compulsive disorder, in which a person is obsessed with unwanted, unpleasant thoughts and/or compulsively engages in repetitive behaviors or mental acts, perhaps as a way of coping with the obsessions. Body dysmorphic disorder is characterized by the individual becoming excessively preoccupied with one or more perceived flaws in their physical appearance that are either nonexistent or unnoticeable to others. Preoccupation with the perceived physical defects causes the person to experience significant anxiety regarding how they appear to others. Hoarding disorder is characterized by persistent difficulty in discarding or parting with objects, regardless of their actual value, often resulting in the accumulation of items that clutter and congest their living area. 15.6 Posttraumatic Stress Disorder Posttraumatic stress disorder (PTSD) was described through much of the 20th century and was referred to as shell shock and combat neurosis in the belief that its symptoms were thought to emerge from the stress of active combat. Today, PTSD is defined as a disorder in which the experience of a traumatic or profoundly stressful event, such as combat, sexual assault, or natural disaster, produces a constellation of symptoms that must last for one month or more. These symptoms include intrusive and distressing memories of the event, flashbacks, avoidance of stimuli or situations that are connected to the event, persistently negative emotional states, feeling detached from others, irritability, proneness toward outbursts, and a tendency to be easily startled. Not everyone who experiences a traumatic event will develop PTSD; a variety of risk factors associated with its development have been identified. 15.7 Mood and Related Disorders Mood disorders are those in which the person experiences severe disturbances in mood and emotion. They include depressive disorders and bipolar and related disorders. Depressive disorders include major depressive disorder, which is characterized by episodes of profound sadness and loss of interest or pleasure in usual activities and other associated features, and persistent depressive disorder, which marked by a chronic state of sadness. Bipolar disorder is characterized by mood states that vacillate between sadness and euphoria; a diagnosis of bipolar disorder requires experiencing at least one manic episode, which is defined as a period of extreme euphoria, irritability, and increased activity. During a manic episode, a person will likely exhibit behaviors atypical for that person. They may become excessively talkative, exhibit flight of ideas, and make grandiose plans. They may go on a spending spree, maxing out their credit card with items they can not afford, gamble, or engage in risky sexual behaviors. 
About fifty percent of people suffering from bipolar disorder do not receive treatment. Bipolar disorder is a definitive risk factor for suicide, with about a third of people with bipolar disorder attempting suicide. When a person’s pain and distress completely overwhelm their ability to cope, they may consider suicide. People who suffer from mental health and substance abuse problems are at a much higher risk of suicide than the general public. Males die by suicide at a significantly higher rate than females, and males use much more lethal means in their attempts. A person contemplating suicide needs help and should not have access to lethal means of suicide, such as firearms. If you or someone you know is contemplating suicide, there are many helpful resources available. 15.8 Schizophrenia Schizophrenia is a severe disorder characterized by a complete breakdown in one’s ability to function in life; it often requires hospitalization. People with schizophrenia experience hallucinations and delusions, and they have extreme difficulty regulating their emotions and behavior. Thinking is incoherent and disorganized, behavior is extremely bizarre, emotions are flat, and motivation to engage in most basic life activities is lacking. Considerable evidence shows that genetic factors play a central role in schizophrenia; however, adoption studies have highlighted the additional importance of environmental factors. Neurotransmitter and brain abnormalities, which may be linked to environmental factors such as obstetric complications or exposure to influenza during the gestational period, have also been implicated. A promising new area of schizophrenia research involves identifying individuals who show prodromal symptoms and following them over time to determine which factors best predict the development of schizophrenia. Future research may enable us to pinpoint those especially at risk for developing schizophrenia and who may benefit from early intervention. 15.9 Dissociative Disorders The main characteristic of dissociative disorders is that people become dissociated from their sense of self, resulting in memory and identity disturbances. Dissociative disorders listed in the DSM-5 include dissociative amnesia, depersonalization/derealization disorder, and dissociative identity disorder. A person with dissociative amnesia is unable to recall important personal information, often after a stressful or traumatic experience. Depersonalization/derealization disorder is characterized by recurring episodes of depersonalization (i.e., detachment from or unfamiliarity with the self) and/or derealization (i.e., detachment from or unfamiliarity with the world). A person with dissociative identity disorder exhibits two or more well-defined and distinct personalities or identities, as well as memory gaps for the time during which another identity was present. Dissociative identity disorder has generated controversy, mainly because some believe that patients can fake its symptoms when doing so helps them avoid negative consequences or responsibility for their actions. The diagnostic rates of this disorder have increased dramatically following its portrayal in popular culture. However, many people legitimately suffer over the course of a lifetime with this disorder.
15.10 Disorders in Childhood Neurodevelopmental disorders are a group of disorders that are typically diagnosed during childhood and are characterized by developmental deficits in personal, social, academic, and intellectual realms; these disorders include attention deficit/hyperactivity disorder (ADHD) and autism spectrum disorder. ADHD is characterized by a pervasive pattern of inattention and/or hyperactive and impulsive behavior that interferes with normal functioning. Genetic and neurobiological factors contribute to the development of ADHD, which can persist well into adulthood and is often associated with poor long-term outcomes. The major features of autism spectrum disorder include deficits in social interaction and communication and repetitive movements or interests. As with ADHD, genetic factors appear to play a prominent role in the development of autism spectrum disorder; exposure to environmental pollutants such as mercury has also been linked to the development of this disorder. Although it is believed by some that autism is triggered by the MMR vaccination, evidence does not support this claim. 15.11 Personality Disorders Individuals with personality disorders exhibit a personality style that is inflexible, causes distress and impairment, and creates problems for themselves and others. The DSM-5 recognizes 10 personality disorders, organized into three clusters. The disorders in Cluster A include those characterized by a personality style that is odd and eccentric. Cluster B includes personality disorders characterized chiefly by a personality style that is impulsive, dramatic, highly emotional, and erratic, and those in Cluster C are characterized by a nervous and fearful personality style. Two Cluster B personality disorders, borderline personality disorder and antisocial personality disorder, are especially problematic. People with borderline personality disorder show marked instability in mood, behavior, and self-image, as well as impulsivity. They cannot stand to be alone, are unpredictable, have a history of stormy relationships, and frequently display intense and inappropriate anger. Genetic factors and adverse childhood experiences (e.g., sexual abuse) appear to be important in its development. People with antisocial personality disorder display a lack of regard for the rights of others; they are impulsive, deceitful, irresponsible, and unburdened by any sense of guilt. Genetic factors and socialization both appear to be important in the origin of antisocial personality disorder. Research has also shown that those with this disorder do not experience emotions the way most other people do.
In this chapter, you will see that approaches to therapy include both psychological and biological interventions, all with the goal of alleviating distress. Because psychological problems can originate from various sources—biology, genetics, childhood experiences, conditioning, and sociocultural influences—psychologists have developed many different therapeutic techniques and approaches. • Introduction What comes to mind when you think about therapy for psychological problems? You might picture someone lying on a couch talking about his childhood while the therapist sits and takes notes, à la Sigmund Freud. But can you envision a therapy session in which someone is wearing virtual reality headgear to conquer a fear of snakes? • 16.1: Mental Health Treatment - Past and Present Before we explore the various approaches to therapy used today, let’s begin our study of therapy by looking at how many people experience mental illness and how many receive treatment. According to the U.S. Department of Health and Human Services (2013), 19% of U.S. adults experienced mental illness in 2012. According to the Substance Abuse and Mental Health Services Administration (SAMHSA), in 2008, 13.4% of adults received treatment for a mental health issue. • 16.2: Types of Treatment Two types of therapy are psychotherapy and biomedical therapy. Both types of treatment help people with psychological disorders, such as depression, anxiety, and schizophrenia. Psychotherapy is a psychological treatment that employs methods to help someone overcome personal problems, or to attain personal growth. In modern practice, it has evolved into what is known as psychodynamic therapy. Biomedical therapy involves medication and/or medical procedures to treat psychological disorders. • 16.3: Treatment Modalities Once a person seeks treatment, whether voluntarily or involuntarily, he has an intake done to assess his clinical needs. An intake is the therapist’s first meeting with the client. The therapist gathers specific information to address the client’s immediate needs, such as the presenting problem, the client’s support system, and insurance status. The therapist informs the client about confidentiality, fees, and what to expect in treatment. • 16.4: Substance-Related and Addictive Disorders - A Special Case Addiction is often viewed as a chronic disease. The choice to use a substance is initially voluntary; however, because chronic substance use can permanently alter the neural structure in the prefrontal cortex, an area of the brain associated with decision-making and judgment, a person becomes driven to use drugs and/or alcohol. This helps explain why relapse rates tend to be high. About 40%–60% of individuals relapse, which means they return to abusing drugs and/or alcohol. • 16.5: The Sociocultural Model and Therapy Utilization Multicultural counseling and therapy aims to offer both a helping role and process that uses modalities and defines goals consistent with the life experiences and cultural values of clients. It strives to recognize client identities to include individual, group, and universal dimensions, advocate the use of universal and culture-specific strategies and roles in the healing process, and balances the importance of individualism and collectivism in the assessment, diagnosis, and treatment. • Critical Thinking Questions • Key Terms • Personal Application Questions • Review Questions • Summary Thumbnail: This is the famous couch in Freud’s consulting room. 
Patients were instructed to lie comfortably on the couch and to face away from Freud in order to feel less inhibited and to help them focus. Today, a psychotherapy patient is not likely to lie on a couch; instead he is more likely to sit facing the therapist (Prochaska & Norcross, 2010). (credit: Robert Huffstutter). 16: Therapy and Treatment Chapter Outline 16.1 Mental Health Treatment: Past and Present 16.2 Types of Treatment 16.3 Treatment Modalities 16.4 Substance-Related and Addictive Disorders: A Special Case 16.5 The Sociocultural Model and Therapy Utilization What comes to mind when you think about therapy for mental health issues? You might picture someone lying on a couch talking about his childhood while the therapist sits and takes notes, à la Sigmund Freud. But can you envision a therapy session in which someone is wearing virtual reality headgear to conquer a fear of snakes? In this chapter, you will see that approaches to therapy include both psychological and biological interventions, all with the goal of alleviating distress. Because psychological problems can originate from various sources—biology, genetics, childhood experiences, conditioning, and sociocultural influences—psychologists have developed many different therapeutic techniques and approaches. The Ocean Therapy program shown in Figure 16.1 uses multiple approaches to support the mental health of veterans in the group. There are many misconceptions and assumptions about therapy and treatment. In the same way that mental health and psychological disorders are often misunderstood and may be discounted, seeking help for problems can be a difficult and scary time for people. There is no one method that works for everyone, and those seeking help are displaying strength and courage in their decision to address a highly stigmatized and challenging issue. The goal of treatment is not to change whom a person is, but to address symptoms and/or underlying conditions.
Learning Objectives • Explain how people with psychological disorders have been treated throughout the ages • Discuss deinstitutionalization • Discuss the ways in which mental health services are delivered today • Distinguish between voluntary and involuntary treatment Before we explore the various approaches to therapy used today, let’s begin our study of therapy by looking at how many people experience mental illness and how many receive treatment. According to the U.S. Department of Health and Human Services (2013), \(19\%\) of U.S. adults experienced mental illness in 2012. For teens (ages \(13-18\)), the rate is similar to that of adults, and for children ages \(8-15\), current estimates suggest that \(13\%\) experience mental illness in a given year (National Institute of Mental Health [NIMH], n.d.-a) With many different treatment options available, approximately how many people receive mental health treatment per year? According to the Substance Abuse and Mental Health Services Administration (SAMHSA), in 2008, \(13.4\%\) of adults received treatment for a mental health issue (NIMH, n.d.-b). These percentages, shown in figure 16.2, reflect the number of adults who received care in inpatient and outpatient settings and/or used prescription medication for psychological disorders. Children and adolescents also receive mental health services. The Centers for Disease Control and Prevention's National Health and Nutrition Examination Survey (NHANES) found that approximately half (\(50.6\%\)) of children with mental disorders had received treatment for their disorder within the past year (NIMH, n.d.-c). However, there were some differences between treatment rates by category of disorder (See figure 16.3). For example, children with anxiety disorders were least likely to have received treatment in the past year, while children with ADHD or a conduct disorder were more likely to receive treatment. Can you think of some possible reasons for these differences in receiving treatment? Considering the many forms of treatment for mental health disorders available today, how did these forms of treatment emerge? Let’s take a look at the history of mental health treatment from the past (with some questionable approaches in light of modern understanding of mental illness) to where we are today. Treatment in the Past For much of history, the mentally ill have been treated very poorly. It was believed that mental illness was caused by demonic possession, witchcraft, or an angry god (Szasz, 1960). For example, in medieval times, abnormal behaviors were viewed as a sign that a person was possessed by demons. If someone was considered to be possessed, there were several forms of treatment to release spirits from the individual. The most common treatment was exorcism, often conducted by priests or other religious figures: Incantations and prayers were said over the person’s body, and she may have been given some medicinal drinks. Another form of treatment for extreme cases of mental illness was trephining: A small hole was made in the afflicted individual’s skull to release spirits from the body. Most people treated in this manner died. In addition to exorcism and trephining, other practices involved execution or imprisonment of people with psychological disorders. Still others were left to be homeless beggars. Generally speaking, most people who exhibited strange behaviors were greatly misunderstood and treated cruelly. 
The prevailing theory of psychopathology in earlier history was the idea that mental illness was the result of demonic possession by either an evil spirit or an evil god because early beliefs incorrectly attributed all unexplainable phenomena to deities deemed either good or evil. From the late 1400s to the late 1600s, a common belief perpetuated by some religious organizations was that some people made pacts with the devil and committed horrible acts, such as eating babies (Blumberg, 2007). These people were considered to be witches and were tried and condemned by courts—they were often burned at the stake. Worldwide, it is estimated that tens of thousands of mentally ill people were killed after being accused of being witches or under the influence of witchcraft (Hemphill, 1966) By the 18th century, people who were considered odd and unusual were placed in asylums (See figure 16.4). Asylums were the first institutions created for the specific purpose of housing people with psychological disorders, but the focus was ostracizing them from society rather than treating their disorders. Often these people were kept in windowless dungeons, beaten, chained to their beds, and had little to no contact with caregivers. In the late 1700s, a French physician, Philippe Pinel, argued for more humane treatment of the mentally ill. He suggested that they be unchained and talked to, and that’s just what he did for patients at La Salpêtrière in Paris in 1795 (See figure 16.5). Patients benefited from this more humane treatment, and many were able to leave the hospital. In the 19th century, Dorothea Dix led reform efforts for mental health care in the United States (See figure 16.6). She investigated how those who are mentally ill and poor were cared for, and she discovered an underfunded and unregulated system that perpetuated abuse of this population (Tiffany, 1891). Horrified by her findings, Dix began lobbying various state legislatures and the U.S. Congress for change (Tiffany, 1891). Her efforts led to the creation of the first mental asylums in the United States. Despite reformers’ efforts, however, a typical asylum was filthy, offered very little treatment, and often kept people for decades. At Willard Psychiatric Center in upstate New York, for example, one treatment was to submerge patients in cold baths for long periods of time. Electroshock treatment was also used, and the way the treatment was administered often broke patients’ backs; in 1943, doctors at Willard administered \(1,443\) shock treatments (Willard Psychiatric Center, 2009). (Electroshock is now called electroconvulsive treatment, and the therapy is still used, but with safeguards and under anesthesia. A brief application of electric stimulus is used to produce a generalized seizure. Controversy continues over its effectiveness versus the side effects.) Many of the wards and rooms were so cold that a glass of water would be frozen by morning (Willard Psychiatric Center, 2009). Willard’s doors were not closed until 1995. Conditions like these remained commonplace until well into the \(20^{th}\) century. Starting in 1954 and gaining popularity in the 1960s, antipsychotic medications were introduced. These proved a tremendous help in controlling the symptoms of certain psychological disorders, such as psychosis. Psychosis was a common diagnosis of individuals in mental hospitals, and it was often evidenced by symptoms like hallucinations and delusions, indicating a loss of contact with reality. 
Then in 1963, Congress passed and John F. Kennedy signed the Mental Retardation Facilities and Community Mental Health Centers Construction Act, which provided federal support and funding for community mental health centers (National Institutes of Health, 2013). This legislation changed how mental health services were delivered in the United States. It started the process of deinstitutionalization, the closing of large asylums, by providing for people to stay in their communities and be treated locally. In 1955, there were 558,239 severely mentally ill patients institutionalized at public hospitals (Torrey, 1997). By 1994, by percentage of the population, there were \(92\%\) fewer hospitalized individuals (Torrey, 1997). Mental Health Treatment Today Today, there are community mental health centers across the nation. They are located in neighborhoods near the homes of clients, and they provide large numbers of people with mental health services of various kinds and for many kinds of problems. Unfortunately, part of what occurred with deinstitutionalization was that those released from institutions were supposed to go to newly created centers, but the system was not set up effectively. Centers were underfunded, staff was not trained to handle severe illnesses such as schizophrenia, there was high staff burnout, and no provision was made for the other services people needed, such as housing, food, and job training. Without these supports, those people released under deinstitutionalization often ended up homeless. Even today, a large portion of the homeless population is considered to be mentally ill (See figure 16.7). Statistics show that \(26\%\) of homeless adults living in shelters experience mental illness (U.S. Department of Housing and Urban Development [HUD], 2011). Another group of the mentally ill population is involved in the corrections system. According to a 2006 special report by the Bureau of Justice Statistics (BJS), approximately \(705,600\) mentally ill adults were incarcerated in the state prison system, and another \(78,800\) were incarcerated in the federal prison system. A further \(479,000\) were in local jails. According to the study, “people with mental illnesses are overrepresented in probation and parole populations at estimated rates ranging from two to four times the general population” (Prins & Draper, 2009, p. 23). The Treatment Advocacy Center reported that the growing number of mentally ill inmates has placed a burden on the correctional system (Torrey et al., 2014). Today, instead of asylums, there are psychiatric hospitals run by state governments and local community hospitals focused on short-term care. In all types of hospitals, the emphasis is on short-term stays, with the average length of stay being less than two weeks and often only several days. This is partly due to the very high cost of psychiatric hospitalization, which can be about \(\$800\) to \(\$1000\) per night (Stensland, Watson, & Grazier, 2012). Therefore, insurance coverage often limits the length of time a person can be hospitalized for treatment. Usually individuals are hospitalized only if they are an imminent threat to themselves or others. Link to Learning View this timeline that shows the history of mental institutions in the United States to learn more. Most people suffering from mental illnesses are not hospitalized. If someone is feeling very depressed, complains of hearing voices, or feels anxious all the time, he or she might seek psychological treatment. 
A friend, spouse, or parent might refer someone for treatment. The individual might go see his primary care physician first and then be referred to a mental health practitioner. Some people seek treatment because they are involved with the state’s child protective services—that is, their children have been removed from their care due to abuse or neglect. The parents might be referred to psychiatric or substance abuse facilities and the children would likely receive treatment for trauma. If the parents are interested in and capable of becoming better parents, the goal of treatment might be family reunification. For other children whose parents are unable to change—for example, the parent or parents who are heavily addicted to drugs and refuse to enter treatment—the goal of therapy might be to help the children adjust to foster care and/or adoption (See figure 16.8 below). Some people seek therapy because the criminal justice system referred them or required them to go. For some individuals, for example, attending weekly counseling sessions might be a condition of parole. If an individual is mandated to attend therapy, she is seeking services involuntarily. Involuntary treatment refers to therapy that is not the individual’s choice. Other individuals might voluntarily seek treatment. Voluntary treatment means the person chooses to attend therapy to obtain relief from symptoms. Psychological treatment can occur in a variety of places. An individual might go to a community mental health center or a practitioner in private or community practice. A child might see a school counselor, school psychologist, or school social worker. An incarcerated person might receive group therapy in prison. There are many different types of treatment providers, and licensing requirements vary from state to state. Besides psychologists and psychiatrists, there are clinical social workers, marriage and family therapists, and trained religious personnel who also perform counseling and therapy. A range of funding sources pay for mental health treatment: health insurance, government, and private pay. In the past, even when people had health insurance, the coverage would not always pay for mental health services. This changed with the Mental Health Parity and Addiction Equity Act of 2008, which requires group health plans and insurers to make sure there is parity of mental health services (U.S. Department of Labor, n.d.). This means that co-pays, total number of visits, and deductibles for mental health and substance abuse treatment need to be equal to and cannot be more restrictive or harsher than those for physical illnesses and medical/surgical problems. Finding treatment sources is also not always easy: there may be limited options, especially in rural areas and low-income urban areas; waiting lists; poor quality of care available for indigent patients; and financial obstacles such as co-pays, deductibles, and time off from work. Over \(85\%\) of the \(1,669\) federally designated mental health professional shortage areas are rural; often primary care physicians and law enforcement are the first-line mental health providers (Ivey, Scheffler, & Zazzali, 1998), although they do not have the specialized training of a mental health professional, who often would be better equipped to provide care. Availability, accessibility, and acceptability (the stigma attached to mental illness) are all problems in rural areas. Approximately two-thirds of those with symptoms receive no care at all (U.S. 
Department of Health and Human Services, 2005; Wagenfeld, Murray, Mohatt, & DeBruyn, 1994). At the end of 2013, the U.S. Department of Agriculture announced an investment of \(\$50\) million to help improve access and treatment for mental health problems as part of the Obama administration’s effort to strengthen rural communities.
textbooks/socialsci/Psychology/Introductory_Psychology/Introductory_Psychology_2e_(OpenStax)/16%3A_Therapy_and_Treatment/16.02%3A_Mental_Health_Treatment_-_Past_and_Present.txt
Learning Objectives • Distinguish between psychotherapy and biomedical therapy • Recognize various orientations to psychotherapy • Discuss psychotropic medications and recognize which medications are used to treat specific psychological disorders One of the goals of therapy is to help a person stop repeating and reenacting destructive patterns and to start looking for better solutions to difficult situations. This goal is reflected in the following poem:
Autobiography in Five Short Chapters by Portia Nelson (1993)
Chapter One
I walk down the street. There is a deep hole in the sidewalk. I fall in. I am lost. . . . I am helpless. It isn't my fault. It takes forever to find a way out.
Chapter Two
I walk down the same street. There is a deep hole in the sidewalk. I pretend I don't see it. I fall in again. I can't believe I am in this same place. But, it isn't my fault. It still takes a long time to get out.
Chapter Three
I walk down the same street. There is a deep hole in the sidewalk. I see it is there. I still fall in . . . it's a habit . . . but, my eyes are open. I know where I am. It is my fault. I get out immediately.
Chapter Four
I walk down the same street. There is a deep hole in the sidewalk. I walk around it.
Chapter Five
I walk down another street.
Two types of therapy are psychotherapy and biomedical therapy. Both types of treatment help people with psychological disorders, such as depression, anxiety, and schizophrenia. Psychotherapy is a psychological treatment that employs various methods to help someone overcome personal problems, or to attain personal growth. In modern practice, it has evolved into what is known as psychodynamic therapy, which will be discussed later. Biomedical therapy involves medication and/or medical procedures to treat psychological disorders. First, we will explore the various psychotherapeutic orientations outlined in Table 16.1 below (many of these orientations were discussed in the Introduction chapter).
Table 16.1 Various Psychotherapy Techniques
Type | Description | Example
Psychodynamic psychotherapy | Talk therapy based on belief that the unconscious and childhood conflicts impact behavior | Patient talks about his past
Play therapy | Psychoanalytical therapy wherein interaction with toys is used instead of talk; used in child therapy | Patient (child) acts out family scenes with dolls
Behavior therapy | Principles of learning applied to change undesirable behaviors | Patient learns to overcome fear of elevators through several stages of relaxation techniques
Cognitive therapy | Awareness of cognitive process helps patients eliminate thought patterns that lead to distress | Patient learns not to overgeneralize failure based on single failure
Cognitive-behavioral therapy | Work to change cognitive distortions and self-defeating behaviors | Patient learns to identify self-defeating behaviors to overcome an eating disorder
Humanistic therapy | Increase self-awareness and acceptance through focus on conscious thoughts | Patient learns to articulate thoughts that keep her from achieving her goals
Psychotherapy Techniques: Psychoanalysis Psychoanalysis was developed by Sigmund Freud and was the first form of psychotherapy. It was the dominant therapeutic technique in the early \(20^{th}\) century, but it has since waned significantly in popularity. Freud believed most of our psychological problems are the result of repressed impulses and trauma experienced in childhood, and he believed psychoanalysis would help uncover long-buried feelings. 
In a psychoanalyst’s office, you might see a patient lying on a couch speaking of dreams or childhood memories, and the therapist using various Freudian methods such as free association and dream analysis (See figure 16.9). In free association, the patient relaxes and then says whatever comes to mind at the moment. However, Freud felt that the ego would at times try to block, or repress, unacceptable urges or painful conflicts during free association. Consequently, a patient would demonstrate resistance to recalling these thoughts or situations. In dream analysis, a therapist interprets the underlying meaning of dreams. Psychoanalysis is a therapy approach that typically takes years. Over the course of time, the patient reveals a great deal about himself to the therapist. Freud suggested that during this patient-therapist relationship, the patient comes to develop strong feelings for the therapist—maybe positive feelings, maybe negative feelings. Freud called this transference: the patient transfers all the positive or negative emotions associated with the patient’s other relationships to the psychoanalyst. For example, Crystal is seeing a psychoanalyst. During the years of therapy, she comes to see her therapist as a father figure. She transfers her feelings about her father onto her therapist, perhaps in an effort to gain the love and attention she did not receive from her own father. Today, Freud’s psychoanalytical perspective has been expanded upon by the developments of subsequent theories and methodologies: the psychodynamic perspective. This approach to therapy remains centered on the role of people’s internal drives and forces, but treatment is less intensive than Freud’s original model. Link to Learning View a brief video overview of psychoanalysis theory, research, and practice to learn more. Psychotherapy: Play Therapy Play therapy is often used with children since they are not likely to sit on a couch and recall their dreams or engage in traditional talk therapy. This technique uses a therapeutic process of play to “help clients prevent or resolve psychosocial difficulties and achieve optimal growth” (O’Connor, 2000, p. 7). The idea is that children play out their hopes, fantasies, and traumas while using dolls, stuffed animals, and sandbox figurines (See figure 16.10). Play therapy can also be used to help a therapist make a diagnosis. The therapist observes how the child interacts with toys (e.g., dolls, animals, and home settings) in an effort to understand the roots of the child’s disturbed behavior. Play therapy can be nondirective or directive. In nondirective play therapy, children are encouraged to work through their problems by playing freely while the therapist observes (LeBlanc & Ritchie, 2001). In directive play therapy, the therapist provides more structure and guidance in the play session by suggesting topics, asking questions, and even playing with the child (Harter, 1977). Psychotherapy: Behavior Therapy In psychoanalysis, therapists help their patients look into their past to uncover repressed feelings. In behavior therapy, a therapist employs principles of learning to help clients change undesirable behaviors—rather than digging deeply into one’s unconscious. Therapists with this orientation believe that dysfunctional behaviors, like phobias and bedwetting, can be changed by teaching clients new, more constructive behaviors. Behavior therapy employs both classical and operant conditioning techniques to change behavior. 
One type of behavior therapy utilizes classical conditioning techniques. Therapists using these techniques believe that dysfunctional behaviors are conditioned responses. Applying the conditioning principles developed by Ivan Pavlov, these therapists seek to recondition their clients and thus change their behavior. Emmie is eight years old, and frequently wets her bed at night. She’s been invited to several sleepovers, but she won’t go because of her problem. Using a type of conditioning therapy, Emmie begins to sleep on a liquid-sensitive bed pad that is hooked to an alarm. When moisture touches the pad, it sets off the alarm, waking up Emmie. When this process is repeated enough times, Emmie develops an association between urinary relaxation and waking up, and this stops the bedwetting. Emmie has now gone three weeks without wetting her bed and is looking forward to her first sleepover this weekend. One commonly used classical conditioning therapeutic technique is counterconditioning: a client learns a new response to a stimulus that has previously elicited an undesirable behavior. Two counterconditioning techniques are aversive conditioning and exposure therapy. Aversive conditioning uses an unpleasant stimulus to stop an undesirable behavior. Therapists apply this technique to eliminate addictive behaviors, such as smoking, nail biting, and drinking. In aversion therapy, clients will typically engage in a specific behavior (such as nail biting) and at the same time are exposed to something unpleasant, such as a mild electric shock or a bad taste. After repeated associations between the unpleasant stimulus and the behavior, the client can learn to stop the unwanted behavior. Aversion therapy has been used effectively for years in the treatment of alcoholism (Davidson, 1974; Elkins, 1991; Streeton & Whelan, 2001). One common way this occurs is through a chemically based substance known as Antabuse. When a person takes Antabuse and then consumes alcohol, uncomfortable side effects result including nausea, vomiting, increased heart rate, heart palpitations, severe headache, and shortness of breath. Antabuse is repeatedly paired with alcohol until the client associates alcohol with unpleasant feelings, which decreases the client’s desire to consume alcohol. Antabuse creates a conditioned aversion to alcohol because it replaces the original pleasure response with an unpleasant one. In exposure therapy, a therapist seeks to treat clients’ fears or anxiety by presenting them with the object or situation that causes their problem, with the idea that they will eventually get used to it. This can be done via reality, imagination, or virtual reality. Exposure therapy was first reported in 1924 by Mary Cover Jones, who is considered the mother of behavior therapy. Jones worked with a boy named Peter who was afraid of rabbits. Her goal was to replace Peter’s fear of rabbits with a conditioned response of relaxation, which is a response that is incompatible with fear (See figure 16.11). How did she do it? Jones began by placing a caged rabbit on the other side of a room with Peter while he ate his afternoon snack. Over the course of several days, Jones moved the rabbit closer and closer to where Peter was seated with his snack. After two months of being exposed to the rabbit while relaxing with his snack, Peter was able to hold the rabbit and pet it while eating (Jones, 1924). 
Thirty years later, Joseph Wolpe (1958) refined Jones’s techniques, giving us the behavior therapy technique of exposure therapy that is used today. A popular form of exposure therapy is systematic desensitization, wherein a calm and pleasant state is gradually associated with increasing levels of anxiety-inducing stimuli. The idea is that you can’t be nervous and relaxed at the same time. Therefore, if you can learn to relax when you are facing environmental stimuli that make you nervous or fearful, you can eventually eliminate your unwanted fear response (Wolpe, 1958) (See figure 16.12 below). For example, Jayden is terrified of elevators. Nothing bad has ever happened to him on an elevator, but he’s so afraid of elevators that he will always take the stairs. That wasn’t a problem when Jayden worked on the second floor of an office building, but now he has a new job—on the 29th floor of a skyscraper in downtown Los Angeles. Jayden knows he can’t climb 29 flights of stairs in order to get to work each day, so he decided to see a behavior therapist for help. The therapist asks Jayden to first construct a hierarchy of elevator-related situations that elicit fear and anxiety. They range from situations of mild anxiety such as being nervous around the other people in the elevator, to the fear of getting an arm caught in the door, to panic-provoking situations such as getting trapped or the cable snapping. Next, the therapist uses progressive relaxation. They teach Jayden how to relax each of his muscle groups so that he achieves a drowsy, relaxed, and comfortable state of mind. Once he’s in this state, the therapist asks Jayden to imagine a mildly anxiety-provoking situation. Jayden is standing in front of the elevator thinking about pressing the call button. If this scenario causes Jayden anxiety, he lifts his finger. The therapist would then tell Jayden to forget the scene and return to his relaxed state. They repeat this scenario over and over until Jayden can imagine himself pressing the call button without anxiety. Over time the therapist and Jayden use progressive relaxation and imagination to proceed through all of the situations on Jayden’s hierarchy until he becomes desensitized to each one. After this, Jayden and the therapist begin to practice what he only previously envisioned in therapy, gradually going from pressing the button to actually riding an elevator. The goal is that Jayden will soon be able to take the elevator all the way up to the 29th floor of his office without feeling any anxiety. Sometimes, it’s too impractical, expensive, or embarrassing to re-create anxiety-producing situations, so a therapist might employ virtual reality exposure therapy by using a simulation to help conquer fears. Virtual reality exposure therapy has been used effectively to treat numerous anxiety disorders such as the fear of public speaking, claustrophobia (fear of enclosed spaces), aviophobia (fear of flying), and post-traumatic stress disorder (PTSD), a trauma and stressor-related disorder (Gerardi, Cukor, Difede, Rizzo, & Rothbaum, 2010). Link to Learning A new virtual reality exposure therapy is being used to treat PTSD in soldiers. Virtual Iraq is a simulation that mimics Middle Eastern cities and desert roads with situations similar to those soldiers experienced while deployed in Iraq. This method of virtual reality exposure therapy has been effective in treating PTSD for combat veterans. 
Approximately \(80\%\) of participants who completed treatment saw clinically significant reduction in their symptoms of PTSD, anxiety, and depression (Rizzo et al., 2010). Watch this Virtual Iraq video that shows soldiers being treated via simulation to learn more. Some behavior therapies employ operant conditioning. Recall what you learned about operant conditioning: We have a tendency to repeat behaviors that are reinforced. What happens to behaviors that are not reinforced? They become extinguished. These principles can be applied to help people with a wide range of psychological problems. For instance, operant conditioning techniques designed to reinforce positive behaviors and punish unwanted behaviors have been an effective tool to help children with autism (Lovaas, 1987, 2003; Sallows & Graupner, 2005; Wolf & Risley, 1967). This technique is called Applied Behavior Analysis (ABA). In this treatment, child-specific reinforcers (e.g., stickers, praise, candy, bubbles, and extra play time) are used to reward and motivate autistic children when they demonstrate desired behaviors such as sitting on a chair when requested, verbalizing a greeting, or making eye contact. Punishment such as a timeout or a sharp “No!” from the therapist or parent might be used to discourage undesirable behaviors such as pinching, scratching, and pulling hair. One popular operant conditioning intervention is called the token economy. This involves a controlled setting where individuals are reinforced for desirable behaviors with tokens, such as a poker chip, that can be exchanged for items or privileges. Token economies are often used in psychiatric hospitals to increase patient cooperation and activity levels. Patients are rewarded with tokens when they engage in positive behaviors (e.g., making their beds, brushing their teeth, coming to the cafeteria on time, and socializing with other patients). They can later exchange the tokens for extra TV time, private rooms, visits to the canteen, and so on (Dickerson, Tenhula, & Green-Paden, 2005). Psychotherapy: Cognitive Therapy Cognitive therapy is a form of psychotherapy that focuses on how a person’s thoughts lead to feelings of distress. The idea behind cognitive therapy is that how you think determines how you feel and act. Cognitive therapists help their clients change dysfunctional thoughts in order to relieve distress. They help a client see how they misinterpret a situation (cognitive distortion). For example, a client may overgeneralize. Because Ray failed one test in his Psychology 101 course, he feels he is stupid and worthless. These thoughts then cause his mood to worsen. Therapists also help clients recognize when they blow things out of proportion. Because Ray failed his Psychology 101 test, he has concluded that he’s going to fail the entire course and probably flunk out of college altogether. These errors in thinking have contributed to Ray’s feelings of distress. His therapist will help him challenge these irrational beliefs, focus on their illogical basis, and correct them with more logical and rational thoughts and beliefs. Cognitive therapy was developed by psychiatrist Aaron Beck in the 1960s. His initial focus was on depression and how a client’s self-defeating attitude served to maintain a depression despite positive factors in her life (Beck, Rush, Shaw, & Emery, 1979) (See figure 16.13). 
Through questioning, a cognitive therapist can help a client recognize dysfunctional ideas, challenge catastrophizing thoughts about themselves and their situations, and find a more positive way to view things (Beck, 2011). Psychotherapy: Cognitive-Behavioral Therapy Cognitive-behavioral therapists focus much more on present issues than on a patient’s childhood or past, as in other forms of psychotherapy. One of the first forms of cognitive-behavioral therapy was rational emotive therapy (RET), which was founded by Albert Ellis and grew out of his dislike of Freudian psychoanalysis (Daniel, n.d.). Behaviorists such as Joseph Wolpe also influenced Ellis’s therapeutic approach (National Association of Cognitive-Behavioral Therapists, 2009). Cognitive-behavioral therapy (CBT) helps clients examine how their thoughts affect their behavior. It aims to change cognitive distortions and self-defeating behaviors. In essence, this approach is designed to change the way people think as well as how they act. It is similar to cognitive therapy in that CBT attempts to make individuals aware of their irrational and negative thoughts and helps people replace them with new, more positive ways of thinking. It is also similar to behavior therapies in that CBT teaches people how to practice and engage in more positive and healthy approaches to daily situations. In total, hundreds of studies have shown the effectiveness of cognitive-behavioral therapy in the treatment of numerous psychological disorders such as depression, PTSD, anxiety disorders, eating disorders, bipolar disorder, and substance abuse (Beck Institute for Cognitive Behavior Therapy, n.d.). For example, CBT has been found to be effective in decreasing levels of hopelessness and suicidal thoughts in previously suicidal teenagers (Alavi, Sharifi, Ghanizadeh, & Dehbozorgi, 2013). Cognitive-behavioral therapy has also been effective in reducing PTSD in specific populations, such as transit workers (Lowinger & Rombom, 2012). Cognitive-behavioral therapy aims to change cognitive distortions and self-defeating behaviors using techniques like the ABC model. With this model, there is an Action (sometimes called an activating event), the Belief about the event, and the Consequences of this belief. Let’s say, Jon and Joe both go to a party. Jon and Joe each have met a young woman at the party: Jon is talking with Megan most of the party, and Joe is talking with Amanda. At the end of the party, Jon asks Megan for her phone number and Joe asks Amanda. Megan tells Jon she would rather not give him her number, and Amanda tells Joe the same thing. Both Jon and Joe are surprised, as they thought things were going well. What can Jon and Joe tell themselves about why the women were not interested? Let’s say Jon tells himself he is a loser, or is ugly, or “has no game.” Jon then gets depressed and decides not to go to another party, which starts a cycle that keeps him depressed. Joe tells himself that he had bad breath, goes out and buys a new toothbrush, goes to another party, and meets someone new. Jon’s belief about what happened results in a consequence of further depression, whereas Joe’s belief does not. Jon is internalizing the attribution or reason for the rebuffs, which triggers his depression. On the other hand, Joe is externalizing the cause, so his thinking does not contribute to feelings of depression. Cognitive-behavioral therapy examines specific maladaptive and automatic thoughts and cognitive distortions. 
Some examples of cognitive distortions are all-or-nothing thinking, overgeneralization, and jumping to conclusions. In overgeneralization, someone takes a small situation and makes it huge—for example, instead of saying, “This particular woman was not interested in me,” the man says, “I am ugly, a loser, and no one is ever going to be interested in me.” All or nothing thinking, which is a common type of cognitive distortion for people suffering from depression, reflects extremes. In other words, everything is black or white. After being turned down for a date, Jon begins to think, “No woman will ever go out with me. I’m going to be alone forever.” He begins to feel anxious and sad as he contemplates his future. The third kind of distortion involves jumping to conclusions—assuming that people are thinking negatively about you or reacting negatively to you, even though there is no evidence. Consider the example of Savannah and Hillaire, who recently met at a party. They have a lot in common, and Savannah thinks they could become friends. She calls Hillaire to invite her for coffee. Since Hillaire doesn’t answer, Savannah leaves her a message. Several days go by and Savannah never hears back from her potential new friend. Maybe Hillaire never received the message because she lost her phone or she is too busy to return the phone call. But if Savannah believes that Hillaire didn’t like Savannah or didn’t want to be her friend, she is demonstrating the cognitive distortion of jumping to conclusions. How effective is CBT? One client said this about his cognitive-behavioral therapy: "I have had many painful episodes of depression in my life, and this has had a negative effect on my career and has put considerable strain on my friends and family. The treatments I have received, such as taking antidepressants and psychodynamic counseling, have helped [me] to cope with the symptoms and to get some insights into the roots of my problems. CBT has been by far the most useful approach I have found in tackling these mood problems. It has raised my awareness of how my thoughts impact on my moods. How the way I think about myself, about others and about the world can lead me into depression. It is a practical approach, which does not dwell so much on childhood experiences, whilst acknowledging that it was then that these patterns were learned. It looks at what is happening now, and gives tools to manage these moods on a daily basis." (Martin, 2007, n.p.) Psychotherapy: Humanistic Therapy Humanistic psychology focuses on helping people achieve their potential. So it makes sense that the goal of humanistic therapy is to help people become more self-aware and accepting of themselves. In contrast to psychoanalysis, humanistic therapists focus on conscious rather than unconscious thoughts. They also emphasize the patient’s present and future, as opposed to exploring the patient’s past. Psychologist Carl Rogers developed a therapeutic orientation known as Rogerian, or client-centered therapy. Note the change from patients to clients. Rogers (1951) felt that the term patient suggested the person seeking help was sick and looking for a cure. Since this is a form of nondirective therapy, a therapeutic approach in which the therapist does not give advice or provide interpretations but helps the person to identify conflicts and understand feelings, Rogers (1951) emphasized the importance of the person taking control of his own life to overcome life’s challenges. 
In client-centered therapy, the therapist uses the technique of active listening. In active listening, the therapist acknowledges, restates, and clarifies what the client expresses. Therapists also practice what Rogers called unconditional positive regard, which involves not judging clients and simply accepting them for who they are. Rogers (1951) also felt that therapists should demonstrate genuineness, empathy, and acceptance toward their clients because this helps people become more accepting of themselves, which results in personal growth. Evaluating Various Forms of Psychotherapy How can we assess the effectiveness of psychotherapy? Is one technique more effective than another? For anyone considering therapy, these are important questions. According to the American Psychological Association, three factors work together to produce successful treatment. The first is the use of evidence-based treatment that is deemed appropriate for your particular issue. The second important factor is the clinical expertise of the psychologist or therapist. The third factor is your own characteristics, values, preferences, and culture. Many people begin psychotherapy feeling like their problem will never be resolved; however, psychotherapy helps people see that they can do things to make their situation better. Psychotherapy can help reduce a person’s anxiety, depression, and maladaptive behaviors. Through psychotherapy, individuals can learn to engage in healthy behaviors designed to help them better express emotions, improve relationships, think more positively, and perform more effectively at work or school. Many studies have explored the effectiveness of psychotherapy. For example, one large-scale study that examined \(16\) meta-analyses of CBT reported that it was equally effective or more effective than other therapies in treating PTSD, generalized anxiety disorder, depression, and social phobia (Butler, Chapman, Forman, & Beck, 2006). Another study found that CBT was as effective at treating depression (\(43\%\) success rate) as prescription medication (\(50\%\) success rate) compared to the placebo rate of \(25\%\) (DeRubeis et al., 2005). Another meta-analysis found that psychodynamic therapy was also as effective at treating these types of psychological issues as CBT (Shedler, 2010). However, no studies have found one psychotherapeutic approach more effective than another (Abbass, Kisely, & Kroenke, 2006; Chorpita et al., 2011), nor have they shown any relationship between a client’s treatment outcome and the level of the clinician’s training or experience (Wampold, 2007). Regardless of which type of psychotherapy an individual chooses, one critical factor that determines the success of treatment is the person’s relationship with the psychologist or therapist. Biomedical Therapies Individuals can be prescribed biologically based treatments or psychotropic medications that are used to treat mental disorders. While these are often used in combination with psychotherapy, they also are taken by individuals not in therapy. This is known as biomedical therapy. Medications used to treat psychological disorders are called psychotropic medications and are prescribed by medical doctors, including psychiatrists. In Louisiana and New Mexico, psychologists are able to prescribe some types of these medications (American Psychological Association, 2014). Different types and classes of medications are prescribed for different disorders. 
An individual with depression might be given an antidepressant, an individual with bipolar disorder might be given a mood stabilizer, and an individual with schizophrenia might be given an antipsychotic. These medications treat the symptoms of a psychological disorder by altering the levels or effects of neurotransmitters. For example, each type of antidepressant affects a different neurotransmitter, such as SSRI (selective serotonin reuptake inhibitor) antidepressants that increase the level of the neurotransmitter serotonin, and SNRI (serotonin-norepinephrine reuptake inhibitor) antidepressants that increase the levels of both serotonin and norepinephrine. They can help people feel better so that they can function on a daily basis, but they do not cure the disorder. Some people may only need to take a psychotropic medication for a short period of time. Others with severe disorders like bipolar disorder or schizophrenia may need to take psychotropic medication for a long time. Psychotropic medications are a popular treatment option for many types of disorders, and research suggests that they are most effective when combined with psychotherapy. This is especially true for the most common mental disorders, such as depressive and anxiety disorders (Cuijpers et al., 2014). When considering adding medication as a treatment option, individuals should know that some psychotropic medications have very concerning side effects. Table 16.2 shows the commonly prescribed types of medications, how they are used, and some of the potential side effects that may occur.
Table 16.2 Commonly Prescribed Psychotropic Medications
Type of Medication | Used to Treat | Brand Names of Commonly Prescribed Medications | How They Work | Side Effects
Antipsychotics (developed in the 1950s) | Schizophrenia and other types of severe thought disorders | Haldol, Mellaril, Prolixin, Thorazine | Treat positive psychotic symptoms such as auditory and visual hallucinations, delusions, and paranoia by blocking the neurotransmitter dopamine | Long-term use can lead to tardive dyskinesia, involuntary movements of the arms, legs, tongue and facial muscles, resulting in Parkinson’s-like tremors
Atypical Antipsychotics (developed in the late 1980s) | Schizophrenia and other types of severe thought disorders | Abilify, Risperdal, Clozaril | Treat the negative symptoms of schizophrenia, such as withdrawal and apathy, by targeting both dopamine and serotonin receptors; newer medications may treat both positive and negative symptoms | Can increase the risk of obesity and diabetes as well as elevate cholesterol levels; constipation, dry mouth, blurred vision, drowsiness, and dizziness
Anti-depressants | Depression and increasingly for anxiety | Paxil, Prozac, Zoloft (selective serotonin reuptake inhibitors [SSRIs]); Tofranil and Elavil (tricyclics) | Alter levels of neurotransmitters such as serotonin and norepinephrine | SSRIs: headache, nausea, weight gain, drowsiness, reduced sex drive; Tricyclics: dry mouth, constipation, blurred vision, drowsiness, reduced sex drive, increased risk of suicide
Anti-anxiety agents | Anxiety and agitation that occur in OCD, PTSD, panic disorder, and social phobia | Xanax, Valium, Ativan | Depress central nervous system activity | Drowsiness, dizziness, headache, fatigue, lightheadedness
Mood Stabilizers | Bipolar disorder | Lithium, Depakote, Lamictal, Tegretol | Treat episodes of mania as well as depression | Excessive thirst, irregular heartbeat, itching/rash, swelling (face, mouth, and extremities), nausea, loss of appetite
Stimulants | ADHD | Adderall, Ritalin | Improve ability to focus on a task and maintain attention | Decreased appetite, difficulty sleeping, stomachache, headache
Another biologically based treatment that continues to be used, although infrequently, is electroconvulsive therapy (ECT) (formerly known by its unscientific name as electroshock therapy). It involves using an electrical current to induce seizures to help alleviate the effects of severe depression. The exact mechanism is unknown, although it does help alleviate symptoms for people with severe depression who have not responded to traditional drug therapy (Pagnin, de Queiroz, Pini, & Cassano, 2004). About \(85\%\) of people treated with ECT improve (Reti, n.d.). However, the memory loss associated with repeated administrations has led to it being implemented as a last resort (Donahue, 2000; Prudic, Peyser, & Sackeim, 2000). A more recent alternative is transcranial magnetic stimulation (TMS), a procedure approved by the FDA in 2008 that uses magnetic fields to stimulate nerve cells in the brain to improve depression symptoms; it is used when other treatments have not worked (Mayo Clinic, 2012). DIG DEEPER: Evidence-based Practice A buzzword in therapy today is evidence-based practice. However, it’s not a novel concept but one that has been used in medicine for at least two decades. Evidence-based practice is used to reduce errors in treatment selection by making clinical decisions based on research (Sackett & Rosenberg, 1995). In any case, evidence-based treatment is on the rise in the field of psychology. So what is it, and why does it matter? In an effort to determine which treatment methodologies are evidence-based, professional organizations such as the American Psychological Association (APA) have recommended that specific psychological treatments be used to treat certain psychological disorders (Chambless & Ollendick, 2001). According to the APA (2005), “Evidence-based practice in psychology (EBPP) is the integration of the best available research with clinical expertise in the context of patient characteristics, culture, and preferences” (p. 1). The foundational idea behind evidence-based treatment is that best practices are determined by research evidence that has been compiled by comparing various forms of treatment (Charman & Barkham, 2005). These treatments are then operationalized and placed in treatment manuals—trained therapists follow these manuals. The benefits are that evidence-based treatment can reduce variability between therapists to ensure that a specific approach is delivered with integrity (Charman & Barkham, 2005). Therefore, clients have a higher chance of receiving therapeutic interventions that are effective at treating their specific disorder. While EBPP is based on randomized control trials, critics of EBPP reject it, stating that the results of trials cannot be applied to individuals and instead determinations regarding treatment should be based on a therapist’s judgment (Mullen & Streiner, 2004).
textbooks/socialsci/Psychology/Introductory_Psychology/Introductory_Psychology_2e_(OpenStax)/16%3A_Therapy_and_Treatment/16.03%3A_Types_of_Treatment.txt
Learning Objectives • Distinguish between the various modalities of treatment • Discuss benefits of group therapy Once a person seeks treatment, whether voluntarily or involuntarily, he has an intake done to assess his clinical needs. An intake is the therapist’s first meeting with the client. The therapist gathers specific information to address the client’s immediate needs, such as the presenting problem, the client’s support system, and insurance status. The therapist informs the client about confidentiality, fees, and what to expect in treatment. Confidentiality means the therapist cannot disclose confidential communications to any third party unless mandated or permitted by law to do so. During the intake, the therapist and client will work together to discuss treatment goals. Then a treatment plan will be formulated, usually with specific measurable objectives. Also, the therapist and client will discuss how treatment success will be measured and the estimated length of treatment. There are several different modalities of treatment (See figure 16.14): Individual therapy, family therapy, couples therapy, and group therapy are the most common. Individual Therapy In individual therapy, also known as individual psychotherapy or individual counseling, the client and clinician meet one-on-one (usually from \(45\) minutes to \(1\) hour). These meetings typically occur weekly or every other week, and sessions are conducted in a confidential and caring environment (See figure 16.15). The clinician will work with clients to help them explore their feelings, work through life challenges, identify aspects of themselves and their lives that they wish to change, and set goals to help them work towards these changes. A client might see a clinician for only a few sessions, or the client may attend individual therapy sessions for a year or longer. The amount of time spent in therapy depends on the needs of the client as well as her personal goals. Group Therapy In group therapy, a clinician meets together with several clients with similar problems (See figure 16.16). When children are placed in group therapy, it is particularly important to match clients for age and problems. One benefit of group therapy is that it can help decrease a client’s shame and isolation about a problem while offering needed support, both from the therapist and other members of the group (American Psychological Association, 2014). A nine-year-old sexual abuse victim, for example, may feel very embarrassed and ashamed. If he is placed in a group with other sexually abused boys, he will realize that he is not alone. A child struggling with poor social skills would likely benefit from a group with a specific curriculum to foster special skills. A woman suffering from post-partum depression could feel less guilty and more supported by being in a group with similar women. Group therapy also has some specific limitations. Members of the group may be afraid to speak in front of other people because sharing secrets and problems with complete strangers can be stressful and overwhelming. There may be personality clashes and arguments among group members. There could also be concerns about confidentiality: Someone from the group might share what another participant said to people outside of the group. Another benefit of group therapy is that members can confront each other about their patterns. For those with some types of problems, such as sexual abusers, group therapy is the recommended treatment. 
Group treatment for this population is considered to have several benefits: Group treatment is more economical than individual, couples, or family therapy. Sexual abusers often feel more comfortable admitting and discussing their offenses in a treatment group where others are modeling openness. Clients often accept feedback about their behavior more willingly from other group members than from therapists. Finally, clients can practice social skills in group treatment settings. (McGrath, Cumming, Burchard, Zeoli, & Ellerby, 2009) Groups that have a strong educational component are called psycho-educational groups. For example, a group for children whose parents have cancer might discuss in depth what cancer is, types of treatment for cancer, and the side effects of treatments, such as hair loss. Often, group therapy sessions with children take place in school. They are led by a school counselor, a school psychologist, or a school social worker. Groups might focus on test anxiety, social isolation, self-esteem, bullying, or school failure (Shechtman, 2002). Whether the group is held in school or in a clinician’s office, group therapy has been found to be effective with children facing numerous kinds of challenges (Shechtman, 2002). During a group session, the entire group could reflect on an individual’s problem or difficulties, and others might disclose what they have done in that situation. When a clinician is facilitating a group, the focus is always on making sure that everyone benefits and participates in the group and that no one person is the focus of the entire session. Groups can be organized in various ways: some have an overarching theme or purpose, some are time-limited, some have open membership that allows people to come and go, and some are closed. Some groups are structured with planned activities and goals, while others are unstructured: There is no specific plan, and group members themselves decide how the group will spend its time and on what goals it will focus. This can become a complex and emotionally charged process, but it is also an opportunity for personal growth (Page & Berkow, 1994). Couples Therapy Couples therapy involves two people in an intimate relationship who are having difficulties and are trying to resolve them (See figure 16.17). The couple may be dating, partnered, engaged, or married. The primary therapeutic orientation used in couples counseling is cognitive-behavioral therapy (Rathus & Sanderson, 1999). Couples meet with a therapist to discuss conflicts and/or aspects of their relationship that they want to change. The therapist helps them see how their individual backgrounds, beliefs, and actions are affecting their relationship. Often, a therapist tries to help the couple resolve these problems, as well as implement strategies that will lead to a healthier and happier relationship, such as how to listen, how to argue, and how to express feelings. However, sometimes, after working with a therapist, a couple will realize that they are too incompatible and will decide to separate. Some couples seek therapy to work out their problems, while others attend therapy to determine whether staying together is the best solution. Counseling couples in a high-conflict and volatile relationship can be difficult. In fact, psychologists Peter Pearson and Ellyn Bader, who founded the Couples Institute in Palo Alto, California, have compared the experience of the clinician in couples’ therapy to be like “piloting a helicopter in a hurricane” (Weil, 2012, para. 7). 
Family Therapy Family therapy is a special form of group therapy, consisting of one or more families. Although there are many theoretical orientations in family therapy, one of the most predominant is the systems approach. The family is viewed as an organized system, and each individual within the family is a contributing member who creates and maintains processes within the system that shape behavior (Minuchin, 1985). Each member of the family influences and is influenced by the others. The goal of this approach is to enhance the growth of each family member as well as that of the family as a whole. Often, dysfunctional patterns of communication that develop between family members can lead to conflict. A family with this dynamic might wish to attend therapy together rather than individually. In many cases, one member of the family has problems that detrimentally affect everyone. For example, a mother’s depression, teen daughter’s eating disorder, or father’s alcohol dependence could affect all members of the family. The therapist would work with all members of the family to help them cope with the issue, and to encourage resolution and growth in the case of the individual family member with the problem. With family therapy, the nuclear family (i.e., parents and children) or the nuclear family plus whoever lives in the household (e.g., grandparent) come into treatment. Family therapists work with the whole family unit to heal the family. There are several different types of family therapy. In structural family therapy, the therapist examines and discusses the boundaries and structure of the family: who makes the rules, who sleeps in the bed with whom, how decisions are made, and what are the boundaries within the family. In some families, the parents do not work together to make rules, or one parent may undermine the other, leading the children to act out. The therapist helps them resolve these issues and learn to communicate more effectively. Link to Learning Watch this video of a structural family session to learn more. In strategic family therapy, the goal is to address specific problems within the family that can be dealt with in a relatively short amount of time. Typically, the therapist would guide what happens in the therapy session and design a detailed approach to resolving each member’s problem (Madanes, 1991).
textbooks/socialsci/Psychology/Introductory_Psychology/Introductory_Psychology_2e_(OpenStax)/16%3A_Therapy_and_Treatment/16.04%3A_Treatment_Modalities.txt
Learning Objectives • Recognize the goal of substance-related and addictive disorders treatment • Discuss what makes for effective treatment • Describe how comorbid disorders are treated Addiction is often viewed as a chronic disease (Figure 16.18). The choice to use a substance is initially voluntary; however, because chronic substance use can permanently alter the neural structure in the prefrontal cortex, an area of the brain associated with decision-making and judgment, a person becomes driven to use drugs and/or alcohol (Muñoz-Cuevas, Athilingam, Piscopo, & Wilbrecht, 2013). This helps explain why relapse rates tend to be high. About \(40\%-60\%\) of individuals relapse, which means they return to abusing drugs and/or alcohol after a period of improvement (National Institute on Drug Abuse [NIDA], 2008). The goal of substance-related treatment is to help an addicted person stop compulsive drug-seeking behaviors (NIDA, 2012). This means an addicted person will need long-term treatment, similar to a person battling a chronic physical disease such as hypertension or diabetes. Treatment usually includes behavioral therapy and/or medication, depending on the individual (NIDA, 2012). Specialized therapies have also been developed for specific types of substance-related disorders, including alcohol, cocaine, and opioids (McGovern & Carroll, 2003). Substance-related treatment is considered much more cost-effective than incarceration or not treating those with addictions (NIDA, 2012). See figure below. What makes Treatment Effective? Specific factors make substance-related treatment much more effective. One factor is duration of treatment. Generally, the addict needs to be in treatment for at least three months to achieve a positive outcome (Simpson, 1981; Simpson, Joe, & Bracy, 1982; NIDA, 2012). This is due to the psychological, physiological, behavioral, and social aspects of abuse (Simpson, 1981; Simpson et al., 1982; NIDA, 2012). While in treatment, an addict might receive behavior therapy, which can help motivate the addict to participate in the treatment program and teach strategies for dealing with cravings and how to prevent relapse. Also, treatment needs to be holistic and address multiple needs, not just the drug addiction. This means that treatment will address factors such as communication, stress management, relationship issues, parenting, vocational concerns, and legal concerns (McGovern & Carroll, 2003; NIDA, 2012). While individual therapy is used in the treatment of substance-related disorders, group therapy is the most widespread treatment modality (Weiss, Jaffee, de Menil, & Cogley, 2004). The rationale behind using group therapy for addiction treatment is that addicts are much more likely to maintain sobriety in a group format. It has been suggested that this is due to the rewarding and therapeutic benefits of the group, such as support, affiliation, identification, and even confrontation (Center for Substance Abuse Treatment, 2005). For teenagers, the whole family often needs to participate in treatment to address issues such as family dynamics, communication, and relapse prevention. Family involvement in teen drug addiction is vital. Research suggests that greater parental involvement is correlated with a greater reduction in use by teen substance abusers. Also, mothers who participated in treatment displayed better mental health and greater warmth toward their children (Bertrand et al., 2013). 
However, neither individual nor group therapy has been found to be more effective (Weiss et al., 2004). Regardless of the type of treatment service, the primary focus is on abstinence or at the very least a significant reduction in use (McGovern & Carroll, 2003). Treatment also usually involves medications to detox the addict safely after an overdose, to prevent seizures and agitation that often occur in detox, to prevent reuse of the drug, and to manage withdrawal symptoms. Getting off drugs often involves the use of drugs—some of which can be just as addictive. Detox can be difficult and dangerous. Comorbid Disorders Frequently, a person who is addicted to drugs and/or alcohol has an additional psychological disorder. Saying a person has comorbid disorders means the individual has two or more diagnoses. This can often be a substance-related diagnosis and another psychiatric diagnosis, such as depression, bipolar disorder, or schizophrenia. These individuals fall into the category of mentally ill and chemically addicted (MICA)—their problems are often chronic and expensive to treat, with limited success. Compared with the overall population, substance abusers are twice as likely to have a mood or anxiety disorder. Drug abuse can cause symptoms of mood and anxiety disorders and the reverse is also true—people with debilitating symptoms of a psychiatric disorder may self-medicate and abuse substances. In cases of comorbidity, the best treatment is thought to address both (or multiple) disorders simultaneously (NIDA, 2012). Behavior therapies are used to treat comorbid conditions, and in many cases, psychotropic medications are used along with psychotherapy. For example, evidence suggests that bupropion (trade names: Wellbutrin and Zyban), approved for treating depression and nicotine dependence, might also help reduce craving and use of the drug methamphetamine (NIDA, 2011). However, more research is needed to better understand how these medications work—particularly when combined in patients with comorbidities.
textbooks/socialsci/Psychology/Introductory_Psychology/Introductory_Psychology_2e_(OpenStax)/16%3A_Therapy_and_Treatment/16.05%3A_Substance_Related_and_Addictive_Disorders_-_A_Special_Case.txt
Learning Objectives • Explain how the sociocultural model is used in therapy • Discuss barriers to mental health services among ethnic minorities The sociocultural perspective looks at you, your behaviors, and your symptoms in the context of your culture and background. For example, José is an 18-year-old Hispanic male from a traditional family. José comes to treatment because of depression. During the intake session, he reveals that he is gay and is nervous about telling his family. He also discloses that he is concerned because his religious background has taught him that being gay is wrong. How does his religious and cultural background affect him? How might his cultural background affect how his family reacts if José were to tell them he is gay? Mental health professionals must develop cultural competence (Figure 16.20), which means they must understand and address issues of race, culture, and ethnicity. They must also develop strategies to effectively address the needs of various populations for which Eurocentric therapies have limited application (Sue, 2004). For example, a counselor whose treatment focuses on individual decision making may be ineffective at helping a Chinese client with a collectivist approach to problem solving (Sue, 2004). Multicultural counseling and therapy aims to offer both a helping role and a process that uses modalities and defines goals consistent with the life experiences and cultural values of clients. It strives to recognize client identities as including individual, group, and universal dimensions, to advocate the use of universal and culture-specific strategies and roles in the healing process, and to balance the importance of individualism and collectivism in the assessment, diagnosis, and treatment of client and client systems (Sue, 2001). This therapeutic perspective integrates the impact of cultural and social norms, starting at the beginning of treatment. Therapists who use this perspective work with clients to obtain and integrate information about their cultural patterns into a unique treatment approach based on their particular situation (Stewart, Simmons, & Habibpour, 2012). Sociocultural therapy can include individual, group, family, and couples treatment modalities. Link to Learning Watch this short video about cultural competence and sociocultural treatments to learn more. Barriers to Treatment Statistically, ethnic minorities tend to utilize mental health services less frequently than White, middle-class Americans (Alegría et al., 2008; Richman, Kohn-Wood, & Williams, 2007). Why is this so? Perhaps the reason has to do with access to and availability of mental health services. Ethnic minorities and individuals of low socioeconomic status (SES) report that barriers to services include lack of insurance, transportation, and time (Thomas & Snowden, 2002). However, researchers have found that even when income levels and insurance variables are taken into account, ethnic minorities are far less likely to seek out and utilize mental health services. And when access to mental health services is comparable across ethnic and racial groups, differences in service utilization remain (Richman et al., 2007). In a study involving thousands of women, it was found that the prevalence rate of anorexia was similar across different races, but that bulimia nervosa was more prevalent among Hispanic and African American women when compared with non-Hispanic whites (Marques et al., 2011).
Although they have similar or higher rates of eating disorders, Hispanic and African American women with these disorders tend to seek and engage in treatment far less than Caucasian women. These findings suggest ethnic disparities in access to care, as well as clinical and referral practices that may prevent Hispanic and African American women from receiving care, which could include lack of bilingual treatment, stigma, fear of not being understood, family privacy, and lack of education about eating disorders. Perceptions and attitudes toward mental health services may also contribute to this imbalance. A recent study at King’s College, London, found many complex reasons why people do not seek treatment: self-sufficiency and not seeing the need for help, not seeing therapy as effective, concerns about confidentiality, and the many effects of stigma and shame (Clement et al., 2014). And in another study, African Americans exhibiting depression were less willing to seek treatment due to fear of possible psychiatric hospitalization as well as fear of the treatment itself (Sussman, Robins, & Earls, 1987). Instead of mental health treatment, many African Americans prefer to be self-reliant or use spiritual practices (Snowden, 2001; Belgrave & Allison, 2010). For example, it has been found that the Black church plays a significant role as an alternative to mental health services by providing prevention and treatment-type programs designed to enhance the psychological and physical well-being of its members (Blank, Mahmood, Fox, & Guterbock, 2002). Additionally, people belonging to ethnic groups that already report concerns about prejudice and discrimination are less likely to seek services for a mental illness because they view it as an additional stigma (Gary, 2005; Townes, Cunningham, & Chavez-Korell, 2009; Scott, McCoy, Munson, Snowden, & McMillen, 2011). For example, in one recent study of 462 older Korean Americans (over the age of 60) many participants reported suffering from depressive symptoms. However, 71% indicated they thought depression was a sign of personal weakness, and 14% reported that having a mentally ill family member would bring shame to the family (Jang, Chiriboga, & Okazaki, 2009). Language differences are a further barrier to treatment. In the previous study on Korean Americans’ attitudes toward mental health services, it was found that there were no Korean-speaking mental health professionals where the study was conducted (Orlando and Tampa, Florida) (Jang et al., 2009). Because of the growing number of people from ethnically diverse backgrounds, there is a need for therapists and psychologists to develop knowledge and skills to become culturally competent (Ahmed, Wilson, Henriksen, & Jones, 2011). Those providing therapy must approach the process from the context of the unique culture of each client (Sue & Sue, 2007). DIG DEEPER: Supporting Mental Health Treatment In the United States, about one in six children and one in five adults experiences a mental health disorder, but fewer than half of these people receive professional support for their disorder (Whitney & Peterson, 2019). Access to qualified mental health professionals is not universal or equitable, but it has improved to the point that more people could receive help if they sought it. Why then, do so many people go without support, therapy, or treatment? It seems that the public has a negative perception of people with mental health disorders. 
According to researchers from Indiana University, the University of Virginia, and Columbia University, interviews with over 1,300 U.S. adults show that they believe children with depression are prone to violence and that if a child receives treatment for a psychological disorder, then that child is more likely to be rejected by peers at school. Bernice Pescosolido, author of the study, asserts that this is a misconception. And it is not limited to perceptions of mental health issues in children: adults living with mental health issues may face even more scrutiny when sharing their condition or seeking support. Stigmatization of psychological disorders is one of the main reasons why people do not get the help they need when they are having difficulties. Pescosolido and her colleagues caution that this stigma surrounding mental illness, based on misconceptions rather than facts, can be devastating to emotional and social well-being. Fortunately, we are starting to see discussions related to the destigmatization of mental illness and an increase in public education and awareness. Dozens of leaders have contributed to the conversation, including athletes like Naomi Osaka, Simone Biles, Michael Phelps, Kevin Love, and Dak Prescott, as well as artists such as Adele, Bruce Springsteen, Ariana Grande, Big Sean, and Bebe Rexha. Mental health awareness is stronger within workplaces, educational settings, and communities overall. However, stigma remains, particularly regarding mental health issues that are frequently misunderstood. The National Alliance on Mental Illness (NAMI) outlines key considerations for showing support, sensitivity, and compassion regarding mental health: • Talk and listen openly about mental health: if you are confident and comfortable sharing your own mental health story, you may help someone else. Likewise, if you are comfortable learning about someone's experience, they may appreciate a friendly and supportive ear. • Avoid assumptions, generalizations, or judgments: people experience mental health differently, even if they have the same symptoms or diagnosis as another person. Although you may have the best intentions, it is usually not helpful to act as if you know how they feel or know how they should handle their condition. • Be conscious of language: using appropriate language creates a more welcoming and comfortable environment and reduces bias. Avoid language that stigmatizes, blames, or discourages people based on their or their family member's mental health. • Encourage equality regarding mental and physical illness, so that people recognize the necessity of addressing and treating both. • Encourage people to get help if they need it: first steps can include speaking to a doctor or counselor, or attending a support group meeting. Managing mental health and addressing mental illness can be extremely challenging and painful, and may sometimes seem futile and confusing. As we mention above, a significant number of people have experienced mental health problems, and it is in all of our interests to improve well-being. With greater awareness and understanding, we will increase people's capacity for better health and recovery, creating more productive and supportive communities, families, and relationships. Join the effort by encouraging and supporting those around you to seek help if they need it. To learn more, visit the National Alliance on Mental Illness (NAMI) website (http://www.nami.org/).
textbooks/socialsci/Psychology/Introductory_Psychology/Introductory_Psychology_2e_(OpenStax)/16%3A_Therapy_and_Treatment/16.06%3A_The_Sociocultural_Model_and_Therapy_Utilization.txt
15. People with psychological disorders have been treated poorly throughout history. Describe some efforts to improve treatment, including explanations for their success or lack thereof. 16. Usually someone is hospitalized only if they are an imminent threat to themselves or others. Describe a situation that might meet these criteria. 17. Imagine that you are a psychiatrist. Your patient, Pat, comes to you with the following symptoms: anxiety and feelings of sadness. Which therapeutic approach would you recommend and why? 18. Compare and contrast individual and group therapies. 19. You are conducting an intake assessment. Your client is a 45-year-old single, employed male with cocaine dependence. He failed a drug screen at work and is mandated to treatment by his employer if he wants to keep his job. Your client admits that he needs help. Why would you recommend group therapy for him? 20. Lashawn is a 24-year-old African American female. For years she has been struggling with bulimia. She knows she has a problem, but she is not willing to seek mental health services. What are some reasons why she may be hesitant to get help? Key Terms asylum institution created for the specific purpose of housing people with psychological disorders aversive conditioning counterconditioning technique that pairs an unpleasant stimulus with an undesirable behavior behavior therapy therapeutic orientation that employs principles of learning to help clients change undesirable behaviors biomedical therapy treatment that involves medication and/or medical procedures to treat psychological disorders cognitive therapy form of psychotherapy that focuses on how a person's thoughts lead to feelings of distress, with the aim of helping them change these irrational thoughts cognitive-behavioral therapy form of psychotherapy that aims to change cognitive distortions and self-defeating behaviors comorbid disorder individual who has two or more diagnoses, which often includes a substance abuse diagnosis and another psychiatric diagnosis, such as depression, bipolar disorder, or schizophrenia confidentiality therapist cannot disclose confidential communications to any third party, unless mandated or permitted by law counterconditioning classical conditioning therapeutic technique in which a client learns a new response to a stimulus that has previously elicited an undesirable behavior couples therapy two people in an intimate relationship, such as husband and wife, who are having difficulties and are trying to resolve them with therapy cultural competence therapist's understanding and attention to issues of race, culture, and ethnicity in providing treatment deinstitutionalization process of closing large asylums and integrating people back into the community where they can be treated locally dream analysis technique in psychoanalysis in which patients recall their dreams and the psychoanalyst interprets them to reveal unconscious desires or struggles electroconvulsive therapy (ECT) type of biomedical therapy that involves using an electrical current to induce seizures in a person to help alleviate the effects of severe depression exposure therapy counterconditioning technique in which a therapist seeks to treat a client's fear or anxiety by presenting the feared object or situation with the idea that the person will eventually get used to it family therapy special form of group therapy consisting of one or more families free association technique in psychoanalysis in which the patient says whatever comes to mind at the moment
group therapy treatment modality in which 5–10 people with the same issue or concern meet together with a trained clinician humanistic therapy therapeutic orientation aimed at helping people become more self-aware and accepting of themselves individual therapy treatment modality in which the client and clinician meet one-on-one intake therapist's first meeting with the client in which the therapist gathers specific information to address the client's immediate needs involuntary treatment therapy that is mandated by the courts or other systems nondirective therapy therapeutic approach in which the therapist does not give advice or provide interpretations but helps the person identify conflicts and understand feelings play therapy therapeutic process, often used with children, that employs toys to help them resolve psychological problems psychoanalysis therapeutic orientation developed by Sigmund Freud that employs free association, dream analysis, and transference to uncover repressed feelings psychotherapy (also, psychodynamic psychotherapy) psychological treatment that employs various methods to help someone overcome personal problems, or to attain personal growth rational emotive therapy (RET) form of cognitive-behavioral therapy relapse repeated drug use and/or alcohol use after a period of improvement from substance abuse Rogerian (client-centered therapy) non-directive form of humanistic psychotherapy developed by Carl Rogers that emphasizes unconditional positive regard and self-acceptance strategic family therapy therapist guides the therapy sessions and develops treatment plans for each family member for specific problems that can be addressed in a short amount of time structural family therapy therapist examines and discusses with the family the boundaries and structure of the family: who makes the rules, who sleeps in the bed with whom, how decisions are made, and what the boundaries within the family are systematic desensitization form of exposure therapy used to treat phobias and anxiety disorders by exposing a person to the feared object or situation through a stimulus hierarchy token economy controlled setting where individuals are reinforced for desirable behaviors with tokens (e.g., poker chips) that can be exchanged for items or privileges transference process in psychoanalysis in which the patient transfers all of the positive or negative emotions associated with the patient's other relationships to the psychoanalyst unconditional positive regard fundamental acceptance of a person regardless of what they say or do; term associated with humanistic psychology virtual reality exposure therapy uses a simulation rather than the actual feared object or situation to help people conquer their fears voluntary treatment therapy that a person chooses to attend in order to obtain relief from their symptoms
textbooks/socialsci/Psychology/Introductory_Psychology/Introductory_Psychology_2e_(OpenStax)/16%3A_Therapy_and_Treatment/Critical_Thinking_Questions.txt
21. Do you think there is a stigma associated with mentally ill persons today? Why or why not? 22. What are some places in your community that offer mental health services? Would you feel comfortable seeking assistance at one of these facilities? Why or why not? 23. If you were to choose a therapist practicing one of the techniques presented in this section, which kind of therapist would you choose and why? 24. Your best friend tells you that she is concerned about her cousin. The cousin—a teenage girl—is constantly coming home after her curfew, and your friend suspects that she has been drinking. What treatment modality would you recommend to your friend and why? 25. What are some substance-related and addictive disorder treatment facilities in your community, and what types of services do they provide? Would you recommend any of them to a friend or family member with a substance abuse problem? Why or why not? 26. What is your attitude toward mental health treatment? Would you seek treatment if you were experiencing symptoms or having trouble functioning in your life? Why or why not? In what ways do you think your cultural and/or religious beliefs influence your attitude toward psychological intervention? Review Questions 1. Which of the following does not support the humane and improved treatment of people with mental illness? 1. Philippe Pinel 2. medieval priests 3. Dorothea Dix 4. All of the above 2. The process of closing large asylums and providing for people to stay in the community to be treated locally is known as ________. 1. deinstitutionalization 2. exorcism 3. deactivation 4. decentralization 3. Joey was convicted of domestic violence. As part of his sentence, the judge has ordered that he attend therapy for anger management. This is considered ________ treatment. 1. involuntary 2. voluntary 3. forced 4. mandatory 4. Today, most people with mental health issues are not hospitalized. Typically, they are only hospitalized if they ________. 1. have schizophrenia 2. have insurance 3. are an imminent threat to themselves or others 4. require therapy 5. The idea behind ________ is that how you think determines how you feel and act. 1. cognitive therapy 2. cognitive-behavioral therapy 3. behavior therapy 4. client-centered therapy 6. Mood stabilizers, such as lithium, are used to treat ________. 1. anxiety disorders 2. depression 3. bipolar disorder 4. ADHD 7. Clay is in a therapy session. The therapist asks him to relax and say whatever comes to his mind at the moment. This therapist is using ________, which is a technique of ________. 1. active listening; client-centered therapy 2. systematic desensitization; behavior therapy 3. transference; psychoanalysis 4. free association; psychoanalysis 8. A treatment modality in which 5–10 people with the same issue or concern meet together with a trained clinician is known as ________. 1. family therapy 2. couples therapy 3. group therapy 4. self-help group 9. What happens during an intake? 1. The therapist gathers specific information to address the client's immediate needs, such as the presenting problem, the client's support system, and insurance status. The therapist informs the client about confidentiality, fees, and what to expect in a therapy session. 2. The therapist guides what happens in the therapy session and designs a detailed approach to resolving each member's presenting problem. 3. The therapist meets with a couple to help them see how their individual backgrounds, beliefs, and actions are affecting their relationship. 4.
The therapist examines and discusses with the family the boundaries and structure of the family: For example, who makes the rules, who sleeps in the bed with whom, and how decisions are made. 10. What is the minimum amount of time addicts should receive treatment if they are to achieve a desired outcome? 1. 3 months 2. 6 months 3. 9 months 4. 12 months 11. When an individual has two or more diagnoses, which often includes a substance-related diagnosis and another psychiatric diagnosis, this is known as ________. 1. bipolar disorder 2. comorbid disorder 3. codependency 4. bi-morbid disorder 12. John was drug-free for almost six months. Then he started hanging out with his addict friends, and he has now started abusing drugs again. This is an example of ________. 1. release 2. reversion 3. re-addiction 4. relapse 13. The sociocultural perspective looks at you, your behaviors, and your symptoms in the context of your ________. 1. education 2. socioeconomic status 3. culture and background 4. age 14. Which of the following was not listed as a barrier to mental health treatment? 1. fears about treatment 2. language 3. transportation 4. being a member of the ethnic majority
textbooks/socialsci/Psychology/Introductory_Psychology/Introductory_Psychology_2e_(OpenStax)/16%3A_Therapy_and_Treatment/Personal_Application_Questions.txt
16.1 Mental Health Treatment: Past and Present It was once believed that people with psychological disorders, or those exhibiting strange behavior, were possessed by demons. These people were forced to take part in exorcisms, were imprisoned, or executed. Later, asylums were built to house the mentally ill, but the patients received little to no treatment, and many of the methods used were cruel. Philippe Pinel and Dorothea Dix argued for more humane treatment of people with psychological disorders. In the mid-1960s, the deinstitutionalization movement gained support and asylums were closed, enabling people with mental illness to return home and receive treatment in their own communities. Some did go to their family homes, but many became homeless due to a lack of resources and support mechanisms. Today, instead of asylums, there are psychiatric hospitals run by state governments and local community hospitals, with the emphasis on short-term stays. However, most people who have mental illness are not hospitalized. A person suffering symptoms could speak with a primary care physician, who most likely would refer him to someone who specializes in therapy. The person can receive outpatient mental health services from a variety of sources, including psychologists, psychiatrists, marriage and family therapists, school counselors, clinical social workers, and religious personnel. These therapy sessions would be covered through insurance, government funds, or private (self) pay. 16.2 Types of Treatment Psychoanalysis was developed by Sigmund Freud. Freud’s theory is that a person’s psychological problems are the result of repressed impulses or childhood trauma. The goal of the therapist is to help a person uncover buried feelings by using techniques such as free association and dream analysis. Play therapy is a psychodynamic therapy technique often used with children. The idea is that children play out their hopes, fantasies, and traumas, using dolls, stuffed animals, and sandbox figurines. In behavior therapy, a therapist employs principles of learning from classical and operant conditioning to help clients change undesirable behaviors. Counterconditioning is a commonly used therapeutic technique in which a client learns a new response to a stimulus that has previously elicited an undesirable behavior via classical conditioning. Principles of operant conditioning can be applied to help people deal with a wide range of psychological problems. Token economy is an example of a popular operant conditioning technique. Cognitive therapy is a technique that focuses on how thoughts lead to feelings of distress. The idea behind cognitive therapy is that how you think determines how you feel and act. Cognitive therapists help clients change dysfunctional thoughts in order to relieve distress. Cognitive-behavioral therapy explores how our thoughts affect our behavior. Cognitive-behavioral therapy aims to change cognitive distortions and self-defeating behaviors. Humanistic therapy focuses on helping people achieve their potential. One form of humanistic therapy developed by Carl Rogers is known as client-centered or Rogerian therapy. Client-centered therapists use the techniques of active listening, unconditional positive regard, genuineness, and empathy to help clients become more accepting of themselves. Often in combination with psychotherapy, people can be prescribed biologically based treatments such as psychotropic medications and/or other medical procedures such as electro-convulsive therapy. 
16.3 Treatment Modalities There are several modalities of treatment: individual therapy, group therapy, couples therapy, and family therapy are the most common. In an individual therapy session, a client works one-on-one with a trained therapist. In group therapy, usually 5–10 people meet with a trained group therapist to discuss a common issue (e.g., divorce, grief, eating disorders, substance abuse, or anger management). Couples therapy involves two people in an intimate relationship who are having difficulties and are trying to resolve them. The couple may be dating, partnered, engaged, or married. The therapist helps them resolve their problems as well as implement strategies that will lead to a healthier and happier relationship. Family therapy is a special form of group therapy. The therapy group is made up of one or more families. The goal of this approach is to enhance the growth of each individual family member and the family as a whole. 16.4 Substance-Related and Addictive Disorders: A Special Case Addiction is often viewed as a chronic disease that rewires the brain. This helps explain why relapse rates tend to be high, around 40%–60% (McLellan, Lewis, O'Brien, & Kleber, 2000). The goal of treatment is to help an addict stop compulsive drug-seeking behaviors. Treatment usually includes behavioral therapy, which can take place individually or in a group setting. Treatment may also include medication. Sometimes a person has comorbid disorders, which usually means that they have a substance-related disorder diagnosis and another psychiatric diagnosis, such as depression, bipolar disorder, or schizophrenia. The best treatment would address both problems simultaneously. 16.5 The Sociocultural Model and Therapy Utilization The sociocultural perspective looks at you, your behaviors, and your symptoms in the context of your culture and background. Clinicians using this approach integrate cultural and religious beliefs into the therapeutic process. Research has shown that ethnic minorities are less likely to access mental health services than their White, middle-class American counterparts. Barriers to treatment include lack of insurance, transportation, and time; cultural views that mental illness is a stigma; fears about treatment; and language barriers. Supporting mental health treatment involves speaking and listening openly about mental health, avoiding assumptions, being conscious of language, and encouraging others to get help when needed.
textbooks/socialsci/Psychology/Introductory_Psychology/Introductory_Psychology_2e_(OpenStax)/16%3A_Therapy_and_Treatment/Summary.txt
• 1.1: Why Science? Psychologists believe that scientific methods can be used in the behavioral domain to understand and improve the world. This module outlines the characteristics of the science, and the promises it holds for understanding behavior. The ethics that guide psychological research are briefly described. It concludes with the reasons you should learn about scientific psychology. 01: INTRODUCTION TO PSYCHOLOGY AS A SCIENCE By Edward Diener University of Utah, University of Virginia Scientific research has been one of the great drivers of progress in human history, and the dramatic changes we have seen during the past century are due primarily to scientific findings—modern medicine, electronics, automobiles and jets, birth control, and a host of other helpful inventions. Psychologists believe that scientific methods can be used in the behavioral domain to understand and improve the world. Although psychology trails the biological and physical sciences in terms of progress, we are optimistic based on discoveries to date that scientific psychology will make many important discoveries that can benefit humanity. This module outlines the characteristics of the science, and the promises it holds for understanding behavior. The ethics that guide psychological research are briefly described. It concludes with the reasons you should learn about scientific psychology. learning objectives • Describe how scientific research has changed the world. • Describe the key characteristics of the scientific approach. • Discuss a few of the benefits, as well as problems, that have been created by science. • Describe several ways that psychological science has improved the world. • Describe a number of the ethical guidelines that psychologists follow. Scientific Advances and World Progress There are many people who have made positive contributions to humanity in modern times. Take a careful look at the names on the following list. Which of these individuals do you think has helped humanity the most? 1. Mother Teresa 2. Albert Schweitzer 3. Edward Jenner 4. Norman Borlaug 5. Fritz Haber The usual response to this question is “Who on earth are Jenner, Borlaug, and Haber?” Many people know that Mother Teresa helped thousands of people living in the slums of Kolkata (Calcutta). Others recall that Albert Schweitzer opened his famous hospital in Africa and went on to earn the Nobel Peace Prize. The other three historical figures, on the other hand, are far less well known. Jenner, Borlaug, and Haber were scientists whose research discoveries saved millions, and even billions, of lives. Dr. Edward Jenner is often considered the “father of immunology” because he was among the first to conceive of and test vaccinations. His pioneering work led directly to the eradication of smallpox. Many other diseases have been greatly reduced because of vaccines discovered using science—measles, pertussis, diphtheria, tetanus, typhoid, cholera, polio, hepatitis—and all are the legacy of Jenner. Fritz Haber and Norman Borlaug saved more than a billion human lives. They created the “Green Revolution” by producing hybrid agricultural crops and synthetic fertilizer. Humanity can now produce food for the seven billion people on the planet, and the starvation that does occur is related to political and economic factors rather than our collective ability to produce food. If you examine major social and technological changes over the past century, most of them can be directly attributed to science.
The world in 1914 was very different than the one we see today (Easterbrook, 2003). There were few cars and most people traveled by foot, horseback, or carriage. There were no radios, televisions, birth control pills, artificial hearts, or antibiotics. Only a small portion of the world had telephones, refrigeration, or electricity. These days we find that 80% of all households have television and 84% have electricity. It is estimated that three quarters of the world’s population has access to a mobile phone! Life expectancy was 47 years in 1900 and 79 years in 2010. The percentage of hungry and malnourished people in the world has dropped substantially across the globe. Even average levels of I.Q. have risen dramatically over the past century due to better nutrition and schooling. All of these medical advances and technological innovations are the direct result of scientific research and understanding. In the modern age it is easy to grow complacent about the advances of science, but make no mistake about it—science has made fantastic discoveries, and continues to do so. These discoveries have completely changed our world. What Is Science? What is this process we call “science,” which has so dramatically changed the world? Ancient people were more likely to believe in magical and supernatural explanations for natural phenomena such as solar eclipses or thunderstorms. By contrast, scientifically minded people try to figure out the natural world through testing and observation. Specifically, science is the use of systematic observation in order to acquire knowledge. For example, children in a science class might combine vinegar and baking soda to observe the bubbly chemical reaction. These empirical methods are wonderful ways to learn about the physical and biological world. Science is not magic—it will not solve all human problems, and might not answer all our questions about behavior. Nevertheless, it appears to be the most powerful method we have for acquiring knowledge about the observable world. The essential elements of science are as follows: 1. Systematic observation is the core of science. Scientists observe the world in a very organized way. We often measure the phenomenon we are observing. We record our observations so that memory biases are less likely to enter into our conclusions. We are systematic in that we try to observe under controlled conditions, and also systematically vary the conditions of our observations so that we can see variations in the phenomena and understand when they occur and do not occur. 2. Observation leads to hypotheses we can test. When we develop hypotheses and theories, we state them in a way that can be tested. For example, you might make the claim that candles made of paraffin wax burn more slowly than do candles of the exact same size and shape made from beeswax. This claim can be readily tested by timing the burning speed of candles made from these materials. 3. Science is democratic. People in ancient times may have been willing to accept the views of their kings or pharaohs as absolute truth. These days, however, people are more likely to want to be able to form their own opinions and debate conclusions. Scientists are skeptical and have open discussions about their observations and theories. These debates often occur as scientists publish competing findings with the idea that the best data will win the argument. 4. Science is cumulative. We can learn the important truths discovered by earlier scientists and build on them.
Any physics student today knows more about physics than Sir Isaac Newton did even though Newton was possibly the most brilliant physicist of all time. A crucial aspect of scientific progress is that after we learn of earlier advances, we can build upon them and move farther along the path of knowledge. Psychology as a Science Even in modern times many people are skeptical that psychology is really a science. To some degree this doubt stems from the fact that many psychological phenomena such as depression, intelligence, and prejudice do not seem to be directly observable in the same way that we can observe the changes in ocean tides or the speed of light. Because thoughts and feelings are invisible many early psychological researchers chose to focus on behavior. You might have noticed that some people act in a friendly and outgoing way while others appear to be shy and withdrawn. If you have made these types of observations then you are acting just like early psychologists who used behavior to draw inferences about various types of personality. By using behavioral measures and rating scales it is possible to measure thoughts and feelings. This is similar to how other researchers explore “invisible” phenomena such as the way that educators measure academic performance or economists measure quality of life. One important pioneering researcher was Francis Galton, a cousin of Charles Darwin who lived in England during the late 1800s. Galton used patches of color to test people’s ability to distinguish between them. He also invented the self-report questionnaire, in which people offered their own expressed judgments or opinions on various matters. Galton was able to use self-reports to examine—among other things—people’s differing ability to accurately judge distances. Although he lacked a modern understanding of genetics Galton also had the idea that scientists could look at the behaviors of identical and fraternal twins to estimate the degree to which genetic and social factors contribute to personality; a puzzling issue we currently refer to as the “nature-nurture question.” In modern times psychology has become more sophisticated. Researchers now use better measures, more sophisticated study designs and better statistical analyses to explore human nature. Simply take the example of studying the emotion of happiness. How would you go about studying happiness? One straightforward method is to simply ask people about their happiness and to have them use a numbered scale to indicate their feelings. There are, of course, several problems with this. People might lie about their happiness, might not be able to accurately report on their own happiness, or might not use the numerical scale in the same way. With these limitations in mind modern psychologists employ a wide range of methods to assess happiness. They use, for instance, “peer report measures” in which they ask close friends and family members about the happiness of a target individual. Researchers can then compare these ratings to the self-report ratings and check for discrepancies. Researchers also use memory measures, with the idea that dispositionally positive people have an easier time recalling pleasant events and negative people have an easier time recalling unpleasant events. Modern psychologists even use biological measures such as saliva cortisol samples (cortisol is a stress related hormone) or fMRI images of brain activation (the left pre-frontal cortex is one area of brain activity associated with good moods). 
Despite our various methodological advances, it is true that psychology is still a very young science. While physics and chemistry are hundreds of years old, psychology is barely a hundred and fifty years old, and most of our major findings have occurred only in the last 60 years. There are legitimate limits to psychological science, but it is a science nonetheless. Psychological Science is Useful Psychological science is useful for creating interventions that help people live better lives. A growing body of research is concerned with determining which therapies are the most and least effective for the treatment of psychological disorders. For example, many studies have shown that cognitive behavioral therapy can help many people suffering from depression and anxiety disorders (Butler, Chapman, Forman, & Beck, 2006; Hofmann & Smits, 2008). In contrast, research reveals that some types of therapies actually might be harmful on average (Lilienfeld, 2007). In organizational psychology, a number of psychological interventions have been found by researchers to produce greater productivity and satisfaction in the workplace (e.g., Guzzo, Jette, & Katzell, 1985). Human factors engineers have greatly increased the safety and utility of the products we use. For example, the human factors psychologist Alphonse Chapanis and other researchers redesigned the cockpit controls of aircraft to make them less confusing and easier to respond to, and this led to a decrease in pilot errors and crashes. Forensic sciences have made courtroom decisions more valid. We all know of the famous cases of imprisoned persons who have been exonerated because of DNA evidence. Equally dramatic cases hinge on psychological findings. For instance, psychologist Elizabeth Loftus has conducted research demonstrating the limits and unreliability of eyewitness testimony and memory. Thus, psychological findings are having practical importance in the world outside the laboratory. Psychological science has experienced enough success to demonstrate that it works, but there remains a huge amount yet to be learned. Ethics of Scientific Psychology Psychology differs somewhat from the natural sciences such as chemistry in that researchers conduct studies with human research participants. Because of this, there is a natural tendency to want to guard research participants against potential psychological harm. For example, it might be interesting to see how people handle ridicule, but it might not be advisable to ridicule research participants. Scientific psychologists follow a specific set of guidelines for research known as a code of ethics. There are extensive ethical guidelines for how human participants should be treated in psychological research (Diener & Crandall, 1978; Sales & Folkman, 2000). Following are a few highlights: 1. Informed consent. In general, people should know when they are involved in research, and understand what will happen to them during the study. They should then be given a free choice as to whether to participate. 2. Confidentiality. Information that researchers learn about individual participants should not be made public without the consent of the individual. 3. Privacy. Researchers should not make observations of people in private places such as their bedrooms without their knowledge and consent. Researchers should not seek confidential information from others, such as school authorities, without consent of the participant or his or her guardian. 4. Benefits.
Researchers should consider the benefits of their proposed research and weigh these against potential risks to the participants. People who participate in psychological studies should be exposed to risk only if they fully understand these risks and only if the likely benefits clearly outweigh the risks. 5. Deception. Some researchers need to deceive participants in order to hide the true nature of the study. This is typically done to prevent participants from modifying their behavior in unnatural ways. Researchers are required to “debrief” their participants after they have completed the study. Debriefing is an opportunity to educate participants about the true nature of the study. Why Learn About Scientific Psychology? I once had a psychology professor who asked my class why we were taking a psychology course. Our responses give the range of reasons that people want to learn about psychology: 1. To understand ourselves 2. To understand other people and groups 3. To be better able to influence others, for example, in socializing children or motivating employees 4. To learn how to better help others and improve the world, for example, by doing effective psychotherapy 5. To learn a skill that will lead to a profession such as being a social worker or a professor 6. To learn how to evaluate the research claims you hear or read about 7. Because it is interesting, challenging, and fun! People want to learn about psychology because this is exciting in itself, regardless of other positive outcomes it might have. Why do we see movies? Because they are fun and exciting, and we need no other reason. Thus, one good reason to study psychology is that it can be rewarding in itself. Conclusions The science of psychology is an exciting adventure. Whether you will become a scientific psychologist, an applied psychologist, or an educated person who knows about psychological research, this field can influence your life and provide fun, rewards, and understanding. My hope is that you learn a lot from the modules in this e-text, and also that you enjoy the experience! I love learning about psychology and neuroscience, and hope you will too! Outside Resources Web: Science Heroes- A celebration of people who have made lifesaving discoveries. http://www.scienceheroes.com/index.p...=258&Itemid=27 Discussion Questions 1. Some claim that science has done more harm than good. What do you think? 2. Humanity is faced with many challenges and problems. Which of these are due to human behavior, and which are external to human actions? 3. If you were a research psychologist, what phenomena or behaviors would most interest you? 4. Will psychological scientists be able to help with the current challenges humanity faces, such as global warming, war, inequality, and mental illness? 5. What can science study and what is outside the realm of science? What questions are impossible for scientists to study? 6. Some claim that science will replace religion by providing sound knowledge instead of myths to explain the world. They claim that science is a much more reliable source of solutions to problems such as disease than is religion. What do you think? Will science replace religion, and should it? 7. Are there human behaviors that should not be studied? Are some things so sacred or dangerous that we should not study them? Vocabulary Empirical methods Approaches to inquiry that are tied to actual measurement and observation. 
Ethics Professional guidelines that offer researchers a template for making decisions that protect research participants from potential harm and that help steer scientists away from conflicts of interest or other situations that might compromise the integrity of their research. Hypotheses A logical idea that can be tested. Systematic observation The careful observation of the natural world with the aim of better understanding it. Observations provide the basic data that allow scientists to track, tally, or otherwise organize information about the natural world. Theories Groups of closely related phenomena or observations.
textbooks/socialsci/Psychology/Introductory_Psychology/Map%3A_Discover_Psychology_-_A_Brief_Introductory_Text_(Noba)/01%3A_INTRODUCTION_TO_PSYCHOLOGY_AS_A_SCIENCE/1.01%3A_Why_Science.txt
• 2.1: Research Designs Psychologists test research questions using a variety of methods. Most research relies on either correlations or experiments. With correlations, researchers measure variables as they naturally occur in people and compute the degree to which two variables go together. • 2.2: Conducting Psychology Research in the Real World This module highlights the importance of also conducting research outside the psychology laboratory, within participants’ natural, everyday environments, and reviews existing methodologies for studying daily life. 02: RESEARCH IN PSYCHOLOGY Psychologists test research questions using a variety of methods. Most research relies on either correlations or experiments. With correlations, researchers measure variables as they naturally occur in people and compute the degree to which two variables go together. With experiments, researchers actively make changes in one variable and watch for changes in another variable. Experiments allow researchers to make causal inferences. Other types of methods include longitudinal and quasi-experimental designs. Many factors, including practical constraints, determine the type of methods researchers use. Often researchers survey people even though it would be better, but more expensive and time consuming, to track them longitudinally. learning objectives • Articulate the difference between correlational and experimental designs. • Understand how to interpret correlations. • Understand how experiments help us to infer causality. • Understand how surveys relate to correlational and experimental research. • Explain what a longitudinal study is. • List a strength and weakness of different research designs. Research Designs In the early 1970’s, a man named Uri Geller tricked the world: he convinced hundreds of thousands of people that he could bend spoons and slow watches using only the power of his mind. In fact, if you were in the audience, you would have likely believed he had psychic powers. Everything looked authentic—this man had to have paranormal abilities! So, why have you probably never heard of him before? Because when Uri was asked to perform his miracles in line with scientific experimentation, he was no longer able to do them. That is, even though it seemed like he was doing the impossible, when he was tested by science, he proved to be nothing more than a clever magician. When we look at dinosaur bones to make educated guesses about extinct life, or systematically chart the heavens to learn about the relationships between stars and planets, or study magicians to figure out how they perform their tricks, we are forming observations—the foundation of science. Although we are all familiar with the saying “seeing is believing,” conducting science is more than just what your eyes perceive. Science is the result of systematic and intentional study of the natural world. And psychology is no different. In the movie Jerry Maguire, Cuba Gooding, Jr. became famous for using the phrase, “Show me the money!” In psychology, as in all sciences, we might say, “Show me the data!” One of the important steps in scientific inquiry is to test our research questions, otherwise known as hypotheses. However, there are many ways to test hypotheses in psychological research. Which method you choose will depend on the type of questions you are asking, as well as what resources are available to you. All methods have limitations, which is why the best research uses a variety of methods. 
Most psychological research can be divided into two types: experimental and correlational research. Experimental Research If somebody gave you \$20 that absolutely had to be spent today, how would you choose to spend it? Would you spend it on an item you’ve been eyeing for weeks, or would you donate the money to charity? Which option do you think would bring you the most happiness? If you’re like most people, you’d choose to spend the money on yourself (duh, right?). Our intuition is that we’d be happier if we spent the money on ourselves. Knowing that our intuition can sometimes be wrong, Professor Elizabeth Dunn (2008) at the University of British Columbia set out to conduct an experiment on spending and happiness. She gave each of the participants in her experiment \$20 and then told them they had to spend the money by the end of the day. Some of the participants were told they must spend the money on themselves, and some were told they must spend the money on others (either charity or a gift for someone). At the end of the day she measured participants’ levels of happiness using a self-report questionnaire. (But wait, how do you measure something like happiness when you can’t really see it? Psychologists measure many abstract concepts, such as happiness and intelligence, by beginning with operational definitions of the concepts. See the Noba modules on Intelligence [noba.to/ncb2h79v] and Happiness [noba.to/qnw7g32t], respectively, for more information on specific measurement strategies.) In an experiment, researchers manipulate, or cause changes, in the independent variable, and observe or measure any impact of those changes in the dependent variable. The independent variable is the one under the experimenter’s control, or the variable that is intentionally altered between groups. In the case of Dunn’s experiment, the independent variable was whether participants spent the money on themselves or on others. The dependent variable is the variable that is not manipulated at all, or the one where the effect happens. One way to help remember this is that the dependent variable “depends” on what happens to the independent variable. In our example, the participants’ happiness (the dependent variable in this experiment) depends on how the participants spend their money (the independent variable). Thus, any observed changes or group differences in happiness can be attributed to whom the money was spent on. What Dunn and her colleagues found was that, after all the spending had been done, the people who had spent the money on others were happier than those who had spent the money on themselves. In other words, spending on others causes us to be happier than spending on ourselves. Do you find this surprising? But wait! Doesn’t happiness depend on a lot of different factors—for instance, a person’s upbringing or life circumstances? What if some people had happy childhoods and that’s why they’re happier? Or what if some people dropped their toast that morning and it fell jam-side down and ruined their whole day? It is correct to recognize that these factors and many more can easily affect a person’s level of happiness. So how can we accurately conclude that spending money on others causes happiness, as in the case of Dunn’s experiment? The most important thing about experiments is random assignment. Participants don’t get to pick which condition they are in (e.g., participants didn’t choose whether they were supposed to spend the money on themselves versus others). 
The experimenter assigns them to a particular condition based on the flip of a coin or the roll of a die or any other random method. Why do researchers do this? With Dunn’s study, there is the obvious reason: you can imagine which condition most people would choose to be in, if given the choice. But another equally important reason is that random assignment makes it so the groups, on average, are similar on all characteristics except what the experimenter manipulates. By randomly assigning people to conditions (self-spending versus other-spending), some people with happy childhoods should end up in each condition. Likewise, some people who had dropped their toast that morning (or experienced some other disappointment) should end up in each condition. As a result, the distribution of all these factors will generally be consistent across the two groups, and this means that on average the two groups will be relatively equivalent on all these factors. Random assignment is critical to experimentation because if the only difference between the two groups is the independent variable, we can infer that the independent variable is the cause of any observable difference (e.g., in the amount of happiness they feel at the end of the day). Here’s another example of the importance of random assignment: Let’s say your class is going to form two basketball teams, and you get to be the captain of one team. The class is to be divided evenly between the two teams. If you get to pick the players for your team first, whom will you pick? You’ll probably pick the tallest members of the class or the most athletic. You probably won’t pick the short, uncoordinated people, unless there are no other options. As a result, your team will be taller and more athletic than the other team. But what if we want the teams to be fair? How can we do this when we have people of varying height and ability? All we have to do is randomly assign players to the two teams. Most likely, some tall and some short people will end up on your team, and some tall and some short people will end up on the other team. The average height of the teams will be approximately the same. That is the power of random assignment! Other considerations In addition to using random assignment, you should avoid introducing confounds into your experiments. Confounds are things that could undermine your ability to draw causal inferences. For example, if you wanted to test if a new happy pill will make people happier, you could randomly assign participants to take the happy pill or not (the independent variable) and compare these two groups on their self-reported happiness (the dependent variable). However, if some participants know they are getting the happy pill, they might develop expectations that influence their self-reported happiness. This is sometimes known as a placebo effect. Sometimes a person just knowing that he or she is receiving special treatment or something new is enough to actually cause changes in behavior or perception: In other words, even if the participants in the happy pill condition were to report being happier, we wouldn’t know if the pill was actually making them happier or if it was the placebo effect—an example of a confound. A related idea is participant demand. This occurs when participants try to behave in a way they think the experimenter wants them to behave. Placebo effects and participant demand often occur unintentionally. Even experimenter expectations can influence the outcome of a study. 
Other considerations

In addition to using random assignment, you should avoid introducing confounds into your experiments. Confounds are things that could undermine your ability to draw causal inferences. For example, if you wanted to test whether a new happy pill will make people happier, you could randomly assign participants to take the happy pill or not (the independent variable) and compare these two groups on their self-reported happiness (the dependent variable). However, if some participants know they are getting the happy pill, they might develop expectations that influence their self-reported happiness. This is sometimes known as a placebo effect: sometimes just knowing that one is receiving special treatment or something new is enough to actually cause changes in behavior or perception. In other words, even if the participants in the happy pill condition were to report being happier, we wouldn't know if the pill was actually making them happier or if it was the placebo effect—an example of a confound. A related idea is participant demand. This occurs when participants try to behave in a way they think the experimenter wants them to behave. Placebo effects and participant demand often occur unintentionally. Even experimenter expectations can influence the outcome of a study. For example, if the experimenter knows who took the happy pill and who did not, and the dependent variable is the experimenter's observations of people's happiness, then the experimenter might perceive improvements in the happy pill group that are not really there.

One way to prevent these confounds from affecting the results of a study is to use a double-blind procedure. In a double-blind procedure, neither the participant nor the experimenter knows which condition the participant is in. For example, when participants are given the happy pill or the fake pill, they don't know which one they are receiving. This way the participants shouldn't experience the placebo effect, and will be unable to behave as the researcher expects (participant demand). Likewise, the researcher doesn't know which pill each participant is taking (at least in the beginning—later, the researcher will get the results for data-analysis purposes), which means the researcher's expectations can't influence his or her observations. Therefore, because both parties are "blind" to the condition, neither will be able to behave in a way that introduces a confound. At the end of the day, the only difference between groups will be which pills the participants received, allowing the researcher to determine if the happy pill actually caused people to be happier.

Correlational Designs

When scientists passively observe and measure phenomena it is called correlational research. Here, we do not intervene and change behavior, as we do in experiments. In correlational research, we identify patterns of relationships, but a correlation describes the relationship between only two variables at a time, and we usually cannot infer what causes what. So, what if you wanted to test whether spending on others is related to happiness, but you don't have $20 to give to each participant? You could use a correlational design—which is exactly what Professor Dunn did, too. She asked people how much of their income they spent on others or donated to charity, and later she asked them how happy they were. Do you think these two variables were related? Yes, they were! The more money people reported spending on others, the happier they were.

More details about the correlation

To find out how well two variables correspond, we can plot the relation between the two scores on what is known as a scatterplot (Figure 2.4.1). In the scatterplot, each dot represents a data point. (In this case it's individuals, but it could be some other unit.) Importantly, each dot provides us with two pieces of information—in this case, information about how good the person rated the past month (x-axis) and how happy the person felt in the past month (y-axis). Which variable is plotted on which axis does not matter. The association between two variables can be summarized statistically using the correlation coefficient (abbreviated as r). A correlation coefficient provides information about the direction and strength of the association between two variables. For the example above, the direction of the association is positive. This means that people who perceived the past month as being good reported feeling happier, whereas people who perceived the month as being bad reported feeling less happy. With a positive correlation, the two variables go up or down together. In a scatterplot, the dots form a pattern that extends from the bottom left to the upper right (just as they do in Figure 2.4.1).
The r value for a positive correlation is indicated by a positive number (although the positive sign is usually omitted). Here, the r value is .81. A negative correlation is one in which the two variables move in opposite directions. That is, as one variable goes up, the other goes down. Figure 2.4.2 shows the association between the average height of males in a country (y-axis) and the pathogen prevalence (or commonness of disease; x-axis) of that country. In this scatterplot, each dot represents a country. Notice how the dots extend from the top left to the bottom right. What does this mean in real-world terms? It means that people are shorter in parts of the world where there is more disease. The r value for a negative correlation is indicated by a negative number—that is, it has a minus (–) sign in front of it. Here, it is –.83.

The strength of a correlation has to do with how well the two variables align. Recall that in Professor Dunn's correlational study, spending on others positively correlated with happiness: the more money people reported spending on others, the happier they reported being. At this point you may be thinking to yourself, I know a very generous person who gave away lots of money to other people but is miserable! Or maybe you know of a very stingy person who is happy as can be. Yes, there might be exceptions. If an association has many exceptions, it is considered a weak correlation. If an association has few or no exceptions, it is considered a strong correlation. A strong correlation is one in which the two variables always, or almost always, go together. In the example of happiness and how good the month has been, the association is strong. The stronger a correlation is, the tighter the dots in the scatterplot will be arranged along a sloped line. The r value of a strong correlation will have a high absolute value. In other words, you disregard whether there is a negative sign in front of the r value, and just consider the size of the numerical value itself. If the absolute value is large, it is a strong correlation. A weak correlation is one in which the two variables correspond some of the time, but not most of the time. Figure 2.4.3 shows the relation between valuing happiness and grade point average (GPA). People who valued happiness more tended to earn slightly lower grades, but there were lots of exceptions to this. The r value for a weak correlation will have a low absolute value. If two variables are so weakly related as to be unrelated, we say they are uncorrelated, and the r value will be zero or very close to zero.

In the previous example, is the correlation between height and pathogen prevalence strong? Compared to Figure 2.4.3, the dots in Figure 2.4.2 are tighter and less dispersed. The absolute value of –.83 is large. Therefore, it is a strong negative correlation. Can you guess the strength and direction of the correlation between age and year of birth? If you said this is a strong negative correlation, you are correct! Older people always have earlier years of birth than younger people (e.g., 1950 vs. 1995), but at the same time, the older people will have a higher age (e.g., 65 vs. 20). In fact, this is a perfect correlation because there are no exceptions to this pattern. I challenge you to find a 10-year-old born before 2003! You can't.
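For readers who want to see where an r value comes from, here is a minimal sketch in Python. The scores are invented for illustration and are not taken from any of the studies above; the second example reproduces the perfect negative correlation between year of birth and age, using 2013 as an assumed "current" year to match the challenge above.

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient (r) for two equal-length lists of scores."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / ((var_x * var_y) ** 0.5)

# Invented scores: how good the month was (x) and how happy the person felt (y).
month_rating = [2, 4, 5, 6, 7, 8, 9]
happiness    = [3, 4, 4, 6, 6, 8, 9]
print(round(pearson_r(month_rating, happiness), 2))  # strong positive r, close to +1

# Age and year of birth move in lockstep, so their correlation is perfectly negative.
year_of_birth = [1950, 1975, 1995, 2003]
age = [2013 - y for y in year_of_birth]              # ages if "today" were 2013
print(round(pearson_r(year_of_birth, age), 2))       # -1.0
```

The sign of r reports the direction of the association, and its absolute value reports the strength, which is exactly how the scatterplots above are summarized.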
Problems with the correlation

If generosity and happiness are positively correlated, should we conclude that being generous causes happiness? Similarly, if height and pathogen prevalence are negatively correlated, should we conclude that disease causes shortness? From a correlation alone, we can't be certain. For example, in the first case it may be that happiness causes generosity, or that generosity causes happiness. Or, a third variable might cause both happiness and generosity, creating the illusion of a direct link between the two. For example, wealth could be the third variable that causes both greater happiness and greater generosity. This is why correlation does not mean causation—an often repeated phrase among psychologists.

Qualitative Designs

Just as correlational research allows us to study topics we can't experimentally manipulate (e.g., whether you have a large or small income), there are other types of research designs that allow us to investigate these harder-to-study topics. Qualitative designs, including participant observation, case studies, and narrative analysis, are examples of such methodologies. Although something as simple as "observation" may seem like it would be a part of all research methods, participant observation is a distinct methodology that involves the researcher embedding him- or herself into a group in order to study its dynamics. For example, Festinger, Riecken, and Schachter (1956) were very interested in the psychology of a particular cult. However, this cult was very secretive and wouldn't grant interviews to outsiders. So, in order to study these people, Festinger and his colleagues pretended to be cult members, allowing them access to the behavior and psychology of the cult. Despite this example, it should be noted that the people being observed in a participant observation study usually know that the researcher is there to study them.

Another qualitative method for research is the case study, which involves an intensive examination of specific individuals or specific contexts. Sigmund Freud, the father of psychoanalysis, was famous for using this type of methodology; however, more current examples of case studies usually involve brain injuries. For instance, imagine that researchers want to know how a very specific brain injury affects people's experience of happiness. Obviously, the researchers can't conduct experimental research that involves inflicting this type of injury on people. At the same time, there are too few people who have this type of injury to conduct correlational research. In such an instance, the researcher may examine only one person with this brain injury, but in doing so, the researcher will put the participant through a very extensive round of tests. Hopefully what is learned from this one person can be applied to others; however, even with thorough tests, there is the chance that something unique about this individual (other than the brain injury) will affect his or her happiness. But with such a limited number of possible participants, a case study is really the only type of methodology suitable for researching this brain injury.

The final qualitative method to be discussed in this section is narrative analysis. Narrative analysis centers on the study of stories and personal accounts of people, groups, or cultures. In this methodology, rather than engaging with participants directly, or quantifying their responses or behaviors, researchers will analyze the themes, structure, and dialogue of each person's narrative. That is, a researcher will examine people's personal testimonies in order to learn more about the psychology of those individuals or groups.
These stories may be written, audio-recorded, or video-recorded, and allow the researcher not only to study what the participant says but how he or she says it. Every person has a unique perspective on the world, and studying the way he or she conveys a story can provide insight into that perspective.

Quasi-Experimental Designs

What if you want to study the effects of marriage on a variable? For example, does marriage make people happier? Can you randomly assign some people to get married and others to remain single? Of course not. So how can you study these important variables? You can use a quasi-experimental design. A quasi-experimental design is similar to experimental research, except that random assignment to conditions is not used. Instead, we rely on existing group memberships (e.g., married vs. single). We treat these as the independent variables, even though we don't assign people to the conditions and don't manipulate the variables. As a result, with quasi-experimental designs causal inference is more difficult. For example, married people might differ on a variety of characteristics from unmarried people. If we find that married participants are happier than single participants, it will be hard to say that marriage causes happiness, because the people who got married might have already been happier than the people who have remained single.

Because experimental and quasi-experimental designs can seem pretty similar, let's take another example to distinguish them. Imagine you want to know who is a better professor: Dr. Smith or Dr. Khan. To judge their ability, you're going to look at their students' final grades. Here, the independent variable is the professor (Dr. Smith vs. Dr. Khan) and the dependent variable is the students' grades. In an experimental design, you would randomly assign students to one of the two professors and then compare the students' final grades. However, in real life, researchers can't randomly force students to take one professor over the other; instead, the researchers would just have to use the preexisting classes and study them as-is (quasi-experimental design). Again, the key difference is random assignment to the conditions of the independent variable. Although the quasi-experimental design (where the students choose which professor they want) may seem random, it's most likely not. For example, maybe students heard Dr. Smith sets low expectations, so slackers prefer this class, whereas Dr. Khan sets higher expectations, so smarter students prefer that one. This now introduces a confounding variable (student intelligence) that will almost certainly have an effect on students' final grades, regardless of how skilled the professor is. So, even though a quasi-experimental design is similar to an experimental design (i.e., it compares groups that differ on an independent variable), because there's no random assignment, you can't reasonably draw the same conclusions that you would with an experimental design.
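The professor example can be made concrete with a purely illustrative simulation sketch in Python. The professors, the "ability" scores, and the grading formula are all invented: grades are generated so that the professor has no effect whatsoever, yet the self-selected classes still end up with different averages.

```python
import random

random.seed(1)

def final_grade(ability):
    """In this toy model a grade depends only on student ability plus noise;
    the professor contributes nothing at all."""
    return 70 + 10 * ability + random.gauss(0, 3)

students = [random.gauss(0, 1) for _ in range(1000)]   # invented ability scores

# Self-selection (the quasi-experiment): stronger students gravitate to Dr. Khan.
khan  = [final_grade(a) for a in students if a > 0]
smith = [final_grade(a) for a in students if a <= 0]

def mean(scores):
    return sum(scores) / len(scores)

print(f"Dr. Khan (self-selected classes):  {mean(khan):.1f}")
print(f"Dr. Smith (self-selected classes): {mean(smith):.1f}")
# Dr. Khan's average is clearly higher even though the grade formula ignores the
# professor entirely: student ability, the confound, produced the difference.
```

If the same students were instead split between the two classes by coin flip, the two averages would come out nearly identical, which is exactly what random assignment buys you in a true experiment.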
Longitudinal Studies

Another powerful research design is the longitudinal study. Longitudinal studies track the same people over time. Some longitudinal studies last a few weeks, some a few months, some a year or more. Some studies that have contributed a lot to psychology followed the same people over decades. For example, one study followed more than 20,000 Germans for two decades. From these longitudinal data, psychologist Rich Lucas (2003) was able to determine that people who end up getting married indeed start off a bit happier than their peers who never marry. Longitudinal studies like this provide valuable evidence for testing many theories in psychology, but they can be quite costly to conduct, especially if they follow many people for many years.

Surveys

A survey is a way of gathering information, using old-fashioned paper questionnaires or the Internet. Compared to a study conducted in a psychology laboratory, surveys can reach a larger number of participants at a much lower cost. Although surveys are typically used for correlational research, this is not always the case. An experiment can be carried out using surveys as well. For example, King and Napa (1998) presented participants with different types of stimuli on paper: either a survey completed by a happy person or a survey completed by an unhappy person. They wanted to see whether happy people were judged as more likely to get into heaven compared to unhappy people. Can you figure out the independent and dependent variables in this study? Can you guess what the results were? Happy people (vs. unhappy people; the independent variable) were indeed judged as more likely to go to heaven (the dependent variable)! Likewise, correlational research can be conducted without the use of surveys. For instance, psychologists LeeAnn Harker and Dacher Keltner (2001) examined the smile intensity of women's college yearbook photos. Smiling in the photos was correlated with being married 10 years later!

Tradeoffs in Research

Even though there are serious limitations to correlational and quasi-experimental research, they are not poor cousins to experiments and longitudinal designs. In addition to selecting a method that is appropriate to the question, many practical concerns may influence the decision to use one method over another. One of these factors is simply resource availability—how much time and money do you have to invest in the research? (Tip: If you're doing a senior honors thesis, do not embark on a lengthy longitudinal study unless you are prepared to delay graduation!) Often, we survey people even though it would be more precise—but much more difficult—to track them longitudinally. Especially in the case of exploratory research, it may make sense to opt for a cheaper and faster method first. Then, if results from the initial study are promising, the researcher can follow up with a more intensive method.

Beyond these practical concerns, another consideration in selecting a research design is the ethics of the study. For example, in cases of brain injury or other neurological abnormalities, it would be unethical for researchers to inflict these impairments on healthy participants. Nonetheless, studying people with these injuries can provide great insight into human psychology (e.g., if we learn that damage to a particular region of the brain interferes with emotions, we may be able to develop treatments for emotional irregularities). In addition to brain injuries, there are numerous other areas of research that could be useful in understanding the human mind but which pose challenges to a true experimental design—such as the experiences of war, long-term isolation, abusive parenting, or prolonged drug use. However, none of these are conditions we could ethically manipulate experimentally and randomly assign people to.
Therefore, ethical considerations are another crucial factor in determining an appropriate research design.

Research Methods: Why You Need Them

Just look at any major news outlet and you'll find research routinely being reported. Sometimes the journalist understands the research methodology, sometimes not (e.g., correlational evidence is often incorrectly represented as causal evidence). Often, the media are quick to draw a conclusion for you. After reading this module, you should recognize that the strength of a scientific finding lies in the strength of its methodology. Therefore, in order to be a savvy consumer of research, you need to understand the pros and cons of different methods and the distinctions among them. Plus, understanding how psychologists systematically go about answering research questions will help you to solve problems in other domains, both personal and professional, not just in psychology.

Outside Resources

Article: Harker and Keltner study of yearbook photographs and marriage http://psycnet.apa.org/journals/psp/80/1/112/
Article: Rich Lucas's longitudinal study on the effects of marriage on happiness http://psycnet.apa.org/journals/psp/84/3/527/
Article: Spending money on others promotes happiness. Elizabeth Dunn's research https://www.sciencemag.org/content/3.../1687.abstract
Article: What makes a life good? http://psycnet.apa.org/journals/psp/75/1/156/

Discussion Questions

1. What are some key differences between experimental and correlational research?
2. Why might researchers sometimes use methods other than experiments?
3. How do surveys relate to correlational and experimental designs?

Vocabulary

Confounds: Factors that undermine the ability to draw causal inferences from an experiment.
Correlation: Measures the association between two variables, or how they go together.
Dependent variable: The variable the researcher measures but does not manipulate in an experiment.
Experimenter expectations: When the experimenter's expectations influence the outcome of a study.
Independent variable: The variable the researcher manipulates and controls in an experiment.
Longitudinal study: A study that follows the same group of individuals over time.
Operational definitions: How researchers specifically measure a concept.
Participant demand: When participants behave in a way that they think the experimenter wants them to behave.
Placebo effect: When receiving special treatment or something new affects human behavior.
Quasi-experimental design: An experiment that does not require random assignment to conditions.
Random assignment: Assigning participants to receive different conditions of an experiment by chance.
By Matthias R. Mehl
University of Arizona

Because of its ability to determine cause-and-effect relationships, the laboratory experiment is traditionally considered the method of choice for psychological science. One downside, however, is that as it carefully controls conditions and their effects, it can yield findings that are out of touch with reality and have limited use when trying to understand real-world behavior. This module highlights the importance of also conducting research outside the psychology laboratory, within participants' natural, everyday environments, and reviews existing methodologies for studying daily life.

Learning objectives
• Identify limitations of the traditional laboratory experiment.
• Explain ways in which daily life research can further psychological science.
• Know what methods exist for conducting psychological research in the real world.

Introduction

The laboratory experiment is traditionally considered the "gold standard" in psychology research. This is because only laboratory experiments can clearly separate cause from effect and therefore establish causality. Despite this unique strength, it is also clear that a scientific field that is mainly based on controlled laboratory studies ends up lopsided. Specifically, it accumulates a lot of knowledge on what can happen—under carefully isolated and controlled circumstances—but it has little to say about what actually does happen under the circumstances that people actually encounter in their daily lives.

For example, imagine you are a participant in an experiment that looks at the effect of being in a good mood on generosity, a topic that may have a good deal of practical application. Researchers create an internally valid, carefully controlled experiment where they randomly assign you to watch either a happy movie or a neutral movie, and then you are given the opportunity to help the researcher out by staying longer and participating in another study. If people in a good mood are more willing to stay and help out, the researchers can feel confident that – since everything else was held constant – your positive mood led you to be more helpful. However, what does this tell us about helping behaviors in the real world? Does it generalize to other kinds of helping, such as donating money to a charitable cause? Would all kinds of happy movies produce this behavior, or only this one? What about other positive experiences that might boost mood, like receiving a compliment or a good grade? And what if you were watching the movie with friends, in a crowded theatre, rather than in a sterile research lab? Taking research out into the real world can help answer some of these sorts of important questions.

As one of the founding fathers of social psychology remarked, "Experimentation in the laboratory occurs, socially speaking, on an island quite isolated from the life of society" (Lewin, 1944, p. 286). This module highlights the importance of going beyond experimentation and also conducting research outside the laboratory (Reis & Gosling, 2010), directly within participants' natural environments, and reviews existing methodologies for studying daily life.
Rationale for Conducting Psychology Research in the Real World

One important challenge researchers face when designing a study is to find the right balance between ensuring internal validity, or the degree to which a study allows unambiguous causal inferences, and external validity, or the degree to which a study ensures that potential findings apply to settings and samples other than the ones being studied (Brewer, 2000). Unfortunately, these two kinds of validity tend to be difficult to achieve at the same time, in one study. This is because creating a controlled setting, in which all potentially influential factors (other than the experimentally manipulated variable) are controlled, is bound to create an environment that is quite different from what people naturally encounter (e.g., using a happy movie clip to promote helpful behavior). However, it is the degree to which an experimental situation is comparable to the corresponding real-world situation of interest that determines how generalizable potential findings will be. In other words, if an experiment is very far off from what a person might normally experience in everyday life, you might reasonably question just how useful its findings are.

Because of the incompatibility of the two types of validity, one is often—by design—prioritized over the other. Due to the importance of identifying true causal relationships, psychology has traditionally emphasized internal over external validity. However, in order to make claims about human behavior that apply across populations and environments, researchers complement traditional laboratory research, where participants are brought into the lab, with field research where, in essence, the psychological laboratory is brought to participants. Field studies allow for the important test of how psychological variables and processes of interest "behave" under real-world circumstances (i.e., what actually does happen rather than what can happen). They can also facilitate "downstream" operationalizations of constructs that measure life outcomes of interest directly rather than indirectly.

Take, for example, the fascinating field of psychoneuroimmunology, where the goal is to understand the interplay of psychological factors - such as personality traits or one's stress level - and the immune system. Highly sophisticated and carefully controlled experiments offer ways to isolate the variety of neural, hormonal, and cellular mechanisms that link psychological variables such as chronic stress to biological outcomes such as immunosuppression (a state of impaired immune functioning; Sapolsky, 2004). Although these studies demonstrate impressively how psychological factors can affect health-relevant biological processes, they—because of their research design—remain mute about the degree to which these factors actually do undermine people's everyday health in real life. It is certainly important to show that laboratory stress can alter the number of natural killer cells in the blood. But it is equally important to test to what extent the levels of stress that people experience on a day-to-day basis result in them catching a cold more often or taking longer to recover from one. The goal for researchers, therefore, must be to complement traditional laboratory experiments with less controlled studies under real-world circumstances. The term ecological validity is used to refer to the degree to which an effect has been obtained under conditions that are typical for what happens in everyday life (Brewer, 2000).
In this example, then, people might keep a careful daily log of how much stress they are under as well as noting physical symptoms such as headaches or nausea. Although many factors beyond stress level may be responsible for these symptoms, this more correlational approach can shed light on how the relationship between stress and health plays out outside of the laboratory.

An Overview of Research Methods for Studying Daily Life

Capturing "life as it is lived" has been a strong goal for some researchers for a long time. Wilhelm and his colleagues recently published a comprehensive review of early attempts to systematically document daily life (Wilhelm, Perrez, & Pawlik, 2012). Building on these original methods, researchers have, over the past decades, developed a broad toolbox for measuring experiences, behavior, and physiology directly in participants' daily lives (Mehl & Conner, 2012). Figure 1 provides a schematic overview of the methodologies described below.

Studying Daily Experiences

Starting in the mid-1970s, motivated by a growing skepticism toward highly controlled laboratory studies, a few groups of researchers developed a set of new methods that are now commonly known as the experience-sampling method (Hektner, Schmidt, & Csikszentmihalyi, 2007), ecological momentary assessment (Stone & Shiffman, 1994), or the diary method (Bolger & Rafaeli, 2003). Although variations within this set of methods exist, the basic idea behind all of them is to collect in-the-moment (or close-to-the-moment) self-report data directly from people as they go about their daily lives. This is typically accomplished by asking participants repeatedly (e.g., five times per day) over a period of time (e.g., a week) to report on their current thoughts and feelings. The momentary questionnaires often ask about their location (e.g., "Where are you now?"), social environment (e.g., "With whom are you now?"), activity (e.g., "What are you currently doing?"), and experiences (e.g., "How are you feeling?"). That way, researchers get a snapshot of what was going on in participants' lives at the time at which they were asked to report.

Technology has made this sort of research possible, and recent technological advances have expanded the tools researchers can easily use. Initially, participants wore electronic wristwatches that beeped at preprogrammed but seemingly random times, at which they completed one of a stack of provided paper questionnaires. With the mobile computing revolution, both the prompting and the questionnaire completion were gradually replaced by handheld devices such as smartphones. Being able to collect the momentary questionnaires digitally and time-stamped (i.e., having a record of exactly when participants responded) had major methodological and practical advantages and contributed to experience sampling going mainstream (Conner, Tennen, Fleeson, & Barrett, 2009).
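To give a feel for the prompting scheme, here is a minimal sketch of the kind of signal schedule such a study might generate. The specific choices (five prompts per day, a 12-hour window starting at 9 a.m., a 2013 start date) are illustrative assumptions, not a standard protocol.

```python
import random
from datetime import datetime, timedelta

random.seed(7)

def daily_prompts(day_start, n_prompts=5, window_hours=12):
    """Divide the waking window into equal blocks and schedule one prompt at a
    random moment inside each block, so signals feel random but stay spread out."""
    block = timedelta(hours=window_hours) / n_prompts
    return [day_start + i * block + random.random() * block for i in range(n_prompts)]

study_start = datetime(2013, 5, 6, 9, 0)      # illustrative start: a Monday at 9 a.m.
for day in range(7):                          # one week of sampling
    prompts = daily_prompts(study_start + timedelta(days=day))
    print(", ".join(t.strftime("%a %H:%M") for t in prompts))
```

At each printed time the participant would receive a beep or push notification and answer the short momentary questionnaire described above.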
Over time, experience sampling and related momentary self-report methods have become very popular, and, by now, they are effectively the gold standard for studying daily life. They have helped make progress in almost all areas of psychology (Mehl & Conner, 2012). These methods yield many measurements from many participants, which has further inspired the development of novel statistical methods (Bolger & Laurenceau, 2013). Finally, and maybe most importantly, they accomplished what they set out to accomplish: to bring attention to what psychology ultimately wants and needs to know about, namely "what people actually do, think, and feel in the various contexts of their lives" (Funder, 2001, p. 213). In short, these approaches have allowed researchers to do research that is more externally valid, or more generalizable to real life, than the traditional laboratory experiment.

To illustrate these techniques, consider a classic study by Stone, Reed, and Neale (1987), who tracked positive and negative experiences surrounding a respiratory infection using daily experience sampling. They found that undesirable experiences peaked and desirable ones dipped about four to five days prior to participants coming down with the cold. More recently, Killingsworth and Gilbert (2010) collected momentary self-reports from more than 2,000 participants via a smartphone app. They found that participants were less happy when their mind was in an idling, mind-wandering state, such as surfing the Internet or multitasking at work, than when it was in an engaged, task-focused one, such as working diligently on a paper. These are just two examples that illustrate how experience-sampling studies have yielded findings that could not be obtained with traditional laboratory methods.

Recently, the day reconstruction method (DRM) (Kahneman, Krueger, Schkade, Schwarz, & Stone, 2004) has been developed to obtain information about a person's daily experiences without going through the burden of collecting momentary experience-sampling data. In the DRM, participants report their experiences of a given day retrospectively after engaging in a systematic, experiential reconstruction of the day on the following day. As a participant in this type of study, you might look back on yesterday, divide it up into a series of episodes such as "made breakfast," "drove to work," "had a meeting," etc. You might then report who you were with in each episode and how you felt in each. This approach has shed light on what situations lead to moments of positive and negative mood throughout the course of a normal day.

Studying Daily Behavior

Experience sampling is often used to study everyday behavior (i.e., daily social interactions and activities). In the laboratory, behavior is best studied using direct behavioral observation (e.g., video recordings). In the real world, this is, of course, much more difficult. As Funder put it, it seems it would require a "detective's report [that] would specify in exact detail everything the participant said and did, and with whom, in all of the contexts of the participant's life" (Funder, 2007, p. 41). As difficult as this may seem, Mehl and colleagues have developed a naturalistic observation methodology that is similar in spirit. Rather than following participants—like a detective—with a video camera (see Craik, 2000), they equip participants with a portable audio recorder that is programmed to periodically record brief snippets of ambient sounds (e.g., 30 seconds every 12 minutes). Participants carry the recorder (originally a microcassette recorder, now a smartphone app) on them as they go about their days and return it at the end of the study. The recorder provides researchers with a series of sound bites that, together, amount to an acoustic diary of participants' days as they naturally unfold—and that constitute a representative sample of their daily activities and social encounters.
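A quick back-of-the-envelope calculation, assuming a 16-hour waking day and the illustrative 30-seconds-every-12-minutes setting mentioned above, shows how sparse and yet how numerous these snippets are.

```python
snippet_seconds = 30      # length of each recorded snippet (illustrative setting above)
interval_minutes = 12     # time from the start of one snippet to the next
waking_hours = 16         # assumed waking day

snippets_per_day = waking_hours * 60 // interval_minutes
recorded_minutes = snippets_per_day * snippet_seconds / 60
fraction_of_waking_day = recorded_minutes / (waking_hours * 60)

print(snippets_per_day)                  # 80 snippets
print(recorded_minutes)                  # 40.0 minutes of audio
print(f"{fraction_of_waking_day:.1%}")   # about 4.2% of the waking day
```

Even though only a few percent of the day is captured, the resulting 80 or so snippets are spread evenly across it, which is what makes the sample representative of how the day was actually spent.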
Because the method is somewhat similar to having the researcher's ear at the participant's lapel, Mehl and colleagues called it the electronically activated recorder, or EAR (Mehl, Pennebaker, Crow, Dabbs, & Price, 2001). The ambient sound recordings can be coded for many things, including participants' locations (e.g., at school, in a coffee shop), activities (e.g., watching TV, eating), interactions (e.g., in a group, on the phone), and emotional expressions (e.g., laughing, sighing). As unnatural or intrusive as it might seem, participants report that they quickly grow accustomed to the EAR and say they soon find themselves behaving as they normally would.

In a cross-cultural study, Ramírez-Esparza and her colleagues used the EAR method to study sociability in the United States and Mexico. Interestingly, they found that although American participants rated themselves significantly higher than Mexicans on the question, "I see myself as a person who is talkative," they actually spent almost 10 percent less time talking than Mexicans did (Ramírez-Esparza, Mehl, Álvarez Bermúdez, & Pennebaker, 2009). In a similar way, Mehl and his colleagues used the EAR method to debunk the long-standing myth that women are considerably more talkative than men. Using data from six different studies, they showed that both sexes use on average about 16,000 words per day. The estimated sex difference of 546 words was trivial compared to the immense range of more than 46,000 words between the least and most talkative individuals (695 versus 47,016 words; Mehl, Vazire, Ramírez-Esparza, Slatcher, & Pennebaker, 2007). Together, these studies demonstrate how naturalistic observation can be used to study objective aspects of daily behavior and how it can yield findings quite different from what other methods yield (Mehl, Robbins, & Deters, 2012).

A series of other methods and creative ways for assessing behavior directly and unobtrusively in the real world are described in a seminal book on real-world, subtle measures (Webb, Campbell, Schwartz, Sechrest, & Grove, 1981). For example, researchers have used time-lapse photography to study the flow of people and the use of space in urban public places (Whyte, 1980). More recently, they have observed people's personal (e.g., dorm rooms) and professional (e.g., offices) spaces to understand how personality is expressed and detected in everyday environments (Gosling, Ko, Mannarelli, & Morris, 2002). They have even systematically collected and analyzed people's garbage to measure what people actually consume (e.g., empty alcohol bottles or cigarette boxes) rather than what they say they consume (Rathje & Murphy, 2001). Because people often cannot and sometimes may not want to accurately report what they do, the direct—and ideally nonreactive—assessment of real-world behavior is of high importance for psychological research (Baumeister, Vohs, & Funder, 2007).

Studying Daily Physiology

In addition to studying how people think, feel, and behave in the real world, researchers are also interested in how our bodies respond to the fluctuating demands of our lives. What are the daily experiences that make our "blood boil"? How do our neurotransmitters and hormones respond to the stressors we encounter in our lives? What physiological reactions do we show to being loved—or getting ostracized?
You can see how studying these powerful experiences in real life, as they actually happen, may provide richer and more informative data than one might obtain in an artificial laboratory setting that merely mimics these experiences. Also, in pursuing these questions, it is important to keep in mind that what is stressful, engaging, or boring for one person might not be so for another. It is, in part, for this reason that researchers have found only limited correspondence between how people respond physiologically to a standardized laboratory stressor (e.g., giving a speech) and how they respond to stressful experiences in their lives. To give an example, Wilhelm and Grossman (2010) describe a participant who showed rather minimal heart rate increases in response to a laboratory stressor (about five to 10 beats per minute) but quite dramatic increases (almost 50 beats per minute) later in the afternoon while watching a soccer game. Of course, the reverse pattern can happen as well, such as when patients have high blood pressure in the doctor's office but not in their home environment—the so-called white coat hypertension (White, Schulman, McCabe, & Dey, 1989).

Ambulatory physiological monitoring (that is, monitoring physiological reactions as people go about their daily lives) has a long history in biomedical research, and an array of monitoring devices exists (Fahrenberg & Myrtek, 1996). Among the biological signals that can now be measured in daily life with portable signal-recording devices are the electrocardiogram (ECG), blood pressure, electrodermal activity (or "sweat response"), body temperature, and even the electroencephalogram (EEG) (Wilhelm & Grossman, 2010). Most recently, researchers have added ambulatory assessment of hormones (e.g., cortisol) and other biomarkers (e.g., immune markers) to the list (Schlotz, 2012). The development of ever more sophisticated ways to track what goes on underneath our skins as we go about our lives is a fascinating and rapidly advancing field.

In a recent study, Lane, Zareba, Reis, Peterson, and Moss (2011) used experience sampling combined with ambulatory electrocardiography (a so-called Holter monitor) to study how emotional experiences can alter cardiac function in patients with a congenital heart abnormality (e.g., long QT syndrome). Consistent with the idea that emotions may, in some cases, be able to trigger a cardiac event, they found that typical—in most cases even relatively low-intensity—daily emotions had a measurable effect on ventricular repolarization, an important cardiac indicator that, in these patients, is linked to risk of a cardiac event. In another study, Smyth and colleagues (1998) combined experience sampling with momentary assessment of cortisol, a stress hormone. They found that momentary reports of current or even anticipated stress predicted increased cortisol secretion 20 minutes later. Further, and independent of that, the experience of other kinds of negative affect (e.g., anger, frustration) also predicted higher levels of cortisol, and the experience of positive affect (e.g., happy, joyful) predicted lower levels of this important stress hormone. Taken together, these studies illustrate how researchers can use ambulatory physiological monitoring to study how the little—and seemingly trivial or inconsequential—experiences in our lives leave objective, measurable traces in our bodily systems.
Studying Online Behavior

Another domain of daily life that has only recently emerged is virtual daily behavior, or how people act and interact with others on the Internet. Irrespective of whether social media will turn out to be humanity's blessing or curse (both scientists and laypeople are currently divided over this question), the fact is that people are spending an ever increasing amount of time online. In light of that, researchers are beginning to think of virtual behavior as being as serious as "actual" behavior and seek to make it a legitimate target of their investigations (Gosling & Johnson, 2010).

One way to study virtual behavior is to make use of the fact that most of what people do on the Web—emailing, chatting, tweeting, blogging, posting—leaves direct (and permanent) verbal traces. For example, differences in the ways in which people use words (e.g., subtle preferences in word choice) have been found to carry a lot of psychological information (Pennebaker, Mehl, & Niederhoffer, 2003). Therefore, a good way to study virtual social behavior is to study virtual language behavior. Researchers can download people's—often public—verbal expressions and communications and analyze them using modern text analysis programs (e.g., Pennebaker, Booth, & Francis, 2007).

For example, Cohn, Mehl, and Pennebaker (2004) downloaded blogs of more than a thousand users of livejournal.com, one of the first Internet blogging sites, to study how people responded socially and emotionally to the attacks of September 11, 2001. In going "the online route," they could bypass a critical limitation of coping research, the inability to obtain baseline information; that is, how people were doing before the traumatic event occurred. Through access to the database of public blogs, they downloaded entries from two months prior to two months after the attacks. Their linguistic analyses revealed that in the first days after the attacks, participants, as expected, expressed more negative emotions and were more cognitively and socially engaged, asking questions and sending messages of support. Within two weeks, though, their moods and social engagement returned to baseline, and, interestingly, their use of cognitive-analytic words (e.g., "think," "question") even dropped below their normal level. Over the next six weeks, their mood hovered around their pre-9/11 baseline, but both their social engagement and cognitive-analytic processing stayed remarkably low. This suggests a social and cognitive weariness in the aftermath of the attacks. In using virtual verbal behavior as a marker of psychological functioning, this study was able to draw a fine timeline of how humans cope with disasters.

Reflecting their rapidly growing real-world importance, researchers are now beginning to investigate behavior on social networking sites such as Facebook (Wilson, Gosling, & Graham, 2012). Most research looks at psychological correlates of online behavior such as personality traits and the quality of one's social life but, importantly, there are also first attempts to export traditional experimental research designs into an online setting. In a pioneering study of online social influence, Bond and colleagues (2012) experimentally tested the effects that peer feedback has on voting behavior. Remarkably, their sample consisted of 16 million (!) Facebook users. They found that online political-mobilization messages (e.g., "I voted" accompanied by selected pictures of their Facebook friends) influenced real-world voting behavior.
This was true not just for users who saw the messages but also for their friends and friends of their friends. Although the intervention effect on a single user was very small, through the enormous number of users and indirect social contagion effects, it resulted cumulatively in an estimated 340,000 additional votes—enough to tilt a close election. In short, although still in its infancy, research on virtual daily behavior is bound to change social science, and it has already helped us better understand both virtual and "actual" behavior.

"Smartphone Psychology"?

A review of research methods for studying daily life would not be complete without a vision of "what's next." Given how common they have become, it is safe to predict that smartphones will not just remain devices for everyday online communication but will also become devices for scientific data collection and intervention (Kaplan & Stone, 2013; Yarkoni, 2012). These devices automatically store vast amounts of real-world user interaction data, and, in addition, they are equipped with sensors to track the physical (e.g., location, position) and social (e.g., wireless connections around the phone) context of these interactions. Miller (2012, p. 234) states, "The question is not whether smartphones will revolutionize psychology but how, when, and where the revolution will happen." Obviously, their immense potential for data collection also brings with it big new challenges for researchers (e.g., privacy protection, data analysis, and synthesis). Yet it is clear that many of the methods described in this module—and many still-to-be-developed ways of collecting real-world data—will, in the future, become integrated into the devices that people naturally and happily carry with them from the moment they get up in the morning to the moment they go to bed.

Conclusion

This module sought to make a case for psychology research conducted outside the lab. If the ultimate goal of the social and behavioral sciences is to explain human behavior, then researchers must also—in addition to conducting carefully controlled lab studies—deal with the "messy" real world and find ways to capture life as it naturally happens. Mortensen and Cialdini (2010) refer to the dynamic give-and-take between laboratory and field research as "full-cycle psychology". Going full cycle, they suggest, means that "researchers use naturalistic observation to determine an effect's presence in the real world, theory to determine what processes underlie the effect, experimentation to verify the effect and its underlying processes, and a return to the natural environment to corroborate the experimental findings" (Mortensen & Cialdini, 2010, p. 53). To accomplish this, researchers have access to a toolbox of research methods for studying daily life that is now more diverse and more versatile than it has ever been before. So, all it takes is to go ahead and—literally—bring science to life.

Outside Resources

Website: Society for Ambulatory Assessment http://www.ambulatory-assessment.org

Discussion Questions

1. What do you think about the tradeoff between unambiguously establishing cause and effect (internal validity) and ensuring that research findings apply to people's everyday lives (external validity)? Which one of these would you prioritize as a researcher? Why?
2. What challenges do you see that daily-life researchers may face in their studies? How can they be overcome?
3. What ethical issues can come up in daily-life studies? How can (or should) they be addressed?
4. How do you think smartphones and other mobile electronic devices will change psychological research? What are their promises for the field? And what are their pitfalls?

Vocabulary

Ambulatory assessment: An overarching term to describe methodologies that assess the behavior, physiology, experience, and environments of humans in naturalistic settings.
Daily Diary method: A methodology where participants complete a questionnaire about their thoughts, feelings, and behavior of the day at the end of the day.
Day reconstruction method (DRM): A methodology where participants describe their experiences and behavior of a given day retrospectively upon a systematic reconstruction on the following day.
Ecological momentary assessment: An overarching term to describe methodologies that repeatedly sample participants' real-world experiences, behavior, and physiology in real time.
Ecological validity: The degree to which a study finding has been obtained under conditions that are typical for what happens in everyday life.
Electronically activated recorder, or EAR: A methodology where participants wear a small, portable audio recorder that intermittently records snippets of ambient sounds around them.
Experience-sampling method: A methodology where participants report on their momentary thoughts, feelings, and behaviors at different points in time over the course of a day.
External validity: The degree to which a finding generalizes from the specific sample and context of a study to some larger population and broader settings.
Full-cycle psychology: A scientific approach whereby researchers start with an observational field study to identify an effect in the real world, follow up with laboratory experimentation to verify the effect and isolate the causal mechanisms, and return to field research to corroborate their experimental findings.
Generalize: Generalizing, in science, refers to the ability to arrive at broad conclusions based on a smaller sample of observations. For these conclusions to be true, the sample should accurately represent the larger population from which it is drawn.
Internal validity: The degree to which a cause-effect relationship between two variables has been unambiguously established.
Linguistic inquiry and word count: A quantitative text analysis methodology that automatically extracts grammatical and psychological information from a text by counting word frequencies.
Lived day analysis: A methodology where a research team follows an individual around with a video camera to objectively document a person's daily life as it is lived.
White coat hypertension: A phenomenon in which patients exhibit elevated blood pressure in the hospital or doctor's office but not in their everyday lives.
• 3.1: The Brain and Nervous System The brain is the most complex part of the human body. It is the center of consciousness and also controls all voluntary and involuntary movement and bodily functions. It communicates with each part of the body through the nervous system, a network of channels that carry electrochemical signals.
• 3.2: Evolutionary Theories in Psychology Evolution or change over time occurs through the processes of natural and sexual selection. In response to problems in our environment, we adapt both physically and psychologically to ensure our survival and reproduction.
• 3.3: The Nature-Nurture Question People have a deep intuition about what has been called the "nature–nurture question." Some aspects of our behavior feel as though they originate in our genetic makeup, while others feel like the result of our upbringing or our own hard work. Genes and environments always combine to produce behavior, and the real science is in the discovery of how they combine for a given behavior.
• 3.4: Gender This module discusses gender and its related concepts, including sex, gender roles, gender identity, sexual orientation, and sexism. In addition, this module includes a discussion of differences that exist between males and females and how these real gender differences compare to the stereotypes society holds about gender differences. In fact, there are significantly fewer real gender differences than one would expect relative to the large number of stereotypes about gender differences.

03: BIOLOGY AS THE BASIS OF BEHAVIOR

By Robert Biswas-Diener
Portland State University

The brain is the most complex part of the human body. It is the center of consciousness and also controls all voluntary and involuntary movement and bodily functions. It communicates with each part of the body through the nervous system, a network of channels that carry electrochemical signals.

Learning objectives
• Name the various parts of the nervous system and their respective functions
• Explain how neurons communicate with each other
• Identify the location and function of the limbic system
• Articulate how the primary motor cortex is an example of brain region specialization
• Name at least three neuroimaging techniques and describe how they work

In the 1800s a German scientist by the name of Ernst Weber conducted several experiments meant to investigate how people perceive the world via their own bodies (Herrnstein & Boring, 1966). It is obvious that we use our sensory organs—our eyes, ears, and nose—to take in and understand the world around us. Weber was particularly interested in the sense of touch. Using a drafting compass, he placed the two points far apart and set them on the skin of a volunteer. When the points were far apart the research participants could easily distinguish between them. As Weber repeated the process with ever closer points, however, most people lost the ability to tell the difference between them. Weber discovered that the ability to recognize these "just noticeable differences" depended on where on the body the compass was positioned. Your back, for example, is far less sensitive to touch than is the skin on your face. Similarly, the tip of your tongue is extremely sensitive! In this way, Weber began to shed light on the way that nerves, the nervous system, and the brain form the biological foundation of psychological processes. In this module we will explore the biological side of psychology by paying particular attention to the brain and to the nervous system.
Understanding the nervous system is vital to understanding psychology in general. It is through the nervous system that we experience pleasure and pain, feel emotions, learn and use language, and plan goals, just to name a few examples. In the pages that follow we will begin by examining how the human nervous system develops and then we will learn about the parts of the brain and how they function. We will conclude with a section on how modern psychologists study the brain.

It is worth mentioning here, at the start, that an introduction to the biological aspects of psychology can be both the most interesting and most frustrating of all topics for new students of psychology. This is, in large part, due to the fact that there is so much new information to learn and new vocabulary associated with all the various parts of the brain and nervous system. In fact, there are 30 key vocabulary words presented in this module! We encourage you not to get bogged down in difficult words. Instead, pay attention to the broader concepts, perhaps even skipping over the vocabulary on your first reading. It is helpful to pass back through with a second reading, once you are already familiar with the topic, with attention to learning the vocabulary.

Nervous System development across the human lifespan

As a species, humans have evolved a complex nervous system and brain over millions of years. Comparisons of our nervous systems with those of other animals, such as chimpanzees, show some similarities (Darwin, 1859). Researchers can also use fossils to study the relationship between brain volume and human behavior over the course of evolutionary history. Homo habilis, for instance, a human ancestor living about 2 million years ago, had a larger brain volume than its own ancestors but a far smaller one than modern Homo sapiens. The main difference between humans and other animals, in terms of brain development, is that humans have a much more developed frontal cortex (the front part of the brain associated with planning).

Interestingly, a person's unique nervous system develops over the course of their lifespan in a way that resembles the evolution of nervous systems in animals across vast stretches of time. For example, the human nervous system begins developing even before a person is born. It begins as a simple bundle of tissue that forms into a tube and extends along the head-to-tail plane, becoming the spinal cord and brain. Twenty-five days into its development, the embryo has a distinct spinal cord, as well as hindbrain, midbrain, and forebrain (Stiles & Jernigan, 2010).

What, exactly, is this nervous system that is developing and what does it do? The nervous system can be thought of as the body's communication network that consists of all nerve cells. There are many ways in which we can divide the nervous system to understand it more clearly. One common way to do so is by parsing it into the central nervous system and the peripheral nervous system. Each of these can be sub-divided, in turn. Let's take a closer, more in-depth look at each. And, don't worry, the nervous system is complicated, with many parts and many new vocabulary words. It might seem overwhelming at first, but through the figures and a little study you can get it.

The Central Nervous System (CNS): The Neurons inside the Brain

The Central Nervous System, or CNS for short, is made up of the brain and spinal cord (see Figure 1.4.2).
The CNS is the portion of the nervous system that is encased in bone (the brain is protected by the skull and the spinal cord is protected by the spinal column). It is referred to as “central” because it is the brain and spinal cord that are primarily responsible for processing sensory information—touching a hot stove or seeing a rainbow, for example—and sending signals to the peripheral nervous system for action. It communicates largely by sending electrical signals through individual nerve cells that make up the fundamental building blocks of the nervous system, called neurons. There are approximately 100 billion neurons in the human brain, and each has many contacts with other neurons, called synapses (Brodal, 1992). If we were able to magnify a view of individual neurons, we would see that they are cells made from distinct parts (see Figure 1.4.3). The three main components of a neuron are the dendrites, the soma, and the axon. Neurons communicate with one another by receiving information through the dendrites, which act like antennae. When the dendrites channel this information to the soma, or cell body, it builds up as an electro-chemical signal. This electrical part of the signal, called an action potential, shoots down the axon, a long tail that leads away from the soma and toward the next neuron. When people talk about “nerves” in the nervous system, they are typically referring to bundles of axons that form long neural wires along which electrical signals can travel. Cell-to-cell communication is helped by the fact that the axon is covered by a myelin sheath—a layer of fatty cells that allow the signal to travel very rapidly from neuron to neuron (Kandel, Schwartz & Jessell, 2000). If we were to zoom in still further, we could take a closer look at the synapse, the junction between neurons (see Figure 1.4.4). Here, we would see that there is a tiny space between the neurons, called the synaptic gap. To give you a sense of scale, we can compare the synaptic gap to the thickness of a dime, the thinnest of all American coins (about 1.35 mm). You could stack approximately 70,000 synaptic gaps in the thickness of a single coin! As the action potential (the electrical signal) reaches the end of the axon, tiny packets of chemicals, called neurotransmitters, are released. This is the chemical part of the electro-chemical signal. These neurotransmitters are the chemical signals that travel from one neuron to another, enabling them to communicate with one another. There are many different types of neurotransmitters, and each has a specialized function. For example, serotonin affects sleep, hunger, and mood. Dopamine is associated with attention, learning, and pleasure (Kandel & Schwartz, 1982). It is amazing to realize that when you think—when you reach out to grab a glass of water, when you realize that your best friend is happy, when you try to remember the name of the parts of a neuron—what you are experiencing is actually electro-chemical impulses shooting between nerves! The Central Nervous System: Looking at the Brain as a Whole If we were to zoom back out and look at the central nervous system again, we would see that the brain is the largest single part of the central nervous system. The brain is the headquarters of the entire nervous system, and it is here that most of your sensing, perception, thinking, awareness, emotions, and planning take place.
For many people the brain is so important that there is a sense that it is there—inside the brain—that a person’s sense of self is located (as opposed to, say, in your toes). The brain is so important, in fact, that it uses 20% of the total oxygen and calories we consume even though it makes up, on average, only about 2% of our overall weight. It is helpful to examine the various parts of the brain and to understand their unique functions to get a better sense of the role the brain plays. We will start by looking at very general areas of the brain and then we will zoom in and look at more specific parts. Anatomists and neuroscientists often divide the brain into portions based on the location and function of various brain parts. One of the simplest ways to organize the brain is to describe it as having three basic portions: the hindbrain, midbrain, and forebrain. Another way to look at the brain is to consider the brain stem, the cerebellum, and the cerebrum. There is another part, called the limbic system, that is less well defined. It is made up of a number of structures that are “sub-cortical” (lying beneath the cerebral cortex) as well as cortical regions of the brain (see Figure 1.4.5). The brain stem is the most basic structure of the brain and is located at the top of the spine and bottom of the brain. It is sometimes considered the “oldest” part of the brain because we can see similar structures in other, less evolved animals such as crocodiles. It is in charge of a wide range of very basic “life support” functions for the human body, including breathing, digestion, and the beating of the heart. Amazingly, the brain stem sends the signals to keep these processes running smoothly without any conscious effort on our part. The limbic system is a collection of highly specialized neural structures that sit at the top of the brain stem and are involved in regulating our emotions. Collectively, the limbic system is a term that does not refer to one clearly defined area; it includes cortical regions as well as deeper, subcortical structures. These include the amygdala, the thalamus, the hippocampus, the insula cortex, the anterior cingulate cortex, and the prefrontal cortex. These structures influence hunger, the sleep-wake cycle, sexual desire, fear and aggression, and even memory. The cerebellum is a structure at the very back of the brain. Aristotle referred to it as the “small brain” based on its appearance, and it is principally involved with movement and posture, although it is also associated with a variety of other thinking processes. The cerebellum, like the brain stem, coordinates actions without the need for any conscious awareness. The cerebrum (also called the “cerebral cortex”) is the “newest,” most advanced portion of the brain. The cerebral hemispheres (the left and right hemispheres that make up each side of the top of the brain) are in charge of the types of processes that are associated with more awareness and voluntary control, such as speaking and planning, and they also contain our primary sensory and motor areas (for seeing, hearing, feeling, and moving). These two hemispheres are connected to one another by a thick bundle of axons called the corpus callosum. There are instances in which people—either because of a genetic abnormality or as the result of surgery—have had their corpus callosum severed so that the two halves of the brain cannot easily communicate with one another. These rare split-brain patients offer helpful insights into how the brain works.
For example, we now understand that the brain is contralateral, or opposite-sided. This means that the left side of the brain is responsible for controlling a number of sensory and motor functions of the right side of the body, and vice versa. Consider this striking example: a split-brain patient is seated at a table, and an object such as a car key is placed where the patient can see it only in the right visual field. Right visual field images are processed on the left side of the brain, and left visual field images are processed on the right side of the brain. Because language is largely associated with the left side of the brain, a patient who sees the car key in the right visual field, when asked “What do you see?”, will answer, “I see a car key.” In contrast, a split-brain patient who sees the car key only in the left visual field, so that the information goes to the non-language right side of the brain, might have a difficult time saying the words “car key.” In fact, in this case, the patient is likely to respond, “I didn’t see anything at all.” However, if asked to draw the item with their left hand—a process associated with the right side of the brain—the patient will be able to do so! See the outside resources below for a video demonstration of this striking phenomenon. Besides looking at the brain as an organ that is made up of two halves, we can also examine it by looking at the four lobes of the cerebral cortex, the outer part of the brain (see Figure 1.4.6). Each of these is associated with a specific function. The occipital lobe, located at the back of the cerebral cortex, is the home of the visual area of the brain. You can see the road in front of you when you are driving and track the motion of a ball in the air thanks to the occipital lobe. The temporal lobe, located on the underside of the cerebral cortex, is where sounds and smells are processed. The parietal lobe, at the upper back of the cerebral cortex, is where touch and taste are processed. Finally, the frontal lobe, located at the forward part of the cerebral cortex, is where behavioral motor plans are processed and where a number of highly complicated processes occur, including speech and language use, creative problem solving, and planning and organization. One particularly fascinating area in the frontal lobe is called the “primary motor cortex.” This strip running along the side of the brain is in charge of voluntary movements like waving goodbye, wiggling your eyebrows, and kissing. It is an excellent example of the way that the various regions of the brain are highly specialized. Interestingly, each of our various body parts has a unique portion of the primary motor cortex devoted to it (see Figure 1.4.7). Each individual finger has about as much dedicated brain space as your entire leg. Your lips, in turn, require about as much dedicated brain processing as all of your fingers and your hand combined! Because the cerebral cortex in general, and the frontal lobe in particular, are associated with such sophisticated functions as planning and being self-aware, they are often thought of as a higher, less primal portion of the brain. Indeed, other animals such as rats and kangaroos, while they do have frontal regions of their brains, do not have the same level of development in their cerebral cortices. The closer an animal is to humans on the evolutionary tree (think chimpanzees and gorillas), the more developed this portion of its brain is.
The Peripheral Nervous System In addition to the central nervous system (the brain and spinal cord), there is also a complex network of nerves that travel to every part of the body. This is called the peripheral nervous system (PNS), and it carries the signals necessary for the body to survive (see Figure 1.4.8). Some of the signals carried by the PNS are related to voluntary actions. If you want to type a message to a friend, for instance, you make conscious choices about which letters go in what order, and your brain sends the appropriate signals to your fingers to do the work. Other processes, by contrast, are not voluntary. Without your awareness, your brain is also sending signals to your organs, your digestive system, and the muscles that are holding you up right now with instructions about what they should be doing. All of this occurs through the pathways of your peripheral nervous system. How we study the brain The brain is difficult to study because it is housed inside the thick bone of the skull. What’s more, it is difficult to access the brain without hurting or killing its owner. As a result, many of the earliest studies of the brain (and indeed this is still true today) focused on unfortunate people who happened to have damage to some particular area of their brain. For instance, in the 1860s a surgeon named Paul Broca conducted an autopsy on a former patient who had lost his powers of speech. Examining his patient’s brain, Broca identified a damaged area—now called “Broca’s Area”—on the left side of the brain (see Figure 1.4.9) (AAAS, 1880). Over the years, a number of researchers have been able to gain insights into the function of specific regions of the brain from these types of patients. An alternative to examining the brains or behaviors of humans with brain damage or surgical lesions is to study other animals. Some researchers examine the brains of animals such as rats, dogs, and monkeys. Although animal brains differ from human brains in both size and structure, there are many similarities as well. The use of animals for study can yield important insights into human brain function. In modern times, however, we do not have to rely exclusively on the study of people with brain lesions. Advances in technology have led to ever more sophisticated imaging techniques. Just as X-ray technology allows us to peer inside the body, neuroimaging techniques allow us glimpses of the working brain (Raichle, 1994). Each type of imaging uses a different technique and each has its own advantages and disadvantages. Positron Emission Tomography (PET) records metabolic activity in the brain by detecting how much of a radioactive substance, injected into a person’s bloodstream, the brain is consuming. This technique allows us to see how much an individual uses a particular part of the brain while at rest, or not performing a task. Another technique, known as Functional Magnetic Resonance Imaging (fMRI), relies on blood flow. This method measures changes in the levels of naturally occurring oxygen in the blood. As a brain region becomes active, it requires more oxygen. This technique measures brain activity based on this increase in oxygen level. This means fMRI does not require a foreign substance to be injected into the body. Both PET and fMRI scans have poor temporal resolution, meaning that they cannot tell us exactly when brain activity occurred. This is because it takes several seconds for blood to arrive at a portion of the brain working on a task.
One imaging technique that has better temporal resolution is Electroencephalography (EEG), which measures electrical brain activity instead of blood flow. Electrodes are placed on the scalp of participants, and they pick up electrical activity nearly instantaneously. Because this activity could be coming from any portion of the brain, however, EEG is known to have poor spatial resolution, meaning that it is not accurate with regard to specific locations. Another technique, known as Diffuse Optical Imaging (DOI), can offer high temporal and spatial resolution. DOI works by shining infrared light into the brain. It might seem strange that light can pass through the head and brain, but the properties of light change as it passes through oxygenated blood and through active neurons. As a result, researchers can make inferences regarding where and when brain activity is happening. Conclusion It has often been said that the brain studies itself. This means that humans are uniquely capable of using our most sophisticated organ to understand our most sophisticated organ. Breakthroughs in the study of the brain and nervous system are among the most exciting discoveries in all of psychology. In the future, research linking neural activity to complex, real-world attitudes and behavior will help us to understand human psychology and better intervene in it to help people. Outside Resources Video: Animation of Neurons Video: Split Brain Patient Web: Animation of the Magnetic Resonance Imaging (MRI) http://sites.sinauer.com/neuroscience5e/animations01.01.html Web: Animation of the Positron Emission Tomography (PET) http://sites.sinauer.com/neuroscience5e/animations01.02.html Web: Teaching resources and videos for teaching about the brain, from Colorado State University: www.learner.org/resources/series142.html Web: The Brain Museum http://brainmuseum.org/ Discussion Questions 1. In your opinion, is learning about the functions of various parts of the brain by studying the abilities of brain-damaged patients ethical? What, in your opinion, are the potential benefits and considerations? 2. Are research results on the brain more compelling to you than are research results from survey studies on attitudes? Why or why not? How does biological research such as studies of the brain influence public opinion regarding the science of psychology? 3. If humans continue to evolve, what changes might you predict in our brains and cognitive abilities? 4. Which brain scanning techniques, or combination of techniques, do you find to be the best? Why? Why do you think scientists may or may not employ exactly your recommended techniques? Vocabulary Action Potential A transient all-or-nothing electrical current that is conducted down the axon when the membrane potential reaches the threshold of excitation. Axon Part of the neuron that extends off the soma, splitting several times to connect with other neurons; main output of the neuron. Brain Stem The “trunk” of the brain, composed of the medulla, pons, midbrain, and diencephalon. Broca’s Area An area in the frontal lobe of the left hemisphere. Implicated in language production. Central Nervous System The portion of the nervous system that includes the brain and spinal cord. Cerebellum The distinctive structure at the back of the brain, Latin for “small brain.” Cerebrum Usually refers to the cerebral cortex and associated white matter, but in some texts includes the subcortical structures.
Contralateral Literally “opposite side”; used to refer to the fact that the two hemispheres of the brain process sensory information and motor commands for the opposite side of the body (e.g., the left hemisphere controls the right side of the body). Corpus Callosum The thick bundle of nerve fibers (axons) that connects the two hemispheres of the brain and allows them to communicate. Dendrites Part of a neuron that extends away from the cell body and is the main input to the neuron. Diffuse Optical Imaging (DOI) A neuroimaging technique that infers brain activity by measuring changes in light as it is passed through the skull and surface of the brain. Electroencephalography (EEG) A neuroimaging technique that measures electrical brain activity via multiple electrodes on the scalp. Frontal Lobe The frontmost (anterior) part of the cerebrum; anterior to the central sulcus and responsible for motor output and planning, language, judgment, and decision-making. Functional Magnetic Resonance Imaging (fMRI) A neuroimaging technique that infers brain activity by measuring changes in oxygen levels in the blood. Limbic System Includes the subcortical structures of the amygdala and hippocampal formation as well as some cortical structures; responsible for aversion and gratification. Myelin Sheath Fatty tissue that insulates the axons of the neurons; myelin is necessary for normal conduction of electrical impulses among neurons. Nervous System The body’s network for electrochemical communication. This system includes all the nerve cells in the body. Neurons Individual brain cells. Neurotransmitters Chemical substance released by the presynaptic terminal button that acts on the postsynaptic cell. Occipital Lobe The rearmost (posterior) part of the cerebrum; involved in vision. Parietal Lobe The part of the cerebrum between the frontal and occipital lobes; involved in bodily sensations, visual attention, and integrating the senses. Peripheral Nervous System All of the nerve cells that connect the central nervous system to all the other parts of the body. Positron Emission Tomography (PET) A neuroimaging technique that measures brain activity by detecting the presence of a radioactive substance in the brain that is initially injected into the bloodstream and then pulled in by active brain tissue. Soma Cell body of a neuron that contains the nucleus and genetic information, and directs protein synthesis. Spatial Resolution A term that refers to how small the elements of an image are; high spatial resolution means the device or technique can resolve very small elements; in neuroscience it describes how small of a structure in the brain can be imaged. Split-brain Patient A patient who has had most or all of his or her corpus callosum severed. Synapses Junction between the presynaptic terminal button of one neuron and the dendrite, axon, or soma of another postsynaptic neuron. Synaptic Gap Also known as the synaptic cleft; the small space between the presynaptic terminal button and the postsynaptic dendritic spine, axon, or soma. Temporal Lobe The part of the cerebrum in front of (anterior to) the occipital lobe and below the lateral fissure; involved in vision, auditory processing, memory, and integrating vision and audition.
Temporal Resolution A term that refers to how small a unit of time can be measured; high temporal resolution means capable of resolving very small units of time; in neuroscience it describes how precisely in time a process can be measured in the brain.
textbooks/socialsci/Psychology/Introductory_Psychology/Map%3A_Discover_Psychology_-_A_Brief_Introductory_Text_(Noba)/03%3A_BIOLOGY_AS_THE_BASIS_OF_BEHAVIOR/3.01%3A_The_Brain_and_Nervous_System.txt
By David M. Buss University of Texas at Austin Evolution, or change over time, occurs through the processes of natural and sexual selection. In response to problems in our environment, we adapt both physically and psychologically to ensure our survival and reproduction. Sexual selection theory describes how evolution has shaped us to provide a mating advantage rather than just a survival advantage; it occurs through two distinct pathways: intrasexual competition and intersexual selection. Gene selection theory, the modern explanation behind evolutionary biology, holds that differential gene replication is the defining process of evolutionary change. Evolutionary psychology connects evolutionary principles with modern psychology and focuses primarily on psychological adaptations: changes in the way we think in order to improve our survival and reproduction. Two major evolutionary psychological theories are described: Sexual strategies theory describes the psychology of human mating strategies and the ways in which women and men differ in those strategies. Error management theory describes the evolution of adaptive biases in the way we think and make decisions under uncertainty. Learning objectives • Learn what “evolution” means. • Define the primary mechanisms by which evolution takes place. • Identify the two major classes of adaptations. • Define sexual selection and its two primary processes. • Define gene selection theory. • Understand psychological adaptations. • Identify the core premises of sexual strategies theory. • Identify the core premises of error management theory, and provide two empirical examples of adaptive cognitive biases. Introduction If you have ever been on a first date, you’re probably familiar with the anxiety of trying to figure out what clothes to wear or what perfume or cologne to put on. In fact, you may even consider flossing your teeth for the first time all year. When considering why you put in all this work, you probably recognize that you’re doing it to impress the other person. But how did you learn these particular behaviors? Where did you get the idea that a first date should be at a nice restaurant or someplace unique? It is possible that we have been taught these behaviors by observing others. It is also possible, however, that these behaviors—the fancy clothes, the expensive restaurant—are biologically programmed into us. That is, just as peacocks display their feathers to show how attractive they are, or some lizards do push-ups to show how strong they are, when we style our hair or bring a gift to a date, we’re trying to communicate to the other person: “Hey, I’m a good mate! Choose me! Choose me!” However, we all know that our ancestors hundreds of thousands of years ago weren’t driving sports cars or wearing designer clothes to attract mates. So how could someone ever say that such behaviors are “biologically programmed” into us? Well, even though our ancestors might not have been doing these specific actions, these behaviors are the result of the same driving force: the powerful influence of evolution. Yes, evolution—certain traits and behaviors developing over time because they are advantageous to our survival. In the case of dating, doing something like offering a gift might represent more than a nice gesture. Just as chimpanzees will give food to mates to show they can provide for them, when you offer gifts to your dates, you are communicating that you have the money or “resources” to help take care of them.
And even though the person receiving the gift may not realize it, the same evolutionary forces are influencing his or her behavior as well. The receiver of the gift evaluates not only the gift but also the gift-giver’s clothes, physical appearance, and many other qualities to determine whether the individual is a suitable mate. But because these evolutionary processes are hardwired into us, it is easy to overlook their influence. To broaden your understanding of evolutionary processes, this module will present some of the most important elements of evolution as they impact psychology. Evolutionary theory helps us piece together the story of how we humans have prospered. It also helps to explain why we behave as we do on a daily basis in our modern world: why we bring gifts on dates, why we get jealous, why we crave our favorite foods, why we protect our children, and so on. Evolution may seem like a historical concept that applies only to our ancient ancestors but, in truth, it is still very much a part of our modern daily lives. Basics of Evolutionary Theory Evolution simply means change over time. Many think of evolution as the development of traits and behaviors that allow us to survive this “dog-eat-dog” world, like strong leg muscles to run fast, or fists to punch and defend ourselves. However, physical survival is only important if it eventually contributes to successful reproduction. That is, even if you live to be 100 years old, if you fail to mate and produce children, your genes will die with your body. Thus, reproductive success, not survival success, is the engine of evolution by natural selection. Every mating success by one person means the loss of a mating opportunity for another. Yet every living human being is an evolutionary success story. Each of us is descended from a long and unbroken line of ancestors who triumphed over others in the struggle to survive (at least long enough to mate) and reproduce. However, in order for our genes to endure over time—to survive harsh climates, to defeat predators—we have inherited adaptive psychological processes designed to ensure success. At the broadest level, we can think of organisms, including humans, as having two large classes of adaptations—or traits and behaviors that evolved over time to increase our reproductive success. The first class of adaptations are called survival adaptations: mechanisms that helped our ancestors handle the “hostile forces of nature.” For example, in order to survive very hot temperatures, we developed sweat glands to cool ourselves. In order to survive very cold temperatures, we developed shivering mechanisms (the speedy contraction and expansion of muscles to produce warmth). Other examples of survival adaptations include developing a craving for fats and sugars, encouraging us to seek out particular foods rich in fats and sugars that keep us going longer during food shortages. Some threats, such as snakes, spiders, darkness, heights, and strangers, often produce fear in us, which encourages us to avoid them and thereby stay safe. These are also examples of survival adaptations. However, all of these adaptations are for physical survival, whereas the second class of adaptations are for reproduction, and help us compete for mates. These adaptations are described in an evolutionary theory proposed by Charles Darwin, called sexual selection theory.
Sexual Selection Theory Darwin noticed that there were many traits and behaviors of organisms that could not be explained by “survival selection.” For example, the brilliant plumage of peacocks should actually lower their rates of survival. That is, the peacocks’ feathers act like a neon sign to predators, advertising “Easy, delicious dinner here!” But if these bright feathers only lower peacocks’ chances at survival, why do they have them? The same can be asked of similar characteristics of other animals, such as the large antlers of male stags or the wattles of roosters, which also seem to be unfavorable to survival. Again, if these traits only make the animals less likely to survive, why did they develop in the first place? And how have these animals continued to survive with these traits over thousands and thousands of years? Darwin’s answer to this conundrum was the theory of sexual selection: the evolution of characteristics, not because of survival advantage, but because of mating advantage. Sexual selection occurs through two processes. The first, intrasexual competition, occurs when members of one sex compete against each other, and the winner gets to mate with a member of the opposite sex. Male stags, for example, battle with their antlers, and the winner (often the stronger one with larger antlers) gains mating access to the female. That is, even though large antlers make it harder for the stags to run through the forest and evade predators (which lowers their survival success), they provide the stags with a better chance of attracting a mate (which increases their reproductive success). Similarly, human males sometimes also compete against each other in physical contests: boxing, wrestling, karate, or group-on-group sports, such as football. Even though engaging in these activities poses a "threat" to their survival success, as with the stag, the victors are often more attractive to potential mates, increasing their reproductive success. Thus, whatever qualities lead to success in intrasexual competition are then passed on with greater frequency due to their association with greater mating success. The second process of sexual selection is preferential mate choice, also called intersexual selection. In this process, if members of one sex are attracted to certain qualities in mates—such as brilliant plumage, signs of good health, or even intelligence—those desired qualities get passed on in greater numbers, simply because their possessors mate more often. For example, the colorful plumage of peacocks exists due to a long evolutionary history of peahens’ (the term for female peacocks) attraction to males with brilliantly colored feathers. In all sexually-reproducing species, adaptations in both sexes (males and females) exist due to survival selection and sexual selection. However, unlike other animals where one sex has dominant control over mate choice, humans have “mutual mate choice.” That is, both women and men typically have a say in choosing their mates. And both mates value qualities such as kindness, intelligence, and dependability that are beneficial to long-term relationships—qualities that make good partners and good parents. Gene Selection Theory In modern evolutionary theory, all evolutionary processes boil down to an organism’s genes. Genes are the basic “units of heredity,” or the information that is passed along in DNA that tells the cells and molecules how to “build” the organism and how that organism should behave. 
Genes that are better able to encourage the organism to reproduce, and thus replicate themselves in the organism’s offspring, have an advantage over competing genes that are less able. For example, take female sloths: In order to attract a mate, they will scream as loudly as they can, to let potential mates know where they are in the thick jungle. Now, consider two types of genes in female sloths: one gene that allows them to scream extremely loudly, and another that only allows them to scream moderately loudly. In this case, the sloth with the gene that allows her to shout louder will attract more mates—increasing reproductive success—which ensures that her genes are more readily passed on than those of the quieter sloth. Essentially, genes can boost their own replicative success in two basic ways. First, they can influence the odds for survival and reproduction of the organism they are in (individual reproductive success or fitness—as in the example with the sloths). Second, genes can also influence the organism to help other organisms who also likely contain those genes—known as “genetic relatives”—to survive and reproduce (which is called inclusive fitness). For example, why do human parents tend to help their own kids with the financial burdens of a college education and not the kids next door? Well, having a college education increases one’s attractiveness to other mates, which increases one’s likelihood for reproducing and passing on genes. And because parents’ genes are in their own children (and not the neighborhood children), funding their children’s educations increases the likelihood that the parents’ genes will be passed on. Understanding gene replication is the key to understanding modern evolutionary theory. It also fits well with many evolutionary psychological theories. However, for the time being, we’ll ignore genes and focus primarily on actual adaptations that evolved because they helped our ancestors survive and/or reproduce. Evolutionary Psychology Evolutionary psychology aims the lens of modern evolutionary theory on the workings of the human mind. It focuses primarily on psychological adaptations: mechanisms of the mind that have evolved to solve specific problems of survival or reproduction. These kinds of adaptations are in contrast to physiological adaptations, which are adaptations that occur in the body as a consequence of one’s environment. One example of a physiological adaptation is how our skin makes calluses. First, there is an “input,” such as repeated friction to the skin on the bottom of our feet from walking. Second, there is a “procedure,” in which the skin grows new skin cells at the afflicted area. Third, an actual callus forms as an “output” to protect the underlying tissue—the final outcome of the physiological adaptation (i.e., tougher skin to protect repeatedly scraped areas). On the other hand, a psychological adaptation is a development or change of a mechanism in the mind. For example, take sexual jealousy. First, there is an “input,” such as a romantic partner flirting with a rival. Second, there is a “procedure,” in which the person evaluates the threat the rival poses to the romantic relationship. Third, there is a behavioral output, which might range from vigilance (e.g., snooping through a partner’s email) to violence (e.g., threatening the rival). Evolutionary psychology is fundamentally an interactionist framework, or a theory that takes into account multiple factors when determining the outcome. 
For example, jealousy, like a callus, doesn’t simply pop up out of nowhere. There is an “interaction” between the environmental trigger (e.g., the flirting; the repeated rubbing of the skin) and the initial response (e.g., evaluation of the flirter’s threat; the forming of new skin cells) to produce the outcome. In evolutionary psychology, culture also has a major effect on psychological adaptations. For example, status within one’s group is important in all cultures for achieving reproductive success, because higher status makes someone more attractive to mates. In individualistic cultures, such as the United States, status is heavily determined by individual accomplishments. But in more collectivist cultures, such as Japan, status is more heavily determined by contributions to the group and by that group’s success. For example, consider a group project. If you were to put in most of the effort on a successful group project, the culture in the United States reinforces the psychological adaptation to try to claim that success for yourself (because individual achievements are rewarded with higher status). However, the culture in Japan reinforces the psychological adaptation to attribute that success to the whole group (because collective achievements are rewarded with higher status). Another example of cultural input is the importance of virginity as a desirable quality for a mate. Cultural norms that advise against premarital sex persuade people to ignore their own basic interests because they know that virginity will make them more attractive marriage partners. Evolutionary psychology, in short, does not predict rigid robotic-like “instincts.” That is, there isn’t one rule that works all the time. Rather, evolutionary psychology studies flexible, environmentally-connected and culturally-influenced adaptations that vary according to the situation. Psychological adaptations are hypothesized to be wide-ranging, and include food preferences, habitat preferences, mate preferences, and specialized fears. These psychological adaptations also include many traits that improve people's ability to live in groups, such as the desire to cooperate and make friends, or the inclination to spot and avoid frauds, punish rivals, establish status hierarchies, nurture children, and help genetic relatives. Research programs in evolutionary psychology develop and empirically test predictions about the nature of psychological adaptations. Below, we highlight a few evolutionary psychological theories and their associated research approaches. Sexual Strategies Theory Sexual strategies theory is based on sexual selection theory. It proposes that humans have evolved a list of different mating strategies, both short-term and long-term, that vary depending on culture, social context, parental influence, and personal mate value (desirability in the “mating market”). In its initial formulation, sexual strategies theory focused on the differences between men and women in mating preferences and strategies (Buss & Schmitt, 1993). It started by looking at the minimum parental investment needed to produce a child. For women, even the minimum investment is significant: after becoming pregnant, they have to carry that child for nine months inside of them. For men, on the other hand, the minimum investment to produce the same child is considerably smaller—simply the act of sex. These differences in parental investment have an enormous impact on sexual strategies. 
For a woman, the risks associated with making a poor mating choice are high. She might get pregnant by a man who will not help to support her and her children, or who might have poor-quality genes. And because the stakes are higher for a woman, wise mating decisions for her are much more valuable. For men, on the other hand, the need to focus on making wise mating decisions isn’t as important. That is, unlike women, men 1) don’t biologically have the child growing inside of them for nine months, and 2) do not have as high a cultural expectation to raise the child. This logic leads to a powerful set of predictions: In short-term mating, women will likely be choosier than men (because the costs of getting pregnant are so high), while men, on average, will likely engage in more casual sexual activities (because this cost is greatly lessened). Due to this, men will sometimes deceive women about their long-term intentions for the benefit of short-term sex, and men are more likely than women to lower their mating standards for short-term mating situations. An extensive body of empirical evidence supports these and related predictions (Buss & Schmitt, 2011). Men express a desire for a larger number of sex partners than women do. They let less time elapse before seeking sex. They are more willing to consent to sex with strangers and are less likely to require emotional involvement with their sex partners. They have more frequent sexual fantasies and fantasize about a larger variety of sex partners. They are more likely to regret missed sexual opportunities. And they lower their standards in short-term mating, showing a willingness to mate with a larger variety of women as long as the costs and risks are low. However, in situations where both the man and woman are interested in long-term mating, both sexes tend to invest substantially in the relationship and in their children. In these cases, the theory predicts that both sexes will be extremely choosy when pursuing a long-term mating strategy. Much empirical research supports this prediction, as well. In fact, the qualities women and men generally look for when choosing long-term mates are very similar: both want mates who are intelligent, kind, understanding, healthy, dependable, honest, loyal, loving, and adaptable. Nonetheless, women and men do differ in their preferences for a few key qualities in long-term mating, because of somewhat distinct adaptive problems. Modern women have inherited an evolved desire for mates who possess resources, who have qualities linked with acquiring resources (e.g., ambition, wealth, industriousness), and who are willing to share those resources with them. On the other hand, men more strongly desire youth and health in women, as both are cues to fertility. These male and female differences are universal in humans. They were first documented in 37 different cultures, from Australia to Zambia (Buss, 1989), and have been replicated by dozens of researchers in dozens of additional cultures (for summaries, see Buss, 2012). As we know, though, even though we have these mating preferences (e.g., men with resources; fertile women), people don’t always get what they want. There are countless other factors that influence whom people ultimately select as their mates.
For example, the sex ratio (the percentage of men to women in the mating pool), cultural practices (such as arranged marriages, which inhibit individuals’ freedom to act on their preferred mating strategies), the strategies of others (e.g., if everyone else is pursuing short-term sex, it’s more difficult to pursue a long-term mating strategy), and many others all influence who we select as our mates. Sexual strategies theory—anchored in sexual selection theory—predicts specific similarities and differences in men and women’s mating preferences and strategies. Whether we seek short-term or long-term relationships, many personality, social, cultural, and ecological factors will all influence who our partners will be. Error Management Theory Error management theory (EMT) deals with the evolution of how we think, make decisions, and evaluate uncertain situations—that is, situations where there’s no clear answer for how we should behave (Haselton & Buss, 2000; Haselton, Nettle, & Andrews, 2005). Consider, for example, walking through the woods at dusk. You hear a rustle in the leaves on the path in front of you. It could be a snake. Or, it could just be the wind blowing the leaves. Because you can’t really tell why the leaves rustled, it’s an uncertain situation. The important question then is, what are the costs of errors in judgment? That is, if you conclude that it’s a dangerous snake and avoid the leaves, the costs are minimal (i.e., you simply make a short detour around them). However, if you assume the leaves are safe and simply walk over them—when in fact it is a dangerous snake—the decision could cost you your life. Now, think about our evolutionary history and how generation after generation was confronted with similar decisions, where one option had low cost but great reward (walking around the leaves and not getting bitten) and the other had a low reward but high cost (walking through the leaves and getting bitten). These kinds of choices are called “cost asymmetries.” If during our evolutionary history we encountered decisions like these generation after generation, over time an adaptive bias would be created: we would make sure to err in favor of the least costly (in this case, least dangerous) option (e.g., walking around the leaves). To put it another way, EMT predicts that whenever uncertain situations present us with a safer versus more dangerous decision, we will psychologically adapt to prefer choices that minimize the cost of errors. EMT is a general evolutionary psychological theory that can be applied to many different domains of our lives, but a specific example of it is the visual descent illusion. To illustrate: Have you ever thought it would be no problem to jump off of a ledge, but as soon as you stood up there, it suddenly looked much higher than you thought? The visual descent illusion (Jackson & Cormack, 2008) states that people will overestimate the distance when looking down from a height (compared to looking up) so that they will be especially wary of falling from great heights—which would result in injury or death. Another example of EMT is the auditory looming bias: Have you ever noticed how an ambulance seems closer when it’s coming toward you, but suddenly seems far away once it has passed? With the auditory looming bias, people overestimate how close objects are when the sound is moving toward them compared to when it is moving away from them. From our evolutionary history, humans learned, “It’s better to be safe than sorry.”
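To make the cost-asymmetry logic concrete, here is a rough expected-cost sketch of the rustling-leaves example. The formula is just a standard expected-value comparison, and the numbers are purely hypothetical rather than taken from the module. Let \(p\) be the probability that the rustle really is a snake, \(c_{\text{detour}}\) the small cost of walking around the leaves, and \(C_{\text{bite}}\) the large cost of being bitten. Detouring is the better bet whenever its cost is less than the expected cost of walking through:

\[ c_{\text{detour}} < p \cdot C_{\text{bite}} \quad\Longleftrightarrow\quad p > \frac{c_{\text{detour}}}{C_{\text{bite}}} \]

If, say, \(C_{\text{bite}} = 1{,}000\) cost units and \(c_{\text{detour}} = 1\), then detouring pays off whenever \(p > 1/1{,}000 = 0.001\). Even when a snake is very unlikely, the “safe” error is the cheaper one on average, which is the quantitative sense in which “better safe than sorry” can be an adaptive rule.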
Therefore, if we think that a threat is closer to us when it’s moving toward us (because it seems louder), we will be quicker to act and escape. In this regard, there may be times when we run away when we don’t need to (a false alarm), but wasting that time is a less costly mistake than not acting at all when a real threat does exist. EMT has also been used to predict adaptive biases in the domain of mating. Consider something as simple as a smile. In one case, a smile from a potential mate could be a sign of sexual or romantic interest. On the other hand, it may just signal friendliness. Because of the costs to men of missing out on chances for reproduction, EMT predicts that men have a sexual overperception bias: they often misread sexual interest from a woman, when really it’s just a friendly smile or touch. In the mating domain, the sexual overperception bias is one of the best-documented phenomena. It’s been shown in studies in which men and women rated the sexual interest between people in photographs and videotaped interactions. It has also been shown in the laboratory with participants engaging in actual “speed dating,” where the men interpret sexual interest from the women more often than the women actually intend it (Perilloux, Easton, & Buss, 2012). In short, EMT predicts that men, more than women, will over-infer sexual interest based on minimal cues, and empirical research confirms this adaptive mating bias. Conclusion Sexual strategies theory and error management theory are two evolutionary psychological theories that have received much empirical support from dozens of independent researchers. But there are many other evolutionary psychological theories, such as social exchange theory, that also make predictions about our modern-day behavior and preferences. The merits of each evolutionary psychological theory, however, must be evaluated separately and treated like any scientific theory. That is, we should only trust their predictions and claims to the extent they are supported by scientific studies. However, even if a theory is scientifically grounded, just because a psychological adaptation was advantageous in our history, it doesn’t mean it’s still useful today. For example, even though women may have preferred men with resources generations ago, our modern society has advanced such that these preferences are no longer apt or necessary. Nonetheless, it’s important to consider how our evolutionary history has shaped our automatic or “instinctual” desires and reflexes of today, so that we can better shape them for the future. Outside Resources FAQs http://www.anth.ucsb.edu/projects/human/evpsychfaq.html Web: Articles and books on evolutionary psychology http://homepage.psy.utexas.edu/homep...Group/BussLAB/ Web: Main international scientific organization for the study of evolution and human behavior, HBES http://www.hbes.com/ Discussion Questions 1. How does change take place over time in the living world? 2. Which two potential psychological adaptations to problems of survival are not discussed in this module? 3. What are the psychological and behavioral implications of the fact that women bear heavier costs to produce a child than men do? 4. Can you formulate a hypothesis about an error management bias in the domain of social interaction? Vocabulary Adaptations Evolved solutions to problems that historically contributed to reproductive success.
Error management theory (EMT) A theory of selection under conditions of uncertainty in which recurrent cost asymmetries of judgment or inference favor the evolution of adaptive cognitive biases that function to minimize the more costly errors. Evolution Change over time. Gene Selection Theory The modern theory of evolution by selection by which differential gene replication is the defining process of evolutionary change. Intersexual selection A process of sexual selection by which evolution (change) occurs as a consequence of the mate preferences of one sex exerting selection pressure on members of the opposite sex. Intrasexual competition A process of sexual selection by which members of one sex compete with each other, and the victors gain preferential mating access to members of the opposite sex. Natural selection Differential reproductive success as a consequence of differences in heritable attributes. Psychological adaptations Mechanisms of the mind that evolved to solve specific problems of survival or reproduction; conceptualized as information processing devices. Sexual selection The evolution of characteristics because of the mating advantage they give organisms. Sexual strategies theory A comprehensive evolutionary theory of human mating that defines the menu of mating strategies humans pursue (e.g., short-term casual sex, long-term committed mating), the adaptive problems women and men face when pursuing these strategies, and the evolved solutions to these mating problems.
textbooks/socialsci/Psychology/Introductory_Psychology/Map%3A_Discover_Psychology_-_A_Brief_Introductory_Text_(Noba)/03%3A_BIOLOGY_AS_THE_BASIS_OF_BEHAVIOR/3.02%3A_Evolutionary_Theories_in_Psychology.txt
By Eric Turkheimer University of Virginia People have a deep intuition about what has been called the “nature–nurture question.” Some aspects of our behavior feel as though they originate in our genetic makeup, while others feel like the result of our upbringing or our own hard work. The scientific field of behavior genetics attempts to study these differences empirically, either by examining similarities among family members with different degrees of genetic relatedness, or, more recently, by studying differences in the DNA of people with different behavioral traits. The scientific methods that have been developed are ingenious, but often inconclusive. Many of the difficulties encountered in the empirical science of behavior genetics turn out to be conceptual, and our intuitions about nature and nurture get more complicated the harder we think about them. In the end, it is an oversimplification to ask how “genetic” some particular behavior is. Genes and environments always combine to produce behavior, and the real science is in the discovery of how they combine for a given behavior. Learning objectives • Understand what the nature–nurture debate is and why the problem fascinates us. • Understand why nature–nurture questions are difficult to study empirically. • Know the major research designs that can be used to study nature–nurture questions. • Appreciate the complexities of nature–nurture and why questions that seem simple turn out not to have simple answers. Introduction There are three related problems at the intersection of philosophy and science that are fundamental to our understanding of our relationship to the natural world: the mind–body problem, the free will problem, and the nature–nurture problem. These great questions have a lot in common. Everyone, even those without much knowledge of science or philosophy, has opinions about the answers to these questions that come simply from observing the world we live in. Our feelings about our relationship with the physical and biological world often seem incomplete. We are in control of our actions in some ways, but at the mercy of our bodies in others; it feels obvious that our consciousness is some kind of creation of our physical brains; at the same time, we sense that our awareness must go beyond just the physical. This incomplete knowledge of our relationship with nature leaves us fascinated and a little obsessed, like a cat that climbs into a paper bag and then out again, over and over, mystified every time by a relationship between inner and outer that it can see but can’t quite understand. It may seem obvious that we are born with certain characteristics while others are acquired, and yet of the three great questions about humans’ relationship with the natural world, only nature–nurture gets referred to as a “debate.” In the history of psychology, no other question has caused so much controversy and offense: We are so concerned with nature–nurture because our very sense of moral character seems to depend on it. While we may admire the athletic skills of a great basketball player, we think of his height as simply a gift, a payoff in the “genetic lottery.” For the same reason, no one blames a short person for his height or attributes someone’s congenital disability to poor decisions: To state the obvious, it’s “not their fault.” But we do praise the concert violinist (and perhaps her parents and teachers as well) for her dedication, just as we condemn cheaters, slackers, and bullies for their bad behavior.
The problem is, most human characteristics aren’t usually as clear-cut as height or instrument-mastery, affirming our nature–nurture expectations strongly one way or the other. In fact, even the great violinist might have some inborn qualities—perfect pitch, or long, nimble fingers—that support and reward her hard work. And the basketball player might have eaten a diet while growing up that promoted his genetic tendency for being tall. When we think about our own qualities, they seem under our control in some respects, yet beyond our control in others. And often the traits that don’t seem to have an obvious cause are the ones that concern us the most and are far more personally significant. What about how much we drink or worry? What about our honesty, or religiosity, or sexual orientation? They all come from that uncertain zone, neither fixed by nature nor totally under our own control. One major problem with answering nature-nurture questions about people is, how do you set up an experiment? In nonhuman animals, there are relatively straightforward experiments for tackling nature–nurture questions. Say, for example, you are interested in aggressiveness in dogs. You want to test for the more important determinant of aggression: being born to aggressive dogs or being raised by them. You could mate two aggressive dogs—angry Chihuahuas—together, and mate two nonaggressive dogs—happy beagles—together, then switch half the puppies from each litter between the different sets of parents to raise. You would then have puppies born to aggressive parents (the Chihuahuas) but being raised by nonaggressive parents (the Beagles), and vice versa, in litters that mirror each other in puppy distribution. The big questions are: Would the Chihuahua parents raise aggressive beagle puppies? Would the beagle parents raise nonaggressive Chihuahua puppies? Would the puppies’ nature win out, regardless of who raised them? Or... would the result be a combination of nature and nurture? Much of the most significant nature–nurture research has been done in this way (Scott & Fuller, 1998), and animal breeders have been doing it successfully for thousands of years. In fact, it is fairly easy to breed animals for behavioral traits. With people, however, we can’t assign babies to parents at random, or select parents with certain behavioral characteristics to mate, merely in the interest of science (though history does include horrific examples of such practices, in misguided attempts at “eugenics,” the shaping of human characteristics through intentional breeding). In typical human families, children’s biological parents raise them, so it is very difficult to know whether children act like their parents due to genetic (nature) or environmental (nurture) reasons. Nevertheless, despite our restrictions on setting up human-based experiments, we do see real-world examples of nature-nurture at work in the human sphere—though they only provide partial answers to our many questions. The science of how genes and environments work together to influence behavior is called behavioral genetics. The easiest opportunity we have to observe this is the adoption study. When children are put up for adoption, the parents who give birth to them are no longer the parents who raise them. 
This setup isn’t quite the same as the experiments with dogs (children aren’t assigned to random adoptive parents in order to suit the particular interests of a scientist), but adoption still tells us some interesting things, or at least confirms some basic expectations. For instance, if the biological child of tall parents were adopted into a family of short people, do you suppose the child’s growth would be affected? What about the biological child of a Spanish-speaking family adopted at birth into an English-speaking family? What language would you expect the child to speak? And what might these outcomes tell you about the difference between height and language in terms of nature–nurture? Another option for observing nature–nurture in humans involves twin studies. There are two types of twins: monozygotic (MZ) and dizygotic (DZ). Monozygotic twins, also called “identical” twins, result from a single zygote (fertilized egg) and have the same DNA. They are essentially clones. Dizygotic twins, also known as “fraternal” twins, develop from two zygotes and share 50% of their DNA. Fraternal twins are ordinary siblings who happen to have been born at the same time. To analyze nature–nurture using twins, we compare the similarity of MZ and DZ pairs. Sticking with the features of height and spoken language, let’s take a look at how nature and nurture apply: Identical twins, unsurprisingly, are almost perfectly similar for height. The heights of fraternal twins, however, are like any other sibling pairs: more similar to each other than to people from other families, but hardly identical. This contrast between twin types gives us a clue about the role genetics plays in determining height. Now consider spoken language. If one identical twin speaks Spanish at home, the co-twin with whom she is raised almost certainly does too. But the same would be true for a pair of fraternal twins raised together. In terms of spoken language, fraternal twins are just as similar as identical twins, so it appears that the genetic match of identical twins doesn’t make much difference. Twin and adoption studies are two instances of a much broader class of methods for observing nature–nurture called quantitative genetics, the scientific discipline in which similarities among individuals are analyzed based on how biologically related they are. We can do these studies with siblings and half-siblings, cousins, twins who have been separated at birth and raised separately (Bouchard, Lykken, McGue, & Segal, 1990; such twins are very rare and play a smaller role than is commonly believed in the science of nature–nurture), or with entire extended families (see Plomin, DeFries, Knopik, & Neiderhiser, 2012, for a complete introduction to research methods relevant to nature–nurture). For better or for worse, contentions about nature–nurture have intensified because quantitative genetics produces a number called a heritability coefficient, varying from 0 to 1, that is meant to provide a single measure of genetics’ influence on a trait. In a general way, a heritability coefficient measures how strongly differences among individuals are related to differences among their genes. But beware: Heritability coefficients, although simple to compute, are deceptively difficult to interpret.
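To see where such a number comes from, it helps to look at the arithmetic behind the simplest twin-based estimate. Falconer’s classic approximation estimates heritability as twice the difference between the identical-twin correlation and the fraternal-twin correlation for a trait. The short Python sketch below applies that formula to a few invented correlations; real twin analyses fit far more elaborate statistical models, so treat this strictly as an illustration of the logic:

```python
# Falconer's classic approximation: h^2 ~= 2 * (r_MZ - r_DZ).
# The twin correlations below are invented for illustration only; real
# studies estimate them from large samples and fit fuller models.

def falconer_heritability(r_mz: float, r_dz: float) -> float:
    """Estimate heritability as twice the difference between identical (MZ)
    and fraternal (DZ) twin correlations, clipped to the 0-1 range."""
    h2 = 2 * (r_mz - r_dz)
    return max(0.0, min(1.0, h2))

# Hypothetical traits with made-up twin correlations (r_MZ, r_DZ)
examples = {
    "height-like trait": (0.90, 0.50),
    "spoken language":   (0.99, 0.99),   # MZ and DZ twins equally similar
    "personality-like":  (0.45, 0.25),
}

for trait, (r_mz, r_dz) in examples.items():
    print(f"{trait}: estimated h^2 = {falconer_heritability(r_mz, r_dz):.2f}")
```

Notice that when fraternal twins are just as similar as identical twins, as in the spoken-language example, the estimate falls to zero, matching the intuition developed in the preceding paragraphs.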
Nevertheless, numbers that provide simple answers to complicated questions tend to have a strong influence on the human imagination, and a great deal of time has been spent discussing whether the heritability of intelligence or personality or depression is equal to one number or another. One reason nature–nurture continues to fascinate us so much is that we live in an era of great scientific discovery in genetics, comparable to the times of Copernicus, Galileo, and Newton with regard to astronomy and physics. Every day, it seems, new discoveries are made, new possibilities proposed. When Francis Galton first started thinking about nature–nurture in the late 19th century, he was strongly influenced by his cousin, Charles Darwin, but genetics per se was unknown. Mendel’s famous work with peas, conducted at about the same time, went largely unnoticed for decades; quantitative genetics was developed in the 1920s; the structure of DNA was described by Watson and Crick in the 1950s; the human genome was completely sequenced at the turn of the 21st century; and we are now on the verge of being able to obtain the specific DNA sequence of anyone at a relatively low cost. No one knows what this new genetic knowledge will mean for the study of nature–nurture, but as we will see in the next section, answers to nature–nurture questions have turned out to be far more difficult and mysterious than anyone imagined. What Have We Learned About Nature–Nurture? It would be satisfying to be able to say that nature–nurture studies have given us conclusive and complete evidence about where traits come from, with some traits clearly resulting from genetics and others almost entirely from environmental factors, such as childrearing practices and personal will; but that is not the case. Instead, everything has turned out to have some footing in genetics. The more genetically related people are, the more similar they are—for everything: height, weight, intelligence, personality, mental illness, etc. Sure, it seems like common sense that some traits have a genetic bias. For example, adopted children resemble their biological parents even if they have never met them, and identical twins are more similar to each other than are fraternal twins. And while certain psychological traits, such as personality or mental illness (e.g., schizophrenia), seem reasonably influenced by genetics, it turns out that the same is true for political attitudes, how much television people watch (Plomin, Corley, DeFries, & Fulker, 1990), and whether or not they get divorced (McGue & Lykken, 1992). It may seem surprising, but genetic influence on behavior is a relatively recent discovery. In the middle of the 20th century, psychology was dominated by the doctrine of behaviorism, which held that behavior could only be explained in terms of environmental factors. Psychiatry concentrated on psychoanalysis, which probed for roots of behavior in individuals’ early life histories. The truth is, neither behaviorism nor psychoanalysis is incompatible with genetic influences on behavior, and neither Freud nor Skinner was naive about the importance of organic processes in behavior.
Nevertheless, in their day it was widely thought that children’s personalities were shaped entirely by imitating their parents’ behavior, and that schizophrenia was caused by certain kinds of “pathological mothering.” Whatever the outcome of our broader discussion of nature–nurture, the basic fact that the best predictors of an adopted child’s personality or mental health are found in the biological parents he or she has never met, rather than in the adoptive parents who raised him or her, presents a significant challenge to purely environmental explanations of personality or psychopathology. The message is clear: You can’t leave genes out of the equation. But keep in mind, no behavioral traits are completely inherited, so you can’t leave the environment out altogether, either. Trying to untangle the various ways nature–nurture influences human behavior can be messy, and often common-sense notions can get in the way of good science. One very significant contribution of behavioral genetics, one that has changed psychology for good, is helpful to keep in mind: When your subjects are biologically related, no matter how clearly a situation may seem to point to environmental influence, it is never safe to interpret a behavior as wholly the result of nurture without further evidence. For example, when presented with data showing that children whose mothers read to them often are likely to have better reading scores in third grade, it is tempting to conclude that reading to your kids out loud is important to success in school; this may well be true, but the study as described is inconclusive, because there are genetic as well as environmental pathways between the parenting practices of mothers and the abilities of their children. This is a case where “correlation does not imply causation,” as they say. To establish that reading aloud causes success, a scientist can either study the problem in adoptive families (in which the genetic pathway is absent) or find a way to randomly assign children to oral reading conditions. The outcomes of nature–nurture studies have fallen short of our expectations (of establishing clear-cut bases for traits) in many ways. The most disappointing outcome has been the inability to organize traits from more- to less-genetic. As noted earlier, everything has turned out to be at least somewhat heritable (passed down), yet nothing has turned out to be absolutely heritable, and there hasn’t been much consistency as to which traits are more heritable and which are less heritable once other considerations (such as how accurately the trait can be measured) are taken into account (Turkheimer, 2000). The problem is conceptual: The heritability coefficient, and, in fact, the whole quantitative structure that underlies it, does not match up with our nature–nurture intuitions. We want to know how “important” the roles of genes and environment are to the development of a trait, but in focusing on “important” maybe we’re emphasizing the wrong thing. First of all, genes and environment are both crucial to every trait; without genes the environment would have nothing to work on, and, likewise, genes cannot develop in a vacuum. Even more important, because nature–nurture questions look at the differences among people, the cause of a given trait depends not only on the trait itself, but also on the differences in that trait between members of the group being studied. The classic example of the heritability coefficient defying intuition is the trait of having two arms.
No one would argue against the development of arms being a biological, genetic process. But fraternal twins are just as similar for “two-armedness” as identical twins, resulting in a heritability coefficient of zero for the trait of having two arms. Normally, according to the heritability model, this result (a coefficient of zero) would suggest all nurture, no nature, but we know that’s not the case. The reason this result is not a tip-off that arm development is less genetic than we imagine is that people do not vary in the genes related to arm development—which essentially upends the heritability formula. In fact, in this instance, the opposite is likely true: to the extent that people differ in arm number, the difference is likely the result of accidents and, therefore, environmental. For reasons like these, we always have to be very careful when asking nature–nurture questions, especially when we try to express the answer in terms of a single number. The heritability of a trait is not simply a property of that trait, but a property of the trait in a particular context of relevant genes and environmental factors. Another issue with the heritability coefficient is that it divides traits’ determinants into two portions—genes and environment—which are then calculated together for the total variability. This is a little like asking how much of the experience of a symphony comes from the horns and how much from the strings; the ways instruments or genes integrate are more complex than that. It turns out to be the case that, for many traits, genetic differences affect behavior under some environmental circumstances but not others—a phenomenon called gene-environment interaction, or G x E. In one well-known example, Caspi et al. (2002) showed that among maltreated children, those who carried a particular allele of the MAOA gene showed a predisposition to violence and antisocial behavior, while those with other alleles did not; in children who had not been maltreated, however, the allele had no effect. Making matters even more complicated are very recent studies of what is known as epigenetics (see module, “Epigenetics” http://noba.to/37p5cb8v), a process in which the DNA itself is modified by environmental events, and those changes are then transmitted to children. Some common questions about nature–nurture are: how susceptible is a trait to change, how malleable is it, and do we “have a choice” about it? These questions are much more complex than they may seem at first glance. For example, phenylketonuria is an inborn error of metabolism caused by a single gene; it prevents the body from metabolizing phenylalanine. Untreated, it causes intellectual disability and death. But it can be treated effectively by a straightforward environmental intervention: avoiding foods containing phenylalanine. Height seems like a trait firmly rooted in our nature and unchangeable, but the average height of many populations in Asia and Europe has increased significantly in the past 100 years, due to changes in diet and the alleviation of poverty. Even the most modern genetics has not provided definitive answers to nature–nurture questions. When it was first becoming possible to measure the DNA sequences of individual people, it was widely thought that we would quickly progress to finding the specific genes that account for behavioral characteristics, but that hasn’t happened.
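As an aside, the gene-environment interaction (G x E) pattern described above can be made concrete with a toy simulation. Every number, label, and effect size in the sketch below is invented; the point is only to show what “a genetic difference that matters in one environment but not another” looks like when group averages are compared:

```python
# Toy illustration of gene-environment interaction (G x E): a genetic
# difference shows up only in one environment. All values are invented.
import random

random.seed(1)

def simulated_outcome(risk_allele: bool, harsh_environment: bool) -> float:
    """Return a made-up behavioral score. The allele raises the score only
    when the environment is harsh; that conditional effect is the interaction."""
    score = random.gauss(0, 1)        # individual variation ("noise")
    if harsh_environment:
        score += 1.0                  # main effect of the environment
        if risk_allele:
            score += 1.5              # the allele matters ONLY here
    return score

def group_mean(risk_allele: bool, harsh: bool, n: int = 5000) -> float:
    return sum(simulated_outcome(risk_allele, harsh) for _ in range(n)) / n

for harsh in (False, True):
    for allele in (False, True):
        print(f"harsh environment={harsh!s:5}  risk allele={allele!s:5}  "
              f"mean score = {group_mean(allele, harsh):+.2f}")
```

In the simulated output, the two allele groups look essentially identical in the benign environment and differ noticeably in the harsh one, which is the signature of an interaction rather than a simple additive effect.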
There are a few rare genes that have been found to have significant (almost always negative) effects, such as the single gene that causes Huntington’s disease, or the Apolipoprotein gene that causes early onset dementia in a small percentage of Alzheimer’s cases. Aside from these rare genes of great effect, however, the genetic impact on behavior is broken up over many genes, each with very small effects. For most behavioral traits, the effects are so small and distributed across so many genes that we have not been able to catalog them in a meaningful way. In fact, the same is true of environmental effects. We know that extreme environmental hardship causes catastrophic effects for many behavioral outcomes, but fortunately extreme environmental hardship is very rare. Within the normal range of environmental events, those responsible for differences (e.g., why some children in a suburban third-grade classroom perform better than others) are much more difficult to grasp. The difficulties with finding clear-cut solutions to nature–nurture problems bring us back to the other great questions about our relationship with the natural world: the mind–body problem and free will. Investigations into what we mean when we say we are aware of something reveal that consciousness is not simply the product of a particular area of the brain, nor does choice turn out to be an orderly activity that we can apply to some behaviors but not others. So it is with nature and nurture: What at first may seem to be a straightforward matter, able to be indexed with a single number, becomes more and more complicated the closer we look. The many questions we can ask about the intersection among genes, environments, and human traits—how sensitive are traits to environmental change, and how common are those influential environments; are parents or culture more relevant; how sensitive are traits to differences in genes, and how much do the relevant genes vary in a particular population; does the trait involve a single gene or a great many genes; is the trait more easily described in genetic or more-complex behavioral terms?—may have different answers, and the answer to one tells us little about the answers to the others. It is tempting to predict that as we come to understand the wide-ranging effects of genetic differences on all human characteristics—especially behavioral ones—our cultural, ethical, legal, and personal ways of thinking about ourselves will have to undergo profound changes in response. Perhaps criminal proceedings will consider genetic background. Parents, presented with the genetic sequence of their children, will be faced with difficult decisions about reproduction. These hopes or fears are often exaggerated. In some ways, our thinking may need to change—for example, when we consider the meaning behind the fundamental American principle that all men are created equal. Human beings differ, and like all evolved organisms they differ genetically. The Declaration of Independence predates Darwin and Mendel, but it is hard to imagine that Jefferson—whose genius encompassed botany as well as moral philosophy—would have been alarmed to learn about the genetic diversity of organisms. One of the most important things modern genetics has taught us is that almost all human behavior is too complex to be nailed down, even from the most complete genetic information, unless we’re looking at identical twins.
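The phrase “many genes, each with very small effects” can likewise be illustrated with a toy simulation. In the sketch below (all quantities invented), a trait is built from hundreds of hypothetical variants with tiny effects plus environmental noise; no single variant predicts the trait to any useful degree, even though the variants collectively carry a substantial genetic signal:

```python
# Toy polygenic trait: many variants with tiny invented effects plus noise.
import random

random.seed(0)

N_VARIANTS, N_PEOPLE = 500, 2000
effects = [random.gauss(0, 0.05) for _ in range(N_VARIANTS)]   # tiny effects

def simulate_person():
    genotype = [random.randint(0, 2) for _ in range(N_VARIANTS)]  # 0/1/2 copies
    genetic_value = sum(g * b for g, b in zip(genotype, effects))
    return genotype, genetic_value + random.gauss(0, 1.0)         # + environment

people = [simulate_person() for _ in range(N_PEOPLE)]
traits = [trait for _, trait in people]

def corr(xs, ys):
    """Pearson correlation, written out to keep the sketch dependency-free."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

biggest = max(range(N_VARIANTS), key=lambda i: abs(effects[i]))
r_single = corr([g[biggest] for g, _ in people], traits)
r_score = corr([sum(g[i] * effects[i] for i in range(N_VARIANTS)) for g, _ in people], traits)

print(f"largest single variant vs. trait: r = {r_single:.2f}")
print(f"all variants combined vs. trait:  r = {r_score:.2f}")
```

The combined score predicts the simulated trait reasonably well, while even the largest single variant barely predicts it at all, which is roughly the situation researchers face with real behavioral traits.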
The science of nature and nurture has demonstrated that genetic differences among people are vital to human moral equality, freedom, and self-determination, not opposed to them. As Mordecai Kaplan said about the role of the past in Jewish theology, genetics gets a vote, not a veto, in the determination of human behavior. We should indulge our fascination with nature–nurture while resisting the temptation to oversimplify it. Outside Resources Web: Institute for Behavioral Genetics http://www.colorado.edu/ibg/ Discussion Questions 1. Is your personality more like one of your parents than the other? If you have a sibling, is his or her personality like yours? In your family, how did these similarities and differences develop? What do you think caused them? 2. Can you think of a human characteristic for which genetic differences would play almost no role? Defend your choice. 3. Do you think the time will come when we will be able to predict almost everything about someone by examining their DNA on the day they are born? 4. Identical twins are more similar than fraternal twins for the trait of aggressiveness, as well as for criminal behavior. Do these facts have implications for the courtroom? If it can be shown that a violent criminal had violent parents, should it make a difference in culpability or sentencing? Vocabulary Adoption study A behavior genetic research method that involves comparison of adopted children to their adoptive and biological parents. Behavioral genetics The empirical science of how genes and environments combine to generate behavior. Heritability coefficient An easily misinterpreted statistical construct that purports to measure the role of genetics in the explanation of differences among individuals. Quantitative genetics Scientific and mathematical methods for inferring genetic and environmental processes based on the degree of genetic and environmental similarity among organisms. Twin studies A behavior genetic research method that involves comparison of the similarity of identical (monozygotic; MZ) and fraternal (dizygotic; DZ) twins.
This module discusses gender and its related concepts, including sex, gender roles, gender identity, sexual orientation, and sexism. In addition, this module includes a discussion of differences that exist between males and females and how these real gender differences compare to the stereotypes society holds about gender differences. In fact, there are significantly fewer real gender differences than one would expect relative to the large number of stereotypes about gender differences. This module then discusses theories of how gender roles develop and how they contribute to strong expectations for gender differences. Finally, the module concludes with a discussion of some of the consequences of relying on and expecting gender differences, such as gender discrimination, sexual harassment, and ambivalent sexism. learning objectives • Distinguish gender and sex, as well as gender identity and sexual orientation. • Discuss gender differences that exist, as well as those that do not actually exist. • Understand and explain different theories of how gender roles are formed. • Discuss sexism and its impact on both genders. Introduction Before we discuss gender in detail, it is important to understand what gender actually is. The terms sex and gender are frequently used interchangeably, though they have different meanings. In this context, sex refers to the biological category of male or female, as defined by physical differences in genetic composition and in reproductive anatomy and function. On the other hand, gender refers to the cultural, social, and psychological meanings that are associated with masculinity and femininity (Wood & Eagly, 2002). You can think of “male” and “female” as distinct categories of sex (a person is typically born a male or a female), but “masculine” and “feminine” as continuums associated with gender (everyone has a certain degree of masculine and feminine traits and qualities). Beyond sex and gender, there are a number of related terms that are also often misunderstood. Gender roles are the behaviors, attitudes, and personality traits that are designated as either masculine or feminine in a given culture. It is common to think of gender roles in terms of gender stereotypes, or the beliefs and expectations people hold about the typical characteristics, preferences, and behaviors of men and women. A person’s gender identity refers to their psychological sense of being male or female. In contrast, a person’s sexual orientation is the direction of their emotional and erotic attraction toward members of the opposite sex, the same sex, or both sexes. These are important distinctions, and though we will not discuss each of these terms in detail, it is important to recognize that sex, gender, gender identity, and sexual orientation do not always correspond with one another. A person can be biologically male but have a female gender identity while being attracted to women, or any other combination of identities and orientations. Gender Differences Differences between males and females can be based on (a) actual gender differences (i.e., men and women are actually different in some abilities), (b) gender roles (i.e., differences in how men and women are supposed to act), or (c) gender stereotypes (i.e., differences in how we think men and women are). Sometimes gender stereotypes and gender roles reflect actual gender differences, but sometimes they do not. What are actual gender differences?
In terms of language and language skills, girls develop language skills earlier and know more words than boys; this does not, however, translate into long-term differences. Girls are also more likely than boys to offer praise, to agree with the person they’re talking to, and to elaborate on the other person’s comments; boys, in contrast, are more likely than girls to assert their opinion and offer criticisms (Leaper & Smith, 2004). In terms of temperament, boys are slightly less able to suppress inappropriate responses and slightly more likely to blurt things out than girls (Else-Quest, Hyde, Goldsmith, & Van Hulle, 2006). With respect to aggression, boys exhibit higher rates of unprovoked physical aggression than girls, but no difference in provoked aggression (Hyde, 2005). Some of the biggest differences involve the play styles of children. Boys frequently play organized rough-and-tumble games in large groups, while girls often play less physical activities in much smaller groups (Maccoby, 1998). There are also differences in the rates of depression, with girls much more likely than boys to be depressed after puberty. After puberty, girls are also more likely to be unhappy with their bodies than boys. However, there is considerable variability between individual males and individual females. Also, even when there are mean level differences, the actual size of most of these differences is quite small. This means, knowing someone’s gender does not help much in predicting his or her actual traits. For example, in terms of activity level, boys are considered more active than girls. However, 42% of girls are more active than the average boy (but so are 50% of boys; see Figure 3.4.1 for a depiction of this phenomenon in a comparison of male and female self-esteem). Furthermore, many gender differences do not reflect innate differences, but instead reflect differences in specific experiences and socialization. For example, one presumed gender difference is that boys show better spatial abilities than girls. However, Tzuriel and Egozi (2010) gave girls the chance to practice their spatial skills (by imagining a line drawing was different shapes) and discovered that, with practice, this gender difference completely disappeared. Many domains we assume differ across genders are really based on gender stereotypes and not actual differences. Based on large meta-analyses, the analyses of thousands of studies across more than one million people, research has shown: Girls are not more fearful, shy, or scared of new things than boys; boys are not more angry than girls and girls are not more emotional than boys; boys do not perform better at math than girls; and girls are not more talkative than boys (Hyde, 2005). In the following sections, we’ll investigate gender roles, the part they play in creating these stereotypes, and how they can affect the development of real gender differences. Gender Roles As mentioned earlier, gender roles are well-established social constructions that may change from culture to culture and over time. In American culture, we commonly think of gender roles in terms of gender stereotypes, or the beliefs and expectations people hold about the typical characteristics, preferences, and behaviors of men and women. By the time we are adults, our gender roles are a stable part of our personalities, and we usually hold many gender stereotypes. When do children start to learn about gender? Very early. By their first birthday, children can distinguish faces by gender. 
By their second birthday, they can label others’ gender and even sort objects into gender-typed categories. By the third birthday, children can consistently identify their own gender (see Martin, Ruble, & Szkrybalo, 2002, for a review). At this age, children believe sex is determined by external attributes, not biological attributes. Between 3 and 6 years of age, children learn that gender is constant and can’t change simply by changing external attributes, having developed gender constancy. During this period, children also develop strong and rigid gender stereotypes. Stereotypes can refer to play (e.g., boys play with trucks, and girls play with dolls), traits (e.g., boys are strong, and girls like to cry), and occupations (e.g., men are doctors and women are nurses). These stereotypes stay rigid until children reach about age 8 or 9. Then they develop cognitive abilities that allow them to be more flexible in their thinking about others. How do our gender roles and gender stereotypes develop and become so strong? Many of our gender stereotypes are so strong because we emphasize gender so much in culture (Bigler & Liben, 2007). For example, males and females are treated differently before they are even born. When someone learns of a new pregnancy, the first question asked is “Is it a boy or a girl?” Immediately upon hearing the answer, judgments are made about the child: Boys will be rough and like blue, while girls will be delicate and like pink. Developmental intergroup theory postulates that adults’ heavy focus on gender leads children to pay attention to gender as a key source of information about themselves and others, to seek out any possible gender differences, and to form rigid stereotypes based on gender that are subsequently difficult to change. There are also psychological theories that partially explain how children form their own gender roles after they learn to differentiate based on gender. The first of these theories is gender schema theory. Gender schema theory argues that children are active learners who essentially socialize themselves. In this case, children actively organize others’ behavior, activities, and attributes into gender categories, which are known as schemas. These schemas then affect what children notice and remember later. People of all ages are more likely to remember schema-consistent behaviors and attributes than schema-inconsistent behaviors and attributes. So, people are more likely to remember men, and forget women, who are firefighters. They also misremember schema-inconsistent information. If research participants are shown pictures of someone standing at the stove, they are more likely to remember the person to be cooking if depicted as a woman, and the person to be repairing the stove if depicted as a man. By only remembering schema-consistent information, gender schemas strengthen more and more over time. A second theory that attempts to explain the formation of gender roles in children is social learning theory. Social learning theory argues that gender roles are learned through reinforcement, punishment, and modeling. Children are rewarded and reinforced for behaving in concordance with gender roles and punished for breaking gender roles. In addition, social learning theory argues that children learn many of their gender roles by modeling the behavior of adults and older children and, in doing so, develop ideas about what behaviors are appropriate for each gender. 
Social learning theory has less support than gender schema theory—research shows that parents do reinforce gender-appropriate play, but for the most part treat their male and female children similarly (Lytton & Romney, 1991). Gender Discrimination, Sexism, and Socialization Treating boys and girls, and men and women, differently is both a consequence of gender differences and a cause of gender differences. Differential treatment on the basis of gender is also referred to as gender discrimination and is an inevitable consequence of gender stereotypes. When it is based on unwanted treatment related to sexual behaviors or appearance, it is called sexual harassment. By the time boys and girls reach the end of high school, most have experienced some form of sexual harassment, most commonly in the form of unwanted touching or comments, being the target of jokes, having their body parts rated, or being called names related to sexual orientation. Different treatment by gender begins with parents. A meta-analysis of research from the United States and Canada found that parents most frequently treated sons and daughters differently by encouraging gender-stereotypical activities (Lytton & Romney, 1991). Fathers, more than mothers, are particularly likely to encourage gender-stereotypical play, especially in sons. Parents also talk to their children differently based on stereotypes. For example, parents talk about numbers and counting twice as often with sons as with daughters (Chang, Sandhofer, & Brown, 2011) and talk in more detail about science with sons than with daughters. Parents are also much more likely to discuss emotions with their daughters than with their sons. Children do a large degree of socializing themselves. By age 3, children play in gender-segregated play groups and expect a high degree of conformity. Children who are perceived as gender atypical (i.e., who do not conform to gender stereotypes) are more likely to be bullied and rejected than their more gender-conforming peers. Gender stereotypes typically maintain gender inequalities in society. The concept of ambivalent sexism recognizes the complex nature of gender attitudes, in which women are often associated with both positive and negative qualities (Glick & Fiske, 2001). It has two components. First, hostile sexism refers to negative attitudes toward women as inferior and incompetent relative to men. Second, benevolent sexism refers to the perception that women need to be protected, supported, and adored by men. There has been considerable empirical support for benevolent sexism, possibly because it is seen as more socially acceptable than hostile sexism. Gender stereotypes are found not just in American culture. Across cultures, males tend to be associated with stronger and more active characteristics than females (Best, 2001). In recent years, gender and related concepts have become a common focus of social change and social debate. Many societies, including American society, have seen a rapid change in perceptions of gender roles, media portrayals of gender, and legal trends relating to gender. For example, there has been an increase in children’s toys attempting to cater to both genders (such as Legos marketed to girls), rather than catering to traditional stereotypes. Nationwide, the drastic surge in acceptance of homosexuality and gender questioning has resulted in a rapid push for legal change to keep up with social change.
Laws such as “Don’t Ask, Don’t Tell” and the Defense of Marriage Act (DOMA), both of which were enacted in the 1990s, have met severe resistance on the grounds of being discriminatory toward sexual minority groups and have been accused of unconstitutionality less than 20 years after their implementation. Change in perceptions of gender is also evident in social issues such as sexual harassment, a term that only entered the mainstream mindset with the 1991 Clarence Thomas/Anita Hill scandal. As society’s gender roles and gender restrictions continue to fluctuate, the legal system and the structure of American society will continue to change and adjust.
1920 -- 19th Amendment (women’s suffrage) ratified
1941-1945 -- World War II forces millions of women to enter the workforce
1948 -- Universal Declaration of Human Rights
1963 -- Congress passes Equal Pay Act
1964 -- Congress passes Civil Rights Act, which outlaws sex discrimination
1969 -- Stonewall riots in NYC force gay rights into the American spotlight
1972 -- Congress passes Equal Rights Amendment; Title IX prohibits sex discrimination in schools and sports
1973 -- American Psychiatric Association removes homosexuality from the DSM
1981 -- First woman appointed to the US Supreme Court
1987 -- Average woman earned $0.68 for every $1.00 earned by a man
1992 -- World Health Organization no longer considers homosexuality an illness
1993 -- Supreme Court rules that sexual harassment in the workplace is illegal
2011 -- Don’t Ask, Don’t Tell is repealed, allowing people who identify as gay to serve openly in the US military
2012 -- President Barack Obama becomes the first American president to openly support LGBT rights and marriage equality
Outside Resources Video: Human Sexuality is Complicated Web: Big Think with Professor of Neuroscience Lise Eliot bigthink.com/users/liseeliot Web: Understanding Prejudice: Sexism http://www.understandingprejudice.or...nks/sexism.htm Discussion Questions 1. What are the differences and associations among gender, sex, gender identity, and sexual orientation? 2. Are the gender differences that exist innate (biological) differences or are they caused by other variables? 3. Discuss the theories relating to the development of gender roles and gender stereotypes. Which theory do you support? Why? 4. Using what you’ve read in this module: a. Why do you think gender stereotypes are so inflated compared with actual gender differences? b. Why do you think people continue to believe in such strong gender differences despite evidence to the contrary? 5. Brainstorm additional forms of gender discrimination aside from sexual harassment. Have you seen or experienced gender discrimination personally? 6. How is benevolent sexism detrimental to women, despite appearing positive? Vocabulary Ambivalent sexism A concept of gender attitudes that encompasses both positive and negative qualities. Benevolent sexism The “positive” element of ambivalent sexism, which recognizes that women are perceived as needing to be protected, supported, and adored by men. Developmental intergroup theory A theory that postulates that adults’ focus on gender leads children to pay attention to gender as a key source of information about themselves and others, to seek out possible gender differences, and to form rigid stereotypes based on gender. Gender The cultural, social, and psychological meanings that are associated with masculinity and femininity.
Gender constancy The awareness that gender is constant and does not change simply by changing external attributes; develops between 3 and 6 years of age. Gender discrimination Differential treatment on the basis of gender. Gender identity A person’s psychological sense of being male or female. Gender roles The behaviors, attitudes, and personality traits that are designated as either masculine or feminine in a given culture. Gender schema theory This theory of how children form their own gender roles argues that children actively organize others’ behavior, activities, and attributes into gender categories or schemas. Gender stereotypes The beliefs and expectations people hold about the typical characteristics, preferences, and behaviors of men and women. Hostile sexism The negative element of ambivalent sexism, which includes the attitudes that women are inferior and incompetent relative to men. Schemas The gender categories into which, according to gender schema theory, children actively organize others’ behavior, activities, and attributes. Sex Biological category of male or female as defined by physical differences in genetic composition and in reproductive anatomy and function. Sexual harassment A form of gender discrimination based on unwanted treatment related to sexual behaviors or appearance. Sexual orientation Refers to the direction of emotional and erotic attraction toward members of the opposite sex, the same sex, or both sexes. Social learning theory This theory of how children form their own gender roles argues that gender roles are learned through reinforcement, punishment, and modeling.
• 4.1: Cognitive Development in Childhood This module examines what cognitive development is, major theories about how it occurs, the roles of nature and nurture, whether it is continuous or discontinuous, and how research in the area is being used to improve education. • 4.2: Social and Personality Development in Childhood Childhood social and personality development emerges through the interaction of social influences, biological maturation, and the child’s representations of the social world and the self. This interaction is illustrated in a discussion of the influence of significant relationships, the development of social understanding, the growth of personality, and the development of social and emotional competence in childhood. • 4.3: Aging Traditionally, research on aging described only the lives of people over age 65 and the very old. Contemporary theories and research recognize that biogenetic and psychological processes of aging are complex and lifelong. We consider contemporary questions about cognitive aging and changes in personality, self-related beliefs, social relationships, and subjective well-being. These four aspects of psychosocial aging are related to health and longevity. 04: DEVELOPMENTAL PSYCHOLOGY By Robert Siegler Carnegie Mellon University This module examines what cognitive development is, major theories about how it occurs, the roles of nature and nurture, whether it is continuous or discontinuous, and how research in the area is being used to improve education. learning objectives • Be able to identify and describe the main areas of cognitive development. • Be able to describe major theories of cognitive development and what distinguishes them. • Understand how nature and nurture work together to produce cognitive development. • Understand why cognitive development is sometimes viewed as discontinuous and sometimes as continuous. • Know some ways in which research on cognitive development is being used to improve education. Introduction By the time you reach adulthood you have learned a few things about how the world works. You know, for instance, that you can’t walk through walls or leap into the tops of trees. You know that although you cannot see your car keys they’ve got to be around here someplace. What’s more, you know that if you want to communicate complex ideas like ordering a triple-shot soy vanilla latte with chocolate sprinkles it’s better to use words with meanings attached to them rather than simply gesturing and grunting. People accumulate all this useful knowledge through the process of cognitive development, which involves a multitude of factors, both inherent and learned. Cognitive development refers to the development of thinking across the lifespan. Defining thinking can be problematic, because no clear boundaries separate thinking from other mental activities. Thinking obviously involves the higher mental processes: problem solving, reasoning, creating, conceptualizing, categorizing, remembering, planning, and so on. However, thinking also involves other mental processes that seem more basic and at which even toddlers are skilled—such as perceiving objects and events in the environment, acting skillfully on objects to obtain goals, and understanding and producing language. Yet other areas of human development that involve thinking are not usually associated with cognitive development, because thinking isn’t a prominent feature of them—such as personality and temperament. As the name suggests, cognitive development is about change. 
Children’s thinking changes in dramatic and surprising ways. Consider DeVries’s (1969) study of whether young children understand the difference between appearance and reality. To find out, she brought an unusually even-tempered cat named Maynard to a psychology laboratory and allowed the 3- to 6-year-old participants in the study to pet and play with him. DeVries then put a mask of a fierce dog on Maynard’s head, and asked the children what Maynard was. Despite all of the children having identified Maynard previously as a cat, now most 3-year-olds said that he was a dog and claimed that he had a dog’s bones and a dog’s stomach. In contrast, the 6-year-olds weren’t fooled; they had no doubt that Maynard remained a cat. Understanding how children’s thinking changes so dramatically in just a few years is one of the fascinating challenges in studying cognitive development. There are several main types of theories of child development. Stage theories, such as Piaget’s stage theory, focus on whether children progress through qualitatively different stages of development. Sociocultural theories, such as that of Lev Vygotsky, emphasize how other people and the attitudes, values, and beliefs of the surrounding culture, influence children’s development. Information processing theories, such as that of David Klahr, examine the mental processes that produce thinking at any one time and the transition processes that lead to growth in that thinking. At the heart of all of these theories, and indeed of all research on cognitive development, are two main questions: (1) How do nature and nurture interact to produce cognitive development? (2) Does cognitive development progress through qualitatively distinct stages? In the remainder of this module, we examine the answers that are emerging regarding these questions, as well as ways in which cognitive developmental research is being used to improve education. Nature and Nurture The most basic question about child development is how nature and nurture together shape development. Nature refers to our biological endowment, the genes we receive from our parents. Nurture refers to the environments, social as well as physical, that influence our development, everything from the womb in which we develop before birth to the homes in which we grow up, the schools we attend, and the many people with whom we interact. The nature-nurture issue is often presented as an either-or question: Is our intelligence (for example) due to our genes or to the environments in which we live? In fact, however, every aspect of development is produced by the interaction of genes and environment. At the most basic level, without genes, there would be no child, and without an environment to provide nurture, there also would be no child. The way in which nature and nurture work together can be seen in findings on visual development. Many people view vision as something that people either are born with or that is purely a matter of biological maturation, but it also depends on the right kind of experience at the right time. For example, development of depth perception, the ability to actively perceive the distance from oneself to objects in the environment, depends on seeing patterned light and having normal brain activity in response to the patterned light, in infancy (Held, 1993). If no patterned light is received, for example when a baby has severe cataracts or blindness that is not surgically corrected until later in development, depth perception remains abnormal even after the surgery. 
Adding to the complexity of the nature-nurture interaction, children’s genes lead to their eliciting different treatment from other people, which influences their cognitive development. For example, infants’ physical attractiveness and temperament are influenced considerably by their genetic inheritance, but it is also the case that parents provide more sensitive and affectionate care to easygoing and attractive infants than to difficult and less attractive ones, which can contribute to the infants’ later cognitive development (Langlois et al., 1995; van den Boom & Hoeksma, 1994). Also contributing to the complex interplay of nature and nurture is the role of children in shaping their own cognitive development. From the first days out of the womb, children actively choose to attend more to some things and less to others. For example, even 1-month-olds choose to look at their mother’s face more than at the faces of other women of the same age and general level of attractiveness (Bartrip, Morton, & de Schonen, 2001). Children’s contributions to their own cognitive development grow larger as they grow older (Scarr & McCartney, 1983). When children are young, their parents largely determine their experiences: whether they will attend day care, the children with whom they will have play dates, the books to which they have access, and so on. In contrast, older children and adolescents choose their environments to a larger degree. Their parents’ preferences largely determine how 5-year-olds spend time, but 15-year-olds’ own preferences largely determine when, if ever, they set foot in a library. Children’s choices often have large consequences. To cite one example, the more that children choose to read, the more that their reading improves in future years (Baker, Dreher, & Guthrie, 2000). Thus, the issue is not whether cognitive development is a product of nature or nurture; rather, the issue is how nature and nurture work together to produce cognitive development. Does Cognitive Development Progress Through Distinct Stages? Some aspects of the development of living organisms, such as the growth of the width of a pine tree, involve quantitative changes, with the tree getting a little wider each year. Other changes, such as the life cycle of a ladybug, involve qualitative changes, with the creature becoming a totally different type of entity after a transition than before (Figure 6.2.1). The existence of both gradual, quantitative changes and relatively sudden, qualitative changes in the world has led researchers who study cognitive development to ask whether changes in children’s thinking are gradual and continuous or sudden and discontinuous. The great Swiss psychologist Jean Piaget proposed that children’s thinking progresses through a series of four discrete stages. By “stages,” he meant periods during which children reasoned similarly about many superficially different problems, with the stages occurring in a fixed order and the thinking within different stages differing in fundamental ways. The four stages that Piaget hypothesized were the sensorimotor stage (birth to 2 years), the preoperational reasoning stage (2 to 6 or 7 years), the concrete operational reasoning stage (6 or 7 to 11 or 12 years), and the formal operational reasoning stage (11 or 12 years and throughout the rest of life). During the sensorimotor stage, children’s thinking is largely realized through their perceptions of the world and their physical interactions with it. Their mental representations are very limited.
Consider Piaget’s object permanence task, which is one of his most famous problems. If an infant younger than 9 months of age is playing with a favorite toy, and another person removes the toy from view, for example by putting it under an opaque cover and not letting the infant immediately reach for it, the infant is very likely to make no effort to retrieve it and to show no emotional distress (Piaget, 1954). This is not due to their being uninterested in the toy or unable to reach for it; if the same toy is put under a clear cover, infants below 9 months readily retrieve it (Munakata, McClelland, Johnson, & Siegler, 1997). Instead, Piaget claimed that infants less than 9 months do not understand that objects continue to exist even when out of sight. During the preoperational stage, according to Piaget, children not only can solve this simple problem (which they actually can solve after 9 months) but also show a wide variety of other symbolic-representation capabilities, such as those involved in drawing and using language. However, such 2- to 7-year-olds tend to focus on a single dimension, even when solving problems would require them to consider multiple dimensions. This is evident in Piaget’s (1952) conservation problems. For example, if a glass of water is poured into a taller, thinner glass, children below age 7 generally say that there now is more water than before. Similarly, if a clay ball is reshaped into a long, thin sausage, they claim that there is now more clay, and if a row of coins is spread out, they claim that there are now more coins. In all cases, the children are focusing on one dimension, while ignoring the changes in other dimensions (for example, the greater width of the glass and the clay ball). Children overcome this tendency to focus on a single dimension during the concrete operations stage, and think logically in most situations. However, according to Piaget, they still cannot think in systematic scientific ways, even when such thinking would be useful. Thus, if asked to find out which variables influence the period that a pendulum takes to complete its arc, and given weights that they can attach to strings in order to do experiments with the pendulum to find out, most children younger than age 12 perform biased experiments from which no conclusion can be drawn, and then conclude that whatever they originally believed is correct. For example, if a boy believed that weight was the only variable that mattered, he might put the heaviest weight on the shortest string and push it the hardest, and then conclude that just as he thought, weight is the only variable that matters (Inhelder & Piaget, 1958). Finally, in the formal operations period, children attain the reasoning power of mature adults, which allows them to solve the pendulum problem and a wide range of other problems. However, this formal operations stage tends not to occur without exposure to formal education in scientific reasoning, and appears to be largely or completely absent from some societies that do not provide this type of education. Although Piaget’s theory has been very influential, it has not gone unchallenged. Many more recent researchers have obtained findings indicating that cognitive development is considerably more continuous than Piaget claimed. For example, Diamond (1985) found that on the object permanence task described above, infants show earlier knowledge if the waiting period is shorter.
At age 6 months, they retrieve the hidden object if the wait is no longer than 2 seconds; at 7 months, they retrieve it if the wait is no longer than 4 seconds; and so on. Even earlier, at 3 or 4 months, infants show surprise in the form of longer looking times if objects suddenly appear to vanish with no obvious cause (Baillargeon, 1987). Similarly, children’s specific experiences can greatly influence when developmental changes occur. Children of pottery makers in Mexican villages, for example, know that reshaping clay does not change the amount of clay at much younger ages than children who do not have similar experiences (Price-Williams, Gordon, & Ramirez, 1969). So, is cognitive development fundamentally continuous or fundamentally discontinuous? A reasonable answer seems to be, “It depends on how you look at it and how often you look.” For example, under relatively facilitative circumstances, infants show early forms of object permanence by 3 or 4 months, and they gradually extend the range of times for which they can remember hidden objects as they grow older. However, on Piaget’s original object permanence task, infants do change quite quickly toward the end of their first year from not reaching for hidden toys to reaching for them, even after they’ve experienced a substantial delay before being allowed to reach. Thus, the debate between those who emphasize discontinuous, stage-like changes in cognitive development and those who emphasize gradual continuous changes remains a lively one. Applications to Education Understanding how children think and learn has proven useful for improving education. One example comes from the area of reading. Cognitive developmental research has shown that phonemic awareness—that is, awareness of the component sounds within words—is a crucial skill in learning to read. To measure awareness of the component sounds within words, researchers ask children to decide whether two words rhyme, to decide whether the words start with the same sound, to identify the component sounds within words, and to indicate what would be left if a given sound were removed from a word. Kindergartners’ performance on these tasks is the strongest predictor of reading achievement in third and fourth grade, even stronger than IQ or social class background (Nation, 2008). Moreover, teaching these skills to randomly chosen 4- and 5-year-olds results in their being better readers years later (National Reading Panel, 2000). Another educational application of cognitive developmental research involves the area of mathematics. Even before they enter kindergarten, the mathematical knowledge of children from low-income backgrounds lags far behind that of children from more affluent backgrounds. Ramani and Siegler (2008) hypothesized that this difference is due to the children in middle- and upper-income families engaging more frequently in numerical activities, for example playing numerical board games such as Chutes and Ladders. Chutes and Ladders is a game with a number in each square; children start at the number one and spin a spinner or roll a die to determine how far to move their token. Playing this game seemed likely to teach children about numbers, because in it, larger numbers are associated with greater values on a variety of dimensions.
In particular, the higher the number that a child’s token reaches, the greater the distance the token will have traveled from the starting point, the greater the number of physical movements the child will have made in moving the token from one square to another, the greater the number of number-words the child will have said and heard, and the more time will have passed since the beginning of the game. These spatial, kinesthetic, verbal, and time-based cues provide a broad-based, multisensory foundation for knowledge of numerical magnitudes (the sizes of numbers), a type of knowledge that is closely related to mathematics achievement test scores (Booth & Siegler, 2006). Playing this numerical board game for roughly 1 hour, distributed over a 2-week period, improved low-income children’s knowledge of numerical magnitudes, ability to read printed numbers, and skill at learning novel arithmetic problems. The gains lasted for months after the game-playing experience (Ramani & Siegler, 2008; Siegler & Ramani, 2009). An advantage of this type of educational intervention is that it has minimal if any cost—a parent could just draw a game on a piece of paper. Understanding of cognitive development is advancing on many different fronts. One exciting area is linking changes in brain activity to changes in children’s thinking (Nelson et al., 2006). Although many people believe that brain maturation is something that occurs before birth, the brain actually continues to change in large ways for many years thereafter. For example, a part of the brain called the prefrontal cortex, which is located at the front of the brain and is particularly involved with planning and flexible problem solving, continues to develop throughout adolescence (Blakemore & Choudhury, 2006). Such new research domains, as well as enduring issues such as nature and nurture, continuity and discontinuity, and how to apply cognitive development research to education, ensure that cognitive development will continue to be an exciting area of research in the coming years. Conclusion Research into cognitive development has shown us that minds don’t just form according to a uniform blueprint or innate intellect, but through a combination of influencing factors. For instance, if we want our kids to have a strong grasp of language, we could concentrate on phonemic awareness early on. If we want them to be good at math and science, we could engage them in numerical games and activities early on. Perhaps most importantly, we no longer think of brains as empty vessels waiting to be filled up with knowledge but as adaptable organs that develop all the way through early adulthood. Outside Resources Book: Frye, D., Baroody, A., Burchinal, M., Carver, S. M., Jordan, N. C., & McDowell, J. (2013). Teaching math to young children: A practice guide. Washington, DC: National Center for Education Evaluation and Regional Assistance (NCEE), Institute of Education Sciences, U.S. Department of Education. Book: Goswami, U. G. (2010). The Blackwell Handbook of Childhood Cognitive Development. New York: John Wiley and Sons. Book: Kuhn, D., & Siegler, R. S. (Vol. Eds.). (2006). Volume 2: Cognition, perception, and language. In W. Damon & R. M. Lerner (Series Eds.), Handbook of child psychology (6th ed.). Hoboken, NJ: Wiley. Book: Miller, P. H. (2011). Theories of developmental psychology (5th ed.). New York: Worth. Book: Siegler, R. S., & Alibali, M. W. (2004). Children's thinking (4th ed.). Upper Saddle River, NJ: Prentice-Hall. Discussion Questions 1.
Why are there different theories of cognitive development? Why don’t researchers agree on which theory is the right one? 2. Do children’s natures differ, or do differences among children only reflect differences in their experiences? 3. Do you see development as more continuous or more discontinuous? 4. Can you think of ways other than those described in the module in which research on cognitive development could be used to improve education? Vocabulary Chutes and Ladders A numerical board game that seems to be useful for building numerical knowledge. Concrete operations stage Piagetian stage between ages 7 and 12 when children can think logically about concrete situations but not engage in systematic scientific reasoning. Conservation problems Problems pioneered by Piaget in which physical transformation of an object or set of objects changes a perceptually salient dimension but not the quantity that is being asked about. Continuous development Ways in which development occurs in a gradual incremental manner, rather than through sudden jumps. Depth perception The ability to actively perceive the distance from oneself of objects in the environment. Discontinuous development Ways in which development occurs in distinct stages or sudden jumps, rather than in a gradual, incremental manner. Formal operations stage Piagetian stage starting at age 12 years and continuing for the rest of life, in which adolescents may gain the reasoning powers of educated adults. Information processing theories Theories that focus on describing the cognitive processes that underlie thinking at any one age and cognitive growth over time. Nature The genes that children bring with them to life and that influence all aspects of their development. Numerical magnitudes The sizes of numbers. Nurture The environments, starting with the womb, that influence all aspects of children’s development. Object permanence task The Piagetian task in which infants below about 9 months of age fail to search for an object that is removed from their sight and, if not allowed to search immediately for the object, act as if they do not know that it continues to exist. Phonemic awareness Awareness of the component sounds within words. Piaget’s theory Theory that development occurs through a sequence of discontinuous stages: the sensorimotor, preoperational, concrete operational, and formal operational stages. Preoperational reasoning stage Period within Piagetian theory from age 2 to 7 years, in which children can represent objects through drawing and language but cannot solve logical reasoning problems, such as the conservation problems. Qualitative changes Large, fundamental change, as when a caterpillar changes into a butterfly; stage theories such as Piaget’s posit that each stage reflects qualitative change relative to previous stages. Quantitative changes Gradual, incremental change, as in the growth of a pine tree’s girth. Sensorimotor stage Period within Piagetian theory from birth to age 2 years, during which children come to represent the enduring reality of objects. Sociocultural theories Theory founded in large part by Lev Vygotsky that emphasizes how other people and the attitudes, values, and beliefs of the surrounding culture influence children’s development.
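As noted in the Applications to Education discussion above, here is a toy simulation, in Python, of the number board game described there. It is only a sketch: it models a simplified 10-square linear board with a 1-or-2 spinner and ignores the chutes and ladders of the commercial game, and the function name, square count, spinner values, and two-second pace per square are illustrative assumptions rather than values taken from the Ramani and Siegler studies.

```python
import random

def play_number_board_game(n_squares=10, spins=(1, 2), n_games=1000, seed=1):
    """Toy simulation of a linear number board game (no chutes or ladders).
    A child starts before square 1, spins a 1-or-2 spinner, and names each
    square as the token advances. Returns one record per completed move:
    (square reached, moves made so far, number words said, seconds elapsed)."""
    random.seed(seed)
    records = []
    for _ in range(n_games):
        position, moves, words, seconds = 0, 0, 0, 0.0
        while position < n_squares:
            for _ in range(random.choice(spins)):
                if position < n_squares:
                    position += 1
                    words += 1       # the child says the square's number aloud
                    seconds += 2.0   # assumed pace: roughly 2 seconds per square
            moves += 1
            records.append((position, moves, words, seconds))
    return records

records = play_number_board_game()
for target in (2, 5, 9):
    rows = [r for r in records if r[0] == target]
    avg_moves = sum(r[1] for r in rows) / len(rows)
    avg_words = sum(r[2] for r in rows) / len(rows)
    avg_secs = sum(r[3] for r in rows) / len(rows)
    print(f"square {target}: ~{avg_moves:.1f} moves, ~{avg_words:.0f} number words, ~{avg_secs:.0f} seconds")
```

The exact numbers do not matter; the point is that higher squares are reliably paired with more of every cue (distance traveled, moves made, number words spoken, and time elapsed), which is the multisensory link to numerical magnitude the module describes.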
textbooks/socialsci/Psychology/Introductory_Psychology/Map%3A_Discover_Psychology_-_A_Brief_Introductory_Text_(Noba)/04%3A_DEVELOPMENTAL_PSYCHOLOGY/4.01%3A_Cognitive_Development_in_Childhood.txt
By Ross Thompson University of California, Davis Childhood social and personality development emerges through the interaction of social influences, biological maturation, and the child’s representations of the social world and the self. This interaction is illustrated in a discussion of the influence of significant relationships, the development of social understanding, the growth of personality, and the development of social and emotional competence in childhood. learning objectives • Provide specific examples of how the interaction of social experience, biological maturation, and the child’s representations of experience and the self provide the basis for growth in social and personality development. • Describe the significant contributions of parent–child and peer relationships to the development of social skills and personality in childhood. • Explain how achievements in social understanding occur in childhood. In particular, do scientists believe that infants and young children are egocentric? • Describe the association of temperament with personality development. • Explain what “social and emotional competence” is and provide some examples of how it develops in childhood. Introduction “How have I become the kind of person I am today?” Every adult ponders this question from time to time. The answers that readily come to mind include the influences of parents, peers, temperament, a moral compass, a strong sense of self, and sometimes critical life experiences such as parental divorce. Social and personality development encompasses these and many other influences on the growth of the person. In addition, it addresses questions that are at the heart of understanding how we develop as unique people. How much are we products of nature or nurture? How enduring are the influences of early experiences? The study of social and personality development offers perspective on these and other issues, often by showing how complex and multifaceted the influences on developing children are, and thus how intricate the processes are that have made you the person you are today (Thompson, 2006a). Understanding social and personality development requires looking at children from three perspectives that interact to shape development. The first is the social context in which each child lives, especially the relationships that provide security, guidance, and knowledge. The second is biological maturation that supports developing social and emotional competencies and underlies temperamental individuality. The third is children’s developing representations of themselves and the social world. Social and personality development is best understood as the continuous interaction between these social, biological, and representational aspects of psychological development. Relationships This interaction can be observed in the development of the earliest relationships between infants and their parents in the first year. Virtually all infants living in normal circumstances develop strong emotional attachments to those who care for them. Psychologists believe that the development of these attachments is as biologically natural as learning to walk and not simply a byproduct of the parents’ provision of food or warmth. Rather, attachments have evolved in humans because they promote children’s motivation to stay close to those who care for them and, as a consequence, to benefit from the learning, security, guidance, warmth, and affirmation that close relationships provide (Cassidy, 2008).
Although nearly all infants develop emotional attachments to their caregivers--parents, relatives, nannies-- their sense of security in those attachments varies. Infants become securely attached when their parents respond sensitively to them, reinforcing the infants’ confidence that their parents will provide support when needed. Infants become insecurely attached when care is inconsistent or neglectful; these infants tend to respond avoidantly, resistantly, or in a disorganized manner (Belsky & Pasco Fearon, 2008). Such insecure attachments are not necessarily the result of deliberately bad parenting but are often a byproduct of circumstances. For example, an overworked single mother may find herself overstressed and fatigued at the end of the day, making fully-involved childcare very difficult. In other cases, some parents are simply poorly emotionally equipped to take on the responsibility of caring for a child. The different behaviors of securely- and insecurely-attached infants can be observed especially when the infant needs the caregiver’s support. To assess the nature of attachment, researchers use a standard laboratory procedure called the “Strange Situation,” which involves brief separations from the caregiver (e.g., mother) (Solomon & George, 2008). In the Strange Situation, the caregiver is instructed to leave the child to play alone in a room for a short time, then return and greet the child while researchers observe the child’s response. Depending on the child’s level of attachment, he or she may reject the parent, cling to the parent, or simply welcome the parent—or, in some instances, react with an agitated combination of responses. Infants can be securely or insecurely attached with mothers, fathers, and other regular caregivers, and they can differ in their security with different people. The security of attachment is an important cornerstone of social and personality development, because infants and young children who are securely attached have been found to develop stronger friendships with peers, more advanced emotional understanding and early conscience development, and more positive self-concepts, compared with insecurely attached children (Thompson, 2008). This is consistent with attachment theory’s premise that experiences of care, resulting in secure or insecure attachments, shape young children’s developing concepts of the self, as well as what people are like, and how to interact with them. As children mature, parent-child relationships naturally change. Preschool and grade-school children are more capable, have their own preferences, and sometimes refuse or seek to compromise with parental expectations. This can lead to greater parent-child conflict, and how conflict is managed by parents further shapes the quality of parent-child relationships. In general, children develop greater competence and self-confidence when parents have high (but reasonable) expectations for children’s behavior, communicate well with them, are warm and responsive, and use reasoning (rather than coercion) as preferred responses to children’s misbehavior. This kind of parenting style has been described as authoritative (Baumrind, 2013). Authoritative parents are supportive and show interest in their kids’ activities but are not overbearing and allow them to make constructive mistakes. By contrast, some less-constructive parent-child relationships result from authoritarian, uninvolved, or permissive parenting styles (see Table 1). 
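The table referenced above is not reproduced in this text. As a stand-in, here is a minimal sketch, in Python, of the two-dimension scheme that such tables commonly summarize, with the four parenting styles named above organized by warmth/responsiveness crossed with expectations/demands. The dimensional framing is a standard one in the developmental literature rather than something spelled out in this module, and the dictionary and function names are purely illustrative; treat this as an illustration, not a diagnostic tool.

```python
# Toy lookup of the four parenting styles, organized by two commonly used
# dimensions. The boolean framing is a deliberate oversimplification; real
# parenting varies continuously on both dimensions.
PARENTING_STYLES = {
    # (warm_and_responsive, high_expectations): style
    (True, True): "authoritative",    # warm, communicative, firm but reasonable demands
    (False, True): "authoritarian",   # demanding and strict, with little warmth
    (True, False): "permissive",      # warm but few demands or limits
    (False, False): "uninvolved",     # low warmth and low demands
}

def classify_style(warm_and_responsive: bool, high_expectations: bool) -> str:
    return PARENTING_STYLES[(warm_and_responsive, high_expectations)]

print(classify_style(True, True))    # -> authoritative
print(classify_style(False, False))  # -> uninvolved
```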
Parental roles in relation to their children change in other ways, too. Parents increasingly become mediators (or gatekeepers) of their children’s involvement with peers and activities outside the family. Their communication and practice of values contribute to children’s academic achievement, moral development, and activity preferences. As children reach adolescence, the parent-child relationship increasingly becomes one of “coregulation,” in which both the parent(s) and the child recognize the child’s growing competence and autonomy, and together they rebalance authority relations. We often see evidence of this as parents start accommodating their teenage kids’ sense of independence by allowing them to get cars and jobs, attend parties, and stay out later. Family relationships are significantly affected by conditions outside the home. For instance, the Family Stress Model describes how financial difficulties are associated with parents’ depressed moods, which in turn lead to marital problems and poor parenting that contributes to poorer child adjustment (Conger, Conger, & Martin, 2010). Within the home, parental marital difficulty or divorce affects more than half the children growing up today in the United States. Divorce is typically associated with economic stresses for children and parents, the renegotiation of parent-child relationships (with one parent typically as primary custodian and the other assuming a visiting relationship), and many other significant adjustments for children. Divorce is often regarded by children as a sad turning point in their lives, although for most it is not associated with long-term problems of adjustment (Emery, 1999). Peer Relationships Parent-child relationships are not the only significant relationships in a child’s life. Peer relationships are also important. Social interaction with another child who is similar in age, skills, and knowledge provokes the development of many social skills that are valuable for the rest of life (Bukowski, Buhrmester, & Underwood, 2011). In peer relationships, children learn how to initiate and maintain social interactions with other children. They learn skills for managing conflict, such as turn-taking, compromise, and bargaining. Play also involves the mutual, sometimes complex, coordination of goals, actions, and understanding. For example, as infants, children have their first encounters with sharing (of each other’s toys); during pretend play as preschoolers they create narratives together, choose roles, and collaborate to act out their stories; and in primary school, they may join a sports team, learning to work together and support each other emotionally and strategically toward a common goal. Through these experiences, children develop friendships that provide additional sources of security and support to those provided by their parents. However, peer relationships can be challenging as well as supportive (Rubin, Coplan, Chen, Bowker, & McDonald, 2011). Being accepted by other children is an important source of affirmation and self-esteem, but peer rejection can foreshadow later behavior problems (especially when children are rejected due to aggressive behavior). With increasing age, children confront the challenges of bullying, peer victimization, and managing conformity pressures. Social comparison with peers is an important means by which children evaluate their skills, knowledge, and personal qualities, but it may cause them to feel that they do not measure up well against others.
For example, a boy who is not athletic may feel unworthy of his football-playing peers and revert to shy behavior, isolating himself and avoiding conversation. Conversely, an athlete who doesn’t “get” Shakespeare may feel embarrassed and avoid reading altogether. Also, with the approach of adolescence, peer relationships become focused on psychological intimacy, involving personal disclosure, vulnerability, and loyalty (or its betrayal)—which significantly affects a child’s outlook on the world. Each of these aspects of peer relationships requires developing very different social and emotional skills than those that emerge in parent-child relationships. They also illustrate the many ways that peer relationships influence the growth of personality and self-concept. Social Understanding As we have seen, children’s experience of relationships at home and the peer group contributes to an expanding repertoire of social and emotional skills and also to broadened social understanding. In these relationships, children develop expectations for specific people (leading, for example, to secure or insecure attachments to parents), understanding of how to interact with adults and peers, and developing self-concept based on how others respond to them. These relationships are also significant forums for emotional development. Remarkably, young children begin developing social understanding very early in life. Before the end of the first year, infants are aware that other people have perceptions, feelings, and other mental states that affect their behavior, and which are different from the child’s own mental states. This can be readily observed in a process called social referencing, in which an infant looks to the mother’s face when confronted with an unfamiliar person or situation (Feinman, 1992). If the mother looks calm and reassuring, the infant responds positively as if the situation is safe. If the mother looks fearful or distressed, the infant is likely to respond with wariness or distress because the mother’s expression signals danger. In a remarkably insightful manner, therefore, infants show an awareness that even though they are uncertain about the unfamiliar situation, their mother is not, and that by “reading” the emotion in her face, infants can learn about whether the circumstance is safe or dangerous, and how to respond. Although developmental scientists used to believe that infants are egocentric—that is, focused on their own perceptions and experience—they now realize that the opposite is true. Infants are aware at an early stage that people have different mental states, and this motivates them to try to figure out what others are feeling, intending, wanting, and thinking, and how these mental states affect their behavior. They are beginning, in other words, to develop a theory of mind, and although their understanding of mental states begins very simply, it rapidly expands (Wellman, 2011). For example, if an 18-month-old watches an adult try repeatedly to drop a necklace into a cup but inexplicably fail each time, they will immediately put the necklace into the cup themselves—thus completing what the adult intended, but failed, to do. In doing so, they reveal their awareness of the intentions underlying the adult’s behavior (Meltzoff, 1995). 
Carefully designed experimental studies show that by late in the preschool years, young children understand that another’s beliefs can be mistaken rather than correct, that memories can affect how you feel, and that one’s emotions can be hidden from others (Wellman, 2011). Social understanding grows significantly as children’s theory of mind develops. How do these achievements in social understanding occur? One answer is that young children are remarkably sensitive observers of other people, making connections between their emotional expressions, words, and behavior to derive simple inferences about mental states (concluding, for example, that what Mommy is looking at is in her mind) (Gopnik, Meltzoff, & Kuhl, 2001). This is especially likely to occur in relationships with people whom the child knows well, consistent with the ideas of attachment theory discussed above. Growing language skills give young children words with which to represent these mental states (e.g., “mad,” “wants”) and talk about them with others. Thus in conversation with their parents about everyday experiences, children learn much about people’s mental states from how adults talk about them (“Your sister was sad because she thought Daddy was coming home.”) (Thompson, 2006b). Developing social understanding is, in other words, based on children’s everyday interactions with others and their careful interpretations of what they see and hear. There are also some scientists who believe that infants are biologically prepared to perceive people in a special way, as organisms with an internal mental life, and this facilitates their interpretation of people’s behavior with reference to those mental states (Leslie, 1994). Personality Parents look into the faces of their newborn infants and wonder, “What kind of person will this child become?” They scrutinize their baby’s preferences, characteristics, and responses for clues of a developing personality. They are quite right to do so, because temperament is a foundation for personality growth. But temperament (defined as early-emerging differences in reactivity and self-regulation) is not the whole story. Although temperament is biologically based, it interacts with the influence of experience from the moment of birth (if not before) to shape personality (Rothbart, 2011). Temperamental dispositions are affected, for example, by the supportiveness of parental care. More generally, personality is shaped by the goodness of fit between the child’s temperamental qualities and characteristics of the environment (Chess & Thomas, 1999). For example, an adventurous child whose parents regularly take her on weekend hiking and fishing trips experiences a good “fit” between her temperament and her family’s lifestyle, supporting personality growth. Personality is the result, therefore, of the continuous interplay between biological disposition and experience, as is true for many other aspects of social and personality development. Personality develops from temperament in other ways (Thompson, Winer, & Goodvin, 2010). As children mature biologically, temperamental characteristics emerge and change over time. A newborn is not capable of much self-control, but as brain-based capacities for self-control advance, temperamental changes in self-regulation become more apparent. For example, a newborn who cries frequently doesn’t necessarily have a grumpy personality; over time, with sufficient parental support and increased sense of security, the child might be less likely to cry.
In addition, personality is made up of many other features besides temperament. Children’s developing self-concept, their motivations to achieve or to socialize, their values and goals, their coping styles, their sense of responsibility and conscientiousness, and many other qualities are encompassed into personality. These qualities are influenced by biological dispositions, but even more by the child’s experiences with others, particularly in close relationships, that guide the growth of individual characteristics. Indeed, personality development begins with the biological foundations of temperament but becomes increasingly elaborated, extended, and refined over time. The newborn that parents gazed upon thus becomes an adult with a personality of depth and nuance. Social and Emotional Competence Social and personality development is built from the social, biological, and representational influences discussed above. These influences result in important developmental outcomes that matter to children, parents, and society: a young adult’s capacity to engage in socially constructive actions (helping, caring, sharing with others), to curb hostile or aggressive impulses, to live according to meaningful moral values, to develop a healthy identity and sense of self, and to develop talents and achieve success in using them. These are some of the developmental outcomes that denote social and emotional competence. These achievements of social and personality development derive from the interaction of many social, biological, and representational influences. Consider, for example, the development of conscience, which is an early foundation for moral development. Conscience consists of the cognitive, emotional, and social influences that cause young children to create and act consistently with internal standards of conduct (Kochanska, 2002). Conscience emerges from young children’s experiences with parents, particularly in the development of a mutually responsive relationship that motivates young children to respond constructively to the parents’ requests and expectations. Biologically based temperament is involved, as some children are temperamentally more capable of motivated self-regulation (a quality called effortful control) than are others, while some children are dispositionally more prone to the fear and anxiety that parental disapproval can evoke. Conscience development grows through a good fit between the child’s temperamental qualities and how parents communicate and reinforce behavioral expectations. Moreover, as an illustration of the interaction of genes and experience, one research group found that young children with a particular gene allele (the 5-HTTLPR) were low on measures of conscience development when they had previously experienced unresponsive maternal care, but children with the same allele growing up with responsive care showed strong later performance on conscience measures (Kochanska, Kim, Barry, & Philibert, 2011). Conscience development also expands as young children begin to represent moral values and think of themselves as moral beings. By the end of the preschool years, for example, young children develop a “moral self” by which they think of themselves as people who want to do the right thing, who feel badly after misbehaving, and who feel uncomfortable when others misbehave. In the development of conscience, young children become more socially and emotionally competent in a manner that provides a foundation for later moral conduct (Thompson, 2012). 
The development of gender and gender identity is likewise an interaction among social, biological, and representational influences (Ruble, Martin, & Berenbaum, 2006). Young children learn about gender from parents, peers, and others in society, and develop their own conceptions of the attributes associated with maleness or femaleness (called gender schemas). They also negotiate biological transitions (such as puberty) that cause their sense of themselves and their sexual identity to mature. Each of these examples of the growth of social and emotional competence illustrates not only the interaction of social, biological, and representational influences, but also how their development unfolds over an extended period. Early influences are important, but not determinative, because the capabilities required for mature moral conduct, gender identity, and other outcomes continue to develop throughout childhood, adolescence, and even the adult years. Conclusion As the preceding sentence suggests, social and personality development continues through adolescence and the adult years, and it is influenced by the same constellation of social, biological, and representational influences discussed for childhood. Changing social relationships and roles, biological maturation and (much later) decline, and how the individual represents experience and the self continue to form the bases for development throughout life. In this respect, when an adult looks forward rather than retrospectively to ask, “what kind of person am I becoming?”—a similarly fascinating, complex, multifaceted interaction of developmental processes lies ahead. Outside Resources Web: Center for the Developing Child, Harvard University http://developingchild.harvard.edu Web: Collaborative for Academic, Social, and Emotional Learning http://casel.org Discussion Questions 1. If parent–child relationships naturally change as the child matures, would you expect that the security of attachment might also change over time? What reasons would account for your expectation? 2. In what ways does a child’s developing theory of mind resemble how scientists create, refine, and use theories in their work? In other words, would it be appropriate to think of children as informal scientists in their development of social understanding? 3. If there is a poor goodness of fit between a child’s temperament and characteristics of parental care, what can be done to create a better match? Provide a specific example of how this might occur. 4. What are the contributions that parents offer to the development of social and emotional competence in children? Answer this question again with respect to peer contributions. Vocabulary Authoritative A parenting style characterized by high (but reasonable) expectations for children’s behavior, good communication, warmth and nurturance, and the use of reasoning (rather than coercion) as preferred responses to children’s misbehavior. Conscience The cognitive, emotional, and social influences that cause young children to create and act consistently with internal standards of conduct. Effortful control A temperament quality that enables children to be more successful in motivated self-regulation. Family Stress Model A description of the negative effects of family financial difficulty on child adjustment through the effects of economic stress on parents’ depressed mood, increased marital problems, and poor parenting. 
Gender schemas Organized beliefs and expectations about maleness and femaleness that guide children’s thinking about gender. Goodness of fit The match or synchrony between a child’s temperament and characteristics of parental care that contributes to positive or negative personality development. A good “fit” means that parents have accommodated to the child’s temperamental attributes, and this contributes to positive personality growth and better adjustment. Security of attachment An infant’s confidence in the sensitivity and responsiveness of a caregiver, especially when he or she is needed. Infants can be securely attached or insecurely attached. Social referencing The process by which one individual consults another’s emotional expressions to determine how to evaluate and respond to circumstances that are ambiguous or uncertain. Temperament Early emerging differences in reactivity and self-regulation, which constitutes a foundation for personality development. Theory of mind Children’s growing understanding of the mental states that affect people’s behavior.
textbooks/socialsci/Psychology/Introductory_Psychology/Map%3A_Discover_Psychology_-_A_Brief_Introductory_Text_(Noba)/04%3A_DEVELOPMENTAL_PSYCHOLOGY/4.02%3A_Social_and_Personality_Development_in_Childhood.txt
By Tara Queen and Jacqui Smith University of Michigan Traditionally, research on aging described only the lives of people over age 65 and the very old. Contemporary theories and research recognize that biogenetic and psychological processes of aging are complex and lifelong. Functioning in each period of life is influenced by what happened earlier and, in turn, affects subsequent change. We all age in specific social and historical contexts. Together, these multiple influences on aging make it difficult to define when middle-age or old age begins. This module describes central concepts and research about adult development and aging. We consider contemporary questions about cognitive aging and changes in personality, self-related beliefs, social relationships, and subjective well-being. These four aspects of psychosocial aging are related to health and longevity. learning objectives • Explain research approaches to studying aging. • Describe cognitive, psychosocial, and physical changes that occur with age. • Provide examples of how age-related changes in these domains are observed in the context of everyday life. Introduction We are currently living in an aging society (Rowe, 2009). Indeed, by 2030 when the last of the Baby Boomers reach age 65, the U.S. older population will be double that of 2010. Furthermore, because of increases in average life expectancy, each new generation can expect to live longer than their parents’ generation and certainly longer than their grandparents’ generation. As a consequence, it is time for individuals of all ages to rethink their personal life plans and consider prospects for a long life. When is the best time to start a family? Will the education gained up to age 20 be sufficient to cope with future technological advances and marketplace needs? What is the right balance between work, family, and leisure throughout life? What's the best age to retire? How can I age successfully and enjoy life to the fullest when I'm 80 or 90? In this module we will discuss several different domains of psychological research on aging that will help answer these important questions. Overview: Life Span and Life Course Perspectives on Aging Just as young adults differ from one another, older adults are also not all the same. In each decade of adulthood, we observe substantial heterogeneity in cognitive functioning, personality, social relationships, lifestyle, beliefs, and satisfaction with life. This heterogeneity reflects differences in rates of biogenetic and psychological aging and the sociocultural contexts and history of people's lives (Bronfenbrenner, 1979; Fingerman, Berg, Smith, & Antonucci, 2011). Theories of aging describe how these multiple factors interact and change over time. They describe why functioning differs on average between young, middle-aged, young-old, and very old adults and why there is heterogeneity within these age groups. Life course theories, for example, highlight the effects of social expectations and the normative timing of life events and social roles (e.g., becoming a parent, retirement). They also consider the lifelong cumulative effects of membership in specific cohorts (generations) and sociocultural subgroups (e.g., race, gender, socioeconomic status) and exposure to historical events (e.g., war, revolution, natural disasters; Elder, Johnson, & Crosnoe, 2003; Settersten, 2005). Life span theories complement the life-course perspective with a greater focus on processes within the individual (e.g., the aging brain). 
This approach emphasizes the patterning of lifelong intra- and inter-individual differences in the shape (gain, maintenance, loss), level, and rate of change (Baltes, 1987, 1997). Both life course and life span researchers generally rely on longitudinal studies to examine hypotheses about different patterns of aging associated with the effects of biogenetic, life history, social, and personal factors. Cross-sectional studies provide information about age-group differences, but these are confounded with cohort, time of study, and historical effects. Cognitive Aging Researchers have identified areas of both losses and gains in cognition in older age. Cognitive ability and intelligence are often measured using standardized tests and validated measures. The psychometric approach has identified two categories of intelligence that show different rates of change across the life span (Schaie & Willis, 1996). Fluid intelligence refers to information processing abilities, such as logical reasoning, remembering lists, spatial ability, and reaction time. Crystallized intelligence encompasses abilities that draw upon experience and knowledge. Measures of crystallized intelligence include vocabulary tests, solving number problems, and understanding texts. With age, systematic declines are observed on cognitive tasks requiring self-initiated, effortful processing, without the aid of supportive memory cues (Park, 2000). Older adults tend to perform more poorly than young adults on memory tasks that involve recall of information, where individuals must retrieve information they learned previously without the help of a list of possible choices. For example, older adults may have more difficulty recalling facts such as names or contextual details about where or when something happened (Craik, 2000). What might explain these deficits as we age? As we age, working memory, or our ability to simultaneously store and use information, becomes less efficient (Craik & Bialystok, 2006). The ability to process information quickly also decreases with age. This slowing of processing speed may explain age differences on many different cognitive tasks (Salthouse, 2004). Some researchers have argued that inhibitory functioning, or the ability to focus on certain information while suppressing attention to less pertinent information, declines with age and may explain age differences in performance on cognitive tasks (Hasher & Zacks, 1988). Finally, it is well established that our hearing and vision decline as we age. Longitudinal research has proposed that deficits in sensory functioning explain age differences in a variety of cognitive abilities (Baltes & Lindenberger, 1997). Fewer age differences are observed when memory cues are available, such as for recognition memory tasks, or when individuals can draw upon acquired knowledge or experience. For example, older adults often perform as well as, if not better than, young adults on tests of word knowledge or vocabulary. With age often comes expertise, and research has pointed to areas where aging experts perform as well as or better than younger individuals. For example, older typists were found to compensate for age-related declines in speed by looking farther ahead at printed text (Salthouse, 1984). Compared to younger players, older chess experts are able to focus on a smaller set of possible moves, leading to greater cognitive efficiency (Charness, 1981).
Accrued knowledge of everyday tasks, such as grocery prices, can help older adults to make better decisions than young adults (Tentori, Osherson, Hasher, & May, 2001). How do changes or maintenance of cognitive ability affect older adults’ everyday lives? Researchers have studied cognition in the context of several different everyday activities. One example is driving. Although older adults often have more years of driving experience, cognitive declines related to reaction time or attentional processes may pose limitations under certain circumstances (Park & Gutchess, 2000). Research on interpersonal problem solving suggests that older adults use more effective strategies than younger adults to navigate through social and emotional problems (Blanchard-Fields, 2007). In the context of work, researchers rarely find that older individuals perform more poorly on the job (Park & Gutchess, 2000). Similar to everyday problem solving, older workers may develop more efficient strategies and rely on expertise to compensate for cognitive decline. Research on adult personality examines normative age-related increases and decreases in the expression of the so-called "Big Five" traits—extraversion, neuroticism, conscientiousness, agreeableness, and openness to new experience. Does personality change throughout adulthood? Previously, the answer was no, but contemporary research shows that although some people’s personalities are relatively stable over time, others’ are not (Lucas & Donnellan, 2011; Roberts & Mroczek, 2008). Longitudinal studies reveal average changes during adulthood in the expression of some traits (e.g., neuroticism and openness decrease with age and conscientiousness increases) and individual differences in these patterns due to idiosyncratic life events (e.g., divorce, illness). Longitudinal research also suggests that adult personality traits, such as conscientiousness, predict important life outcomes including job success, health, and longevity (Friedman, Tucker, Tomlinson-Keasey, Schwartz, Wingard, & Criqui, 1993; Roberts, Kuncel, Shiner, Caspi, & Goldberg, 2007). In contrast to the relative stability of personality traits, theories about the aging self propose changes in self-related knowledge, beliefs, and autobiographical narratives. Responses to questions such as "Tell me something about yourself. Who are you?" or "What are your hopes for the future?" provide insight into the characteristics and life themes that an individual considers to uniquely distinguish him- or herself from others. These self-descriptions enhance self-esteem and guide behavior (Markus & Nurius, 1986; McAdams, 2006). Theory suggests that as we age, themes that were relatively unimportant in young and middle adulthood gain in salience (e.g., generativity, health) and that people view themselves as improving over time (Ross & Wilson, 2003). Reorganizing personal life narratives and self-descriptions are the major tasks of midlife and young-old age due to transformations in professional and family roles and obligations. In advanced old age, self-descriptions are often characterized by a life review and reflections about having lived a long life. Birren and Schroots (2006), for example, found the process of life review in late life helped individuals confront and cope with the challenges of old age. One aspect of the self that particularly interests life span and life course psychologists is the individual’s perception and evaluation of their own aging and identification with an age group.
Subjective age is a multidimensional construct that indicates how old (or young) a person feels and into which age group a person categorizes him- or herself. After early adulthood, most people say that they feel younger than their chronological age and the gap between subjective age and actual age generally increases. On average, after age 40 people report feeling 20% younger than their actual age (e.g., Rubin & Berntsen, 2006). Asking people how satisfied they are with their own aging assesses an evaluative component of age identity. Whereas some aspects of age identity are positively valued (e.g., acquiring seniority in a profession or becoming a grandparent), others may be less valued, depending on societal context. Perceived physical age (i.e., the age one looks in a mirror) is one aspect that requires considerable self-related adaptation in social and cultural contexts that value young bodies. Feeling younger and being satisfied with one’s own aging are expressions of positive self-perceptions of aging. They reflect the operation of self-related processes that enhance well-being. Levy (2009) found that older individuals who are able to adapt to and accept changes in their appearance and physical capacity in a positive way report higher well-being, have better health, and live longer. Social Relationships Social ties to family, friends, mentors, and peers are primary resources of information, support, and comfort. Individuals develop and age together with family and friends and interact with others in the community. Across the life course, social ties are accumulated, lost, and transformed. Already in early life, there are multiple sources of heterogeneity in the characteristics of each person's social network of relationships (e.g., size, composition, and quality). Life course and life span theories and research about age-related patterns in social relationships focus on understanding changes in the processes underlying social connections. Antonucci's Convoy Model of Social Relations (2001; Kahn & Antonucci, 1980), for example, suggests that the social connections that people accumulate are held together by exchanges in social support (e.g., tangible and emotional). The frequency, types, and reciprocity of the exchanges change with age and in response to need, and in turn, these exchanges impact the health and well-being of the givers and receivers in the convoy. In many relationships, it is not the actual objective exchange of support that is critical but instead the perception that support is available if needed (Uchino, 2009). Carstensen’s Socioemotional Selectivity Theory (1993; Carstensen, Isaacowitz, & Charles, 1999) focuses on changes in motivation for actively seeking social contact with others. She proposes that with increasing age our motivational goals change from information gathering to emotion regulation. To optimize the experience of positive affect, older adults actively restrict their social life to prioritize time spent with emotionally close significant others. In line with this, older marriages are found to be characterized by enhanced positive and reduced negative interactions and older partners show more affectionate behavior during conflict discussions than do middle-aged partners (Carstensen, Gottman, & Levenson, 1995). Research showing that older adults have smaller networks compared to young adults and tend to avoid negative interactions also supports this theory. 
Similar selective processes are also observed when time horizons for interactions with close partners shrink temporarily for young adults (e.g., impending geographical separations). Much research focuses on the associations between specific effects of long-term social relationships and health in later life. Older married individuals who receive positive social and emotional support from their partner generally report better health than their unmarried peers (Antonucci, 2001; Umberson, Williams, Powers, Liu, & Needham, 2006; Waite & Gallagher, 2000). Despite the overall positive health effects of being married in old age (compared with being widowed, divorced, or single), living as a couple can have a "dark side" if the relationship is strained or if one partner is the primary caregiver. The consequences of positive and negative aspects of relationships are complex (Birditt & Antonucci, 2008; Rook, 1998; Uchino, 2009). For example, in some circumstances, criticism from a partner may be perceived as valid and useful feedback whereas in others it is considered unwarranted and hurtful. In long-term relationships, habitual negative exchanges might have diminished effects. Parent-child and sibling relationships are often the most long-term and emotion-laden social ties. Across the life span, the parent-child tie, for example, is characterized by a paradox of solidarity, conflict, and ambivalence (Fingerman, Chen, Hay, Cichy, & Lefkowitz, 2006). Emotion and Well-being As we get older, the likelihood of losing loved ones or experiencing declines in health increases. Does the experience of such losses result in decreases in well-being in older adulthood? Researchers have found that well-being differs across the life span and that the patterns of these differences depend on how well-being is measured. Measures of global subjective well-being assess individuals’ overall perceptions of their lives. This can include questions about life satisfaction or judgments of whether individuals are currently living the best life possible. What factors may contribute to how people respond to these questions? Age, health, personality, social support, and life experiences have been shown to influence judgments of global well-being. It is important to note that predictors of well-being may change as we age. What is important to life satisfaction in young adulthood can be different in later adulthood (George, 2010). Early research on well-being argued that life events such as marriage or divorce can temporarily influence well-being, but people quickly adapt and return to a neutral baseline (called the hedonic treadmill; Diener, Lucas, & Scollon, 2006). More recent research suggests otherwise. Using longitudinal data, researchers have examined well-being prior to, during, and after major life events such as widowhood, marriage, and unemployment (Lucas, 2007). Different life events influence well-being in different ways, and individuals do not often adapt back to baseline levels of well-being. The influence of events, such as unemployment, may have a lasting negative influence on well-being as people age. Research suggests that global well-being is highest in early and later adulthood and lowest in midlife (Stone, Schwartz, Broderick, & Deaton, 2010). Hedonic well-being refers to the emotional component of well-being and includes measures of positive (e.g., happiness, contentment) and negative affect (e.g., stress, sadness). 
The pattern of positive affect across the adult life span is similar to that of global well-being, with experiences of positive emotions such as happiness and enjoyment being highest in young and older adulthood. Experiences of negative affect, particularly stress and anger, tend to decrease with age. Experiences of sadness are lowest in early and later adulthood compared to midlife (Stone et al., 2010). Other research finds that older adults report more positive and less negative affect than middle age and younger adults (Magai, 2008; Mroczek, 2001). It should be noted that both global well-being and positive affect tend to taper off during late older adulthood and these declines may be accounted for by increases in health-related losses during these years (Charles & Carstensen, 2010). Psychological well-being aims to evaluate the positive aspects of psychosocial development, as opposed to factors of ill-being, such as depression or anxiety. Ryff’s model of psychological well-being proposes six core dimensions of positive well-being. Older adults tend to report higher environmental mastery (feelings of competence and control in managing everyday life) and autonomy (independence), lower personal growth and purpose in life, and similar levels of positive relations with others as younger individuals (Ryff, 1995). Links between health and interpersonal flourishing, or having high-quality connections with others, may be important in understanding how to optimize quality of life in old age (Ryff & Singer, 2000). Successful Aging and Longevity Increases in average life expectancy in the 20th century and evidence from twin studies that suggests that genes account for only 25% of the variance in human life spans have opened new questions about implications for individuals and society (Christensen, Doblhammer, Rau, & Vaupel, 2009). What environmental and behavioral factors contribute to a healthy long life? Is it possible to intervene to slow processes of aging or to minimize cognitive decline, prevent dementia, and ensure life quality at the end of life (Fratiglioni, Paillard-Borg, & Winblad, 2004; Hertzog, Kramer, Wilson, & Lindenberger, 2009; Lang, Baltes, & Wagner, 2007)? Should interventions focus on late life, midlife, or indeed begin in early life? Suggestions that pathological change (e.g., dementia) is not an inevitable component of aging and that pathology could at least be delayed until the very end of life led to theories about successful aging and proposals about targets for intervention. Rowe and Kahn (1997) defined three criteria of successful aging: (a) the relative avoidance of disease, disability, and risk factors like high blood pressure, smoking, or obesity; (b) the maintenance of high physical and cognitive functioning; and (c) active engagement in social and productive activities. Although such definitions of successful aging are value-laden, research and behavioral interventions have subsequently been guided by this model. For example, research has suggested that age-related declines in cognitive functioning across the adult life span may be slowed through physical exercise and lifestyle interventions (Kramer & Erickson, 2007). It is recognized, however, that societal and environmental factors also play a role and that there is much room for social change and technical innovation to accommodate the needs of the Baby Boomers and later generations as they age in the next decades. 
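Before turning to the resources and questions below, the subjective-age finding mentioned earlier can be made concrete with a minimal arithmetic sketch in Python. The 20%-younger figure is an average reported for people over about age 40; the cutoff used below, the assumption that the gap is negligible before then, and the function name are simplifications for illustration, and individual variation is large.

```python
def estimated_subjective_age(chronological_age: float) -> float:
    """Rough rule of thumb: after about age 40, people report feeling
    roughly 20% younger than their actual age, on average."""
    if chronological_age <= 40:
        return chronological_age  # before midlife the gap is small and variable
    return 0.8 * chronological_age

print(estimated_subjective_age(50))  # -> 40.0
print(estimated_subjective_age(75))  # -> 60.0
```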
Outside Resources Web: Columbia Aging Society http://www.agingsocietynetwork.org/ Web: Columbia International Longevity Center www.mailman.columbia.edu/acad...ledge-transfer Web: National Institute on Aging http://www.nia.nih.gov/ Web: Stanford Center Longevity http://longevity3.stanford.edu/ Discussion Questions 1. How do age stereotypes and intergenerational social interactions shape quality of life in older adults? What are the implications of the research of Levy and others? 2. Researchers suggest that there is both stability and change in Big Five personality traits after age 30. What is stable? What changes? 3. Describe the Social Convoy Model of Antonucci. What are the implications of this model for older adults? 4. Memory declines during adulthood. Is this statement correct? What does research show? 5. Is dementia inevitable in old age? What factors are currently thought to be protective? 6. What are the components of successful aging described by Rowe and Kahn (1998) and others? What outcomes are used to evaluate successful aging? Vocabulary Age identity How old or young people feel compared to their chronological age; after early adulthood, most people feel younger than their chronological age. Autobiographical narratives A qualitative research method used to understand characteristics and life themes that an individual considers to uniquely distinguish him- or herself from others. Average life expectancy Mean number of years that 50% of people in a specific birth cohort are expected to survive. This is typically calculated from birth but is also sometimes re-calculated for people who have already reached a particular age (e.g., 65). Cohort Group of people typically born in the same year or historical period, who share common experiences over time; sometimes called a generation (e.g., Baby Boom Generation). Convoy Model of Social Relations Theory that proposes that the frequency, types, and reciprocity of social exchanges change with age. These social exchanges impact the health and well-being of the givers and receivers in the convoy. Cross-sectional studies Research method that provides information about age group differences; age differences are confounded with cohort differences and effects related to history and time of study. Crystallized intelligence Type of intellectual ability that relies on the application of knowledge, experience, and learned information. Fluid intelligence Type of intelligence that relies on the ability to use information processing resources to reason logically and solve novel problems. Global subjective well-being Individuals’ perceptions of and satisfaction with their lives as a whole. Hedonic well-being Component of well-being that refers to emotional experiences, often including measures of positive (e.g., happiness, contentment) and negative affect (e.g., stress, sadness). Heterogeneity Inter-individual and subgroup differences in level and rate of change over time. Inhibitory functioning Ability to focus on a subset of information while suppressing attention to less relevant information. Intra- and inter-individual differences Different patterns of development observed within an individual (intra-) or between individuals (inter-). Life course theories Theory of development that highlights the effects of social expectations of age-related life events and social roles; additionally considers the lifelong cumulative effects of membership in specific cohorts and sociocultural subgroups and exposure to historical events. 
Life span theories Theory of development that emphasizes the patterning of lifelong within- and between-person differences in the shape, level, and rate of change trajectories. Longitudinal studies Research method that collects information from individuals at multiple time points over time, allowing researchers to track cohort differences in age-related change to determine cumulative effects of different life experiences. Processing speed The time it takes individuals to perform cognitive operations (e.g., process information, react to a signal, switch attention from one task to another, find a specific target object in a complex picture). Psychometric approach Approach to studying intelligence that examines performance on tests of intellectual functioning. Recall Type of memory task where individuals are asked to remember previously learned information without the help of external cues. Recognition Type of memory task where individuals are asked to remember previously learned information with the assistance of cues. Self-perceptions of aging An individual’s perceptions of their own aging process; positive perceptions of aging have been shown to be associated with greater longevity and health. Social network Network of people with whom an individual is closely connected; social networks provide emotional, informational, and material support and offer opportunities for social engagement. Socioemotional Selectivity Theory Theory proposed to explain the reduction of social partners in older adulthood; posits that older adults focus on meeting emotional over information-gathering goals, and adaptively select social partners who meet this need. Subjective age A multidimensional construct that indicates how old (or young) a person feels and into which age group a person categorizes him- or herself Successful aging Includes three components: avoiding disease, maintaining high levels of cognitive and physical functioning, and having an actively engaged lifestyle. Working memory Memory system that allows for information to be simultaneously stored and utilized or manipulated.
textbooks/socialsci/Psychology/Introductory_Psychology/Map%3A_Discover_Psychology_-_A_Brief_Introductory_Text_(Noba)/04%3A_DEVELOPMENTAL_PSYCHOLOGY/4.03%3A_Aging.txt
• 5.1: Sensation and Perception The topics of sensation and perception are among the oldest and most important in all of psychology. People are equipped with senses such as sight, hearing and taste that help us to take in the world around us.  In this module, you will learn about the biological processes of sensation and how these can be combined to create perceptions. 05: SENSATION AND PERCEPTION By Adam John Privitera Chemeketa Community College The topics of sensation and perception are among the oldest and most important in all of psychology. People are equipped with senses such as sight, hearing and taste that help us to take in the world around us. Amazingly, our senses have the ability to convert real-world information into electrical information that can be processed by the brain. The way we interpret this information-- our perceptions-- is what leads to our experiences of the world. In this module, you will learn about the biological processes of sensation and how these can be combined to create perceptions. learning objectives • Differentiate the processes of sensation and perception. • Explain the basic principles of sensation and perception. • Describe the function of each of our senses. • Outline the anatomy of the sense organs and their projections to the nervous system. • Apply knowledge of sensation and perception to real world examples. • Explain the consequences of multimodal perception. Introduction "Once I was hiking at Cape Lookout State Park in Tillamook, Oregon. After passing through a vibrantly colored, pleasantly scented, temperate rainforest, I arrived at a cliff overlooking the Pacific Ocean. I grabbed the cold metal railing near the edge and looked out at the sea. Below me, I could see a pod of sea lions swimming in the deep blue water. All around me I could smell the salt from the sea and the scent of wet, fallen leaves." This description of a single memory highlights the way a person’s senses are so important to our experience of the world around us. Before discussing each of our extraordinary senses individually, it is necessary to cover some basic concepts that apply to all of them. It is probably best to start with one very important distinction that can often be confusing: the difference between sensation and perception. The physical process during which our sensory organs—those involved with hearing and taste, for example—respond to external stimuli is called sensation. Sensation happens when you eat noodles or feel the wind on your face or hear a car horn honking in the distance. During sensation, our sense organs are engaging in transduction, the conversion of one form of energy into another. Physical energy such as light or a sound wave is converted into a form of energy the brain can understand: electrical stimulation. After our brain receives the electrical signals, we make sense of all this stimulation and begin to appreciate the complex world around us. This psychological process—making sense of the stimuli—is called perception. It is during this process that you are able to identify a gas leak in your home or a song that reminds you of a specific afternoon spent with friends. Regardless of whether we are talking about sight or taste or any of the individual senses, there are a number of basic principles that influence the way our sense organs work. The first of these influences is our ability to detect an external stimulus. Each sense organ—our eyes or tongue, for instance—requires a minimal amount of stimulation in order to detect a stimulus. 
This absolute threshold explains why you don’t smell the perfume someone is wearing in a classroom unless they are somewhat close to you. The way we measure absolute thresholds is by using a method called signal detection. This process involves presenting stimuli of varying intensities to a research participant in order to determine the level at which he or she can reliably detect stimulation in a given sense. During one type of hearing test, for example, a person listens to increasingly louder tones (starting from silence) in an effort to determine the threshold at which he or she begins to hear (see Additional Resources for a video demonstration of a high-frequency ringtone that can only be heard by young people). Correctly indicating that a sound was heard is called a hit; failing to do so is called a miss. Additionally, indicating that a sound was heard when one wasn’t played is called a false alarm, and correctly identifying when a sound wasn’t played is a correct rejection. Through these and other studies, we have been able to gain an understanding of just how remarkable our senses are. For example, the human eye is capable of detecting candlelight from 30 miles away in the dark. We are also capable of hearing the ticking of a watch in a quiet environment from 20 feet away. If you think that’s amazing, I encourage you to read more about the extreme sensory capabilities of nonhuman animals; many animals possess what we would consider super-human abilities.

A similar principle to the absolute threshold discussed above underlies our ability to detect the difference between two stimuli of different intensities. The differential threshold, or just noticeable difference (JND), for each sense has been studied using methods similar to signal detection. To illustrate, find a friend and a few objects of known weight (you’ll need objects that weigh 1, 2, 10 and 11 lbs.—or in metric terms: 1, 2, 5 and 5.5 kg). Have your friend hold the lightest object (1 lb. or 1 kg). Then, replace this object with the next heaviest and ask him or her to tell you which one weighs more. Reliably, your friend will say the second object every single time. It’s extremely easy to tell the difference when something weighs double what another weighs! However, it is not so easy when the difference is a smaller percentage of the overall weight. It will be much harder for your friend to reliably tell the difference between 10 and 11 lbs. (or 5 versus 5.5 kg) than it is for 1 and 2 lbs. This phenomenon is called Weber’s Law, and it is the idea that bigger stimuli require larger differences to be noticed (a short worked sketch of these detection and difference-threshold ideas appears at the end of this passage).

Crossing into the world of perception, it is clear that our experience influences how our brain processes things. You have tasted food that you like and food that you don’t like. There are some bands you enjoy and others you can’t stand. However, the first time you eat something or hear a band, you process those stimuli using bottom-up processing. This is when we build up to perception from the individual pieces. Sometimes, though, stimuli we’ve experienced in our past will influence how we process new ones. This is called top-down processing. The best way to illustrate these two concepts is with our ability to read. Read the following quote out loud: Notice anything odd while you were reading the text in the triangle? Did you notice the second “the”? If not, it’s likely because you were reading this from a top-down approach. Having a second “the” doesn’t make sense. We know this.
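To make those two ideas concrete, here is a minimal sketch (in Python) that tallies the four signal-detection outcomes for a set of made-up hearing-test trials and then computes the relative weight differences behind the Weber's Law demonstration. All of the trial data and numbers are hypothetical, chosen only for illustration.

```python
# Minimal illustrative sketch -- hypothetical data, not from any study.
# Each trial records whether a tone was actually played and whether the
# listener reported hearing it.
trials = [
    {"tone_played": True,  "reported": True},   # hit
    {"tone_played": True,  "reported": False},  # miss
    {"tone_played": False, "reported": True},   # false alarm
    {"tone_played": False, "reported": False},  # correct rejection
    {"tone_played": True,  "reported": True},   # hit
]

counts = {"hit": 0, "miss": 0, "false alarm": 0, "correct rejection": 0}
for trial in trials:
    if trial["tone_played"]:
        counts["hit" if trial["reported"] else "miss"] += 1
    else:
        counts["false alarm" if trial["reported"] else "correct rejection"] += 1

hit_rate = counts["hit"] / (counts["hit"] + counts["miss"])
false_alarm_rate = counts["false alarm"] / (
    counts["false alarm"] + counts["correct rejection"]
)
print(counts, hit_rate, false_alarm_rate)

# Weber's Law: what matters is the relative (proportional) difference,
# not the absolute difference. Both comparisons below differ by 1 unit.
def relative_difference(reference, comparison):
    return abs(comparison - reference) / reference

print(relative_difference(1, 2))    # 1.0  -> easy to notice
print(relative_difference(10, 11))  # 0.1  -> much harder to notice
```

The same one-unit change is a 100% increase in one case and only a 10% increase in the other, which is why the second comparison is so much harder. Now, back to the doubled “the” in the reading demonstration.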
Our brain knows this and doesn’t expect there to be a second one, so we have a tendency to skip right over it. In other words, your past experience has changed the way you perceive the writing in the triangle! A beginning reader—one who is using a bottom-up approach by carefully attending to each piece—would be less likely to make this error. Finally, it should be noted that when we experience a sensory stimulus that doesn’t change, we stop paying attention to it. This is why we don’t feel the weight of our clothing, hear the hum of a projector in a lecture hall, or see all the tiny scratches on the lenses of our glasses. When a stimulus is constant and unchanging, we experience sensory adaptation. During this process we become less sensitive to that stimulus. A great example of this occurs when we leave the radio on in our car after we park it at home for the night. When we listen to the radio on the way home from work the volume seems reasonable. However, the next morning when we start the car, we might be startled by how loud the radio is. We don’t remember it being that loud last night. What happened? What happened is that we adapted to the constant stimulus of the radio volume over the course of the previous day. This required us to continue to turn up the volume of the radio to combat the constantly decreasing sensitivity. However, after a number of hours away from that constant stimulus, the volume that was once reasonable is entirely too loud. We are no longer adapted to that stimulus! Now that we have introduced some basic sensory principles, let us take on each one of our fascinating senses individually. Vision How vision works Vision is a tricky matter. When we see a pizza, a feather, or a hammer, we are actually seeing light bounce off that object and into our eye. Light enters the eye through the pupil, a tiny opening behind the cornea. The pupil regulates the amount of light entering the eye by contracting (getting smaller) in bright light and dilating (getting larger) in dimmer light. Once past the pupil, light passes through the lens, which focuses an image on a thin layer of cells in the back of the eye, called the retina. Because we have two eyes in different locations, the image focused on each retina is from a slightly different angle (binocular disparity), providing us with our perception of 3D space (binocular vision). You can appreciate this by holding a pen in your hand, extending your arm in front of your face, and looking at the pen while closing each eye in turn. Pay attention to the apparent position of the pen relative to objects in the background. Depending on which eye is open, the pen appears to jump back and forth! This is how video game manufacturers create the perception of 3D without special glasses; two slightly different images are presented on top of one another. It is in the retina that light is transduced, or converted into electrical signals, by specialized cells called photoreceptors. The retina contains two main kinds of photoreceptors: rods and cones. Rods are primarily responsible for our ability to see in dim light conditions, such as during the night. Cones, on the other hand, provide us with the ability to see color and fine detail when the light is brighter. Rods and cones differ in their distribution across the retina, with the highest concentration of cones found in the fovea (the central region of focus), and rods dominating the periphery (see Figure 8.1.2). 
The difference in distribution can explain why looking directly at a dim star in the sky makes it seem to disappear; there aren’t enough rods to process the dim light! Next, the electrical signal is sent through a layer of cells in the retina, eventually traveling down the optic nerve. After passing through the thalamus, this signal makes it to the primary visual cortex, where information about light orientation and movement begins to come together (Hubel & Wiesel, 1962). Information is then sent to a variety of different areas of the cortex for more complex processing. Some of these cortical regions are fairly specialized—for example, for processing faces (fusiform face area) and body parts (extrastriate body area). Damage to these areas of the cortex can potentially result in a specific kind of agnosia, whereby a person loses the ability to perceive visual stimuli. A great example of this is illustrated in the writing of the famous neurologist Dr. Oliver Sacks; he experienced prosopagnosia, the inability to recognize faces. These specialized regions for visual recognition comprise the ventral pathway (also called the “what” pathway). Other areas involved in processing location and movement make up the dorsal pathway (also called the “where” pathway). Together, these pathways process a large amount of information about visual stimuli (Goodale & Milner, 1992). Phenomena we often refer to as optical illusions provide misleading information to these “higher” areas of visual processing (see Additional Resources for websites containing amazing optical illusions).

Dark and light adaptation

Humans have the ability to adapt to changes in light conditions. As mentioned before, rods are primarily involved in our ability to see in dim light. They are the photoreceptors responsible for allowing us to see in a dark room. You might notice that this night vision ability takes around 10 minutes to turn on, a process called dark adaptation. This is because our rods become bleached in normal light conditions and require time to recover. We experience the opposite effect when we leave a dark movie theatre and head out into the afternoon sun. During light adaptation, a large number of rods and cones are bleached at once, causing us to be blinded for a few seconds. Light adaptation happens almost instantly compared with dark adaptation. Interestingly, some people think pirates wore a patch over one eye in order to keep it adapted to the dark while the other was adapted to the light. If you want to turn on a light without losing your night vision, don’t worry about wearing an eye patch; just use a red light, since this wavelength doesn’t bleach your rods.

Color vision

Our cones allow us to see details in normal light conditions, as well as color. We have cones that respond preferentially, not exclusively, to red, green, and blue (Svaetichin, 1955). This trichromatic theory is not new; it dates back to the early 19th century (Young, 1802; Von Helmholtz, 1867). This theory, however, does not explain the odd effect that occurs when we look at a white wall after staring at a picture for around 30 seconds. Try this: stare at the image of the flag in Figure 8.1.3 for 30 seconds and then immediately look at a sheet of white paper or a wall. According to the trichromatic theory of color vision, you should see white when you do that. Is that what you experienced? As you can see, the trichromatic theory doesn’t explain the afterimage you just witnessed. This is where the opponent-process theory comes in (Hering, 1920).
This theory states that our cones send information to retinal ganglion cells that respond to pairs of colors (red-green, blue-yellow, black-white). These specialized cells take information from the cones and compute the difference between the two colors—a process that explains why we cannot see reddish-green or bluish-yellow, as well as why we see afterimages. Color blindness can result from issues with the cones or retinal ganglion cells involved in color vision. Hearing (Audition) Some of the most well-known celebrities and top earners in the world are musicians. Our worship of musicians may seem silly when you consider that all they are doing is vibrating the air a certain way to create sound waves, the physical stimulus for audition. People are capable of getting a large amount of information from the basic qualities of sound waves. The amplitude (or intensity) of a sound wave codes for the loudness of a stimulus; higher amplitude sound waves result in louder sounds. The pitch of a stimulus is coded in the frequency of a sound wave; higher frequency sounds are higher pitched. We can also gauge the quality, or timbre, of a sound by the complexity of the sound wave. This allows us to tell the difference between bright and dull sounds as well as natural and synthesized instruments (Välimäki & Takala, 1996). In order for us to sense sound waves from our environment they must reach our inner ear. Lucky for us, we have evolved tools that allow those waves to be funneled and amplified during this journey. Initially, sound waves are funneled by your pinna (the external part of your ear that you can actually see) into your auditory canal (the hole you stick Q-tips into despite the box advising against it). During their journey, sound waves eventually reach a thin, stretched membrane called the tympanic membrane (eardrum), which vibrates against the three smallest bones in the body—the malleus (hammer), the incus (anvil), and the stapes (stirrup)—collectively called the ossicles. Both the tympanic membrane and the ossicles amplify the sound waves before they enter the fluid-filled cochlea, a snail-shell-like bone structure containing auditory hair cells arranged on the basilar membrane (see Figure 8.1.4) according to the frequency they respond to (called tonotopic organization). Depending on age, humans can normally detect sounds between 20 Hz and 20 kHz. It is inside the cochlea that sound waves are converted into an electrical message. Because we have an ear on each side of our head, we are capable of localizing sound in 3D space pretty well (in the same way that having two eyes produces 3D vision). Have you ever dropped something on the floor without seeing where it went? Did you notice that you were somewhat capable of locating this object based on the sound it made when it hit the ground? We can reliably locate something based on which ear receives the sound first. What about the height of a sound? If both ears receive a sound at the same time, how are we capable of localizing sound vertically? Research in cats (Populin & Yin, 1998) and humans (Middlebrooks & Green, 1991) has pointed to differences in the quality of sound waves depending on vertical positioning. After being processed by auditory hair cells, electrical signals are sent through the cochlear nerve (a division of the vestibulocochlear nerve) to the thalamus, and then the primary auditory cortex of the temporal lobe. 
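The module describes loudness as coded by a sound wave's amplitude, pitch by its frequency, and timbre by its complexity. A toy sketch can make those mappings concrete; the sample rate, frequencies, and amplitudes below are arbitrary illustrative choices and are not values taken from the module.

```python
import numpy as np

sample_rate = 44100  # samples per second (a common audio rate)
t = np.linspace(0, 1.0, sample_rate, endpoint=False)  # one second of time points

def pure_tone(frequency_hz, amplitude):
    """A pure tone is a sine wave: amplitude relates to loudness, frequency to pitch."""
    return amplitude * np.sin(2 * np.pi * frequency_hz * t)

quiet_low = pure_tone(220, 0.2)   # lower-pitched, quieter tone
loud_high = pure_tone(880, 0.8)   # higher-pitched, louder tone

# Timbre relates to the complexity of the wave: stacking harmonics on a
# fundamental keeps the pitch but changes the "color" of the sound.
complex_tone = (pure_tone(220, 0.5)
                + pure_tone(440, 0.25)
                + pure_tone(660, 0.125))
```

Waves like these, once funneled through the ear and transduced by the hair cells of the cochlea, end up as electrical signals in the primary auditory cortex.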
Interestingly, the tonotopic organization of the cochlea is maintained in this area of the cortex (Merzenich, Knight, & Roth, 1975; Romani, Williamson, & Kaufman, 1982). However, the role of the primary auditory cortex in processing the wide range of features of sound is still being explored (Walker, Bizley, & Schnupp, 2011). Balance and the vestibular system The inner ear isn’t only involved in hearing; it’s also associated with our ability to balance and detect where we are in space. The vestibular system is comprised of three semicircular canals—fluid-filled bone structures containing cells that respond to changes in the head’s orientation in space. Information from the vestibular system is sent through the vestibular nerve (the other division of the vestibulocochlear nerve) to muscles involved in the movement of our eyes, neck, and other parts of our body. This information allows us to maintain our gaze on an object while we are in motion. Disturbances in the vestibular system can result in issues with balance, including vertigo. Touch Who doesn’t love the softness of an old t-shirt or the smoothness of a clean shave? Who actually enjoys having sand in their swimsuit? Our skin, the body’s largest organ, provides us with all sorts of information, such as whether something is smooth or bumpy, hot or cold, or even if it’s painful. Somatosensation—which includes our ability to sense touch, temperature and pain—transduces physical stimuli, such as fuzzy velvet or scalding water, into electrical potentials that can be processed by the brain. Tactile sensation Tactile stimuli—those that are associated with texture—are transduced by special receptors in the skin called mechanoreceptors. Just like photoreceptors in the eye and auditory hair cells in the ear, these allow for the conversion of one kind of energy into a form the brain can understand. After tactile stimuli are converted by mechanoreceptors, information is sent through the thalamus to the primary somatosensory cortex for further processing. This region of the cortex is organized in a somatotopic map where different regions are sized based on the sensitivity of specific parts on the opposite side of the body (Penfield & Rasmussen, 1950). Put simply, various areas of the skin, such as lips and fingertips, are more sensitive than others, such as shoulders or ankles. This sensitivity can be represented with the distorted proportions of the human body shown in Figure 8.1.5. Pain Most people, if asked, would love to get rid of pain (nociception), because the sensation is very unpleasant and doesn’t appear to have obvious value. But the perception of pain is our body’s way of sending us a signal that something is wrong and needs our attention. Without pain, how would we know when we are accidentally touching a hot stove, or that we should rest a strained arm after a hard workout? Phantom limbs Records of people experiencing phantom limbs after amputations have been around for centuries (Mitchell, 1871). As the name suggests, people with a phantom limb have the sensations such as itching seemingly coming from their missing limb. A phantom limb can also involve phantom limb pain, sometimes described as the muscles of the missing limb uncomfortably clenching. 
While the mechanisms underlying these phenomena are not fully understood, there is evidence to support that the damaged nerves from the amputation site are still sending information to the brain (Weinstein, 1998) and that the brain is reacting to this information (Ramachandran & Rogers-Ramachandran, 2000). There is an interesting treatment for the alleviation of phantom limb pain that works by tricking the brain, using a special mirror box to create a visual representation of the missing limb. The technique allows the patient to manipulate this representation into a more comfortable position (Ramachandran & Rogers-Ramachandran, 1996). Smell and Taste: The Chemical Senses The two most underappreciated senses can be lumped into the broad category of chemical senses. Both olfaction (smell) and gustation (taste) require the transduction of chemical stimuli into electrical potentials. I say these senses are underappreciated because most people would give up either one of these if they were forced to give up a sense. While this may not shock a lot of readers, take into consideration how much money people spend on the perfume industry annually (\$29 billion US Dollars). Many of us pay a lot more for a favorite brand of food because we prefer the taste. Clearly, we humans care about our chemical senses. Olfaction (smell) Unlike any of the other senses discussed so far, the receptors involved in our perception of both smell and taste bind directly with the stimuli they transduce. Odorants in our environment, very often mixtures of them, bind with olfactory receptors found in the olfactory epithelium. The binding of odorants to receptors is thought to be similar to how a lock and key operates, with different odorants binding to different specialized receptors based on their shape. However, the shape theory of olfaction isn’t universally accepted and alternative theories exist, including one that argues that the vibrations of odorant molecules correspond to their subjective smells (Turin, 1996). Regardless of how odorants bind with receptors, the result is a pattern of neural activity. It is thought that our memories of these patterns of activity underlie our subjective experience of smell (Shepherd, 2005). Interestingly, because olfactory receptors send projections to the brain through the cribriform plate of the skull, head trauma has the potential to cause anosmia, due to the severing of these connections. If you are in a line of work where you constantly experience head trauma (e.g. professional boxer) and you develop anosmia, don’t worry—your sense of smell will probably come back (Sumner, 1964). Gustation (taste) Taste works in a similar fashion to smell, only with receptors found in the taste buds of the tongue, called taste receptor cells. To clarify a common misconception, taste buds are not the bumps on your tongue (papillae), but are located in small divots around these bumps. These receptors also respond to chemicals from the outside environment, except these chemicals, called tastants, are contained in the foods we eat. The binding of these chemicals with taste receptor cells results in our perception of the five basic tastes: sweet, sour, bitter, salty and umami (savory)—although some scientists argue that there are more (Stewart et al., 2010). 
Researchers used to think these tastes formed the basis for a map-like organization of the tongue; there was even a clever rationale for the concept, about how the back of the tongue sensed bitter so we would know to spit out poisons, and the front of the tongue sensed sweet so we could identify high-energy foods. However, we now know that all areas of the tongue with taste receptor cells are capable of responding to every taste (Chandrashekar, Hoon, Ryba, & Zuker, 2006). During the process of eating we are not limited to our sense of taste alone. While we are chewing, food odorants are forced back up to areas that contain olfactory receptors. This combination of taste and smell gives us the perception of flavor. If you have doubts about the interaction between these two senses, I encourage you to think back to consider how the flavors of your favorite foods are impacted when you have a cold; everything is pretty bland and boring, right? Putting it all Together: Multimodal Perception Though we have spent the majority of this module covering the senses individually, our real-world experience is most often multimodal, involving combinations of our senses into one perceptual experience. This should be clear after reading the description of walking through the forest at the beginning of the module; it was the combination of senses that allowed for that experience. It shouldn’t shock you to find out that at some point information from each of our senses becomes integrated. Information from one sense has the potential to influence how we perceive information from another, a process called multimodal perception. Interestingly, we actually respond more strongly to multimodal stimuli compared to the sum of each single modality together, an effect called the superadditive effect of multisensory integration. This can explain how you’re still able to understand what friends are saying to you at a loud concert, as long as you are able to get visual cues from watching them speak. If you were having a quiet conversation at a café, you likely wouldn’t need these additional cues. In fact, the principle of inverse effectiveness states that you are less likely to benefit from additional cues from other modalities if the initial unimodal stimulus is strong enough (Stein & Meredith, 1993). Because we are able to process multimodal sensory stimuli, and the results of those processes are qualitatively different from those of unimodal stimuli, it’s a fair assumption that the brain is doing something qualitatively different when they’re being processed. There has been a growing body of evidence since the mid-90’s on the neural correlates of multimodal perception. For example, neurons that respond to both visual and auditory stimuli have been identified in the superior temporal sulcus (Calvert, Hansen, Iversen, & Brammer, 2001). Additionally, multimodal “what” and “where” pathways have been proposed for auditory and tactile stimuli (Renier et al., 2009). We aren’t limited to reading about these regions of the brain and what they do; we can experience them with a few interesting examples (see Additional Resources for the “McGurk Effect,” the “Double Flash Illusion,” and the “Rubber Hand Illusion”). Conclusion Our impressive sensory abilities allow us to experience the most enjoyable and most miserable experiences, as well as everything in between. Our eyes, ears, nose, tongue and skin provide an interface for the brain to interact with the world around us. 
While there is simplicity in covering each sensory modality independently, we are organisms that have evolved the ability to process multiple modalities as a unified experience. Outside Resources Audio: Auditory Demonstrations from Richard Warren’s lab at the University of Wisconsin, Milwaukee www4.uwm.edu/APL/demonstrations.html Audio: Auditory Demonstrations. CD published by the Acoustical Society of America (ASA). You can listen to the demonstrations here www.feilding.net/sfuad/musi30...1/demos/audio/ Book: Ackerman, D. (1990). A natural history of the senses. Vintage. http://www.dianeackerman.com/a-natur...diane-ackerman Book: Sacks, O. (1998). The man who mistook his wife for a hat: And other clinical tales. Simon and Schuster. http://www.oliversacks.com/books-by-...took-wife-hat/ Video: Acquired knowledge and its impact on our three-dimensional interpretation of the world - 3D Street Art Video: Acquired knowledge and its impact on our three-dimensional interpretation of the world - Anamorphic Illusions Video: Cybersenses Video: Seeing Sound, Tasting Color Video: The Phantom Limb Phenomenon Web: A regularly updated website covering some of the amazing sensory capabilities of non-human animals. phenomena.nationalgeographic....animal-senses/ Web: A special ringtone that is only audible to younger people. Web: Amazing library with visual phenomena and optical illusions, explained http://michaelbach.de/ot/index.html Web: An article on the discoveries in echolocation: the use of sound in locating people and things http://www.psychologicalscience.org/...et-around.html Web: An optical illusion demonstration the opponent-process theory of color vision. Web: Anatomy of the eye http://www.eyecareamerica.org/eyecare/anatomy/ Web: Animation showing tonotopic organization of the basilar membrane. Web: Best Illusion of the Year Contest website http://illusionoftheyear.com/ Web: Demonstration of contrast gain adaptation http://www.michaelbach.de/ot/lum_contrast-adapt/ Web: Demonstration of illusory contours and lateral inhibition. Mach bands http://michaelbach.de/ot/lum-MachBands/index.html Web: Demonstration of illusory contrast and lateral inhibition. The Hermann grid http://michaelbach.de/ot/lum_herGrid/ Web: Demonstrations and illustrations of cochlear mechanics can be found here http://lab.rockefeller.edu/hudspeth/...calSimulations Web: Double Flash Illusion Web: Further information regarding what and where/how pathways http://www.scholarpedia.org/article/...where_pathways Web: Great website with a large collection of optical illusions http://www.michaelbach.de/ot/ Web: McGurk Effect Video Web: More demonstrations and illustrations of cochlear mechanics www.neurophys.wisc.edu/animations/ Web: Scientific American Frontiers: Cybersenses www.pbs.org/saf/1509/ Web: The Genetics of Taste http://www.smithsonianmag.com/arts-c...797110/?no-ist Web: The Monell Chemical Sense Center website http://www.monell.org/ Web: The Rubber Hand Illusion Web: The Tongue Map: Tasteless Myth Debunked http://www.livescience.com/7113-tong...-debunked.html Discussion Questions 1. What physical features would an organism need in order to be really good at localizing sound in 3D space? Are there any organisms that currently excel in localizing sound? What features allow them to do this? 2. What issues would exist with visual recognition of an object if a research participant had his/her corpus callosum severed? What would you need to do in order to observe these deficits? 3. 
There are a number of myths that exist about the sensory capabilities of infants. How would you design a study to determine what the true sensory capabilities of infants are?
4. A well-documented phenomenon experienced by millennials is the phantom vibration of a cell phone when no actual text message has been received. How can we use signal detection theory to explain this?

Vocabulary
• Absolute threshold: The smallest amount of stimulation needed for detection by a sense.
• Agnosia: Loss of the ability to perceive stimuli.
• Anosmia: Loss of the ability to smell.
• Audition: Ability to process auditory stimuli. Also called hearing.
• Auditory canal: Tube running from the outer ear to the middle ear.
• Auditory hair cells: Receptors in the cochlea that transduce sound into electrical potentials.
• Binocular disparity: Difference in the images processed by the left and right eyes.
• Binocular vision: Our ability to perceive 3D and depth because of the difference between the images on each of our retinas.
• Bottom-up processing: Building up to perceptual experience from individual pieces.
• Chemical senses: Our ability to process the environmental stimuli of smell and taste.
• Cochlea: Spiral bone structure in the inner ear containing auditory hair cells.
• Cones: Photoreceptors of the retina sensitive to color. Located primarily in the fovea.
• Dark adaptation: Adjustment of the eye to low levels of light.
• Differential threshold: The smallest difference needed in order to differentiate two stimuli. (See just noticeable difference (JND).)
• Dorsal pathway: Pathway of visual processing. The “where” pathway.
• Flavor: The combination of smell and taste.
• Gustation: Ability to process gustatory stimuli. Also called taste.
• Just noticeable difference (JND): The smallest difference needed in order to differentiate two stimuli. (See differential threshold.)
• Light adaptation: Adjustment of the eye to high levels of light.
• Mechanoreceptors: Mechanical sensory receptors in the skin that respond to tactile stimulation.
• Multimodal perception: The effects that concurrent stimulation in more than one sensory modality has on the perception of events and objects in the world.
• Nociception: Our ability to sense pain.
• Odorants: Chemicals transduced by olfactory receptors.
• Olfaction: Ability to process olfactory stimuli. Also called smell.
• Olfactory epithelium: Organ containing olfactory receptors.
• Opponent-process theory: Theory proposing color vision as influenced by cells responsive to pairs of colors.
• Ossicles: A collection of three small bones in the middle ear that vibrate against the tympanic membrane.
• Perception: The psychological process of interpreting sensory information.
• Phantom limb: The perception that a missing limb still exists.
• Phantom limb pain: Pain in a limb that no longer exists.
• Pinna: Outermost portion of the ear.
• Primary auditory cortex: Area of the cortex involved in processing auditory stimuli.
• Primary somatosensory cortex: Area of the cortex involved in processing somatosensory stimuli.
• Primary visual cortex: Area of the cortex involved in processing visual stimuli.
• Principle of inverse effectiveness: The finding that, in general, for a multimodal stimulus, if the response to each unimodal component (on its own) is weak, then the opportunity for multisensory enhancement is very large. However, if one component—by itself—is sufficient to evoke a strong response, then the effect on the response gained by simultaneously processing the other components of the stimulus will be relatively small.
• Retina: Cell layer in the back of the eye containing photoreceptors.
• Rods: Photoreceptors of the retina sensitive to low levels of light. Located primarily in the periphery, outside the fovea.
• Sensation: The physical processing of environmental stimuli by the sense organs.
• Sensory adaptation: Decrease in sensitivity of a receptor to a stimulus after constant stimulation.
• Shape theory of olfaction: Theory proposing that odorants of different size and shape correspond to different smells.
• Signal detection: Method for studying the ability to correctly identify sensory stimuli.
• Somatosensation: Ability to sense touch, pain and temperature.
• Somatotopic map: Organization of the primary somatosensory cortex maintaining a representation of the arrangement of the body.
• Sound waves: Changes in air pressure. The physical stimulus for audition.
• Superadditive effect of multisensory integration: The finding that responses to multimodal stimuli are typically greater than the sum of the independent responses to each unimodal component if it were presented on its own.
• Tastants: Chemicals transduced by taste receptor cells.
• Taste receptor cells: Receptors that transduce gustatory information.
• Top-down processing: Experience influencing the perception of stimuli.
• Transduction: The conversion of one form of energy into another.
• Trichromatic theory: Theory proposing color vision as influenced by three different cones responding preferentially to red, green and blue.
• Tympanic membrane: Thin, stretched membrane in the middle ear that vibrates in response to sound. Also called the eardrum.
• Ventral pathway: Pathway of visual processing. The “what” pathway.
• Vestibular system: Parts of the inner ear involved in balance.
• Weber’s law: States that the just noticeable difference is proportional to the magnitude of the initial stimulus.
• 6.1: States of Consciousness No matter what you’re doing--solving homework, playing a video game, simply picking out a shirt--all of your actions and decisions relate to your consciousness. But as frequently as we use it, have you ever stopped to ask yourself: What really is consciousness? In this module, we discuss the different levels of consciousness and how they can affect your behavior in a variety of situations. As well, we explore the role of consciousness in other, “altered” states like hypnosis and sleep. 06: CONSCIOUSNESS By Robert Biswas-Diener and Jake Teeny Portland State University, The Ohio State University No matter what you’re doing--solving homework, playing a video game, simply picking out a shirt--all of your actions and decisions relate to your consciousness. But as frequently as we use it, have you ever stopped to ask yourself: What really is consciousness? In this module, we discuss the different levels of consciousness and how they can affect your behavior in a variety of situations. As well, we explore the role of consciousness in other, “altered” states like hypnosis and sleep. Learning Objectives • Define consciousness and distinguish between high and low conscious states • Explain the relationship between consciousness and bias • Understand the difference between popular portrayals of hypnosis and how it is currently used therapeutically Introduction Have you ever had a fellow motorist stopped beside you at a red light, singing his brains out, or picking his nose, or otherwise behaving in ways he might not normally do in public? There is something about being alone in a car that encourages people to zone out and forget that others can see them. Although these little lapses of attention are amusing for the rest of us, they are also instructive when it comes to the topic of consciousness. Consciousness is a term meant to indicate awareness. It includes awareness of the self, of bodily sensations, of thoughts and of the environment. In English, we use the opposite word “unconscious” to indicate senselessness or a barrier to awareness, as in the case of “Theresa fell off the ladder and hit her head, knocking herself unconscious.” And yet, psychological theory and research suggest that consciousness and unconsciousness are more complicated than falling off a ladder. That is, consciousness is more than just being “on” or “off.” For instance, Sigmund Freud (1856 – 1939)—a psychological theorist—understood that even while we are awake, many things lay outside the realm of our conscious awareness (like being in the car and forgetting the rest of the world can see into your windows). In response to this notion, Freud introduced the concept of the “subconscious” (Freud, 2001) and proposed that some of our memories and even our basic motivations are not always accessible to our conscious minds. Upon reflection, it is easy to see how slippery a topic consciousness is. For example, are people conscious when they are daydreaming? What about when they are drunk? In this module, we will describe several levels of consciousness and then discuss altered states of consciousness such as hypnosis and sleep. Levels of Awareness In 1957, a marketing researcher inserted the words “Eat Popcorn” onto one frame of a film being shown all across the United States. And although that frame was only projected onto the movie screen for 1/24th of a second—a speed too fast to be perceived by conscious awareness—the researcher reported an increase in popcorn sales by nearly 60%. 
Almost immediately, all forms of “subliminal messaging” were regulated in the US and banned in countries such as Australia and the United Kingdom. Even though it was later shown that the researcher had made up the data (he hadn’t even inserted the words into the film), this fear about influences on our subconscious persists. At its heart, this issue pits various levels of awareness against one another. On the one hand, we have the “low awareness” of subtle, even subliminal influences. On the other hand, there is you—the conscious, thinking, feeling you, which includes all that you are currently aware of, even reading this sentence. However, when we consider these different levels of awareness separately, we can better understand how they operate.

Low Awareness

You are constantly receiving and evaluating sensory information. Although each moment has too many sights, smells, and sounds for them all to be consciously considered, our brains are nonetheless processing all that information. For example, have you ever been at a party, overwhelmed by all the people and conversation, when out of nowhere you hear your name called? Even though you have no idea what else the person is saying, you are somehow conscious of your name (for more on this, “the cocktail party effect,” see Noba’s Module on Attention). So, even though you may not be aware of various stimuli in your environment, your brain is paying closer attention than you think. Similar to a reflex (like jumping when startled), some cues, or significant sensory information, will automatically elicit a response from us even though we never consciously perceive them. For example, Öhman and Soares (1994) measured subtle variations in the sweating of participants with a fear of snakes. The researchers flashed pictures of different objects (e.g., mushrooms, flowers, and most importantly, snakes) on a screen in front of them, but did so at speeds that left the participant clueless as to what he or she had actually seen. However, when snake pictures were flashed, these participants started sweating more (i.e., a sign of fear), even though they had no idea what they’d just viewed! Although our brains perceive some stimuli without our conscious awareness, do they really affect our subsequent thoughts and behaviors? In a landmark study, Bargh, Chen, and Burrows (1996) had participants solve a word search puzzle where the answers pertained to words about the elderly (e.g., “old,” “grandma”) or something random (e.g., “notebook,” “tomato”). Afterward, the researchers secretly measured how fast the participants walked down the hallway exiting the experiment. And although none of the participants were aware of a theme to the answers, those who had solved a puzzle with elderly words (vs. those with other types of words) walked more slowly down the hallway! This effect, called priming (i.e., readily “activating” certain concepts and associations from one’s memory), has been found in a number of other studies. For example, priming people by having them drink from a warm glass (vs. a cold one) resulted in behaving more “warmly” toward others (Williams & Bargh, 2008). Although all of these influences occur beneath one’s conscious awareness, they still have a significant effect on one’s subsequent thoughts and behaviors. In the last two decades, researchers have made advances in studying aspects of psychology that exist beyond conscious awareness.
As you can understand, it is difficult to use self-reports and surveys to ask people about motives or beliefs that they, themselves, might not even be aware of! One way of side-stepping this difficulty can be found in the Implicit Association Test, or IAT (Greenwald, McGhee & Schwartz, 1998). This research method uses computers to assess people’s reaction times to various stimuli and is a very difficult test to fake because it records automatic reactions that occur in milliseconds. For instance, to shed light on deeply held biases, the IAT might present photographs of Caucasian faces and Asian faces while asking research participants to click buttons indicating either “good” or “bad” as quickly as possible. Even if the participant clicks “good” for every face shown, the IAT can still pick up tiny delays in responding. Delays are associated with more mental effort needed to process information. When information is processed quickly—as in the example of white faces being judged as “good”—it can be contrasted with slower processing—as in the example of Asian faces being judged as “good”—and the difference in processing speed is reflective of bias (a simplified sketch of this reaction-time logic appears at the end of this passage). In this regard, the IAT has been used for investigating stereotypes (Nosek, Banaji & Greenwald, 2002) as well as self-esteem (Greenwald & Farnham, 2000). This method can help uncover non-conscious biases as well as those that we are motivated to suppress.

High Awareness

Just because we may be influenced by these “invisible” factors, it doesn’t mean we are helplessly controlled by them. The other side of the awareness continuum is known as “high awareness.” This includes effortful attention and careful decision making. For example, when you listen to a funny story on a date, or consider which class schedule would be preferable, or complete a complex math problem, you are engaging a state of consciousness that allows you to be highly aware of and focused on particular details in your environment. Mindfulness is a state of higher consciousness that includes an awareness of the thoughts passing through one’s head. For example, have you ever snapped at someone in frustration, only to take a moment and reflect on why you responded so aggressively? This more effortful consideration of your thoughts could be described as an expansion of your conscious awareness as you take the time to consider the possible influences on your thoughts. Research has shown that when you engage in this more deliberate consideration, you are less persuaded by irrelevant yet biasing influences, like the presence of a celebrity in an advertisement (Petty & Cacioppo, 1986). Higher awareness is also associated with recognizing when you’re using a stereotype, rather than fairly evaluating another person (Gilbert & Hixon, 1991). Humans alternate between low and high thinking states. That is, we shift between focused attention and a less attentive default state, and we have neural networks for both (Raichle, 2015). Interestingly, the less we’re paying attention, the more likely we are to be influenced by non-conscious stimuli (Chaiken, 1980). Although these subtle influences may affect us, we can use our higher conscious awareness to protect against external influences. In what’s known as the Flexible Correction Model (Wegener & Petty, 1997), people who are aware that their thoughts or behavior are being influenced by an undue outside source can correct their attitude against the bias.
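Here is a minimal sketch of that reaction-time comparison. It is not the IAT's actual scoring procedure (the real test uses a more elaborate algorithm and many more trials); the reaction times are invented, and the sketch only illustrates the idea that consistently slower responses in one pairing condition are read as extra processing effort.

```python
# Hypothetical reaction times in milliseconds -- illustration only,
# not real IAT data and not the official IAT scoring algorithm.
pairing_a_rts = [612, 587, 634, 598, 620]   # responses under one category/evaluation pairing
pairing_b_rts = [701, 688, 725, 710, 695]   # responses under the reversed pairing

mean_a = sum(pairing_a_rts) / len(pairing_a_rts)
mean_b = sum(pairing_b_rts) / len(pairing_b_rts)

# A reliably larger mean reaction time in one pairing suggests that pairing
# required more mental effort, which is interpreted as an implicit association.
difference_ms = mean_b - mean_a
print(round(mean_a), round(mean_b), round(difference_ms))  # 610 704 94
```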
Returning to the Flexible Correction Model: you might be aware, for example, that you are influenced by mention of specific political parties. If you are motivated to consider a government policy, you can take your own biases into account to attempt to consider the policy in a fair way (on its own merits rather than being attached to a certain party). To help make the relationship between lower and higher consciousness clearer, imagine that consciousness is like a journey down a river. In low awareness, you simply float on a small rubber raft and let the currents push you. It's not very difficult to just drift along, but you also don't have total control. Higher states of consciousness are more like traveling in a canoe. In this scenario, you have a paddle and can steer, but it requires more effort. This analogy applies to many states of consciousness, but not all. What about other states, such as sleeping, daydreaming, or hypnosis? How are these related to our conscious awareness?

Other States of Consciousness

Hypnosis

If you’ve ever watched a stage hypnotist perform, it may paint a misleading portrait of this state of consciousness. The hypnotized people on stage, for example, appear to be in a state similar to sleep. However, as the hypnotist continues with the show, you would recognize some profound differences between sleep and hypnosis. Namely, when you’re asleep, hearing the word “strawberry” doesn’t make you flap your arms like a chicken. In stage performances, the hypnotized participants appear to be highly suggestible, to the point that they are seemingly under the hypnotist’s control. Such performances are entertaining but have a way of sensationalizing the true nature of hypnotic states. Hypnosis is an actual, documented phenomenon—one that has been studied and debated for over 200 years (Pekala et al., 2010). Franz Mesmer (1734 – 1815) is often credited as among the first people to “discover” hypnosis, which he used to treat members of elite society who were experiencing psychological distress. It is from Mesmer’s name that we get the English word “mesmerize,” meaning “to entrance or transfix a person’s attention.” Mesmer attributed the effect of hypnosis to “animal magnetism,” a supposed universal force (similar to gravity) that operates through all human bodies. Even at the time, such an account of hypnosis was not scientifically supported, and Mesmer himself was frequently the center of controversy. Over the years, researchers have proposed that hypnosis is a mental state characterized by reduced peripheral awareness and increased focus on a singular stimulus, which results in an enhanced susceptibility to suggestion (Kihlstrom, 2003). For example, the hypnotist will usually induce hypnosis by getting the person to pay attention only to the hypnotist’s voice. As the individual focuses more and more on that, s/he begins to forget the context of the setting and responds to the hypnotist’s suggestions as if they were his or her own. Some people are naturally more suggestible, and therefore more “hypnotizable,” than are others, and this is especially true for those who score high in empathy (Wickramasekera II & Szlyk, 2003). One common “trick” of stage hypnotists is to discard volunteers who are less suggestible than others. Dissociation is the separation of one’s awareness from everything besides what one is centrally focused on. For example, if you’ve ever been daydreaming in class, you were likely so caught up in the fantasy that you didn’t hear a word the teacher said.
During hypnosis, this dissociation becomes even more extreme. That is, a person concentrates so much on the words of the hypnotist that s/he loses perspective of the rest of the world around them. As a consequence of dissociation, a person is less effortful, and less self-conscious in consideration of his or her own thoughts and behaviors. Similar to low awareness states, where one often acts on the first thought that comes to mind, so, too, in hypnosis does the individual simply follow the first thought that comes to mind, i.e., the hypnotist’s suggestion. Still, just because one is more susceptible to suggestion under hypnosis, it doesn’t mean s/he will do anything that’s ordered. To be hypnotized, you must first want to be hypnotized (i.e., you can’t be hypnotized against your will; Lynn & Kirsh, 2006), and once you are hypnotized, you won’t do anything you wouldn’t also do while in a more natural state of consciousness (Lynn, Rhue, & Weekes, 1990). Today, hypnotherapy is still used in a variety of formats, and it has evolved from Mesmer’s early tinkering with the concept. Modern hypnotherapy often uses a combination of relaxation, suggestion, motivation and expectancies to create a desired mental or behavioral state. Although there is mixed evidence on whether hypnotherapy can help with addiction reduction (e.g., quitting smoking; Abbot et al., 1998) there is some evidence that it can be successful in treating sufferers of acute and chronic pain (Ewin, 1978; Syrjala et al., 1992). For example, one study examined the treatment of burn patients with either hypnotherapy, pseudo-hypnosis (i.e., a placebo condition), or no treatment at all. Afterward, even though people in the placebo condition experienced a 16% decrease in pain, those in the actual hypnosis condition experienced a reduction of nearly 50% (Patterson et al., 1996). Thus, even though hypnosis may be sensationalized for television and movies, its ability to disassociate a person from their environment (or their pain) in conjunction with increased suggestibility to a clinician’s recommendations (e.g., “you will feel less anxiety about your chronic pain”) is a documented practice with actual medical benefits. Now, similar to hypnotic states, trance states also involve a dissociation of the self; however, people in a trance state are said to have less voluntary control over their behaviors and actions. Trance states often occur in religious ceremonies, where the person believes he or she is “possessed” by an otherworldly being or force. While in trance, people report anecdotal accounts of a “higher consciousness” or communion with a greater power. However, the body of research investigating this phenomenon tends to reject the claim that these experiences constitute an “altered state of consciousness.” Most researchers today describe both hypnosis and trance states as “subjective” alterations of consciousness, not an actually distinct or evolved form (Kirsch & Lynn, 1995). Just like you feel different when you’re in a state of deep relaxation, so, too, are hypnotic and trance states simply shifts from the standard conscious experience. Researchers contend that even though both hypnotic and trance states appear and feel wildly different than the normal human experience, they can be explained by standard socio-cognitive factors like imagination, expectation, and the interpretation of the situation. 
Sleep

You may have experienced the sensation—as you are falling asleep—of falling and then found yourself physically jerking forward and grabbing out as if you were really falling. Sleep is a unique state of consciousness; it lacks full awareness, but the brain is still active. People generally follow a “biological clock” that impacts when they naturally become drowsy, when they fall asleep, and the time they naturally awaken. The hormone melatonin increases at night and is associated with becoming sleepy. Your natural daily rhythm, or circadian rhythm, can be influenced by the amount of daylight to which you are exposed as well as your work and activity schedule. Changing your location, such as flying from Canada to England, can disrupt your natural sleep rhythms, and we call this jet lag. You can overcome jet lag by synchronizing yourself to the local schedule by exposing yourself to daylight and forcing yourself to stay awake even though you are naturally sleepy.

Interestingly, sleep itself is more than shutting off for the night (or for a nap). Instead of turning off like a light with a flick of a switch, your shift in consciousness is reflected in your brain’s electrical activity. While you are awake and alert, your brain activity is marked by beta waves. Beta waves are characterized by being high in frequency but low in intensity. In addition, they are the most inconsistent brain wave, and this reflects the wide variation in sensory input that a person processes during the day. As you begin to relax, these change to alpha waves. These waves reflect brain activity that is less frequent, more consistent, and more intense. As you slip into actual sleep, you transition through many stages. Scholars differ on how they characterize sleep stages, with some experts arguing that there are four distinct stages (Manoach et al., 2010) while others recognize five (Šušmáková & Krakovská, 2008), but they all distinguish between those that include rapid eye movement (REM) and those that are non-rapid eye movement (NREM). In addition, each stage is typically characterized by its own unique pattern of brain activity:
• Stage 1 (called NREM 1, or N1) is the "falling asleep" stage and is marked by theta waves.
• Stage 2 (called NREM 2, or N2) is considered a light sleep. Here, there are occasional “sleep spindles,” or very high intensity brain waves. These are thought to be associated with the processing of memories. NREM 2 makes up about 55% of all sleep.
• Stage 3 (called NREM 3, or N3) makes up between 20% and 25% of all sleep and is marked by greater muscle relaxation and the appearance of delta waves.
• Finally, REM sleep is marked by rapid eye movement (REM). Interestingly, this stage—in terms of brain activity—is similar to wakefulness. That is, the brain waves occur less intensely than in other stages of sleep. REM sleep accounts for about 20% of all sleep and is associated with dreaming. (A quick back-of-the-envelope calculation using these percentages appears at the end of this passage.)

Dreams are, arguably, the most interesting aspect of sleep. Throughout history dreams have been given special importance because of their unique, almost mystical nature. They have been thought to be predictions of the future, hints of hidden aspects of the self, important lessons about how to live life, or opportunities to engage in impossible deeds like flying. There are several competing theories of why humans dream. One is that it is our nonconscious attempt to make sense of our daily experiences and learning. Another, popularized by Freud, is that dreams represent taboo or troublesome wishes or desires.
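Using the approximate stage percentages listed above, a quick sketch can translate them into hours for a single night. The eight-hour night and the exact split (treating N1 as the remainder and using the lower bound for N3) are illustrative assumptions, not figures given in the module.

```python
# Back-of-the-envelope illustration using the module's approximate percentages.
# The eight-hour night and the exact split are assumptions made for this example.
total_sleep_hours = 8.0

stage_proportions = {
    "N1 (falling asleep)": 0.05,  # assumed remainder
    "N2 (light sleep)":    0.55,  # "about 55% of all sleep"
    "N3 (deep sleep)":     0.20,  # "between 20% and 25%" -- lower bound used here
    "REM":                 0.20,  # "about 20% of all sleep"
}

for stage, proportion in stage_proportions.items():
    print(f"{stage}: about {total_sleep_hours * proportion:.1f} hours")
```

On these assumptions, light sleep dominates the night (about 4.4 hours), while deep sleep and REM each account for roughly an hour and a half.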
Returning to dreams: regardless of the specific reason we dream, we know a few facts about them: all humans dream, we dream at every stage of sleep, but dreams during REM sleep are especially vivid. One under-explored area of dream research is the possible social functions of dreams: we often share our dreams with others and use them for entertainment value. Sleep serves many functions, one of which is to give us a period of mental and physical restoration. Children generally need more sleep than adults since they are developing. It is so vital, in fact, that a lack of sleep is associated with a wide range of problems. People who do not receive adequate sleep are more irritable, have slower reaction time, have more difficulty sustaining attention, and make poorer decisions. Interestingly, this is an issue relevant to the lives of college students. In one highly cited study, researchers found that 1 in 5 students took more than 30 minutes to fall asleep at night, 1 in 10 occasionally took sleep medications, and more than half reported being “mostly tired” in the mornings (Buboltz et al., 2001).

Psychoactive Drugs

On April 16, 1943, Albert Hofmann—a Swiss chemist working in a pharmaceutical company—accidentally ingested a newly synthesized drug. The drug—lysergic acid diethylamide (LSD)—turned out to be a powerful hallucinogen. Hofmann went home and later reported the effects of the drug, describing them as seeing the world through a “warped mirror” and experiencing visions of “extraordinary shapes with intense, kaleidoscopic play of colors.” Hofmann had discovered what members of many traditional cultures around the world already knew: there are substances that, when ingested, can have a powerful effect on perception and on consciousness. Drugs operate on human physiology in a variety of ways, and researchers and medical doctors tend to classify drugs according to their effects. Here we will briefly cover three categories of drugs: hallucinogens, depressants, and stimulants.

Hallucinogens

It is possible that hallucinogens are the substances that have, historically, been the most widely used. Traditional societies have used plant-based hallucinogens such as peyote, ebene, and psilocybin mushrooms in a wide range of religious ceremonies. Hallucinogens are substances that alter a person’s perceptions, often by creating visions or hallucinations that are not real. There is a wide range of hallucinogens, and many are used as recreational substances in industrialized societies. Common examples include marijuana, LSD, and MDMA (also known as “ecstasy”). Marijuana is the dried flowers of the hemp plant and is often smoked to produce euphoria. The active ingredient in marijuana is called THC and can produce distortions in the perception of time, can create a sense of rambling, unrelated thoughts, and is sometimes associated with increased hunger or excessive laughter. The use and possession of marijuana is illegal in most places, but this appears to be a trend that is changing. Uruguay, Bangladesh, and several of the United States have recently legalized marijuana. This may be due, in part, to changing public attitudes or to the fact that marijuana is increasingly used for medical purposes such as the management of nausea or treating glaucoma.

Depressants

Depressants are substances that, as their name suggests, slow down the body’s physiology and mental processes. Alcohol is the most widely used depressant.
Alcohol’s effects include the reduction of inhibition, meaning that intoxicated people are more likely to act in ways they would otherwise be reluctant to. Alcohol’s psychological effects are the result of it increasing the activity of the neurotransmitter GABA. There are also physical effects, such as loss of balance and coordination, and these stem from the way that alcohol interferes with the coordination of the visual and motor systems of the brain. Despite the fact that alcohol is so widely accepted in many cultures, it is also associated with a variety of dangers. First, alcohol is toxic, meaning that it acts like a poison: it is possible to drink more alcohol than the body can effectively remove from the bloodstream. When a person’s blood alcohol content (BAC) reaches .3 to .4%, there is a serious risk of death. Second, the lack of judgment and physical control associated with alcohol is linked to more risk-taking or dangerous behavior, such as drunk driving. Finally, alcohol is addictive, and heavy drinkers often experience significant interference with their work and their close relationships.

Other common depressants include opiates (also called “narcotics”), which are substances derived from the opium poppy. Opiates stimulate endorphin production in the brain, and because of this they are often used as painkillers by medical professionals. Unfortunately, because opiates such as OxyContin so reliably produce euphoria, they are increasingly used, illegally, as recreational substances. Opiates are highly addictive.

Stimulants

Stimulants are substances that “speed up” the body’s physiological and mental processes. Two commonly used stimulants are caffeine, the drug found in coffee and tea, and nicotine, the active drug in cigarettes and other tobacco products. These substances are both legal and relatively inexpensive, leading to their widespread use. Many people are attracted to stimulants because they feel more alert when under the influence of these drugs. As with any drug, there are health risks associated with consumption. For example, excessive consumption of these types of stimulants can result in anxiety, headaches, and insomnia. Similarly, smoking cigarettes, the most common means of ingesting nicotine, is associated with higher risks of cancer. For instance, among heavy smokers, 90% of lung cancer is directly attributable to smoking (Stewart & Kleihues, 2003). Other stimulants, such as cocaine and methamphetamine (also known as “crystal meth” or “ice”), are illegal substances that are nonetheless commonly used. These substances act by blocking the “re-uptake” of dopamine in the brain. This means that the brain does not naturally clear out the dopamine, so it builds up in the synapse, creating euphoria and alertness. As the effects wear off, strong cravings for more of the drug follow. Because of this, these powerful stimulants are highly addictive.

Conclusion

When you think about your daily life, it is easy to be lulled into the belief that there is one “setting” for your conscious thought. That is, you likely believe that you hold the same opinions, values, and memories across the day and throughout the week. But “you” are like a dimmer switch on a light that can be turned from full darkness increasingly up to full brightness. This switch is consciousness. At your brightest setting you are fully alert and aware; at dimmer settings you are daydreaming; and sleep or being knocked unconscious represents dimmer settings still.
The degree to which you are in a high, medium, or low state of conscious awareness affects how susceptible you are to persuasion, how clear your judgment is, and how much detail you can recall. Understanding levels of awareness, then, is at the heart of understanding how we learn, decide, remember, and carry out many other vital psychological processes.

Outside Resources

App: Visual illusions for the iPad. http://www.exploratorium.edu/explore...olor-uncovered
Book: A wonderful book about how little we know about ourselves: Wilson, T. D. (2004). Strangers to ourselves. Cambridge, MA: Harvard University Press. http://www.hup.harvard.edu/catalog.p...=9780674013827
Book: Another wonderful book about free will (or its absence?): Wegner, D. M. (2002). The illusion of conscious will. Cambridge, MA: MIT Press. https://mitpress.mit.edu/books/illus...conscious-will
Information on alcoholism, alcohol abuse, and treatment: http://www.niaaa.nih.gov/alcohol-hea...port-treatment
The American Psychological Association has information on getting a good night’s sleep as well as on sleep disorders: http://www.apa.org/helpcenter/sleep-disorders.aspx
The LSD simulator: This simulator uses optical illusions to simulate the hallucinogenic experience of LSD. Simply follow the instructions in this two-minute video. After looking away, you may see the world around you in a warped or pulsating way similar to the effects of LSD. The effect is temporary and will disappear in about a minute.
The National Sleep Foundation is a non-profit with videos on insomnia, sleep training in children, and other topics: https://sleepfoundation.org/video-library
Video: An artist who periodically took LSD and drew self-portraits: http://www.openculture.com/2013/10/a...xperiment.html
Video: An interesting video on attention: http://www.dansimons.com/videos.html
Video: Clip on out-of-body experiences induced using virtual reality.
Video: Clip on the rubber hand illusion, from the BBC science series "Horizon."
Video: Clip showing a patient with blindsight, from the documentary "Phantoms in the Brain."
Video: Demonstration of motion-induced blindness - Look steadily at the blue moving pattern. One or more of the yellow spots may disappear:
Video: Howie Mandel from America's Got Talent being hypnotized into shaking hands with people:
Video: Imaging the Brain, Reading the Mind - A talk by Marsel Mesulam. http://video.at.northwestern.edu/lores/SO_marsel.m4v
Video: Lucas Handwerker – a stage hypnotist discusses the therapeutic aspects of hypnosis:
Video: TED Talk - Simon Lewis: Don't take consciousness for granted http://www.ted.com/talks/simon_lewis...r_granted.html
Video: TED Talk on Dream Research:
Video: The mind-body problem - An interview with Ned Block:
Want a quick demonstration of how powerful priming effects can be? Check out:
Web: A good overview of priming: en.Wikipedia.org/wiki/Priming_(psychology)
Web: Definitions of Consciousness: http://www.consciousentities.com/definitions.htm
Web: Learn more about motion-induced blindness on Michael Bach's website: http://www.michaelbach.de/ot/mot-mib/index.html

Discussion Questions

1. If someone were in a coma after an accident, and you wanted to better understand how “conscious” or aware s/he was, how might you go about it?
2. What are some of the factors in daily life that interfere with people’s ability to get adequate sleep? What interferes with your sleep?
3. How frequently do you remember your dreams? Do you have recurring images or themes in your dreams? Why do you think that is?
4. Consider times when you fantasize or let your mind wander. Describe these times: are you more likely to be alone or with others? Are there certain activities you engage in that seem particularly prone to daydreaming?
5. A number of traditional societies use consciousness-altering substances in ceremonies. Why do you think they do this?
6. Do you think attitudes toward drug use are changing over time? If so, how? Why do you think these changes occur?
7. Students in high school and college are increasingly using stimulants such as Adderall as study aids and “performance enhancers.” What is your opinion of this trend?

Vocabulary

Blood Alcohol Content (BAC): a measure of the percentage of alcohol found in a person’s blood. This measure is typically the standard used to determine the extent to which a person is intoxicated, as in the case of being too impaired to drive a vehicle.
Circadian Rhythm: the physiological sleep-wake cycle. It is influenced by exposure to sunlight as well as daily schedule and activity. Biologically, it includes changes in body temperature, blood pressure, and blood sugar.
Consciousness: the awareness or deliberate perception of a stimulus
Cues: a stimulus that has a particular significance to the perceiver (e.g., a sight or a sound that has special relevance to the person who saw or heard it)
Depressants: a class of drugs that slow down the body’s physiological and mental processes.
Dissociation: the heightened focus on one stimulus or thought such that many other things around you are ignored; a disconnect between one’s awareness of their environment and the one object the person is focusing on
Euphoria: an intense feeling of pleasure, excitement, or happiness.
Flexible Correction Model: the ability for people to correct or change their beliefs and evaluations if they believe these judgments have been biased (e.g., if someone realizes they only thought their day was great because it was sunny, they may revise their evaluation of the day to account for this “biasing” influence of the weather)
Hallucinogens: substances that, when ingested, alter a person’s perceptions, often by creating hallucinations that are not real or distorting their perceptions of time.
Hypnosis: the state of consciousness whereby a person is highly responsive to the suggestions of another; this state usually involves a dissociation with one’s environment and an intense focus on a single stimulus, which is usually accompanied by a sense of relaxation
Hypnotherapy: the use of hypnotic techniques such as relaxation and suggestion to help engineer desirable change such as lower pain or quitting smoking.
Implicit Associations Test (IAT): a computer reaction time test that measures a person’s automatic associations with concepts. For instance, the IAT could be used to measure how quickly a person makes positive or negative evaluations of members of various ethnic groups.
Jet Lag: the state of being fatigued and/or having difficulty adjusting to a new time zone after traveling a long distance (across multiple time zones).
Melatonin: a hormone associated with increased drowsiness and sleep.
Mindfulness: a state of heightened focus on the thoughts passing through one’s head, as well as a more controlled evaluation of those thoughts (e.g., do you reject or support the thoughts you’re having?)
Priming: the activation of certain thoughts or feelings that make them easier to think of and act upon
Stimulants: a class of drugs that speed up the body’s physiological and mental processes.
Trance States: a state of consciousness characterized by the experience of “out-of-body possession,” or an acute dissociation between one’s self and the current, physical environment surrounding them.
textbooks/socialsci/Psychology/Introductory_Psychology/Map%3A_Discover_Psychology_-_A_Brief_Introductory_Text_(Noba)/06%3A_CONSCIOUSNESS/6.01%3A_States_of_Consciousness.txt
• 7.1: Conditioning and Learning Basic principles of learning are always operating and always influencing human behavior. This module discusses the two most fundamental forms of learning -- classical (Pavlovian) and instrumental (operant) conditioning. This module describes some of the most important things you need to know about classical and instrumental conditioning, and it illustrates some of the many ways they help us understand normal and disordered behavior in humans. 07: LEARNING By Mark E. Bouton University of Vermont Basic principles of learning are always operating and always influencing human behavior. This module discusses the two most fundamental forms of learning -- classical (Pavlovian) and instrumental (operant) conditioning. Through them, we respectively learn to associate 1) stimuli in the environment, or 2) our own behaviors, with significant events, such as rewards and punishments. The two types of learning have been intensively studied because they have powerful effects on behavior, and because they provide methods that allow scientists to analyze learning processes rigorously. This module describes some of the most important things you need to know about classical and instrumental conditioning, and it illustrates some of the many ways they help us understand normal and disordered behavior in humans. The module concludes by introducing the concept of observational learning, which is a form of learning that is largely distinct from classical and operant conditioning. learning objectives • Distinguish between classical (Pavlovian) conditioning and instrumental (operant) conditioning. • Understand some important facts about each that tell us how they work. • Understand how they work separately and together to influence human behavior in the world outside the laboratory. • Students will be able to list the four aspects of observational learning according to Social Learning Theory. Two Types of Conditioning Although Ivan Pavlov won a Nobel Prize for studying digestion, he is much more famous for something else: working with a dog, a bell, and a bowl of saliva. Many people are familiar with the classic study of “Pavlov’s dog,” but rarely do they understand the significance of its discovery. In fact, Pavlov’s work helps explain why some people get anxious just looking at a crowded bus, why the sound of a morning alarm is so hated, and even why we swear off certain foods we’ve only tried once. Classical (or Pavlovian) conditioning is one of the fundamental ways we learn about the world around us. But it is far more than just a theory of learning; it is also arguably a theory of identity. For, once you understand classical conditioning, you’ll recognize that your favorite music, clothes, even political candidate, might all be a result of the same process that makes a dog drool at the sound of a bell. Around the turn of the 20th century, scientists who were interested in understanding the behavior of animals and humans began to appreciate the importance of two very basic forms of learning. One, which was first studied by the Russian physiologist Ivan Pavlov, is known as classical, or Pavlovian conditioning. In his famous experiment, Pavlov rang a bell and then gave a dog some food. After repeating this pairing multiple times, the dog eventually treated the bell as a signal for food, and began salivating in anticipation of the treat.
This kind of result has been reproduced in the lab using a wide range of signals (e.g., tones, light, tastes, settings) paired with many different events besides food (e.g., drugs, shocks, illness; see below). We now believe that this same learning process is engaged, for example, when humans associate a drug they’ve taken with the environment in which they’ve taken it; when they associate a stimulus (e.g., a symbol for vacation, like a big beach towel) with an emotional event (like a burst of happiness); and when they associate the flavor of a food with getting food poisoning. Although classical conditioning may seem “old” or “too simple” a theory, it is still widely studied today for at least two reasons: First, it is a straightforward test of associative learning that can be used to study other, more complex behaviors. Second, because classical conditioning is always occurring in our lives, its effects on behavior have important implications for understanding normal and disordered behavior in humans. In a general way, classical conditioning occurs whenever neutral stimuli are associated with psychologically significant events. With food poisoning, for example, although having fish for dinner may not normally be something to be concerned about (i.e., a “neutral stimulus”), if it causes you to get sick, you will now likely associate that neutral stimulus (the fish) with the psychologically significant event of getting sick. These paired events are often described using terms that can be applied to any situation. The dog food in Pavlov’s experiment is called the unconditioned stimulus (US) because it elicits an unconditioned response (UR). That is, without any kind of “training” or “teaching,” the stimulus produces a natural or instinctual reaction. In Pavlov’s case, the food (US) automatically makes the dog drool (UR). Other examples of unconditioned stimuli include loud noises (US) that startle us (UR), or a hot shower (US) that produces pleasure (UR). On the other hand, a conditioned stimulus produces a conditioned response. A conditioned stimulus (CS) is a signal that has no importance to the organism until it is paired with something that does have importance. For example, in Pavlov’s experiment, the bell is the conditioned stimulus. Before the dog has learned to associate the bell (CS) with the presence of food (US), hearing the bell means nothing to the dog. However, after multiple pairings of the bell with the presentation of food, the dog starts to drool at the sound of the bell. This drooling in response to the bell is the conditioned response (CR). Although it can be confusing, the conditioned response is almost always the same as the unconditioned response. However, it is called the conditioned response because it is conditional on (or depends on) being paired with the conditioned stimulus (e.g., the bell). To help make this clearer, consider becoming really hungry when you see the logo for a fast food restaurant. There’s a good chance you’ll start salivating. Although it is the actual eating of the food (US) that normally produces the salivation (UR), simply seeing the restaurant’s logo (CS) can trigger the same reaction (CR). Another example you are probably very familiar with involves your alarm clock. If you’re like most people, waking up early usually makes you unhappy. In this case, waking up early (US) produces a natural sensation of grumpiness (UR). Rather than waking up early on your own, though, you likely have an alarm clock that plays a tone to wake you.
Before setting your alarm to that particular tone, let’s imagine you had neutral feelings about it (i.e., the tone had no prior meaning for you). However, now that you use it to wake up every morning, you psychologically “pair” that tone (CS) with your feelings of grumpiness in the morning (UR). After enough pairings, this tone (CS) will automatically produce your natural response of grumpiness (CR). Thus, this linkage between the unconditioned stimulus (US; waking up early) and the conditioned stimulus (CS; the tone) is so strong that the unconditioned response (UR; being grumpy) will become a conditioned response (CR; e.g., hearing the tone at any point in the day—whether waking up or walking down the street—will make you grumpy). Modern studies of classical conditioning use a very wide range of CSs and USs and measure a wide range of conditioned responses. Although classical conditioning is a powerful explanation for how we learn many different things, there is a second form of conditioning that also helps explain how we learn. First studied by Edward Thorndike, and later extended by B. F. Skinner, this second type of conditioning is known as instrumental or operant conditioning. Operant conditioning occurs when a behavior (as opposed to a stimulus) is associated with the occurrence of a significant event. In the best-known example, a rat in a laboratory learns to press a lever in a cage (called a “Skinner box”) to receive food. Because the rat has no “natural” association between pressing a lever and getting food, the rat has to learn this connection. At first, the rat may simply explore its cage, climbing on top of things, burrowing under things, in search of food. Eventually, while poking around its cage, the rat accidentally presses the lever, and a food pellet drops in. This voluntary behavior is called an operant behavior, because it “operates” on the environment (i.e., it is an action that the animal itself makes). Now, once the rat recognizes that it receives a piece of food every time it presses the lever, the behavior of lever-pressing becomes reinforced. That is, the food pellets serve as reinforcers because they strengthen the rat’s desire to engage with the environment in this particular manner. In a parallel example, imagine that you’re playing a street-racing video game. As you drive through one city course multiple times, you try a number of different streets to get to the finish line. On one of these trials, you discover a shortcut that dramatically improves your overall time. You have learned this new path through operant conditioning. That is, by engaging with your environment (operant responses), you performed a sequence of behaviors that was positively reinforced (i.e., you found the shortest distance to the finish line). And now that you’ve learned how to drive this course, you will perform that same sequence of driving behaviors (just as the rat presses on the lever) to receive your reward of a faster finish. Operant conditioning research studies how the effects of a behavior influence the probability that it will occur again. For example, the effects of the rat’s lever-pressing behavior (i.e., receiving a food pellet) influence the probability that it will keep pressing the lever. For, according to Thorndike’s law of effect, when a behavior has a positive (satisfying) effect or consequence, it is likely to be repeated in the future. However, when a behavior has a negative (painful/annoying) consequence, it is less likely to be repeated in the future.
Effects that increase behaviors are referred to as reinforcers, and effects that decrease them are referred to as punishers. An everyday example that helps to illustrate operant conditioning is striving for a good grade in class—which could be considered a reward for students (i.e., it produces a positive emotional response). In order to get that reward (similar to the rat learning to press the lever), the student needs to modify his/her behavior. For example, the student may learn that speaking up in class gets him/her participation points (a reinforcer), so the student speaks up repeatedly. However, the student also learns that s/he shouldn’t speak up about just anything; talking about topics unrelated to school actually costs points. Therefore, through the student’s freely chosen behaviors, s/he learns which behaviors are reinforced and which are punished. An important distinction of operant conditioning is that it provides a method for studying how consequences influence “voluntary” behavior. The rat’s decision to press the lever is voluntary, in the sense that the rat is free to make and repeat that response whenever it wants. Classical conditioning, on the other hand, is just the opposite—depending instead on “involuntary” behavior (e.g., the dog doesn’t choose to drool; it just does). So, whereas the rat must actively participate and perform some kind of behavior to attain its reward, the dog in Pavlov’s experiment is a passive participant. One of the lessons of operant conditioning research, then, is that voluntary behavior is strongly influenced by its consequences. The illustration on the left summarizes the basic elements of classical and instrumental conditioning. The two types of learning differ in many ways. However, modern thinkers often emphasize the fact that they differ—as illustrated here—in what is learned. In classical conditioning, the animal behaves as if it has learned to associate a stimulus with a significant event. In operant conditioning, the animal behaves as if it has learned to associate a behavior with a significant event. Another difference is that the response in the classical situation (e.g., salivation) is elicited by a stimulus that comes before it, whereas the response in the operant case is not elicited by any particular stimulus. Instead, operant responses are said to be emitted. The word “emitted” further conveys the idea that operant behaviors are essentially voluntary in nature. Understanding classical and operant conditioning provides psychologists with many tools for understanding learning and behavior in the world outside the lab. This is in part because the two types of learning occur continuously throughout our lives. It has been said that “much like the laws of gravity, the laws of learning are always in effect” (Spreat & Spreat, 1982). Useful Things to Know about Classical Conditioning Classical Conditioning Has Many Effects on Behavior A classical CS (e.g., the bell) does not merely elicit a simple, unitary reflex. Pavlov emphasized salivation because that was the only response he measured. But his bell almost certainly elicited a whole system of responses that functioned to get the organism ready for the upcoming US (food) (see Timberlake, 2001). For example, in addition to salivation, CSs (such as the bell) that signal that food is near also elicit the secretion of gastric acid, pancreatic enzymes, and insulin (which gets blood glucose into cells). All of these responses prepare the body for digestion. 
Additionally, the CS elicits approach behavior and a state of excitement. And presenting a CS for food can also cause animals whose stomachs are full to eat more food if it is available. In fact, food CSs are so prevalent in modern society that humans are likewise inclined to eat or feel hungry in response to cues associated with food, such as the sound of a bag of potato chips opening, the sight of a well-known logo (e.g., Coca-Cola), or the feel of the couch in front of the television. Classical conditioning is also involved in other aspects of eating. Flavors associated with certain nutrients (such as sugar or fat) can become preferred without arousing any awareness of the pairing. For example, protein is a US that your body automatically craves more of once you start to consume it (UR): since proteins are highly concentrated in meat, the flavor of meat becomes a CS (or cue that proteins are on the way), which perpetuates the cycle of craving for yet more meat (this automatic bodily reaction is now a CR). In a similar way, flavors associated with stomach pain or illness become avoided and disliked. For example, a person who gets sick after drinking too much tequila may acquire a profound dislike of the taste and odor of tequila—a phenomenon called taste aversion conditioning. The fact that flavors are often associated with so many consequences of eating is important for animals (including rats and humans) that are frequently exposed to new foods. And it is clinically relevant. For example, drugs used in chemotherapy often make cancer patients sick. As a consequence, patients often acquire aversions to foods eaten just before treatment, or even aversions to such things as the waiting room of the chemotherapy clinic itself (see Bernstein, 1991; Scalera & Bavieri, 2009). Classical conditioning occurs with a variety of significant events. If an experimenter sounds a tone just before applying a mild shock to a rat’s feet, the tone will elicit fear or anxiety after one or two pairings. Similar fear conditioning plays a role in creating many anxiety disorders in humans, such as phobias and panic disorders, where people associate cues (such as closed spaces, or a shopping mall) with panic or other emotional trauma (see Mineka & Zinbarg, 2006). Here, rather than a physical response (like drooling), the CS triggers an emotion. Another interesting effect of classical conditioning can occur when we ingest drugs. That is, when a drug is taken, it can be associated with the cues that are present at the same time (e.g., rooms, odors, drug paraphernalia). In this regard, if someone associates a particular smell with the sensation induced by the drug, whenever that person smells the same odor afterward, it may cue responses (physical and/or emotional) related to taking the drug itself. But drug cues have an even more interesting property: They elicit responses that often “compensate” for the upcoming effect of the drug (see Siegel, 1989). For example, morphine itself suppresses pain; however, if someone is used to taking morphine, a cue that signals the “drug is coming soon” can actually make the person more sensitive to pain. Because the person knows a pain suppressant will soon be administered, the body becomes more sensitive, anticipating that “the drug will soon take care of it.” Remarkably, such conditioned compensatory responses in turn decrease the impact of the drug on the body—because the body has become more sensitive to pain. This conditioned compensatory response has many implications.
For instance, a drug user will be most “tolerant” to the drug in the presence of cues that have been associated with it (because such cues elicit compensatory responses). As a result, overdose is usually not due to an increase in dosage, but to taking the drug in a new place without the familiar cues—which would have otherwise allowed the user to tolerate the drug (see Siegel, Hinson, Krank, & McCully, 1982). Conditioned compensatory responses (which include heightened pain sensitivity and decreased body temperature, among others) might also cause discomfort, thus motivating the drug user to continue usage of the drug to reduce them. This is one of several ways classical conditioning might be a factor in drug addiction and dependence. A final effect of classical cues is that they motivate ongoing operant behavior (see Balleine, 2005). For example, if a rat has learned via operant conditioning that pressing a lever will give it a drug, in the presence of cues that signal the “drug is coming soon” (like the sound of the lever squeaking), the rat will work harder to press the lever than if those cues weren’t present (i.e., there is no squeaking lever sound). Similarly, in the presence of food-associated cues (e.g., smells), a rat (or an overeater) will work harder for food. And finally, even in the presence of negative cues (like something that signals fear), a rat, a human, or any other organism will work harder to avoid those situations that might lead to trauma. Classical CSs thus have many effects that can contribute to significant behavioral phenomena. The Learning Process As mentioned earlier, classical conditioning provides a method for studying basic learning processes. Somewhat counterintuitively, though, studies show that pairing a CS and a US together is not sufficient for an association to be learned between them. Consider an effect called blocking (see Kamin, 1969). In this effect, an animal first learns to associate one CS—call it stimulus A—with a US. In the illustration above, the sound of a bell (stimulus A) is paired with the presentation of food. Once this association is learned, in a second phase, a second stimulus—stimulus B—is presented alongside stimulus A, such that the two stimuli are paired with the US together. In the illustration, a light is added and turned on at the same time the bell is rung. However, because the animal has already learned the association between stimulus A (the bell) and the food, the animal doesn’t learn an association between stimulus B (the light) and the food. That is, the conditioned response only occurs during the presentation of stimulus A, because the earlier conditioning of A “blocks” the conditioning of B when B is added to A. The reason? Stimulus A already predicts the US, so the US is not surprising when it occurs with Stimulus B. Learning depends on such a surprise, or a discrepancy between what occurs on a conditioning trial and what is already predicted by cues that are present on the trial. To learn something through classical conditioning, there must first be some prediction error, or the chance that a conditioned stimulus won’t lead to the expected outcome. With the example of the bell and the light, because the bell always leads to the reward of food, there’s no “prediction error” that the addition of the light helps to correct. However, if the researcher suddenly requires that the bell and the light both occur in order to receive the food, the bell alone will produce a prediction error that the animal has to learn. 
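If it helps to see the prediction-error idea concretely, it can be sketched in a few lines of code. The sketch below is only an illustration of the kind of learning rule formalized by Rescorla and Wagner (1972), who are cited just below: on each trial, every cue that is present gains associative strength in proportion to how surprising the outcome is. The learning rate, trial counts, and cue names are illustrative assumptions, not values from any particular experiment.

```python
# A toy simulation of prediction-error learning, in the spirit of the
# Rescorla-Wagner model cited in this module. All values are illustrative only.

def conditioning_trial(strengths, present_cues, learning_rate=0.3, us_value=1.0):
    """Update the associative strength of every cue present on one trial."""
    prediction = sum(strengths[cue] for cue in present_cues)  # what the present cues already predict
    error = us_value - prediction                             # surprise: outcome minus prediction
    for cue in present_cues:
        strengths[cue] += learning_rate * error               # each present cue absorbs part of the error

strengths = {"bell": 0.0, "light": 0.0}

# Phase 1: the bell alone is repeatedly paired with food.
for _ in range(20):
    conditioning_trial(strengths, ["bell"])

# Phase 2: the bell and the light are paired with food together.
for _ in range(20):
    conditioning_trial(strengths, ["bell", "light"])

print(strengths)  # the bell ends near 1.0; the light stays near 0.0 -- it is "blocked"
```

Running this sketch shows the bell ending up with nearly all of the associative strength while the added light gains almost none: by the time the light appears, the bell already predicts the food, so there is little prediction error left for the light to absorb.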
Blocking and other related effects indicate that the learning process tends to take in the most valid predictors of significant events and ignore the less useful ones. This is common in the real world. For example, imagine that your supermarket puts big star-shaped stickers on products that are on sale. Quickly, you learn that items with the big star-shaped stickers are cheaper. However, imagine you go into a similar supermarket that not only uses these stickers, but also uses bright orange price tags to denote a discount. Because of blocking (i.e., you already know that the star-shaped stickers indicate a discount), you don’t have to learn the color system, too. The star-shaped stickers tell you everything you need to know (i.e. there’s no prediction error for the discount), and thus the color system is irrelevant. Classical conditioning is strongest if the CS and US are intense or salient. It is also best if the CS and US are relatively new and the organism hasn’t been frequently exposed to them before. And it is especially strong if the organism’s biology has prepared it to associate a particular CS and US. For example, rats and humans are naturally inclined to associate an illness with a flavor, rather than with a light or tone. Because foods are most commonly experienced by taste, if there is a particular food that makes us ill, associating the flavor (rather than the appearance—which may be similar to other foods) with the illness will more greatly ensure we avoid that food in the future, and thus avoid getting sick. This sorting tendency, which is set up by evolution, is called preparedness. There are many factors that affect the strength of classical conditioning, and these have been the subject of much research and theory (see Rescorla & Wagner, 1972; Pearce & Bouton, 2001). Behavioral neuroscientists have also used classical conditioning to investigate many of the basic brain processes that are involved in learning (see Fanselow & Poulos, 2005; Thompson & Steinmetz, 2009). Erasing Classical Learning After conditioning, the response to the CS can be eliminated if the CS is presented repeatedly without the US. This effect is called extinction, and the response is said to become “extinguished.” For example, if Pavlov kept ringing the bell but never gave the dog any food afterward, eventually the dog’s CR (drooling) would no longer happen when it heard the CS (the bell), because the bell would no longer be a predictor of food. Extinction is important for many reasons. For one thing, it is the basis for many therapies that clinical psychologists use to eliminate maladaptive and unwanted behaviors. Take the example of a person who has a debilitating fear of spiders: one approach might include systematic exposure to spiders. Whereas, initially the person has a CR (e.g., extreme fear) every time s/he sees the CS (e.g., the spider), after repeatedly being shown pictures of spiders in neutral conditions, pretty soon the CS no longer predicts the CR (i.e., the person doesn’t have the fear reaction when seeing spiders, having learned that spiders no longer serve as a “cue” for that fear). Here, repeated exposure to spiders without an aversive consequence causes extinction. Psychologists must accept one important fact about extinction, however: it does not necessarily destroy the original learning (see Bouton, 2004). For example, imagine you strongly associate the smell of chalkboards with the agony of middle school detention. 
Now imagine that, after years of encountering chalkboards, the smell of them no longer recalls the agony of detention (an example of extinction). However, one day, after entering a new building for the first time, you suddenly catch a whiff of a chalkboard and WHAM!, the agony of detention returns. This is called spontaneous recovery: following a lapse in exposure to the CS after extinction has occurred, sometimes re-exposure to the CS (e.g., the smell of chalkboards) can evoke the CR again (e.g., the agony of detention). Another related phenomenon is the renewal effect: After extinction, if the CS is tested in a new context, such as a different room or location, the CR can also return. In the chalkboard example, the action of entering a new building—where you don’t expect to smell chalkboards—suddenly renews the sensations associated with detention. These effects have been interpreted to suggest that extinction inhibits rather than erases the learned behavior, and this inhibition is mainly expressed in the context in which it is learned (see “context” in the Key Vocabulary section below). This does not mean that extinction is a bad treatment for behavior disorders. Instead, clinicians can increase its effectiveness by using basic research on learning to help defeat these relapse effects (see Craske et al., 2008). For example, conducting extinction therapies in contexts where patients might be most vulnerable to relapsing (e.g., at work), might be a good strategy for enhancing the therapy’s success. Useful Things to Know about Instrumental Conditioning Most of the things that affect the strength of classical conditioning also affect the strength of instrumental learning—whereby we learn to associate our actions with their outcomes. As noted earlier, the “bigger” the reinforcer (or punisher), the stronger the learning. And, if an instrumental behavior is no longer reinforced, it will also be extinguished. Most of the rules of associative learning that apply to classical conditioning also apply to instrumental learning, but other facts about instrumental learning are also worth knowing. Instrumental Responses Come Under Stimulus Control As you know, the classic operant response in the laboratory is lever-pressing in rats, reinforced by food. However, things can be arranged so that lever-pressing only produces pellets when a particular stimulus is present. For example, lever-pressing can be reinforced only when a light in the Skinner box is turned on; when the light is off, no food is released from lever-pressing. The rat soon learns to discriminate between the light-on and light-off conditions, and presses the lever only in the presence of the light (responses in light-off are extinguished). In everyday life, think about waiting in the turn lane at a traffic light. Although you know that green means go, only when you have the green arrow do you turn. In this regard, the operant behavior is now said to be under stimulus control. And, as is the case with the traffic light, in the real world, stimulus control is probably the rule. The stimulus controlling the operant response is called a discriminative stimulus. It can be associated directly with the response, or the reinforcer (see below). However, it usually does not elicit the response the way a classical CS does. Instead, it is said to “set the occasion for” the operant response. For example, a canvas put in front of an artist does not elicit painting behavior or compel her to paint. It allows, or sets the occasion for, painting to occur. 
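To make the light-on/light-off contingency concrete, here is a small, purely illustrative simulation (not part of the original module). It assumes a simulated rat whose tendency to press the lever under each stimulus condition is nudged up when a press is reinforced and nudged down when it is not; the step size, trial count, and starting probabilities are arbitrary assumptions.

```python
import random

# A toy simulation of stimulus control: lever presses pay off only when the
# light is on. The learning rule and all numbers are illustrative assumptions.

press_probability = {"light_on": 0.5, "light_off": 0.5}  # the rat starts out indifferent
step = 0.05                                              # how much one outcome shifts behavior

random.seed(1)
for _ in range(2000):
    condition = random.choice(["light_on", "light_off"])
    if random.random() < press_probability[condition]:   # the rat presses the lever
        reinforced = condition == "light_on"              # food is delivered only in the light
        change = step if reinforced else -step            # reinforcement strengthens, nonreinforcement weakens
        new_p = press_probability[condition] + change
        press_probability[condition] = max(0.01, min(1.0, new_p))

print(press_probability)  # pressing becomes frequent when the light is on, rare when it is off
```

After many trials the simulated rat presses almost every time the light is on and hardly at all when it is off, which is the discrimination described above (the floor of 0.01 is simply an arbitrary choice that keeps the simulated rat occasionally sampling the lever).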
Stimulus-control techniques are widely used in the laboratory to study perception and other psychological processes in animals. For example, the rat would not be able to respond appropriately to light-on and light-off conditions if it could not see the light. Following this logic, experiments using stimulus-control methods have tested how well animals see colors, hear ultrasounds, and detect magnetic fields. That is, researchers pair these discriminative stimuli with those they know the animals already understand (such as pressing the lever). In this way, the researchers can test if the animals can learn to press the lever only when an ultrasound is played, for example. These methods can also be used to study “higher” cognitive processes. For example, pigeons can learn to peck at different buttons in a Skinner box when pictures of flowers, cars, chairs, or people are shown on a miniature TV screen (see Wasserman, 1995). Pecking button 1 (and no other) is reinforced in the presence of a flower image, button 2 in the presence of a chair image, and so on. Pigeons can learn the discrimination readily, and, under the right conditions, will even peck the correct buttons associated with pictures of new flowers, cars, chairs, and people they have never seen before. The birds have learned to categorize the sets of stimuli. Stimulus-control methods can be used to study how such categorization is learned. Operant Conditioning Involves Choice Another thing to know about operant conditioning is that the response always requires choosing one behavior over others. The student who goes to the bar on Thursday night chooses to drink instead of staying at home and studying. The rat chooses to press the lever instead of sleeping or scratching its ear in the back of the box. The alternative behaviors are each associated with their own reinforcers. And the tendency to perform a particular action depends on both the reinforcers earned for it and the reinforcers earned for its alternatives. To investigate this idea, choice has been studied in the Skinner box by making two levers available for the rat (or two buttons available for the pigeon), each of which has its own reinforcement or payoff rate. A thorough study of choice in situations like this has led to a rule called the quantitative law of effect (see Herrnstein, 1970), which can be understood without going into quantitative detail: The law acknowledges the fact that the effects of reinforcing one behavior depend crucially on how much reinforcement is earned for the behavior’s alternatives. For example, if a pigeon learns that pecking one light will reward two food pellets, whereas the other light only rewards one, the pigeon will only peck the first light. However, what happens if the first light is more strenuous to reach than the second one? Will the cost of energy outweigh the bonus of food? Or will the extra food be worth the work? In general, a given reinforcer will be less reinforcing if there are many alternative reinforcers in the environment. For this reason, alcohol, sex, or drugs may be less powerful reinforcers if the person’s environment is full of other sources of reinforcement, such as achievement at work or love from family members. Cognition in Instrumental Learning Modern research also indicates that reinforcers do more than merely strengthen or “stamp in” the behaviors they are a consequence of, as was Thorndike’s original view. 
Instead, animals learn about the specific consequences of each behavior, and will perform a behavior depending on how much they currently want—or “value”—its consequence. This idea is best illustrated by a phenomenon called the reinforcer devaluation effect (see Colwill & Rescorla, 1986). A rat is first trained to perform two instrumental actions (e.g., pressing a lever on the left, and on the right), each paired with a different reinforcer (e.g., a sweet sucrose solution, and a food pellet). At the end of this training, the rat tends to press both levers, alternating between the sucrose solution and the food pellet. In a second phase, one of the reinforcers (e.g., the sucrose) is then separately paired with illness. This conditions a taste aversion to the sucrose. In a final test, the rat is returned to the Skinner box and allowed to press either lever freely. No reinforcers are presented during this test (i.e., no sucrose or food comes from pressing the levers), so behavior during testing can only result from the rat’s memory of what it has learned earlier. Importantly here, the rat chooses not to perform the response that once produced the reinforcer that it now has an aversion to (e.g., it won’t press the sucrose lever). This means that the rat has learned and remembered the reinforcer associated with each response, and can combine that knowledge with the knowledge that the reinforcer is now “bad.” Reinforcers do not merely stamp in responses; the animal learns much more than that. The behavior is said to be “goal-directed” (see Dickinson & Balleine, 1994), because it is influenced by the current value of its associated goal (i.e., how much the rat wants/doesn’t want the reinforcer). Things can get more complicated, however, if the rat performs the instrumental actions frequently and repeatedly. That is, if the rat has spent many months learning the value of pressing each of the levers, the act of pressing them becomes automatic and routine. And here, this once goal-directed action (i.e., the rat pressing the lever for the goal of getting sucrose/food) can become a habit. Thus, if a rat spends many months performing the lever-pressing behavior (turning such behavior into a habit), even when sucrose is again paired with illness, the rat will continue to press that lever (see Holland, 2004). After all the practice, the instrumental response (pressing the lever) is no longer sensitive to reinforcer devaluation. The rat continues to respond automatically, regardless of the fact that the sucrose from this lever makes it sick. Habits are very common in human experience, and can be useful. You do not need to relearn each day how to make your coffee in the morning or how to brush your teeth. Instrumental behaviors can eventually become habitual, letting us get the job done while being free to think about other things. Putting Classical and Instrumental Conditioning Together Classical and operant conditioning are usually studied separately. But outside of the laboratory they almost always occur at the same time. For example, a person who is reinforced for drinking alcohol or eating excessively learns these behaviors in the presence of certain stimuli—a pub, a set of friends, a restaurant, or possibly the couch in front of the TV. These stimuli are also available for association with the reinforcer. In this way, classical and operant conditioning are always intertwined. The figure below summarizes this idea, and helps review what we have discussed in this module. 
Generally speaking, any reinforced or punished operant response (R) is paired with an outcome (O) in the presence of some stimulus or set of stimuli (S). The figure illustrates the types of associations that can be learned in this very general scenario. For one thing, the organism will learn to associate the response and the outcome (R – O). This is instrumental conditioning. The learning process here is probably similar to classical conditioning, with all its emphasis on surprise and prediction error. And, as we discussed while considering the reinforcer devaluation effect, once R – O is learned, the organism will be ready to perform the response if the outcome is desired or valued. The value of the reinforcer can also be influenced by other reinforcers earned for other behaviors in the situation. These factors are at the heart of instrumental learning. Second, the organism can also learn to associate the stimulus with the reinforcing outcome (S – O). This is the classical conditioning component, and as we have seen, it can have many consequences on behavior. For one thing, the stimulus will come to evoke a system of responses that help the organism prepare for the reinforcer (not shown in the figure): The drinker may undergo changes in body temperature; the eater may salivate and have an increase in insulin secretion. In addition, the stimulus will evoke approach (if the outcome is positive) or retreat (if the outcome is negative). Presenting the stimulus will also prompt the instrumental response. The third association in the diagram is the one between the stimulus and the response (S – R). As discussed earlier, after a lot of practice, the stimulus may begin to elicit the response directly. This is habit learning, whereby the response occurs relatively automatically, without much mental processing of the relation between the action and the outcome and the outcome’s current value. The final link in the figure is between the stimulus and the response-outcome association [S – (R – O)]. More than just entering into a simple association with the R or the O, the stimulus can signal that the R – O relationship is now in effect. This is what we mean when we say that the stimulus can “set the occasion” for the operant response: It sets the occasion for the response-reinforcer relationship. Through this mechanism, the painter might begin to paint when given the right tools and the opportunity enabled by the canvas. The canvas theoretically signals that the behavior of painting will now be reinforced by positive consequences. The figure provides a framework that you can use to understand almost any learned behavior you observe in yourself, your family, or your friends. If you would like to understand it more deeply, consider taking a course on learning in the future, which will give you a fuller appreciation of how classical learning, instrumental learning, habit learning, and occasion setting actually work and interact. Observational Learning Not all forms of learning are accounted for entirely by classical and operant conditioning. Imagine a child walking up to a group of children playing a game on the playground. The game looks fun, but it is new and unfamiliar. Rather than joining the game immediately, the child opts to sit back and watch the other children play a round or two. Observing the others, the child takes note of the ways in which they behave while playing the game. 
By watching the behavior of the other kids, the child can figure out the rules of the game and even some strategies for doing well at the game. This is called observational learning. Observational learning is a component of Albert Bandura’s Social Learning Theory (Bandura, 1977), which posits that individuals can learn novel responses via observation of key others’ behaviors. Observational learning does not necessarily require reinforcement, but instead hinges on the presence of others, referred to as social models. Social models are typically of higher status or authority compared to the observer, examples of which include parents, teachers, and police officers. In the example above, the children who already know how to play the game could be thought of as being authorities—and are therefore social models—even though they are the same age as the observer. By observing how the social models behave, an individual is able to learn how to act in a certain situation. Other examples of observational learning might include a child learning to place her napkin in her lap by watching her parents at the dinner table, or a customer learning where to find the ketchup and mustard after observing other customers at a hot dog stand. Bandura theorizes that the observational learning process consists of four parts. The first is attention—as, quite simply, one must pay attention to what s/he is observing in order to learn. The second part is retention: to learn, one must be able to retain the behavior s/he is observing in memory. The third part of observational learning, initiation, acknowledges that the learner must be able to execute (or initiate) the learned behavior. Lastly, the observer must possess the motivation to engage in observational learning. In our vignette, the child must want to learn how to play the game in order to properly engage in observational learning. Researchers have conducted countless experiments designed to explore observational learning, the most famous of which is Albert Bandura’s “Bobo doll experiment.” In this experiment (Bandura, Ross, & Ross, 1961), Bandura had children individually observe an adult social model interact with a clown doll (“Bobo”). For one group of children, the adult interacted aggressively with Bobo: punching it, kicking it, throwing it, and even hitting it in the face with a toy mallet. Another group of children watched the adult interact with other toys, displaying no aggression toward Bobo. In both instances the adult left and the children were allowed to interact with Bobo on their own. Bandura found that children exposed to the aggressive social model were significantly more likely to behave aggressively toward Bobo, hitting and kicking him, compared to those exposed to the non-aggressive model. The researchers concluded that the children in the aggressive group used their observations of the adult social model’s behavior to determine that aggressive behavior toward Bobo was acceptable. While reinforcement was not required to elicit the children’s behavior in Bandura’s first experiment, it is important to acknowledge that consequences do play a role within observational learning. A later adaptation of this study (Bandura, Ross, & Ross, 1963) demonstrated that children in the aggression group showed less aggressive behavior if they witnessed the adult model receive punishment for aggressing against Bobo.
Bandura referred to this process as vicarious reinforcement, as the children did not experience the reinforcement or punishment directly, yet were still influenced by observing it. Conclusion We have covered three primary explanations for how we learn to behave and interact with the world around us. Considering your own experiences, how well do these theories apply to you? Maybe when reflecting on your personal sense of fashion, you realize that you tend to select clothes others have complimented you on (operant conditioning). Or maybe, thinking back on a new restaurant you tried recently, you realize you chose it because its commercials play happy music (classical conditioning). Or maybe you are now always on time with your assignments, because you saw how others were punished when they were late (observational learning). Regardless of the activity, behavior, or response, there’s a good chance your “decision” to do it can be explained based on one of the theories presented in this module. Outside Resources Article: Rescorla, R. A. (1988). Pavlovian conditioning: It’s not what you think it is. American Psychologist, 43, 151–160. Book: Bouton, M. E. (2007). Learning and behavior: A contemporary synthesis. Sunderland, MA: Sinauer Associates. Book: Bouton, M. E. (2009). Learning theory. In B. J. Sadock, V. A. Sadock, & P. Ruiz (Eds.), Kaplan & Sadock’s comprehensive textbook of psychiatry (9th ed., Vol. 1, pp. 647–658). New York, NY: Lippincott Williams & Wilkins. Book: Domjan, M. (2010). The principles of learning and behavior (6th ed.). Belmont, CA: Wadsworth. Video: Albert Bandura discusses the Bobo Doll Experiment. Discussion Questions 1. Describe three examples of Pavlovian (classical) conditioning that you have seen in your own behavior, or that of your friends or family, in the past few days. 2. Describe three examples of instrumental (operant) conditioning that you have seen in your own behavior, or that of your friends or family, in the past few days. 3. Drugs can be potent reinforcers. Discuss how Pavlovian conditioning and instrumental conditioning can work together to influence drug taking. 4. In the modern world, processed foods are highly available and have been engineered to be highly palatable and reinforcing. Discuss how Pavlovian and instrumental conditioning can work together to explain why people often eat too much. 5. How does blocking challenge the idea that pairings of a CS and US are sufficient to cause Pavlovian conditioning? What is important in creating Pavlovian learning? 6. How does the reinforcer devaluation effect challenge the idea that reinforcers merely “stamp in” the operant response? What does the effect tell us that animals actually learn in operant conditioning? 7. With regards to social learning do you think people learn violence from observing violence in movies? Why or why not? 8. What do you think you have learned through social learning? Who are your social models? Vocabulary Blocking In classical conditioning, the finding that no conditioning occurs to a stimulus if it is combined with a previously conditioned stimulus during conditioning trials. Suggests that information, surprise value, or prediction error is important in conditioning. Categorize To sort or arrange different items into classes or categories. Classical conditioning The procedure in which an initially neutral stimulus (the conditioned stimulus, or CS) is paired with an unconditioned stimulus (or US). 
The result is that the conditioned stimulus begins to elicit a conditioned response (CR). Classical conditioning is nowadays considered important as both a behavioral phenomenon and as a method to study simple associative learning. Same as Pavlovian conditioning. Conditioned compensatory response In classical conditioning, a conditioned response that opposes, rather than is the same as, the unconditioned response. It functions to reduce the strength of the unconditioned response. Often seen in conditioning when drugs are used as unconditioned stimuli. Conditioned response (CR) The response that is elicited by the conditioned stimulus after classical conditioning has taken place. Conditioned stimulus (CS) An initially neutral stimulus (like a bell, light, or tone) that elicits a conditioned response after it has been associated with an unconditioned stimulus. Context Stimuli that are in the background whenever learning occurs. For instance, the Skinner box or room in which learning takes place is the classic example of a context. However, “context” can also be provided by internal stimuli, such as the sensory effects of drugs (e.g., being under the influence of alcohol has stimulus properties that provide a context) and mood states (e.g., being happy or sad). It can also be provided by a specific period in time—the passage of time is sometimes said to change the “temporal context.” Discriminative stimulus In operant conditioning, a stimulus that signals whether the response will be reinforced. It is said to “set the occasion” for the operant response. Extinction Decrease in the strength of a learned behavior that occurs when the conditioned stimulus is presented without the unconditioned stimulus (in classical conditioning) or when the behavior is no longer reinforced (in instrumental conditioning). The term describes both the procedure (the US or reinforcer is no longer presented) as well as the result of the procedure (the learned response declines). Behaviors that have been reduced in strength through extinction are said to be “extinguished.” Fear conditioning A type of classical or Pavlovian conditioning in which the conditioned stimulus (CS) is associated with an aversive unconditioned stimulus (US), such as a foot shock. As a consequence of learning, the CS comes to evoke fear. The phenomenon is thought to be involved in the development of anxiety disorders in humans. Goal-directed behavior Instrumental behavior that is influenced by the animal’s knowledge of the association between the behavior and its consequence and the current value of the consequence. Sensitive to the reinforcer devaluation effect. Habit Instrumental behavior that occurs automatically in the presence of a stimulus and is no longer influenced by the animal’s knowledge of the value of the reinforcer. Insensitive to the reinforcer devaluation effect. Instrumental conditioning Process in which animals learn about the relationship between their behaviors and their consequences. Also known as operant conditioning. Law of effect The idea that instrumental or operant responses are influenced by their effects. Responses that are followed by a pleasant state of affairs will be strengthened and those that are followed by discomfort will be weakened. Nowadays, the term refers to the idea that operant or instrumental behaviors are lawfully controlled by their consequences. Observational learning Learning by observing the behavior of others. Operant A behavior that is controlled by its consequences. 
The simplest example is the rat’s lever-pressing, which is controlled by the presentation of the reinforcer. Operant conditioning See instrumental conditioning. Pavlovian conditioning See classical conditioning. Prediction error When the outcome of a conditioning trial is different from that which is predicted by the conditioned stimuli that are present on the trial (i.e., when the US is surprising). Prediction error is necessary to create Pavlovian conditioning (and associative learning generally). As learning occurs over repeated conditioning trials, the conditioned stimulus increasingly predicts the unconditioned stimulus, and prediction error declines. Conditioning works to correct or reduce prediction error. Preparedness The idea that an organism’s evolutionary history can make it easy to learn a particular association. Because of preparedness, you are more likely to associate the taste of tequila, and not the circumstances surrounding drinking it, with getting sick. Similarly, humans are more likely to associate images of spiders and snakes than flowers and mushrooms with aversive outcomes like shocks. Punisher A stimulus that decreases the strength of an operant behavior when it is made a consequence of the behavior. Quantitative law of effect A mathematical rule that states that the effectiveness of a reinforcer at strengthening an operant response depends on the amount of reinforcement earned for all alternative behaviors. A reinforcer is less effective if there is a lot of reinforcement in the environment for other behaviors. Reinforcer Any consequence of a behavior that strengthens the behavior or increases the likelihood that it will be performed again. Reinforcer devaluation effect The finding that an animal will stop performing an instrumental response that once led to a reinforcer if the reinforcer is separately made aversive or undesirable. Renewal effect Recovery of an extinguished response that occurs when the context is changed after extinction. Especially strong when the change of context involves return to the context in which conditioning originally occurred. Can occur after extinction in either classical or instrumental conditioning. Social learning theory The theory that people can learn new responses and behaviors by observing the behavior of others. Social models Authorities that are the targets for observation and who model behaviors. Spontaneous recovery Recovery of an extinguished response that occurs with the passage of time after extinction. Can occur after extinction in either classical or instrumental conditioning. Stimulus control When an operant behavior is controlled by a stimulus that precedes it. Taste aversion learning The phenomenon in which a taste is paired with sickness, and this causes the organism to reject—and dislike—that taste in the future. Unconditioned response (UR) In classical conditioning, an innate response that is elicited by a stimulus before (or in the absence of) conditioning. Unconditioned stimulus (US) In classical conditioning, the stimulus that elicits the response before conditioning occurs. Vicarious reinforcement Learning that occurs by observing the reinforcement or punishment of another person.
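The quantitative law of effect defined above can be written compactly. One common way of stating the rule (following Herrnstein) is

$$B = \frac{kR}{R + R_e}$$

where $B$ is the rate of the operant behavior, $R$ is the rate of reinforcement earned by that behavior, $R_e$ is the rate of reinforcement available from all alternative behaviors, and $k$ is the maximum possible response rate. The equation expresses the verbal definition directly: as reinforcement for other behaviors ($R_e$) grows, the same reinforcer supports a lower rate of the target behavior.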
textbooks/socialsci/Psychology/Introductory_Psychology/Map%3A_Discover_Psychology_-_A_Brief_Introductory_Text_(Noba)/07%3A_LEARNING/7.01%3A_Conditioning_and_Learning.txt
• 8.1: Memory (Encoding, Storage, Retrieval) “Memory” is a single term that reflects a number of different abilities: holding information briefly while working with it (working memory), remembering episodes of one’s life (episodic memory), and our general knowledge of facts of the world (semantic memory), among other types. Remembering episodes involves three processes: encoding information (learning it, by perceiving it and relating it to past knowledge), storing it (maintaining it over time), and then retrieving it (accessing the information when needed). • 8.2: Eyewitness Testimony and Memory Biases Eyewitnesses can provide very compelling legal testimony, but rather than recording experiences flawlessly, their memories are susceptible to a variety of errors and biases. They (like the rest of us) can make errors in remembering specific details and can even remember whole events that did not actually happen. In this module, we discuss several of the common types of errors, and what they can tell us about human memory and its interactions with the legal system. 08: MEMORY By Kathleen B. McDermott and Henry L. Roediger III Washington University in St. Louis “Memory” is a single term that reflects a number of different abilities: holding information briefly while working with it (working memory), remembering episodes of one’s life (episodic memory), and our general knowledge of facts of the world (semantic memory), among other types. Remembering episodes involves three processes: encoding information (learning it, by perceiving it and relating it to past knowledge), storing it (maintaining it over time), and then retrieving it (accessing the information when needed). Failures can occur at any stage, leading to forgetting or to having false memories. The key to improving one’s memory is to improve processes of encoding and to use techniques that guarantee effective retrieval. Good encoding techniques include relating new information to what one already knows, forming mental images, and creating associations among information that needs to be remembered. The key to good retrieval is developing effective cues that will lead the rememberer back to the encoded information. Classic mnemonic systems, known since the time of the ancient Greeks and still used by some today, can greatly improve one’s memory abilities. learning objectives • Define and note differences between the following forms of memory: working memory, episodic memory, semantic memory, collective memory. • Describe the three stages in the process of learning and remembering. • Describe strategies that can be used to enhance the original learning or encoding of information. • Describe strategies that can improve the process of retrieval. • Describe why the classic mnemonic device, the method of loci, works so well. Introduction In 2013, Simon Reinhard sat in front of 60 people in a room at Washington University, where he memorized an increasingly long series of digits. On the first round, a computer generated 10 random digits—6 1 9 4 8 5 6 3 7 1—on a screen for 10 seconds. After the series disappeared, Simon typed them into his computer. His recollection was perfect. In the next phase, 20 digits appeared on the screen for 20 seconds. Again, Simon got them all correct. No one in the audience (mostly professors, graduate students, and undergraduate students) could recall the 20 digits perfectly. Then came 30 digits, studied for 30 seconds; once again, Simon didn’t misplace even a single digit.
For a final trial, 50 digits appeared on the screen for 50 seconds, and again, Simon got them all right. In fact, Simon would have been happy to keep going. His record in this task—called “forward digit span”—is 240 digits! When most of us witness a performance like that of Simon Reinhard, we think one of two things: First, maybe he’s cheating somehow. (No, he is not.) Second, Simon must have abilities more advanced than the rest of humankind. After all, psychologists established many years ago that the normal memory span for adults is about 7 digits, with some of us able to recall a few more and others a few less (Miller, 1956). That is why the first phone numbers were limited to 7 digits—psychologists determined that many errors occurred (costing the phone company money) when the number was increased to even 8 digits. But in normal testing, no one gets 50 digits correct in a row, much less 240. So, does Simon Reinhard simply have a photographic memory? He does not. Instead, Simon has taught himself simple strategies for remembering that have greatly increased his capacity for remembering virtually any type of material—digits, words, faces and names, poetry, historical dates, and so on. Twelve years earlier, before he started training his memory abilities, he had a digit span of 7, just like most of us. Simon has been training his abilities for about 10 years as of this writing, and has risen to be in the top two of “memory athletes.” In 2012, he came in second place in the World Memory Championships (composed of 11 tasks), held in London. He currently ranks second in the world, behind another German competitor, Johannes Mallow. In this module, we reveal what psychologists and others have learned about memory, and we also explain the general principles by which you can improve your own memory for factual material. Varieties of Memory For most of us, remembering digits relies on short-term memory, or working memory—the ability to hold information in our minds for a brief time and work with it (e.g., multiplying 24 x 17 without using paper would rely on working memory). Another type of memory is episodic memory—the ability to remember the episodes of our lives. If you were given the task of recalling everything you did 2 days ago, that would be a test of episodic memory; you would be required to mentally travel through the day in your mind and note the main events. Semantic memory is our storehouse of more-or-less permanent knowledge, such as the meanings of words in a language (e.g., the meaning of “parasol”) and the huge collection of facts about the world (e.g., there are 196 countries in the world, and 206 bones in your body). Collective memory refers to the kind of memory that people in a group share (whether family, community, schoolmates, or citizens of a state or a country). For example, residents of small towns often strongly identify with those towns, remembering the local customs and historical events in a unique way. That is, the community’s collective memory passes stories and recollections between neighbors and to future generations, forming a memory system unto itself. Psychologists continue to debate the classification of types of memory, as well as which types rely on others (Tulving, 2007), but for this module we will focus on episodic memory. 
Episodic memory is usually what people think of when they hear the word “memory.” For example, when people say that an older relative is “losing her memory” due to Alzheimer’s disease, the type of memory-loss they are referring to is the inability to recall events, or episodic memory. (Semantic memory is actually preserved in early-stage Alzheimer’s disease.) Although remembering specific events that have happened over the course of one’s entire life (e.g., your experiences in sixth grade) can be referred to as autobiographical memory, we will focus primarily on the episodic memories of more recent events. Three Stages of the Learning/Memory Process Psychologists distinguish between three necessary stages in the learning and memory process: encoding, storage, and retrieval (Melton, 1963). Encoding is defined as the initial learning of information; storage refers to maintaining information over time; retrieval is the ability to access information when you need it. If you meet someone for the first time at a party, you need to encode her name (Lyn Goff) while you associate her name with her face. Then you need to maintain the information over time. If you see her a week later, you need to recognize her face and have it serve as a cue to retrieve her name. Any successful act of remembering requires that all three stages be intact. However, two types of errors can also occur. Forgetting is one type: you see the person you met at the party and you cannot recall her name. The other error is misremembering (false recall or false recognition): you see someone who looks like Lyn Goff and call the person by that name (false recognition of the face). Or, you might see the real Lyn Goff, recognize her face, but then call her by the name of another woman you met at the party (misrecall of her name). Whenever forgetting or misremembering occurs, we can ask, at which stage in the learning/memory process was there a failure?—though it is often difficult to answer this question with precision. One reason for this inaccuracy is that the three stages are not as discrete as our description implies. Rather, all three stages depend on one another. How we encode information determines how it will be stored and what cues will be effective when we try to retrieve it. And too, the act of retrieval itself also changes the way information is subsequently remembered, usually aiding later recall of the retrieved information. The central point for now is that the three stages—encoding, storage, and retrieval—affect one another, and are inextricably bound together. Encoding Encoding refers to the initial experience of perceiving and learning information. Psychologists often study recall by having participants study a list of pictures or words. Encoding in these situations is fairly straightforward. However, “real life” encoding is much more challenging. When you walk across campus, for example, you encounter countless sights and sounds—friends passing by, people playing Frisbee, music in the air. The physical and mental environments are much too rich for you to encode all the happenings around you or the internal thoughts you have in response to them. So, an important first principle of encoding is that it is selective: we attend to some events in our environment and we ignore others. A second point about encoding is that it is prolific; we are always encoding the events of our lives—attending to the world, trying to understand it. 
Normally this presents no problem, as our days are filled with routine occurrences, so we don’t need to pay attention to everything. But if something does happen that seems strange—during your daily walk across campus, you see a giraffe—then we pay close attention and try to understand why we are seeing what we are seeing. Right after your typical walk across campus (one without the appearance of a giraffe), you would be able to remember the events reasonably well if you were asked. You could say whom you bumped into, what song was playing from a radio, and so on. However, suppose someone asked you to recall the same walk a month later. You wouldn’t stand a chance. You would likely be able to recount the basics of a typical walk across campus, but not the precise details of that particular walk. Yet, if you had seen a giraffe during that walk, the event would have been fixed in your mind for a long time, probably for the rest of your life. You would tell your friends about it, and, on later occasions when you saw a giraffe, you might be reminded of the day you saw one on campus. Psychologists have long pinpointed distinctiveness—having an event stand out as quite different from a background of similar events—as a key to remembering events (Hunt, 2003). In addition, when vivid memories are tinged with strong emotional content, they often seem to leave a permanent mark on us. Public tragedies, such as terrorist attacks, often create vivid memories in those who witnessed them. But even those of us not directly involved in such events may have vivid memories of them, including memories of first hearing about them. For example, many people are able to recall their exact physical location when they first learned about the assassination or accidental death of a national figure. The term flashbulb memory was originally coined by Brown and Kulik (1977) to describe this sort of vivid memory of finding out an important piece of news. The name refers to how some memories seem to be captured in the mind like a flash photograph; because of the distinctiveness and emotionality of the news, they seem to become permanently etched in the mind with exceptional clarity compared to other memories. Take a moment and think back on your own life. Is there a particular memory that seems sharper than others? A memory where you can recall unusual details, like the colors of mundane things around you, or the exact positions of surrounding objects? Although people have great confidence in flashbulb memories like these, the truth is, our objective accuracy with them is far from perfect (Talarico & Rubin, 2003). That is, even though people may have great confidence in what they recall, their memories are not as accurate (e.g., what the actual colors were; where objects were truly placed) as they tend to imagine. Nonetheless, all other things being equal, distinctive and emotional events are well-remembered. Details do not leap perfectly from the world into a person’s mind. We might say that we went to a party and remember it, but what we remember is (at best) what we encoded. As noted above, the process of encoding is selective, and in complex situations, relatively few of many possible details are noticed and encoded. The process of encoding always involves recoding—that is, taking the information from the form it is delivered to us and then converting it in a way that we can make sense of it. 
For example, you might try to remember the colors of a rainbow by using the acronym ROY G BIV (red, orange, yellow, green, blue, indigo, violet). The process of recoding the colors into a name can help us to remember. However, recoding can also introduce errors—when we accidentally add information during encoding, then remember that new material as if it had been part of the actual experience (as discussed below). Psychologists have studied many recoding strategies that can be used during study to improve retention. First, research advises that, as we study, we should think of the meaning of the events (Craik & Lockhart, 1972), and we should try to relate new events to information we already know. This helps us form associations that we can use to retrieve information later. Second, imagining events also makes them more memorable; creating vivid images out of information (even verbal information) can greatly improve later recall (Bower & Reitman, 1972). Creating imagery is part of the technique Simon Reinhard uses to remember huge numbers of digits, but we can all use images to encode information more effectively. The basic concept behind good encoding strategies is to form distinctive memories (ones that stand out), and to form links or associations among memories to help later retrieval (Hunt & McDaniel, 1993). Using study strategies such as the ones described here is challenging, but the effort is well worth the benefits of enhanced learning and retention. We emphasized earlier that encoding is selective: people cannot encode all information they are exposed to. However, recoding can add information that was not even seen or heard during the initial encoding phase. Several of the recoding processes, like forming associations between memories, can happen without our awareness. This is one reason people can sometimes remember events that did not actually happen—because during the process of recoding, details got added. One common way of inducing false memories in the laboratory employs a word-list technique (Deese, 1959; Roediger & McDermott, 1995). Participants hear lists of 15 words, like door, glass, pane, shade, ledge, sill, house, open, curtain, frame, view, breeze, sash, screen, and shutter. Later, participants are given a test in which they are shown a list of words and asked to pick out the ones they’d heard earlier. This second list contains some words from the first list (e.g., door, pane, frame) and some words not from the list (e.g., arm, phone, bottle). In this example, one of the words on the test is window, which—importantly—does not appear in the first list, but which is related to other words in that list. When subjects were tested, they were reasonably accurate with the studied words (door, etc.), recognizing them 72% of the time. However, when window was on the test, they falsely recognized it as having been on the list 84% of the time (Stadler, Roediger, & McDermott, 1999). The same thing happened with many other lists the authors used. This phenomenon is referred to as the DRM (for Deese-Roediger-McDermott) effect. One explanation for such results is that, while students listened to items in the list, the words triggered the students to think about window, even though window was never presented. In this way, people seem to encode events that are not actually part of their experience. Because humans are creative, we are always going beyond the information we are given: we automatically make associations and infer from them what is happening.
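To make the DRM procedure concrete, here is a minimal sketch of how one participant’s recognition responses could be scored. It is only an illustration: the short test list and the scoring function below are simplifying assumptions, not the materials or software used in the studies cited above.

```python
# Illustrative sketch of scoring a DRM-style recognition test.
# The study list comes from the example in the text; the short test list and
# the scoring function are simplified assumptions for illustration only.

STUDY_LIST = ["door", "glass", "pane", "shade", "ledge", "sill", "house",
              "open", "curtain", "frame", "view", "breeze", "sash",
              "screen", "shutter"]            # associates of "window"
CRITICAL_LURE = "window"                      # related word that was never studied
UNRELATED_LURES = ["arm", "phone", "bottle"]  # unrelated words on the test

# A recognition test mixing studied words, the critical lure, and unrelated lures.
TEST_ITEMS = ["door", "pane", "frame", CRITICAL_LURE] + UNRELATED_LURES

def score(responses):
    """responses maps each test item to True ('old') or False ('new')."""
    studied = [w for w in TEST_ITEMS if w in STUDY_LIST]
    hits = sum(responses[w] for w in studied)
    false_alarms = sum(responses[w] for w in UNRELATED_LURES)
    return {
        "hit_rate": hits / len(studied),
        "falsely_recognized_critical_lure": responses[CRITICAL_LURE],
        "unrelated_false_alarm_rate": false_alarms / len(UNRELATED_LURES),
    }

# A typical DRM pattern: the participant says "old" to the studied words and,
# crucially, also to the never-presented critical lure "window."
responses = {w: (w in STUDY_LIST or w == CRITICAL_LURE) for w in TEST_ITEMS}
print(score(responses))
```

At the group level, the pattern reported above corresponds to a high hit rate for studied words (about 72%) alongside an even higher false-recognition rate for the critical lure (about 84%).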
But, as with the word association mix-up above, sometimes we make false memories from our inferences—remembering the inferences themselves as if they were actual experiences. To illustrate this, Brewer (1977) gave people sentences to remember that were designed to elicit pragmatic inferences. Inferences, in general, refer to instances when something is not explicitly stated, but we are still able to guess the undisclosed intention. For example, if your friend told you that she didn’t want to go out to eat, you may infer that she doesn’t have the money to go out, or that she’s too tired. With pragmatic inferences, there is usually one particular inference you’re likely to make. Consider the statement Brewer (1977) gave her participants: “The karate champion hit the cinder block.” After hearing or seeing this sentence, participants who were given a memory test tended to remember the statement as having been, “The karate champion broke the cinder block.” This remembered statement is not necessarily a logical inference (i.e., it is perfectly reasonable that a karate champion could hit a cinder block without breaking it). Nevertheless, the pragmatic conclusion from hearing such a sentence is that the block was likely broken. The participants remembered this inference they made while hearing the sentence in place of the actual words that were in the sentence (see also McDermott & Chan, 2006). Encoding—the initial registration of information—is essential in the learning and memory process. Unless an event is encoded in some fashion, it will not be successfully remembered later. However, just because an event is encoded (even if it is encoded well), there’s no guarantee that it will be remembered later. Storage Every experience we have changes our brains. That may seem like a bold, even strange, claim at first, but it’s true. We encode each of our experiences within the structures of the nervous system, making new impressions in the process—and each of those impressions involves changes in the brain. Psychologists (and neurobiologists) say that experiences leave memory traces, or engrams (the two terms are synonyms). Memories have to be stored somewhere in the brain, so in order to do so, the brain biochemically alters itself and its neural tissue. Just like you might write yourself a note to remind you of something, the brain “writes” a memory trace, changing its own physical composition to do so. The basic idea is that events (occurrences in our environment) create engrams through a process of consolidation: the neural changes that occur after learning to create the memory trace of an experience. Although neurobiologists are concerned with exactly what neural processes change when memories are created, for psychologists, the term memory trace simply refers to the physical change in the nervous system (whatever that may be, exactly) that represents our experience. Although the concept of engram or memory trace is extremely useful, we shouldn’t take the term too literally. It is important to understand that memory traces are not perfect little packets of information that lie dormant in the brain, waiting to be called forward to give an accurate report of past experience. Memory traces are not like video or audio recordings, capturing experience with great accuracy; as discussed earlier, we often have errors in our memory, which would not exist if memory traces were perfect packets of information. 
Thus, it is wrong to think that remembering involves simply “reading out” a faithful record of past experience. Rather, when we remember past events, we reconstruct them with the aid of our memory traces—but also with our current belief of what happened. For example, if you were trying to recall for the police who started a fight at a bar, you may not have a memory trace of who pushed whom first. However, let’s say you remember that one of the guys held the door open for you. When thinking back to the start of the fight, this knowledge (of how one guy was friendly to you) may unconsciously influence your memory of what happened in favor of the nice guy. Thus, memory is a construction of what you actually recall and what you believe happened. In a phrase, remembering is reconstructive (we reconstruct our past with the aid of memory traces) not reproductive (a perfect reproduction or recreation of the past). Psychologists refer to the time between learning and testing as the retention interval. Memories can consolidate during that time, aiding retention. However, experiences can also occur that undermine the memory. For example, think of what you had for lunch yesterday—a pretty easy task. However, if you had to recall what you had for lunch 17 days ago, you may well fail (assuming you don’t eat the same thing every day). The 16 lunches you’ve had since that one have created retroactive interference. Retroactive interference refers to new activities (i.e., the subsequent lunches) during the retention interval (i.e., the time between the lunch 17 days ago and now) that interfere with retrieving the specific, older memory (i.e., the lunch details from 17 days ago). But just as newer things can interfere with remembering older things, so can the opposite happen. Proactive interference is when past memories interfere with the encoding of new ones. For example, if you have ever studied a second language, often times the grammar and vocabulary of your native language will pop into your head, impairing your fluency in the foreign language. Retroactive interference is one of the main causes of forgetting (McGeoch, 1932). In the module Eyewitness Testimony and Memory Biases http://noba.to/uy49tm37 Elizabeth Loftus describes her fascinating work on eyewitness memory, in which she shows how memory for an event can be changed via misinformation supplied during the retention interval. For example, if you witnessed a car crash but subsequently heard people describing it from their own perspective, this new information may interfere with or disrupt your own personal recollection of the crash. In fact, you may even come to remember the event happening exactly as the others described it! This misinformation effect in eyewitness memory represents a type of retroactive interference that can occur during the retention interval (see Loftus [2005] for a review). Of course, if correct information is given during the retention interval, the witness’s memory will usually be improved. Although interference may arise between the occurrence of an event and the attempt to recall it, the effect itself is always expressed when we retrieve memories, the topic to which we turn next. Retrieval Endel Tulving argued that “the key process in memory is retrieval” (1991, p. 91). Why should retrieval be given more prominence than encoding or storage? For one thing, if information were encoded and stored but could not be retrieved, it would be useless. 
As discussed previously in this module, we encode and store thousands of events—conversations, sights and sounds—every day, creating memory traces. However, we later access only a tiny portion of what we’ve taken in. Most of our memories will never be used—in the sense of being brought back to mind, consciously. This fact seems so obvious that we rarely reflect on it. All those events that happened to you in the fourth grade that seemed so important then? Now, many years later, you would struggle to remember even a few. You may wonder if the traces of those memories still exist in some latent form. Unfortunately, with currently available methods, it is impossible to know. Psychologists distinguish information that is available in memory from that which is accessible (Tulving & Pearlstone, 1966). Available information is the information that is stored in memory—but precisely how much and what types are stored cannot be known. That is, all we can know is what information we can retrieve—accessible information. The assumption is that accessible information represents only a tiny slice of the information available in our brains. Most of us have had the experience of trying to remember some fact or event, giving up, and then—all of a sudden!—it comes to us at a later time, even after we’ve stopped trying to remember it. Similarly, we all know the experience of failing to recall a fact, but then, if we are given several choices (as in a multiple-choice test), we are easily able to recognize it. What factors determine what information can be retrieved from memory? One critical factor is the type of hints, or cues, in the environment. You may hear a song on the radio that suddenly evokes memories of an earlier time in your life, even if you were not trying to remember it when the song came on. Nevertheless, the song is closely associated with that time, so it brings the experience to mind. The general principle that underlies the effectiveness of retrieval cues is the encoding specificity principle (Tulving & Thomson, 1973): when people encode information, they do so in specific ways. For example, take the song on the radio: perhaps you heard it while you were at a terrific party, having a great, philosophical conversation with a friend. Thus, the song became part of that whole complex experience. Years later, even though you haven’t thought about that party in ages, when you hear the song on the radio, the whole experience rushes back to you. In general, the encoding specificity principle states that, to the extent a retrieval cue (the song) matches or overlaps the memory trace of an experience (the party, the conversation), it will be effective in evoking the memory. A classic experiment on the encoding specificity principle had participants memorize a set of words in a unique setting. Later, the participants were tested on the word sets, either in the same location they learned the words or a different one. As a result of encoding specificity, the students who took the test in the same place they learned the words were actually able to recall more words (Godden & Baddeley, 1975) than the students who took the test in a new setting. In this instance, the physical context itself provided cues for retrieval. This is why it’s good to study for midterms and finals in the same room you’ll be taking them in. One caution with this principle, though, is that, for the cue to work, it can’t match too many other experiences (Nairne, 2002; Watkins, 1975). Consider a lab experiment.
Suppose you study 100 items; 99 are words, and one is a picture—of a penguin, item 50 in the list. Afterwards, the cue “recall the picture” would evoke “penguin” perfectly. No one would miss it. However, if the word “penguin” were placed in the same spot among the other 99 words, its memorability would be exceptionally worse. This outcome shows the power of distinctiveness that we discussed in the section on encoding: one picture is perfectly recalled from among 99 words because it stands out. Now consider what would happen if the experiment were repeated, but there were 25 pictures distributed within the 100-item list. Although the picture of the penguin would still be there, the probability that the cue “recall the picture” (at item 50) would be useful for the penguin would drop correspondingly. Watkins (1975) referred to this outcome as demonstrating the cue overload principle. That is, to be effective, a retrieval cue cannot be overloaded with too many memories. For the cue “recall the picture” to be effective, it should only match one item in the target set (as in the one-picture, 99-word case). To sum up how memory cues function: for a retrieval cue to be effective, a match must exist between the cue and the desired target memory; furthermore, to produce the best retrieval, the cue-target relationship should be distinctive. Next, we will see how the encoding specificity principle can work in practice. Psychologists measure memory performance by using production tests (involving recall) or recognition tests (involving the selection of correct from incorrect information, e.g., a multiple-choice test). For example, with our list of 100 words, one group of people might be asked to recall the list in any order (a free recall test), while a different group might be asked to circle the 100 studied words out of a mix with another 100, unstudied words (a recognition test). In this situation, the recognition test would likely produce better performance from participants than the recall test. We usually think of recognition tests as being quite easy, because the cue for retrieval is a copy of the actual event that was presented for study. After all, what could be a better cue than the exact target (memory) the person is trying to access? In most cases, this line of reasoning is true; nevertheless, recognition tests do not provide perfect indexes of what is stored in memory. That is, you can fail to recognize a target staring you right in the face, yet be able to recall it later with a different set of cues (Watkins & Tulving, 1975). For example, suppose you had the task of recognizing the surnames of famous authors. At first, you might think that being given the actual last name would always be the best cue. However, research has shown this not necessarily to be true (Muter, 1984). When given names such as Tolstoy, Shaw, Shakespeare, and Lee, subjects might well say that Tolstoy and Shakespeare are famous authors, whereas Shaw and Lee are not. But, when given a cued recall test using first names, people often recall items (produce them) that they had failed to recognize before. For example, in this instance, a cue like George Bernard ________ often leads to a recall of “Shaw,” even though people initially failed to recognize Shaw as a famous author’s name. Yet, when given the cue “William,” people may not come up with Shakespeare, because William is a common name that matches many people (the cue overload principle at work). 
This strange fact—that recall can sometimes lead to better performance than recognition—can be explained by the encoding specificity principle. As a cue, George Bernard _________ matches the way the famous writer is stored in memory better than his surname, Shaw, does (even though it is the target). Further, the match is quite distinctive with George Bernard ___________, but the cue William _________________ is much more overloaded (Prince William, William Yeats, William Faulkner, will.i.am). The phenomenon we have been describing is called the recognition failure of recallable words, which highlights the point that a cue will be most effective depending on how the information has been encoded (Tulving & Thomson, 1973). The point is, the cues that work best to evoke retrieval are those that recreate the event or name to be remembered, whereas sometimes even the target itself, such as Shaw in the above example, is not the best cue. Whenever we think about our past, we engage in the act of retrieval. We usually think that retrieval is an objective act because we tend to imagine that retrieving a memory is like pulling a book from a shelf, and after we are done with it, we return the book to the shelf just as it was. However, research shows this assumption to be false; far from being a static repository of data, the memory is constantly changing. In fact, every time we retrieve a memory, it is altered. For example, the act of retrieval itself (of a fact, concept, or event) makes the retrieved memory much more likely to be retrieved again, a phenomenon called the testing effect or the retrieval practice effect (Pyc & Rawson, 2009; Roediger & Karpicke, 2006). However, retrieving some information can actually cause us to forget other information related to it, a phenomenon called retrieval-induced forgetting (Anderson, Bjork, & Bjork, 1994). Thus the act of retrieval can be a double-edged sword—strengthening the memory just retrieved (usually by a large amount) but harming related information (though this effect is often relatively small). As discussed earlier, retrieval of distant memories is reconstructive. We weave the concrete bits and pieces of events in with assumptions and preferences to form a coherent story (Bartlett, 1932). For example, if during your 10th birthday, your dog got to your cake before you did, you would likely tell that story for years afterward. Say, then, in later years you misremember where the dog actually found the cake, but repeat that error over and over during subsequent retellings of the story. Over time, that inaccuracy would become a basic fact of the event in your mind. Just as retrieval practice (repetition) enhances accurate memories, so will it strengthen errors or false memories (McDermott, 2006). Sometimes memories can even be manufactured just from hearing a vivid story. Consider the following episode, recounted by Jean Piaget, the famous developmental psychologist, from his childhood: One of my first memories would date, if it were true, from my second year. I can still see, most clearly, the following scene, in which I believed until I was about 15. I was sitting in my pram . . . when a man tried to kidnap me. I was held in by the strap fastened round me while my nurse bravely tried to stand between me and the thief. She received various scratches, and I can still vaguely see those on her face. . . .
When I was about 15, my parents received a letter from my former nurse saying that she had been converted to the Salvation Army. She wanted to confess her past faults, and in particular to return the watch she had been given as a reward on this occasion. She had made up the whole story, faking the scratches. I therefore must have heard, as a child, this story, which my parents believed, and projected it into the past in the form of a visual memory. . . . Many real memories are doubtless of the same order. (Norman & Schacter, 1997, pp. 187–188) Piaget’s vivid account represents a case of a pure reconstructive memory. He heard the tale told repeatedly, and doubtless told it (and thought about it) himself. The repeated telling cemented the events as though they had really happened, just as we are all open to the possibility of having “many real memories ... of the same order.” The fact that one can remember precise details (the location, the scratches) does not necessarily indicate that the memory is true, a point that has been confirmed in laboratory studies, too (e.g., Norman & Schacter, 1997). Putting It All Together: Improving Your Memory A central theme of this module has been the importance of the encoding and retrieval processes, and their interaction. To recap: to improve learning and memory, we need to encode information in conjunction with excellent cues that will bring back the remembered events when we need them. But how do we do this? Keep in mind the two critical principles we have discussed: to maximize retrieval, we should construct meaningful cues that remind us of the original experience, and those cues should be distinctive and not associated with other memories. These two conditions are critical in maximizing cue effectiveness (Nairne, 2002). So, how can these principles be adapted for use in many situations? Let’s go back to how we started the module, with Simon Reinhard’s ability to memorize huge numbers of digits. Although it was not obvious, he applied these same general memory principles, but in a more deliberate way. In fact, all mnemonic devices, or memory aids/tricks, rely on these fundamental principles. In a typical case, the person learns a set of cues and then applies these cues to learn and remember information. Consider the set of 20 items below that are easy to learn and remember (Bower & Reitman, 1972).
1 is a gun. 11 is penny-one, hot dog bun.
2 is a shoe. 12 is penny-two, airplane glue.
3 is a tree. 13 is penny-three, bumble bee.
4 is a door. 14 is penny-four, grocery store.
5 is knives. 15 is penny-five, big beehive.
6 is sticks. 16 is penny-six, magic tricks.
7 is oven. 17 is penny-seven, go to heaven.
8 is plate. 18 is penny-eight, golden gate.
9 is wine. 19 is penny-nine, ball of twine.
10 is hen. 20 is penny-ten, ballpoint pen.
It would probably take you less than 10 minutes to learn this list and practice recalling it several times (remember to use retrieval practice!). If you were to do so, you would have a set of peg words on which you could “hang” memories. In fact, this mnemonic device is called the peg word technique. If you then needed to remember some discrete items—say a grocery list, or points you wanted to make in a speech—this method would let you do so in a very precise yet flexible way. Suppose you had to remember bread, peanut butter, bananas, lettuce, and so on. The way to use the method is to form a vivid image of what you want to remember and imagine it interacting with your peg words (as many as you need).
For example, for these items, you might imagine a large gun (the first peg word) shooting a loaf of bread, then a jar of peanut butter inside a shoe, then large bunches of bananas hanging from a tree, then a door slamming on a head of lettuce with leaves flying everywhere. The idea is to provide good, distinctive cues (the weirder the better!) for the information you need to remember while you are learning it. If you do this, then retrieving it later is relatively easy. You know your cues perfectly (one is gun, etc.), so you simply go through your cue word list and “look” in your mind’s eye at the image stored there (bread, in this case). This peg word method may sound strange at first, but it works quite well, even with little training (Roediger, 1980). One word of warning, though, is that the items to be remembered need to be presented relatively slowly at first, until you have practice associating each with its cue word. People get faster with time. Another interesting aspect of this technique is that it’s just as easy to recall the items in backwards order as forwards. This is because the peg words provide direct access to the memorized items, regardless of order. How did Simon Reinhard remember those digits? Essentially, he has a much more complex system based on these same principles. In his case, he uses “memory palaces” (elaborate scenes with discrete places) combined with huge sets of images for digits. For example, imagine mentally walking through the home where you grew up and identifying as many distinct areas and objects as possible. Simon has hundreds of such memory palaces that he uses. Next, for remembering digits, he has memorized a set of 10,000 images. Every four-digit number for him immediately brings forth a mental image. So, for example, 6187 might recall Michael Jackson. When Simon hears all the numbers coming at him, he places an image for every four digits into locations in his memory palace. He can do this at an incredibly rapid rate, faster than 4 digits per 4 seconds when they are flashed visually, as in the demonstration at the beginning of the module. As noted, his record is 240 digits, recalled in exact order. Simon also holds the world record in an event called “speed cards,” which involves memorizing the precise order of a shuffled deck of cards. Simon was able to do this in 21.19 seconds! Again, he uses his memory palaces, and he encodes groups of cards as single images. Many books exist on how to improve memory using mnemonic devices, but all involve forming distinctive encoding operations and then having an infallible set of memory cues. We should add that to develop and use these memory systems beyond the basic peg system outlined above takes a great amount of time and concentration. The World Memory Championships are held every year and the records keep improving. However, for most common purposes, just keep in mind that to remember well you need to encode information in a distinctive way and to have good cues for retrieval. You can adapt a system that will meet most any purpose. Outside Resources Book: Brown, P. C., Roediger, H. L., & McDaniel, M. A. (2014). Make it stick: The science of successful learning. Cambridge, MA: Harvard University Press. www.amazon.com/Make-Stick-Sc.../dp/0674729013 Student Video 1: Eureka Foong’s - The Misinformation Effect. This is a student-made video illustrating this phenomenon of altered memory. It was one of the winning entries in the 2014 Noba Student Video Award.
Student Video 2: Kara McCord’s - Flashbulb Memories. This is a student-made video illustrating this phenomenon of autobiographical memory. It was one of the winning entries in the 2014 Noba Student Video Award. Student Video 3: Ang Rui Xia & Ong Jun Hao’s - The Misinformation Effect. Another student-made video exploring the misinformation effect. Also an award winner from 2014. Video: Simon Reinhard breaking the world record in speedcards. Web: Retrieval Practice, a website with research, resources, and tips for both educators and learners around the memory-strengthening skill of retrieval practice. http://www.retrievalpractice.org/ Discussion Questions 1. Mnemonists like Simon Reinhard develop mental “journeys,” which enable them to use the method of loci. Develop your own journey, which contains 20 places, in order, that you know well. One example might be: the front walkway to your parents’ apartment; their doorbell; the couch in their living room; etc. Be sure to use a set of places that you know well and that have a natural order to them (e.g., the walkway comes before the doorbell). Now you are more than halfway toward being able to memorize a set of 20 nouns, in order, rather quickly. As an optional second step, have a friend make a list of 20 such nouns and read them to you, slowly (e.g., one every 5 seconds). Use the method to attempt to remember the 20 items. 2. Recall a recent argument or misunderstanding you have had about memory (e.g., a debate over whether your girlfriend/boyfriend had agreed to something). In light of what you have just learned about memory, how do you think about it? Is it possible that the disagreement can be understood by one of you making a pragmatic inference? 3. Think about what you’ve learned in this module and about how you study for tests. On the basis of what you have learned, is there something you want to try that might help your study habits? Vocabulary Autobiographical memory Memory for the events of one’s life. Consolidation The process occurring after encoding that is believed to stabilize memory traces. Cue overload principle The principle stating that the more memories that are associated with a particular retrieval cue, the less effective the cue will be in prompting retrieval of any one memory. Distinctiveness The principle that unusual events (in a context of similar events) will be recalled and recognized better than uniform (nondistinctive) events. Encoding The initial experience of perceiving and learning events. Encoding specificity principle The hypothesis that a retrieval cue will be effective to the extent that information encoded from the cue overlaps or matches information in the engram or memory trace. Engrams A term indicating the change in the nervous system representing an event; also, memory trace. Episodic memory Memory for events in a particular time and place. Flashbulb memory Vivid personal memories of receiving the news of some momentous (and usually emotional) event. Memory traces A term indicating the change in the nervous system representing an event. Misinformation effect When erroneous information occurring after an event is remembered as having been part of the original event. Mnemonic devices A strategy for remembering large amounts of information, usually involving imagining events occurring on a journey or with some other set of memorized cues. Recoding The ubiquitous process during learning of taking information in one form and converting it to another form, usually one more easily remembered.
Retrieval The process of accessing stored information. Retroactive interference The phenomenon whereby events that occur after some particular event of interest will usually cause forgetting of the original event. Semantic memory The more or less permanent store of knowledge that people have. Storage The stage in the learning/memory process that bridges encoding and retrieval; the persistence of memory over time.
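The encoding specificity and cue overload principles defined above can also be made concrete with a small sketch. Representing each memory trace and each retrieval cue as a set of features is only an assumption made for illustration (the trace names and features below are invented), but it captures both ideas: a cue helps to the extent it overlaps the target trace, and it helps less the more other traces it also matches.

```python
# Illustrative sketch of encoding specificity and cue overload.
# Traces and cues are modeled as feature sets purely for illustration.

traces = {
    "party_conversation": {"song_x", "friend_anna", "philosophy", "balcony"},
    "road_trip":          {"song_x", "highway", "coffee"},
    "lecture":            {"professor", "notes", "projector"},
}

def cue_effectiveness(cue_features, target):
    """Overlap between the cue and the target trace (encoding specificity),
    discounted by how many other traces the same cue matches (cue overload)."""
    overlap = len(cue_features & traces[target])
    n_matching = sum(1 for trace in traces.values() if cue_features & trace)
    return overlap / n_matching if n_matching else 0.0

# "song_x" overlaps the party memory but also the road trip, so it is a weaker
# (overloaded) cue than the more distinctive "philosophy".
print(cue_effectiveness({"song_x"}, "party_conversation"))      # 0.5
print(cue_effectiveness({"philosophy"}, "party_conversation"))  # 1.0
```

In this toy scoring rule, the distinctive cue retrieves the target memory more effectively than the overloaded one, mirroring the Shaw versus William example discussed earlier in the module.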
textbooks/socialsci/Psychology/Introductory_Psychology/Map%3A_Discover_Psychology_-_A_Brief_Introductory_Text_(Noba)/08%3A_MEMORY/8.01%3A_Memory_%28Encoding%2C_Storage%2C_Retrieval%29.txt
By Cara Laney and Elizabeth F. Loftus Reed College, University of California, Irvine Eyewitnesses can provide very compelling legal testimony, but rather than recording experiences flawlessly, their memories are susceptible to a variety of errors and biases. They (like the rest of us) can make errors in remembering specific details and can even remember whole events that did not actually happen. In this module, we discuss several of the common types of errors, and what they can tell us about human memory and its interactions with the legal system. learning objectives • Describe the kinds of mistakes that eyewitnesses commonly make and some of the ways that this can impede justice. • Explain some of the errors that are common in human memory. • Describe some of the important research that has demonstrated human memory errors and their consequences. What Is Eyewitness Testimony? Eyewitness testimony is what happens when a person witnesses a crime (or accident, or other legally important event) and later gets up on the stand and recalls for the court all the details of the witnessed event. It involves a more complicated process than might initially be presumed. It includes what happens during the actual crime to facilitate or hamper witnessing, as well as everything that happens from the time the event is over to the later courtroom appearance. The eyewitness may be interviewed by the police and numerous lawyers, describe the perpetrator to several different people, and make an identification of the perpetrator, among other things. Why Is Eyewitness Testimony an Important Area of Psychological Research? When an eyewitness stands up in front of the court and describes what happened from her own perspective, this testimony can be extremely compelling—it is hard for those hearing this testimony to take it “with a grain of salt,” or otherwise adjust its power. But to what extent is this necessary? There is now a wealth of evidence, from research conducted over several decades, suggesting that eyewitness testimony is probably the most persuasive form of evidence presented in court, but in many cases, its accuracy is dubious. There is also evidence that mistaken eyewitness evidence can lead to wrongful conviction—sending people to prison for years or decades, even to death row, for crimes they did not commit. Faulty eyewitness testimony has been implicated in at least 75% of DNA exoneration cases—more than any other cause (Garrett, 2011). In a particularly famous case, a man named Ronald Cotton was identified by a rape victim, Jennifer Thompson, as her rapist, and was found guilty and sentenced to life in prison. After more than 10 years, he was exonerated (and the real rapist identified) based on DNA evidence. For details on this case and other (relatively) lucky individuals whose false convictions were subsequently overturned with DNA evidence, see the Innocence Project website (http://www.innocenceproject.org/). There is also hope, though, that many of the errors may be avoidable if proper precautions are taken during the investigative and judicial processes. Psychological science has taught us what some of those precautions might involve, and we discuss some of that science now. Misinformation In an early study of eyewitness memory, undergraduate subjects first watched a slideshow depicting a small red car driving and then hitting a pedestrian (Loftus, Miller, & Burns, 1978). Some subjects were then asked leading questions about what had happened in the slides. 
For example, subjects were asked, “How fast was the car traveling when it passed the yield sign?” But this question was actually designed to be misleading, because the original slide included a stop sign rather than a yield sign. Later, subjects were shown pairs of slides. One of the pair was the original slide containing the stop sign; the other was a replacement slide containing a yield sign. Subjects were asked which of the pair they had previously seen. Subjects who had been asked about the yield sign were likely to pick the slide showing the yield sign, even though they had originally seen the slide with the stop sign. In other words, the misinformation in the leading question led to inaccurate memory. This phenomenon is called the misinformation effect, because the misinformation that subjects were exposed to after the event (here in the form of a misleading question) apparently contaminates subjects’ memories of what they witnessed. Hundreds of subsequent studies have demonstrated that memory can be contaminated by erroneous information that people are exposed to after they witness an event (see Frenda, Nichols, & Loftus, 2011; Loftus, 2005). The misinformation in these studies has led people to incorrectly remember everything from small but crucial details of a perpetrator’s appearance to objects as large as a barn that wasn’t there at all. These studies have demonstrated that young adults (the typical research subjects in psychology) are often susceptible to misinformation, but that children and older adults can be even more susceptible (Bartlett & Memon, 2007; Ceci & Bruck, 1995). In addition, misinformation effects can occur easily, and without any intention to deceive (Allan & Gabbert, 2008). Even slight differences in the wording of a question can lead to misinformation effects. Subjects in one study were more likely to say yes when asked “Did you see the broken headlight?” than when asked “Did you see a broken headlight?” (Loftus, 1975). Other studies have shown that misinformation can corrupt memory even more easily when it is encountered in social situations (Gabbert, Memon, Allan, & Wright, 2004). This is a problem particularly in cases where more than one person witnesses a crime. In these cases, witnesses tend to talk to one another in the immediate aftermath of the crime, including as they wait for police to arrive. But because different witnesses are different people with different perspectives, they are likely to see or notice different things, and thus remember different things, even when they witness the same event. So when they communicate about the crime later, they not only reinforce common memories for the event, they also contaminate each other’s memories for the event (Gabbert, Memon, & Allan, 2003; Paterson & Kemp, 2006; Takarangi, Parker, & Garry, 2006). The misinformation effect has been modeled in the laboratory. Researchers had subjects watch a video in pairs. Both subjects sat in front of the same screen, but because they wore differently polarized glasses, they saw two different versions of a video, projected onto a screen. So, although they were both watching the same screen, and believed (quite reasonably) that they were watching the same video, they were actually watching two different versions of the video (Garry, French, Kinzett, & Mori, 2008). In the video, Eric the electrician is seen wandering through an unoccupied house and helping himself to the contents thereof. A total of eight details were different between the two videos. 
After watching the videos, the “co-witnesses” worked together on 12 memory test questions. Four of these questions dealt with details that were different in the two versions of the video, so subjects had the chance to influence one another. Then subjects worked individually on 20 additional memory test questions. Eight of these were for details that were different in the two videos. Subjects’ accuracy was highly dependent on whether they had discussed the details previously. Their accuracy for items they had not previously discussed with their co-witness was 79%. But for items that they had discussed, their accuracy dropped markedly, to 34%. That is, subjects allowed their co-witnesses to corrupt their memories for what they had seen. Identifying Perpetrators In addition to correctly remembering many details of the crimes they witness, eyewitnesses often need to remember the faces and other identifying features of the perpetrators of those crimes. Eyewitnesses are often asked to describe that perpetrator to law enforcement and later to make identifications from books of mug shots or lineups. Here, too, there is a substantial body of research demonstrating that eyewitnesses can make serious, but often understandable and even predictable, errors (Caputo & Dunning, 2007; Cutler & Penrod, 1995). In most jurisdictions in the United States, lineups are typically conducted with pictures, called photo spreads, rather than with actual people standing behind one-way glass (Wells, Memon, & Penrod, 2006). The eyewitness is given a set of small pictures of perhaps six or eight individuals who are dressed similarly and photographed in similar circumstances. One of these individuals is the police suspect, and the remainder are “foils” or “fillers” (people known to be innocent of the particular crime under investigation). If the eyewitness identifies the suspect, then the investigation of that suspect is likely to progress. If a witness identifies a foil or no one, then the police may choose to move their investigation in another direction. This process is modeled in laboratory studies of eyewitness identifications. In these studies, research subjects witness a mock crime (often as a short video) and then are asked to make an identification from a photo or a live lineup. Sometimes the lineups are target present, meaning that the perpetrator from the mock crime is actually in the lineup, and sometimes they are target absent, meaning that the lineup is made up entirely of foils. The subjects, or mock witnesses, are given some instructions and asked to pick the perpetrator out of the lineup. The particular details of the witnessing experience, the instructions, and the lineup members can all influence the extent to which the mock witness is likely to pick the perpetrator out of the lineup, or indeed to make any selection at all. Mock witnesses (and indeed real witnesses) can make errors in two different ways. They can fail to pick the perpetrator out of a target present lineup (by picking a foil or by neglecting to make a selection), or they can pick a foil in a target absent lineup (wherein the only correct choice is to not make a selection). Some factors have been shown to make eyewitness identification errors particularly likely. 
These include poor vision or viewing conditions during the crime, particularly stressful witnessing experiences, too little time to view the perpetrator or perpetrators, too much delay between witnessing and identifying, and being asked to identify a perpetrator from a race other than one’s own (Bornstein, Deffenbacher, Penrod, & McGorty, 2012; Brigham, Bennett, Meissner, & Mitchell, 2007; Burton, Wilson, Cowan, & Bruce, 1999; Deffenbacher, Bornstein, Penrod, & McGorty, 2004). It is hard for the legal system to do much about most of these problems. But there are some things that the justice system can do to help lineup identifications “go right.” For example, investigators can put together high-quality, fair lineups. A fair lineup is one in which the suspect and each of the foils is equally likely to be chosen by someone who has read an eyewitness description of the perpetrator but who did not actually witness the crime (Brigham, Ready, & Spier, 1990). This means that no one in the lineup should “stick out,” and that everyone should match the description given by the eyewitness. Other important recommendations that have come out of this research include better ways to conduct lineups, “double blind” lineups, unbiased instructions for witnesses, and conducting lineups in a sequential fashion (see Technical Working Group for Eyewitness Evidence, 1999; Wells et al., 1998; Wells & Olson, 2003). Kinds of Memory Biases Memory is also susceptible to a wide variety of other biases and errors. People can forget events that happened to them and people they once knew. They can mix up details across time and place. They can even remember whole complex events that never happened at all. Importantly, these errors, once made, can be very hard to unmake. A memory is no less “memorable” just because it is wrong. Some small memory errors are commonplace, and you have no doubt experienced many of them. You set down your keys without paying attention, and then cannot find them later when you go to look for them. You try to come up with a person’s name but cannot find it, even though you have the sense that it is right at the tip of your tongue (psychologists actually call this the tip-of-the-tongue effect, or TOT) (Brown, 1991). Other sorts of memory biases are more complicated and longer lasting. For example, it turns out that our expectations and beliefs about how the world works can have huge influences on our memories. Because many aspects of our everyday lives are full of redundancies, our memory systems take advantage of the recurring patterns by forming and using schemata, or memory templates (Alba & Hasher, 1983; Brewer & Treyens, 1981). Thus, we know to expect that a library will have shelves and tables and librarians, and so we don’t have to spend energy noticing these at the time. The result of this lack of attention, however, is that one is likely to remember schema-consistent information (such as tables), and to remember them in a rather generic way, whether or not they were actually present. False Memory Some memory errors are so “large” that they almost belong in a class of their own: false memories. Back in the early 1990s a pattern emerged whereby people would go into therapy for depression and other everyday problems, but over the course of the therapy develop memories for violent and horrible victimhood (Loftus & Ketcham, 1994). These patients’ therapists claimed that the patients were recovering genuine memories of real childhood abuse, buried deep in their minds for years or even decades. 
But some experimental psychologists believed that the memories were instead likely to be false—created in therapy. These researchers then set out to see whether it would indeed be possible for wholly false memories to be created by procedures similar to those used in these patients’ therapy. In early false memory studies, undergraduate subjects’ family members were recruited to provide events from the students’ lives. The student subjects were told that the researchers had talked to their family members and learned about four different events from their childhoods. The researchers asked if the now undergraduate students remembered each of these four events—introduced via short hints. The subjects were asked to write about each of the four events in a booklet and then were interviewed two separate times. The trick was that one of the events came from the researchers rather than the family (and the family had actually assured the researchers that this event had not happened to the subject). In the first such study, this researcher-introduced event was a story about being lost in a shopping mall and rescued by an older adult. In this study, after just being asked whether they remembered these events occurring on three separate occasions, a quarter of subjects came to believe that they had indeed been lost in the mall (Loftus & Pickrell, 1995). In subsequent studies, similar procedures were used to get subjects to believe that they nearly drowned and had been rescued by a lifeguard, or that they had spilled punch on the bride’s parents at a family wedding, or that they had been attacked by a vicious animal as a child, among other events (Heaps & Nash, 1999; Hyman, Husband, & Billings, 1995; Porter, Yuille, & Lehman, 1999). More recent false memory studies have used a variety of different manipulations to produce false memories in substantial minorities and even occasional majorities of manipulated subjects (Braun, Ellis, & Loftus, 2002; Lindsay, Hagen, Read, Wade, & Garry, 2004; Mazzoni, Loftus, Seitz, & Lynn, 1999; Seamon, Philbin, & Harrison, 2006; Wade, Garry, Read, & Lindsay, 2002). For example, one group of researchers used a mock-advertising study, wherein subjects were asked to review (fake) advertisements for Disney vacations, to convince subjects that they had once met the character Bugs Bunny at Disneyland—an impossible false memory because Bugs is a Warner Brothers character (Braun et al., 2002). Another group of researchers photoshopped childhood photographs of their subjects into a hot air balloon picture and then asked the subjects to try to remember and describe their hot air balloon experience (Wade et al., 2002). Other researchers gave subjects unmanipulated class photographs from their childhoods along with a fake story about a class prank, and thus enhanced the likelihood that subjects would falsely remember the prank (Lindsay et al., 2004). Using a false feedback manipulation, we have been able to persuade subjects to falsely remember having a variety of childhood experiences. In these studies, subjects are told (falsely) that a powerful computer system has analyzed questionnaires that they completed previously and has concluded that they had a particular experience years earlier. Subjects apparently believe what the computer says about them and adjust their memories to match this new information. A variety of different false memories have been implanted in this way. 
In some studies, subjects are told they once got sick on a particular food (Bernstein, Laney, Morris, & Loftus, 2005). These memories can then spill out into other aspects of subjects’ lives, such that they often become less interested in eating that food in the future (Bernstein & Loftus, 2009b). Other false memories implanted with this methodology include having an unpleasant experience with the character Pluto at Disneyland and witnessing physical violence between one’s parents (Berkowitz, Laney, Morris, Garry, & Loftus, 2008; Laney & Loftus, 2008). Importantly, once these false memories are implanted—whether through complex methods or simple ones—it is extremely difficult to tell them apart from true memories (Bernstein & Loftus, 2009a; Laney & Loftus, 2008). Conclusion To conclude, eyewitness testimony is very powerful and convincing to jurors, even though it is not particularly reliable. Identification errors occur, and these errors can lead to people being falsely accused and even convicted. Likewise, eyewitness memory can be corrupted by leading questions, misinterpretations of events, conversations with co-witnesses, and their own expectations for what should have happened. People can even come to remember whole events that never occurred. The problems with memory in the legal system are real. But what can we do to start to fix them? A number of specific recommendations have already been made, and many of these are in the process of being implemented (e.g., Steblay & Loftus, 2012; Technical Working Group for Eyewitness Evidence, 1999; Wells et al., 1998). Some of these recommendations are aimed at specific legal procedures, including when and how witnesses should be interviewed, and how lineups should be constructed and conducted. Other recommendations call for appropriate education (often in the form of expert witness testimony) to be provided to jury members and others tasked with assessing eyewitness memory. Eyewitness testimony can be of great value to the legal system, but decades of research now argues that this testimony is often given far more weight than its accuracy justifies. Outside Resources Video 1: Eureka Foong's - The Misinformation Effect. This is a student-made video illustrating this phenomenon of altered memory. It was one of the winning entries in the 2014 Noba Student Video Award. Video 2: Ang Rui Xia & Ong Jun Hao's - The Misinformation Effect. Another student-made video exploring the misinformation effect. Also an award winner from 2014. Discussion Questions 1. Imagine that you are a juror in a murder case where an eyewitness testifies. In what ways might your knowledge of memory errors affect your use of this testimony? 2. How true to life do you think television shows such as CSI or Law & Order are in their portrayals of eyewitnesses? 3. Many jurisdictions in the United States use “show-ups,” where an eyewitness is brought to a suspect (who may be standing on the street or in handcuffs in the back of a police car) and asked, “Is this the perpetrator?” Is this a good or bad idea, from a psychological perspective? Why? Vocabulary False memories Memory for an event that never actually occurred, implanted by experimental manipulation or other means. Foils Any member of a lineup (whether live or photograph) other than the suspect. Misinformation effect A memory error caused by exposure to incorrect information between the original event (e.g., a crime) and later memory test (e.g., an interview, lineup, or day in court). 
Mock witnesses A research subject who plays the part of a witness in a study. Photo spreads A selection of normally small photographs of faces given to a witness for the purpose of identifying a perpetrator. Schema (plural: schemata) A memory template, created through repeated exposure to a particular class of objects or events.
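The fair-lineup idea described earlier in this module, that the suspect and each foil should be about equally likely to be chosen by mock witnesses who have only read the eyewitness's description, can be illustrated with a small computation. The sketch below is a rough illustration with invented counts and an assumed six-person lineup; it is not a standard forensic procedure.

```python
# Minimal sketch of a mock-witness fairness check for a photo lineup.
# Assumptions: a six-person lineup and hypothetical counts of which member
# each mock witness (who read only the verbal description) selected.

def fairness_check(choices, suspect_position, lineup_size=6):
    """Compare how often the suspect is chosen against chance (1 / lineup size)."""
    n = len(choices)
    suspect_rate = choices.count(suspect_position) / n
    chance_rate = 1 / lineup_size
    return suspect_rate, chance_rate

# Hypothetical data: lineup positions 1-6; the suspect sits in position 3.
mock_witness_choices = [3, 3, 1, 3, 5, 3, 2, 3, 3, 6, 3, 4, 3, 3, 3]
suspect_rate, chance_rate = fairness_check(mock_witness_choices, suspect_position=3)
print(f"Suspect chosen {suspect_rate:.0%} of the time; chance would be {chance_rate:.0%}.")
```

A suspect chosen far more often than chance by people who never saw the crime suggests that the lineup itself, rather than the witness's memory, is doing the work.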
textbooks/socialsci/Psychology/Introductory_Psychology/Map%3A_Discover_Psychology_-_A_Brief_Introductory_Text_(Noba)/08%3A_MEMORY/8.02%3A_Eyewitness_Testimony_and_Memory_Biases.txt
• 9.1: Judgement and Decision Making Humans are not perfect decision makers. Not only are we not perfect, but we depart from perfection or rationality in systematic and predictable ways. The understanding of these systematic and predictable departures is core to the field of judgment and decision making. By understanding these limitations, we can also identify strategies for making better and more effective decisions. • 9.2: Language and Language Use Humans have the capacity to use complex language, far more than any other species on Earth. We cooperate with each other to use language for communication; language is often used to communicate about and even construct and maintain our social world. Language use and human sociality are inseparable parts of Homo sapiens as a biological species. • 9.3: Intelligence Intelligence is among the oldest and longest studied topics in all of psychology. The development of assessments to measure this concept is at the core of the development of psychological science itself. This module introduces key historical figures, major theories of intelligence, and common assessment strategies related to intelligence. This module will also discuss controversies related to the study of group differences in intelligence. 09: COGNITION LANGUAGE AND INTELLIGENCE By Max H. Bazerman Harvard University Humans are not perfect decision makers. Not only are we not perfect, but we depart from perfection or rationality in systematic and predictable ways. The understanding of these systematic and predictable departures is core to the field of judgment and decision making. By understanding these limitations, we can also identify strategies for making better and more effective decisions. learning objectives • Understand the systematic biases that affect our judgment and decision making. • Develop strategies for making better decisions. • Experience some of the biases through sample decisions. Introduction Every day you have the opportunity to make countless decisions: should you eat dessert, cheat on a test, or attend a sports event with your friends. If you reflect on your own history of choices you will realize that they vary in quality; some are rational and some are not. This module provides an overview of decision making and includes discussion of many of the common biases involved in this process. In his Nobel Prize–winning work, psychologist Herbert Simon (1957; March & Simon, 1958) argued that our decisions are bounded in their rationality. According to the bounded rationality framework, human beings try to make rational decisions (such as weighing the costs and benefits of a choice) but our cognitive limitations prevent us from being fully rational. Time and cost constraints limit the quantity and quality of the information that is available to us. Moreover, we only retain a relatively small amount of information in our usable memory. And limitations on intelligence and perceptions constrain the ability of even very bright decision makers to accurately make the best choice based on the information that is available. About 15 years after the publication of Simon’s seminal work, Tversky and Kahneman (1973, 1974; Kahneman & Tversky, 1979) produced their own Nobel Prize–winning research, which provided critical information about specific systematic and predictable biases, or mistakes, that influence judgment (Kahneman received the prize after Tversky’s death). The work of Simon, Tversky, and Kahneman paved the way to our modern understanding of judgment and decision making. 
And their two Nobel prizes signaled the broad acceptance of the field of behavioral decision research as a mature area of intellectual study. What Would a Rational Decision Look Like? Imagine that during your senior year in college, you apply to a number of doctoral programs, law schools, or business schools (or another set of programs in whatever field most interests you). The good news is that you receive many acceptance letters. So, how should you decide where to go? Bazerman and Moore (2013) outline the following six steps that you should take to make a rational decision: (1) define the problem (i.e., selecting the right graduate program), (2) identify the criteria necessary to judge the multiple options (location, prestige, faculty, etc.), (3) weight the criteria (rank them in terms of importance to you), (4) generate alternatives (the schools that admitted you), (5) rate each alternative on each criterion (rate each school on each criterion that you identified), and (6) compute the optimal decision; a short numerical sketch of steps (3) through (6) appears at the end of this module. Acting rationally would require that you follow these six steps in a fully rational manner. I strongly advise people to think through important decisions such as this in a manner similar to this process. Unfortunately, we often don't. Many of us rely on our intuitions far more than we should. And when we do try to think systematically, the way we enter data into such formal decision-making processes is often biased. Fortunately, psychologists have learned a great deal about the biases that affect our thinking. This knowledge about the systematic and predictable mistakes that even the best and the brightest make can help you identify flaws in your thought processes and reach better decisions. Biases in Our Decision Process Simon's concept of bounded rationality taught us that judgment deviates from rationality, but it did not tell us how judgment is biased. Tversky and Kahneman's (1974) research helped to diagnose the specific systematic, directional biases that affect human judgment. These biases are created by the tendency to short-circuit a rational decision process by relying on a number of simplifying strategies, or rules of thumb, known as heuristics. Heuristics allow us to cope with the complex environment surrounding our decisions. Unfortunately, they also lead to systematic and predictable biases. To highlight some of these biases, please answer the following three quiz items: Problem 1 (adapted from Alpert & Raiffa, 1969): Listed below are 10 uncertain quantities. Do not look up any information on these items. For each, write down your best estimate of the quantity. Next, put a lower and upper bound around your estimate, such that you are 98 percent confident that your range surrounds the actual quantity. Respond to each of these items even if you admit to knowing very little about these quantities. 1. The first year the Nobel Peace Prize was awarded 2. The date the French celebrate "Bastille Day" 3. The distance from the Earth to the Moon 4. The height of the Leaning Tower of Pisa 5. Number of students attending Oxford University (as of 2014) 6. Number of people who have traveled to space (as of 2013) 7. 2012-2013 annual budget for the University of Pennsylvania 8. Average life expectancy in Bangladesh (as of 2012) 9. World record for pull-ups in a 24-hour period 10.
Number of colleges and universities in the Boston metropolitan area Problem 2 (adapted from Joyce & Biddle, 1981): We know that executive fraud occurs and that it has been associated with many recent financial scandals. And, we know that many cases of management fraud go undetected even when annual audits are performed. Do you think that the incidence of significant executive-level management fraud is more than 10 in 1,000 firms (that is, 1 percent) audited by Big Four accounting firms? 1. Yes, more than 10 in 1,000 Big Four clients have significant executive-level management fraud. 2. No, fewer than 10 in 1,000 Big Four clients have significant executive-level management fraud. What is your estimate of the number of Big Four clients per 1,000 that have significant executive-level management fraud? (Fill in the blank below with the appropriate number.) ________ in 1,000 Big Four clients have significant executive-level management fraud. Problem 3 (adapted from Tversky & Kahneman, 1981): Imagine that the United States is preparing for the outbreak of an unusual avian disease that is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows. 1. Program A: If Program A is adopted, 200 people will be saved. 2. Program B: If Program B is adopted, there is a one-third probability that 600 people will be saved and a two-thirds probability that no people will be saved. Which of the two programs would you favor? Overconfidence On the first problem, if you set your ranges so that you were justifiably 98 percent confident, you should expect that approximately 9.8, or nine to 10, of your ranges would include the actual value. So, let’s look at the correct answers: 1. 1901 2. 14th of July 3. 384,403 km (238,857 mi) 4. 56.67 m (183 ft) 5. 22,384 (as of 2014) 6. 536 people (as of 2013) 7. \$6.007 billion 8. 70.3 years (as of 2012) 9. 4,321 10. 52 Count the number of your 98% ranges that actually surrounded the true quantities. If you surrounded nine to 10, you were appropriately confident in your judgments. But most readers surround only between three (30%) and seven (70%) of the correct answers, despite claiming 98% confidence that each range would surround the true value. As this problem shows, humans tend to be overconfident in their judgments. Anchoring Regarding the second problem, people vary a great deal in their final assessment of the level of executive-level management fraud, but most think that 10 out of 1,000 is too low. When I run this exercise in class, half of the students respond to the question that I asked you to answer. The other half receive a similar problem, but instead are asked whether the correct answer is higher or lower than 200 rather than 10. Most people think that 200 is high. But, again, most people claim that this “anchor” does not affect their final estimate. Yet, on average, people who are presented with the question that focuses on the number 10 (out of 1,000) give answers that are about one-half the size of the estimates of those facing questions that use an anchor of 200. When we are making decisions, any initial anchor that we face is likely to influence our judgments, even if the anchor is arbitrary. That is, we insufficiently adjust our judgments away from the anchor. Framing Turning to Problem 3, most people choose Program A, which saves 200 lives for sure, over Program B. 
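A minimal way to score the calibration exercise from Problem 1, assuming each answer was recorded as a (low, high) range, is to count how many ranges contain the true values listed above; the ranges in the sketch below are invented for illustration. The Problem 3 comparison continues after the sketch.

```python
# Minimal sketch for scoring the 98% confidence ranges in Problem 1.
# True values come from the list in the text; the reader's ranges are invented.

true_values = [1901, 14, 384403, 56.67, 22384, 536, 6.007e9, 70.3, 4321, 52]
# (Bastille Day is encoded simply as the day of the month, 14.)

# Hypothetical (low, high) ranges a reader might have written down, in the same order.
reader_ranges = [
    (1900, 1920), (1, 31), (100000, 500000), (40, 70), (10000, 30000),
    (200, 400), (1e9, 3e9), (55, 75), (500, 2000), (30, 80),
]

hits = sum(low <= truth <= high
           for truth, (low, high) in zip(true_values, reader_ranges))
print(f"{hits} of {len(true_values)} ranges captured the true value.")
# A well-calibrated 98% judge should capture about 9 or 10 of the 10;
# capturing only 3 to 7, as most readers do, is the overconfidence pattern.
```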
But, again, if I was in front of a classroom, only half of my students would receive this problem. The other half would have received the same set-up, but with the following two options: 1. Program C: If Program C is adopted, 400 people will die. 2. Program D: If Program D is adopted, there is a one-third probability that no one will die and a two-thirds probability that 600 people will die. Which of the two programs would you favor? Careful review of the two versions of this problem clarifies that they are objectively the same. Saving 200 people (Program A) means losing 400 people (Program C), and Programs B and D are also objectively identical. Yet, in one of the most famous problems in judgment and decision making, most individuals choose Program A in the first set and Program D in the second set (Tversky & Kahneman, 1981). People respond very differently to saving versus losing lives—even when the difference is based just on the “framing” of the choices. The problem that I asked you to respond to was framed in terms of saving lives, and the implied reference point was the worst outcome of 600 deaths. Most of us, when we make decisions that concern gains, are risk averse; as a consequence, we lock in the possibility of saving 200 lives for sure. In the alternative version, the problem is framed in terms of losses. Now the implicit reference point is the best outcome of no deaths due to the avian disease. And in this case, most people are risk seeking when making decisions regarding losses. These are just three of the many biases that affect even the smartest among us. Other research shows that we are biased in favor of information that is easy for our minds to retrieve, are insensitive to the importance of base rates and sample sizes when we are making inferences, assume that random events will always look random, search for information that confirms our expectations even when disconfirming information would be more informative, claim a priori knowledge that didn’t exist due to the hindsight bias, and are subject to a host of other effects that continue to be developed in the literature (Bazerman & Moore, 2013). Contemporary Developments Bounded rationality served as the integrating concept of the field of behavioral decision research for 40 years. Then, in 2000, Thaler (2000) suggested that decision making is bounded in two ways not precisely captured by the concept of bounded rationality. First, he argued that our willpower is bounded and that, as a consequence, we give greater weight to present concerns than to future concerns. Our immediate motivations are often inconsistent with our long-term interests in a variety of ways, such as the common failure to save adequately for retirement or the difficulty many people have staying on a diet. Second, Thaler suggested that our self-interest is bounded such that we care about the outcomes of others. Sometimes we positively value the outcomes of others—giving them more of a commodity than is necessary out of a desire to be fair, for example. And, in unfortunate contexts, we sometimes are willing to forgo our own benefits out of a desire to harm others. My colleagues and I have recently added two other important bounds to the list. Chugh, Banaji, and Bazerman (2005) and Banaji and Bhaskar (2000) introduced the concept of bounded ethicality, which refers to the notion that our ethics are limited in ways we are not even aware of ourselves. 
Second, Chugh and Bazerman (2007) developed the concept of bounded awareness to refer to the broad array of focusing failures that affect our judgment, specifically the many ways in which we fail to notice obvious and important information that is available to us. A final development is the application of judgment and decision-making research to the areas of behavioral economics, behavioral finance, and behavioral marketing, among others. In each case, these fields have been transformed by applying and extending research from the judgment and decision-making literature. Fixing Our Decisions Ample evidence documents that even smart people are routinely impaired by biases. Early research demonstrated, unfortunately, that awareness of these problems does little to reduce bias (Fischhoff, 1982). The good news is that more recent research documents interventions that do help us overcome our faulty thinking (Bazerman & Moore, 2013). One critical path to fixing our biases is provided in Stanovich and West’s (2000) distinction between System 1 and System 2 decision making. System 1 processing is our intuitive system, which is typically fast, automatic, effortless, implicit, and emotional. System 2 refers to decision making that is slower, conscious, effortful, explicit, and logical. The six logical steps of decision making outlined earlier describe a System 2 process. Clearly, a complete System 2 process is not required for every decision we make. In most situations, our System 1 thinking is quite sufficient; it would be impractical, for example, to logically reason through every choice we make while shopping for groceries. But, preferably, System 2 logic should influence our most important decisions. Nonetheless, we use our System 1 processes for most decisions in life, relying on it even when making important decisions. The key to reducing the effects of bias and improving our decisions is to transition from trusting our intuitive System 1 thinking toward engaging more in deliberative System 2 thought. Unfortunately, the busier and more rushed people are, the more they have on their minds, and the more likely they are to rely on System 1 thinking (Chugh, 2004). The frantic pace of professional life suggests that executives often rely on System 1 thinking (Chugh, 2004). Fortunately, it is possible to identify conditions where we rely on intuition at our peril and substitute more deliberative thought. One fascinating example of this substitution comes from journalist Michael Lewis’ (2003) account of how Billy Beane, the general manager of the Oakland Athletics, improved the outcomes of the failing baseball team after recognizing that the intuition of baseball executives was limited and systematically biased and that their intuitions had been incorporated into important decisions in ways that created enormous mistakes. Lewis (2003) documents that baseball professionals tend to overgeneralize from their personal experiences, be overly influenced by players’ very recent performances, and overweigh what they see with their own eyes, despite the fact that players’ multiyear records provide far better data. By substituting valid predictors of future performance (System 2 thinking), the Athletics were able to outperform expectations given their very limited payroll. Another important direction for improving decisions comes from Thaler and Sunstein’s (2008) book Nudge: Improving Decisions about Health, Wealth, and Happiness. 
Rather than setting out to debias human judgment, Thaler and Sunstein outline a strategy for how “decision architects” can change environments in ways that account for human bias and trigger better decisions as a result. For example, Beshears, Choi, Laibson, and Madrian (2008) have shown that simple changes to defaults can dramatically improve people’s decisions. They tackle the failure of many people to save for retirement and show that a simple change can significantly influence enrollment in 401(k) programs. In most companies, when you start your job, you need to proactively sign up to join the company’s retirement savings plan. Many people take years before getting around to doing so. When, instead, companies automatically enroll their employees in 401(k) programs and give them the opportunity to “opt out,” the net enrollment rate rises significantly. By changing defaults, we can counteract the human tendency to live with the status quo. Similarly, Johnson and Goldstein’s (2003) cross-European organ donation study reveals that countries that have opt-in organ donation policies, where the default is not to harvest people’s organs without their prior consent, sacrifice thousands of lives in comparison to opt-out policies, where the default is to harvest organs. The United States and too many other countries require that citizens opt in to organ donation through a proactive effort; as a consequence, consent rates range between 4.25%–44% across these countries. In contrast, changing the decision architecture to an opt-out policy improves consent rates to 85.9% to 99.98%. Designing the donation system with knowledge of the power of defaults can dramatically change donation rates without changing the options available to citizens. In contrast, a more intuitive strategy, such as the one in place in the United States, inspires defaults that result in many unnecessary deaths. Concluding Thoughts Our days are filled with decisions ranging from the small (what should I wear today?) to the important (should we get married?). Many have real world consequences on our health, finances and relationships. Simon, Kahneman, and Tversky created a field that highlights the surprising and predictable deficiencies of the human mind when making decisions. As we understand more about our own biases and thinking shortcomings we can begin to take them into account or to avoid them. Only now have we reached the frontier of using this knowledge to help people make better decisions. Outside Resources Book: Bazerman, M. H., & Moore, D. (2013). Judgment in managerial decision making (8th ed.). John Wiley & Sons Inc. Book: Kahneman, D. (2011) Thinking, Fast and Slow. New York, NY: Farrar, Straus and Giroux. Book: Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving Decisions about Health, Wealth, and Happiness. New Haven, CT: Yale University Press. Discussion Questions 1. Are the biases in this module a problem in the real world? 2. How would you use this module to be a better decision maker? 3. Can you see any biases in today’s newspaper? Vocabulary Anchoring The bias to be affected by an initial anchor, even if the anchor is arbitrary, and to insufficiently adjust our judgments away from that anchor. Biases The systematic and predictable mistakes that influence the judgment of even very talented human beings. Bounded awareness The systematic ways in which we fail to notice obvious and important information that is available to us. 
Bounded ethicality The systematic ways in which our ethics are limited in ways we are not even aware of ourselves. Bounded rationality Model of human behavior that suggests that humans try to make rational decisions but are bounded due to cognitive limitations. Bounded self-interest The systematic and predictable ways in which we care about the outcomes of others. Bounded willpower The tendency to place greater weight on present concerns rather than future concerns. Framing The bias to be systematically affected by the way in which information is presented, while holding the objective information constant. Heuristics Cognitive (or thinking) strategies that simplify decision making by using mental shortcuts. Overconfident The bias to have greater confidence in your judgment than is warranted based on a rational assessment. System 1 Our intuitive decision-making system, which is typically fast, automatic, effortless, implicit, and emotional. System 2 Our more deliberative decision-making system, which is slower, conscious, effortful, explicit, and logical.
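Here is the short numerical sketch promised above: steps (3) through (6) of the rational decision process reduce to weighting the criteria, rating each alternative on each criterion, and computing a weighted score for each alternative. All of the schools, criteria, weights, and ratings below are invented for illustration.

```python
# Minimal sketch of the "weight, rate, and compute" steps of a rational decision.
# All names, weights, and ratings below are hypothetical.

criteria_weights = {"location": 0.2, "prestige": 0.5, "faculty": 0.3}

# Ratings of each admitted school on each criterion (1 = poor, 10 = excellent).
ratings = {
    "School A": {"location": 8, "prestige": 6, "faculty": 7},
    "School B": {"location": 5, "prestige": 9, "faculty": 8},
    "School C": {"location": 9, "prestige": 5, "faculty": 6},
}

def weighted_score(school_ratings, weights):
    """Step 6: combine ratings into a single weighted score."""
    return sum(weights[c] * school_ratings[c] for c in weights)

scores = {school: round(weighted_score(r, criteria_weights), 2)
          for school, r in ratings.items()}
best = max(scores, key=scores.get)
print(scores)                    # School B scores highest (about 7.9) with these weights
print("Optimal choice:", best)
```

Changing the weights in step (3), or the ratings in step (5), can change which alternative wins, which is exactly why the biased way we "enter data" into such a formal process matters.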
textbooks/socialsci/Psychology/Introductory_Psychology/Map%3A_Discover_Psychology_-_A_Brief_Introductory_Text_(Noba)/09%3A_COGNITION_LANGUAGE_AND_INTELLIGENCE/9.01%3A_Judgement_and_Decision_Making.txt
By Yoshihisa Kashima University of Melbourne Humans have the capacity to use complex language, far more than any other species on Earth. We cooperate with each other to use language for communication; language is often used to communicate about and even construct and maintain our social world. Language use and human sociality are inseparable parts of Homo sapiens as a biological species. learning objectives • Define basic terms used to describe language use. • Describe the process by which people can share new information by using language. • Characterize the typical content of conversation and its social implications. • Characterize psychological consequences of language use and give an example. Introduction Imagine two men of 30-something age, Adam and Ben, walking down the corridor. Judging from their clothing, they are young businessmen, taking a break from work. They then have this exchange. Adam: “You know, Gary bought a ring.” Ben: "Oh yeah? For Mary, isn't it?" (Adam nods.) If you are watching this scene and hearing their conversation, what can you guess from this? First of all, you’d guess that Gary bought a ring for Mary, whoever Gary and Mary might be. Perhaps you would infer that Gary is getting married to Mary. What else can you guess? Perhaps that Adam and Ben are fairly close colleagues, and both of them know Gary and Mary reasonably well. In other words, you can guess the social relationships surrounding the people who are engaging in the conversation and the people whom they are talking about. Language is used in our everyday lives. If psychology is a science of behavior, scientific investigation of language use must be one of the most central topics—this is because language use is ubiquitous. Every human group has a language; human infants (except those who have unfortunate disabilities) learn at least one language without being taught explicitly. Even when children who don’t have much language to begin with are brought together, they can begin to develop and use their own language. There is at least one known instance where children who had had little language were brought together and developed their own language spontaneously with minimum input from adults. In Nicaragua in the 1980s, deaf children who were separately raised in various locations were brought together to schools for the first time. Teachers tried to teach them Spanish with little success. However, they began to notice that the children were using their hands and gestures, apparently to communicate with each other. Linguists were brought in to find out what was happening—it turned out the children had developed their own sign language by themselves. That was the birth of a new language, Nicaraguan Sign Language (Kegl, Senghas, & Coppola, 1999). Language is ubiquitous, and we humans are born to use it. How Do We Use Language? If language is so ubiquitous, how do we actually use it? To be sure, some of us use it to write diaries and poetry, but the primary form of language use is interpersonal. That’s how we learn language, and that’s how we use it. Just like Adam and Ben, we exchange words and utterances to communicate with each other. Let’s consider the simplest case of two people, Adam and Ben, talking with each other. According to Clark (1996), in order for them to carry out a conversation, they must keep track of common ground. Common ground is a set of knowledge that the speaker and listener share and they think, assume, or otherwise take for granted that they share. 
So, when Adam says, “Gary bought a ring,” he takes for granted that Ben knows the meaning of the words he is using, whom Gary is, and what buying a ring means. When Ben says, “For Mary, isn’t it?” he takes for granted that Adam knows the meaning of these words, who Mary is, and what buying a ring for someone means. All these are part of their common ground. Note that, when Adam presents the information about Gary’s purchase of a ring, Ben responds by presenting his inference about who the recipient of the ring might be, namely, Mary. In conversational terms, Ben’s utterance acts as evidence for his comprehension of Adam’s utterance—“Yes, I understood that Gary bought a ring”—and Adam’s nod acts as evidence that he now has understood what Ben has said too—“Yes, I understood that you understood that Gary has bought a ring for Mary.” This new information is now added to the initial common ground. Thus, the pair of utterances by Adam and Ben (called an adjacency pair) together with Adam’s affirmative nod jointly completes one proposition, “Gary bought a ring for Mary,” and adds this information to their common ground. This way, common ground changes as we talk, gathering new information that we agree on and have evidence that we share. It evolves as people take turns to assume the roles of speaker and listener, and actively engage in the exchange of meaning. Common ground helps people coordinate their language use. For instance, when a speaker says something to a listener, he or she takes into account their common ground, that is, what the speaker thinks the listener knows. Adam said what he did because he knew Ben would know who Gary was. He’d have said, “A friend of mine is getting married,” to another colleague who wouldn’t know Gary. This is called audience design (Fussell & Krauss, 1992); speakers design their utterances for their audiences by taking into account the audiences’ knowledge. If their audiences are seen to be knowledgeable about an object (such as Ben about Gary), they tend to use a brief label of the object (i.e., Gary); for a less knowledgeable audience, they use more descriptive words (e.g., “a friend of mine”) to help the audience understand their utterances (Box 1). So, language use is a cooperative activity, but how do we coordinate our language use in a conversational setting? To be sure, we have a conversation in small groups. The number of people engaging in a conversation at a time is rarely more than four. By some counts (e.g., Dunbar, Duncan, & Nettle, 1995; James, 1953), more than 90 percent of conversations happen in a group of four individuals or less. Certainly, coordinating conversation among four is not as difficult as coordinating conversation among 10. But, even among only four people, if you think about it, everyday conversation is an almost miraculous achievement. We typically have a conversation by rapidly exchanging words and utterances in real time in a noisy environment. Think about your conversation at home in the morning, at a bus stop, in a shopping mall. How can we keep track of our common ground under such circumstances? Pickering and Garrod (2004) argue that we achieve our conversational coordination by virtue of our ability to interactively align each other’s actions at different levels of language use: lexicon (i.e., words and expressions), syntax (i.e., grammatical rules for arranging words and expressions together), as well as speech rate and accent. 
For instance, when one person uses a certain expression to refer to an object in a conversation, others tend to use the same expression (e.g., Clark & Wilkes-Gibbs, 1986). Furthermore, if someone says “the cowboy offered a banana to the robber,” rather than “the cowboy offered the robber a banana,” others are more likely to use the same syntactic structure (e.g., “the girl gave a book to the boy” rather than “the girl gave the boy a book”) even if different words are involved (Branigan, Pickering, & Cleland, 2000). Finally, people in conversation tend to exhibit similar accents and rates of speech, and they are often associated with people’s social identity (Giles, Coupland, & Coupland, 1991). So, if you have lived in different places where people have somewhat different accents (e.g., United States and United Kingdom), you might have noticed that you speak with Americans with an American accent, but speak with Britons with a British accent. Pickering and Garrod (2004) suggest that these interpersonal alignments at different levels of language use can activate similar situation models in the minds of those who are engaged in a conversation. Situation models are representations about the topic of a conversation. So, if you are talking about Gary and Mary with your friends, you might have a situation model of Gary giving Mary a ring in your mind. Pickering and Garrod’s theory is that as you describe this situation using language, others in the conversation begin to use similar words and grammar, and many other aspects of language use converge. As you all do so, similar situation models begin to be built in everyone’s mind through the mechanism known as priming. Priming occurs when your thinking about one concept (e.g., “ring”) reminds you about other related concepts (e.g., “marriage”, “wedding ceremony”). So, if everyone in the conversation knows about Gary, Mary, and the usual course of events associated with a ring—engagement, wedding, marriage, etc.— everyone is likely to construct a shared situation model about Gary and Mary. Thus, making use of our highly developed interpersonal ability to imitate (i.e., executing the same action as another person) and cognitive ability to infer (i.e., one idea leading to other ideas), we humans coordinate our common ground, share situation models, and communicate with each other. What Do We Talk About? What are humans doing when we are talking? Surely, we can communicate about mundane things such as what to have for dinner, but also more complex and abstract things such as the meaning of life and death, liberty, equality, and fraternity, and many other philosophical thoughts. Well, when naturally occurring conversations were actually observed (Dunbar, Marriott, & Duncan, 1997), a staggering 60%–70% of everyday conversation, for both men and women, turned out to be gossip—people talk about themselves and others whom they know. Just like Adam and Ben, more often than not, people use language to communicate about their social world. Gossip may sound trivial and seem to belittle our noble ability for language—surely one of the most remarkable human abilities of all that distinguish us from other animals. Au contraire, some have argued that gossip—activities to think and communicate about our social world—is one of the most critical uses to which language has been put. Dunbar (1996) conjectured that gossiping is the human equivalent of grooming, monkeys and primates attending and tending to each other by cleaning each other’s fur. 
He argues that it is an act of socializing, signaling the importance of one’s partner. Furthermore, by gossiping, humans can communicate and share their representations about their social world—who their friends and enemies are, what the right thing to do is under what circumstances, and so on. In so doing, they can regulate their social world—making more friends and enlarging one’s own group (often called the ingroup, the group to which one belongs) against other groups (outgroups) that are more likely to be one’s enemies. Dunbar has argued that it is these social effects that have given humans an evolutionary advantage and larger brains, which, in turn, help humans to think more complex and abstract thoughts and, more important, maintain larger ingroups. Dunbar (1993) estimated an equation that predicts average group size of nonhuman primate genera from their average neocortex size (the part of the brain that supports higher order cognition). In line with his social brain hypothesis, Dunbar showed that those primate genera that have larger brains tend to live in larger groups. Furthermore, using the same equation, he was able to estimate the group size that human brains can support, which turned out to be about 150—approximately the size of modern hunter-gatherer communities. Dunbar’s argument is that language, brain, and human group living have co-evolved—language and human sociality are inseparable. Dunbar’s hypothesis is controversial. Nonetheless, whether or not he is right, our everyday language use often ends up maintaining the existing structure of intergroup relationships. Language use can have implications for how we construe our social world. For one thing, there are subtle cues that people use to convey the extent to which someone’s action is just a special case in a particular context or a pattern that occurs across many contexts and more like a character trait of the person. According to Semin and Fiedler (1988), someone’s action can be described by an action verb that describes a concrete action (e.g., he runs), a state verb that describes the actor’s psychological state (e.g., he likes running), an adjective that describes the actor’s personality (e.g., he is athletic), or a noun that describes the actor’s role (e.g., he is an athlete). Depending on whether a verb or an adjective (or noun) is used, speakers can convey the permanency and stability of an actor’s tendency to act in a certain way—verbs convey particularity, whereas adjectives convey permanency. Intriguingly, people tend to describe positive actions of their ingroup members using adjectives (e.g., he is generous) rather than verbs (e.g., he gave a blind man some change), and negative actions of outgroup members using adjectives (e.g., he is cruel) rather than verbs (e.g., he kicked a dog). Maass, Salvi, Arcuri, and Semin (1989) called this a linguistic intergroup bias, which can produce and reproduce the representation of intergroup relationships by painting a picture favoring the ingroup. That is, ingroup members are typically good, and if they do anything bad, that’s more an exception in special circumstances; in contrast, outgroup members are typically bad, and if they do anything good, that’s more an exception. In addition, when people exchange their gossip, it can spread through broader social networks. If gossip is transmitted from one person to another, the second person can transmit it to a third person, who then in turn transmits it to a fourth, and so on through a chain of communication. 
This often happens for emotive stories (Box 2). If gossip is repeatedly transmitted and spread, it can reach a large number of people. When stories travel through communication chains, they tend to become conventionalized (Bartlett, 1932). A Native American tale of the “War of the Ghosts” recounts a warrior’s encounter with ghosts traveling in canoes and his involvement with their ghostly battle. He is shot by an arrow but doesn’t die, returning home to tell the tale. After his narration, however, he becomes still, a black thing comes out of his mouth, and he eventually dies. When it was told to a student in England in the 1920s and retold from memory to another person, who, in turn, retold it to another and so on in a communication chain, the mythic tale became a story of a young warrior going to a battlefield, in which canoes became boats, and the black thing that came out of his mouth became simply his spirit (Bartlett, 1932). In other words, information transmitted multiple times was transformed to something that was easily understood by many, that is, information was assimilated into the common ground shared by most people in the linguistic community. More recently, Kashima (2000) conducted a similar experiment using a story that contained a sequence of events that described a young couple’s interaction that included both stereotypical and counter-stereotypical actions (e.g., a man watching sports on TV on Sunday vs. a man vacuuming the house). After the retelling of this story, much of the counter-stereotypical information was dropped, and stereotypical information was more likely to be retained. Because stereotypes are part of the common ground shared by the community, this finding too suggests that conversational retellings are likely to reproduce conventional content. Psychological Consequences of Language Use What are the psychological consequences of language use? When people use language to describe an experience, their thoughts and feelings are profoundly shaped by the linguistic representation that they have produced rather than the original experience per se (Holtgraves & Kashima, 2008). For example, Halberstadt (2003) showed a picture of a person displaying an ambiguous emotion and examined how people evaluated the displayed emotion. When people verbally explained why the target person was expressing a particular emotion, they tended to remember the person as feeling that emotion more intensely than when they simply labeled the emotion. Thus, constructing a linguistic representation of another person’s emotion apparently biased the speaker’s memory of that person’s emotion. Furthermore, linguistically labeling one’s own emotional experience appears to alter the speaker’s neural processes. When people linguistically labeled negative images, the amygdala—a brain structure that is critically involved in the processing of negative emotions such as fear—was activated less than when they were not given a chance to label them (Lieberman et al., 2007). Potentially because of these effects of verbalizing emotional experiences, linguistic reconstructions of negative life events can have some therapeutic effects on those who suffer from the traumatic experiences (Pennebaker & Seagal, 1999). Lyubomirsky, Sousa, and Dickerhoof (2006) found that writing and talking about negative past life events improved people’s psychological well-being, but just thinking about them worsened it. 
There are many other examples of effects of language use on memory and decision making (Holtgraves & Kashima, 2008). Furthermore, if a certain type of language use (linguistic practice) (Holtgraves & Kashima, 2008) is repeated by a large number of people in a community, it can potentially have a significant effect on their thoughts and actions. This notion is often called the Sapir-Whorf hypothesis (Sapir, 1921; Whorf, 1956; Box 3). For instance, if you are given a description of a man, Steven, as having greater than average experience of the world (e.g., well-traveled, varied job experience), a strong family orientation, and well-developed social skills, how do you describe Steven? Do you think you can remember Steven's personality five days later? It will probably be difficult. But if you know Chinese and are reading about Steven in Chinese, as Hoffman, Lau, and Johnson (1986) showed, the chances are that you can remember him well. This is because English does not have a word to describe this kind of personality, whereas Chinese does (shì gù). This way, the language you use can influence your cognition. In its strong form, it has been argued that language determines thought, but this is probably wrong. Language does not completely determine our thoughts—our thoughts are far too flexible for that—but habitual uses of language can influence our habits of thought and action. For instance, some linguistic practices seem to be associated even with cultural values and social institutions. Pronoun drop is a case in point. Pronouns such as "I" and "you" are used to represent the speaker and listener of a speech in English. In an English sentence, these pronouns cannot be dropped if they are used as the subject of a sentence. So, for instance, "I went to the movie last night" is fine, but "Went to the movie last night" is not in standard English. However, in other languages such as Japanese, pronouns can be, and in fact often are, dropped from sentences. It turned out that people living in those countries where pronoun drop languages are spoken tend to have more collectivistic values (e.g., employees having greater loyalty toward their employers) than those who use non–pronoun drop languages such as English (Kashima & Kashima, 1998). It was argued that the explicit reference to "you" and "I" may remind speakers of the distinction between the self and other, and the differentiation between individuals. Such a linguistic practice may act as a constant reminder of the cultural value, which, in turn, may encourage people to perform the linguistic practice. Conclusion Language and language use constitute a central ingredient of human psychology. Language is an essential tool that enables us to live the kind of life we do. Can you imagine a world in which machines are built, farms are cultivated, and goods and services are transported to our households without language? Is it possible for us to make laws and regulations, negotiate contracts, and enforce agreements and settle disputes without talking? Much of contemporary human civilization wouldn't have been possible without the human ability to develop and use language. Like the Tower of Babel, language can divide humanity, and yet, the core of humanity includes the innate ability for language use. Whether we can use it wisely is a task before us in this globalized world. Discussion Questions 1. In what sense is language use innate and learned? 2. Is language a tool for thought or a tool for communication? 3.
What sorts of unintended consequences can language use bring to your psychological processes? Vocabulary Audience design Constructing utterances to suit the audience’s knowledge. Common ground Information that is shared by people who engage in a conversation. Ingroup Group to which a person belongs. Lexicon Words and expressions. Linguistic intergroup bias A tendency for people to characterize positive things about their ingroup using more abstract expressions, but negative things about their outgroups using more abstract expressions. Outgroup Group to which a person does not belong. Priming A stimulus presented to a person reminds him or her about other ideas associated with the stimulus. Sapir-Whorf hypothesis The hypothesis that the language that people use determines their thoughts. Situation model A mental representation of an event, object, or situation constructed at the time of comprehending a linguistic description. Social brain hypothesis The hypothesis that the human brain has evolved, so that humans can maintain larger ingroups. Social networks Networks of social relationships among individuals through which information can travel. Syntax Rules by which words are strung together to form sentences.
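The retelling findings described above (Bartlett, 1932; Kashima, 2000) can be caricatured as a transmission chain in which stereotype-consistent details survive each retelling more reliably than counter-stereotypical ones. The toy simulation below uses invented retention probabilities; it is a sketch of the general logic, not a model of the actual experiments.

```python
import random

# Toy simulation of a communication chain (serial retelling).
# Assumption: stereotype-consistent details survive each retelling with
# probability 0.9, counter-stereotypical details with probability 0.6.
# These probabilities are illustrative, not estimates from the studies.

random.seed(1)

def retell(details, p_keep):
    """One retelling: each detail survives with its kind's retention probability."""
    return [d for d in details if random.random() < p_keep[d[0]]]

p_keep = {"consistent": 0.9, "inconsistent": 0.6}
story = [("consistent", i) for i in range(10)] + [("inconsistent", i) for i in range(10)]

for _ in range(4):               # a four-person retelling chain
    story = retell(story, p_keep)

kept = {"consistent": 0, "inconsistent": 0}
for kind, _ in story:
    kept[kind] += 1
print(kept)  # typically far more consistent than inconsistent details remain
```

With retention probabilities like these, a handful of retellings is enough to strip out most of the counter-stereotypical content, which is the qualitative pattern the retelling studies report.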
textbooks/socialsci/Psychology/Introductory_Psychology/Map%3A_Discover_Psychology_-_A_Brief_Introductory_Text_(Noba)/09%3A_COGNITION_LANGUAGE_AND_INTELLIGENCE/9.02%3A_Language_and_Language_Use.txt
By Robert Biswas-Diener Portland State University Intelligence is among the oldest and longest studied topics in all of psychology. The development of assessments to measure this concept is at the core of the development of psychological science itself. This module introduces key historical figures, major theories of intelligence, and common assessment strategies related to intelligence. This module will also discuss controversies related to the study of group differences in intelligence. learning objectives • List at least two common strategies for measuring intelligence. • Name at least one “type” of intelligence. • Define intelligence in simple terms. • Explain the controversy relating to differences in intelligence between groups. Introduction Every year hundreds of grade school students converge on Washington, D.C., for the annual Scripps National Spelling Bee. The “bee” is an elite event in which children as young as 8 square off to spell words like “cymotrichous” and “appoggiatura.” Most people who watch the bee think of these kids as being “smart” and you likely agree with this description. What makes a person intelligent? Is it heredity (two of the 2014 contestants in the bee have siblings who have previously won)(National Spelling Bee, 2014a)? Is it interest (the most frequently listed favorite subject among spelling bee competitors is math)(NSB, 2014b)? In this module we will cover these and other fascinating aspects of intelligence. By the end of the module you should be able to define intelligence and discuss some common strategies for measuring intelligence. In addition, we will tackle the politically thorny issue of whether there are differences in intelligence between groups such as men and women. Defining and Measuring Intelligence When you think of “smart people” you likely have an intuitive sense of the qualities that make them intelligent. Maybe you think they have a good memory, or that they can think quickly, or that they simply know a whole lot of information. Indeed, people who exhibit such qualities appear very intelligent. That said, it seems that intelligence must be more than simply knowing facts and being able to remember them. One point in favor of this argument is the idea of animal intelligence. It will come as no surprise to you that a dog, which can learn commands and tricks seems smarter than a snake that cannot. In fact, researchers and lay people generally agree with one another that primates—monkeys and apes (including humans)—are among the most intelligent animals. Apes such as chimpanzees are capable of complex problem solving and sophisticated communication (Kohler, 1924). Scientists point to the social nature of primates as one evolutionary source of their intelligence. Primates live together in troops or family groups and are, therefore, highly social creatures. As such, primates tend to have brains that are better developed for communication and long term thinking than most other animals. For instance, the complex social environment has led primates to develop deception, altruism, numerical concepts, and “theory of mind” (a sense of the self as a unique individual separate from others in the group; Gallup, 1982; Hauser, MacNeilage & Ware, 1996).[Also see Noba module Theory of Mind noba.to/a8wpytg3] The question of what constitutes human intelligence is one of the oldest inquiries in psychology. When we talk about intelligence we typically mean intellectual ability. 
This broadly encompasses the ability to learn, remember, and use new information, to solve problems, and to adapt to novel situations. An early scholar of intelligence, Charles Spearman, proposed the idea that intelligence was one thing, a “general factor” sometimes known as simply “g.” He based this conclusion on the observation that people who perform well in one intellectual area, such as verbal ability, also tend to perform well in other areas, such as logic and reasoning (Spearman, 1904). A contemporary of Spearman’s named Francis Galton—himself a cousin of Charles Darwin—was among those who pioneered psychological measurement (Hunt, 2009). For three pence Galton would measure various physical characteristics such as grip strength but also some psychological attributes such as the ability to judge distance or discriminate between colors. This is an example of one of the earliest systematic measures of individual ability. Galton was particularly interested in intelligence, which he thought was heritable in much the same way that height and eye color are. He conceived of several rudimentary methods for assessing whether his hypothesis was true. For example, he carefully tracked the family tree of the top-scoring Cambridge students over the previous 40 years. Although he found that specific families disproportionately produced top scholars, he could not rule out that their intellectual achievement was the product of economic status, family culture, or other non-genetic factors. Galton was also, possibly, the first to popularize the idea that the heritability of psychological traits could be studied by looking at identical and fraternal twins. Although his methods were crude by modern standards, Galton established intelligence as a variable that could be measured (Hunt, 2009). The person best known for formally pioneering the measurement of intellectual ability is Alfred Binet. Like Galton, Binet was fascinated by individual differences in intelligence. For instance, he blindfolded chess players and saw that some of them had the ability to continue playing using only their memory to keep the many positions of the pieces in mind (Binet, 1894). Binet was particularly interested in the development of intelligence, a fascination that led him to observe children carefully in the classroom setting. Along with his colleague Theodore Simon, Binet created a test of children’s intellectual capacity. They created individual test items that should be answerable by children of given ages. For instance, a child who is three should be able to point to her mouth and eyes, a child who is nine should be able to name the months of the year in order, and a twelve-year-old ought to be able to name sixty words in three minutes. Their assessment became the first “IQ test.” “IQ” or “intelligence quotient” is a name given to the score of the Binet-Simon test. The score is derived by dividing a child’s mental age (the score from the test) by their chronological age to create an overall quotient. These days, the phrase “IQ” does not apply specifically to the Binet-Simon test and is used generally to denote intelligence or a score on any intelligence test. In the early 1900s the Binet-Simon test was adapted by a Stanford professor named Lewis Terman to create what is, perhaps, the most famous intelligence test in the world, the Stanford-Binet (Terman, 1916). The major advantage of this new test was that it was standardized.
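The quotient described above is simple enough to compute directly. The sketch below assumes the conventional scaling in which the mental-age/chronological-age ratio is multiplied by 100 (so a child whose test performance matches her actual age scores exactly 100); that scaling factor, the function name, and the sample ages are illustrative assumptions rather than details given in the passage.

```python
def ratio_iq(mental_age, chronological_age):
    """Ratio IQ: mental age divided by chronological age, scaled by 100.

    A child whose test performance matches the norm for her actual age
    receives a quotient of exactly 100.
    """
    return 100 * mental_age / chronological_age

# A 9-year-old who answers items typical of an 11-year-old:
print(round(ratio_iq(11, 9)))  # 122 -- performing "ahead" of her age
# A 9-year-old performing at the level of a typical 9-year-old:
print(round(ratio_iq(9, 9)))   # 100
```

This ratio definition works awkwardly for adults (mental age stops growing while chronological age does not), which is one reason later tests, including the Stanford-Binet revision discussed next, moved to standardized, norm-referenced scores.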
Based on a large sample of children, Terman was able to plot the scores in a normal distribution, shaped like a “bell curve” (see Fig. 7.5.1). To understand a normal distribution, think about the height of people. Most people are average in height, with relatively fewer being tall or short, and fewer still being extremely tall or extremely short. Terman (1916) laid out intelligence scores in exactly the same way, allowing for easy and reliable categorizations and comparisons between individuals. Looking at another modern intelligence test—the Wechsler Adult Intelligence Scale (WAIS)—can provide clues to a definition of intelligence itself. Motivated by several criticisms of the Stanford-Binet test, psychologist David Wechsler sought to create a superior measure of intelligence. He was critical of the way that the Stanford-Binet relied so heavily on verbal ability and was also suspicious of using a single score to capture all of intelligence. To address these issues, Wechsler created a test that tapped a wide range of intellectual abilities. This understanding of intelligence—that it is made up of a pool of specific abilities—is a notable departure from Spearman’s concept of general intelligence. The WAIS assesses people's ability to remember, compute, understand language, reason well, and process information quickly (Wechsler, 1955). One interesting by-product of measuring intelligence for so many years is that we can chart changes over time. It might seem strange that intelligence could change over the decades, but that appears to have happened across the roughly 80 years we have been measuring it. Here’s how we know: IQ tests have an average score of 100. When new waves of people are asked to take older tests, they tend to outperform the original sample from years ago on which the test was normed. This gain is known as the “Flynn Effect,” named after James Flynn, the researcher who first identified it (Flynn, 1987). Several hypotheses have been put forth to explain the Flynn Effect, including better nutrition (healthier brains!), greater familiarity with testing in general, and more exposure to visual stimuli. Today, there is no perfect agreement among psychological researchers with regard to the causes of increases in average scores on intelligence tests. Perhaps if you choose a career in psychology you will be the one to discover the answer! Types of Intelligence David Wechsler’s approach to testing intellectual ability was based on the fundamental idea that there are, in essence, many aspects to intelligence. Other scholars have echoed this idea by going so far as to suggest that there are different types of intelligence altogether. You likely have heard distinctions made between “street smarts” and “book learning.” The former refers to practical wisdom accumulated through experience, while the latter indicates formal education. A person high in street smarts might have a superior ability to catch a person in a lie, to persuade others, or to think quickly under pressure. A person high in book learning, by contrast, might have a large vocabulary and be able to remember a large number of references to classic novels. Although psychologists don’t use street smarts or book smarts as professional terms, they do believe that intelligence comes in different types. There are many ways to parse the concept of intelligence.
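Because the scores are normed to a bell curve, any individual score can be read as a position in the norming population. The short sketch below uses the average of 100 mentioned above; the standard deviation of 15 and the `percentile` helper are assumptions made for illustration, not details stated in the passage.

```python
from statistics import NormalDist

# A normed IQ scale: mean of 100 (from the text); an SD of 15 is assumed here.
iq_scale = NormalDist(mu=100, sigma=15)

def percentile(score):
    """Fraction of the norming population expected to score at or below `score`."""
    return iq_scale.cdf(score)

print(round(percentile(100), 2))   # 0.5   -- half the population falls below the mean
print(round(percentile(130), 3))   # 0.977 -- two SDs above the mean is rare
```

Framed this way, the Flynn Effect is a statement about norms: when a new cohort averages noticeably above 100 on an old test's norms, the test has to be re-normed so that the average sits at 100 again.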
Many scholars believe that Carroll’s (1993) review of more than 400 data sets provides the best currently existing single source for organizing various concepts related to intelligence. Carroll divided intelligence into three levels, or strata, descending from the most abstract down to the most specific (see Fig. 7.5.2). To understand this way of categorizing, simply think of a “car.” Car is a general word that denotes all types of motorized vehicles. At the more specific level under “car” might be various types of cars, such as sedans, sports cars, SUVs, pick-up trucks, station wagons, and so forth. More specific still would be certain models of each, such as a Honda Civic or a Ferrari Enzo. In the same manner, Carroll called the highest level (stratum III) the general intelligence factor “g.” Under this were more specific stratum II categories, such as fluid intelligence, visual perception, and processing speed. Each of these, in turn, can be sub-divided into very specific components, such as spatial scanning, reaction time, and word fluency. Thinking of intelligence as Carroll (1993) does, as a collection of specific mental abilities, has helped researchers conceptualize this topic in new ways. For example, Horn and Cattell (1966) distinguish between “fluid” and “crystallized” intelligence, both of which show up on stratum II of Carroll’s model. Fluid intelligence is the ability to “think on your feet;” that is, to solve problems. Crystallized intelligence, on the other hand, is the ability to use language, skills, and experience to address problems. The former is associated more with youth, while the latter increases with age. You may have noticed the way in which younger people can adapt to new situations and use trial and error to quickly figure out solutions. By contrast, older people tend to rely on their relatively superior store of knowledge to solve problems. Harvard professor Howard Gardner is another figure in psychology who is well-known for championing the notion that there are different types of intelligence. Gardner’s theory is, appropriately, called “multiple intelligences.” Gardner’s theory is based on the idea that people process information through different “channels,” and these are relatively independent of one another. He has identified 8 common intelligences including 1) logic-math, 2) visual-spatial, 3) music-rhythm, 4) verbal-linguistic, 5) bodily-kinesthetic, 6) interpersonal, 7) intrapersonal, and 8) naturalistic (Gardner, 1985). Many people are attracted to Gardner’s theory because it suggests that people each learn in unique ways. There are now many Gardner-influenced schools in the world. Another type of intelligence is emotional intelligence. Unlike traditional models of intelligence that emphasize cognition (thinking), the idea of emotional intelligence emphasizes the experience and expression of emotion. Some researchers argue that emotional intelligence is a set of skills in which an individual can accurately understand the emotions of others, can identify and label their own emotions, and can use emotions (Mayer & Salovey, 1997). Other researchers believe that emotional intelligence is a mixture of abilities, such as stress management, and personality, such as a person’s predisposition for certain moods (Bar-On, 2006). Regardless of the specific definition of emotional intelligence, studies have shown a link between this concept and job performance (Lopes, Grewal, Kadis, Gall, & Salovey, 2006).
In fact, emotional intelligence is similar to more traditional notions of cognitive intelligence with regard to workplace benefits. Schmidt and Hunter (1998), for example, reviewed research on intelligence in the workplace and showed that intelligence is the single best predictor of doing well in job training programs and of learning on the job. They also report that general intelligence is moderately correlated with performance in all types of jobs, but especially with performance in managerial and complex technical jobs. There is one last point that is important to bear in mind about intelligence. It turns out that the way an individual thinks about his or her own intelligence is also important because it predicts performance. Researcher Carol Dweck has made a career out of looking at the differences between high-IQ children who perform well and those who do not, so-called “underachievers.” Among her most interesting findings is that it is not gender or social class that sets apart the high and low performers. Instead, it is their mindset. The children who believe that their abilities in general—and their intelligence specifically—are fixed traits tend to underperform. By contrast, kids who believe that intelligence is changeable and evolving tend to handle failure better and perform better (Dweck, 1986). Dweck refers to this as a person’s “mindset,” and having a growth mindset appears to be healthier. Correlates of Intelligence The research on mindset is interesting, but there can also be a temptation to interpret it as suggesting that every human has an unlimited potential for intelligence and that becoming smarter is only a matter of positive thinking. There is some evidence that genetics is an important factor in the intelligence equation. For instance, a number of studies on genetics in adults have yielded the result that intelligence is largely, but not totally, inherited (Bouchard, 2004). Having a healthy attitude about the nature of smarts and working hard can both help intellectual performance, but it also helps to have a genetic leaning toward intelligence. Carol Dweck’s research on the mindset of children also brings one of the most interesting and controversial issues surrounding intelligence research to the fore: group differences. From the very beginning of the study of intelligence, researchers have wondered about differences between groups of people such as men and women. With regard to potential differences between the sexes, some people have noticed that women are under-represented in certain fields. In 1976, for example, women comprised just 1% of all faculty members in engineering (Ceci, Williams & Barnett, 2009). Even today, women make up between 3% and 15% of all faculty in math-intensive fields at the 50 top universities. This phenomenon could be explained in many ways: it might be the result of inequalities in the educational system, it might be due to differences in socialization wherein young girls are encouraged to develop other interests, it might be due to the fact that women are—on average—responsible for a larger portion of childcare obligations and therefore make different types of professional decisions, or it might be due to innate differences between these groups, to name just a few possibilities. The possibility of innate differences is the most controversial because many people see it as either the product of or the foundation for sexism.
In today’s political landscape it is easy to see that asking certain questions, such as “are men smarter than women?”, would be inflammatory. In a comprehensive review of research on intellectual abilities and sex, Ceci and colleagues (2009) argue against the hypothesis that biological and genetic differences account for much of the sex differences in intellectual ability. Instead, they believe that a complex web of influences, ranging from societal expectations to test-taking strategies to individual interests, accounts for many of the sex differences found in math and similar intellectual abilities. A more interesting question, and perhaps a more sensitive one, might be to inquire in which ways men and women might differ in intellectual ability, if at all. That is, researchers should not seek to prove that one group or another is better but might examine the ways that they might differ and offer explanations for any differences that are found. Researchers have investigated sex differences in intellectual ability. In a review of the research literature, Halpern (1997) found that women appear, on average, superior to men on measures of fine motor skill, acquired knowledge, reading comprehension, and decoding non-verbal expression, and that they generally earn higher grades in school. Men, by contrast, appear, on average, superior to women on measures of fluid reasoning related to math and science, perceptual tasks that involve moving objects, and tasks that require transformations in working memory, such as mental rotations of physical spaces. Halpern also notes that men are disproportionately represented on the low end of cognitive functioning including in mental retardation, dyslexia, and attention deficit disorders (Halpern, 1997). Other researchers have examined various explanatory hypotheses for why sex differences in intellectual ability occur. Some studies have provided mixed evidence for genetic factors, while others point to evidence for social factors (Neisser et al., 1996; Nisbett et al., 2012). One interesting phenomenon that has received research scrutiny is the idea of stereotype threat. Stereotype threat is the idea that mental access to a particular stereotype can have real-world impact on a member of the stereotyped group. In one study (Spencer, Steele, & Quinn, 1999), for example, women who, just before taking a math test, were informed that women tend to fare poorly on math exams actually performed worse than a control group who did not hear the stereotype. One possible antidote to stereotype threat, at least in the case of women, is to make a self-affirmation (such as listing positive personal qualities) before the threat occurs. In one study, for instance, Martens and her colleagues (2006) had women write about personal qualities that they valued before taking a math test. The affirmation largely erased the effect of stereotype threat by improving math scores for women relative to a control group, but similar affirmations had little effect for men (Martens, Johns, Greenberg, & Schimel, 2006). These types of controversies compel many lay people to wonder if there might be a problem with intelligence measures. It is natural to wonder if they are somehow biased against certain groups. Psychologists typically answer such questions by pointing out that bias in the testing sense of the word is different from how people use the word in everyday speech. In common use, bias denotes a prejudice based on group membership.
Scientific bias, on the other hand, is related to the psychometric properties of the test, such as validity and reliability. Validity is the idea that an assessment measures what it claims to measure and that it can predict future behaviors or performance. In this sense, intelligence tests are not biased, because they are fairly accurate measures and predictors. There are, however, real biases, prejudices, and inequalities in the social world that might benefit advantaged groups while hindering disadvantaged ones. Conclusion Although you might not be able to spell “esquamulose” or “staphylococci”—indeed, you might not even know what they mean—you don’t need to count yourself out in the intelligence department. Now that we have examined intelligence in depth, we can return to our intuitive view of those students who compete in the National Spelling Bee. Are they smart? Certainly, they seem to have high verbal intelligence. There is also the possibility that they benefit from either a genetic boost in intelligence, a supportive social environment, or both. Even watching them spell difficult words, there is much we do not know about them. We cannot tell, for instance, how emotionally intelligent they are or how they might use bodily-kinesthetic intelligence. This highlights the fact that intelligence is a complicated issue. Fortunately, psychologists continue to research this fascinating topic, and their studies continue to yield new insights. Outside Resources Blog: Dr. Jonathan Wai has an excellent blog on Psychology Today discussing many of the most interesting issues related to intelligence. http://www.psychologytoday.com/blog/...-next-einstein Video: Hank Green gives a fun and interesting overview of the concept of intelligence in this installment of the Crash Course series. Discussion Questions 1. Do you think that people get smarter as they get older? In what ways might people gain or lose intellectual abilities as they age? 2. When you meet someone who strikes you as being smart, what types of cues or information do you typically attend to in order to arrive at this judgment? 3. How do you think socio-economic status affects an individual taking an intellectual abilities test? 4. Should psychologists be asking about group differences in intellectual ability? What do you think? 5. Which of Howard Gardner’s 8 types of intelligence do you think describes the way you learn best? Vocabulary G Short for “general factor”; the term is often used synonymously with intelligence itself. Intelligence An individual’s cognitive capability. This includes the ability to acquire, process, recall, and apply information. IQ Short for “intelligence quotient.” This is a score, typically obtained from a widely used measure of intelligence, that is meant to rank a person’s intellectual ability against that of others. Norm Assessments are given to a representative sample of a population to determine the range of scores for that population. These “norms” are then used to place an individual who takes that assessment on a range of scores in which he or she is compared to the population at large. Standardize Assessments that are given in the exact same manner to all people. With regard to intelligence tests, standardized scores are individual scores that are computed to be referenced against normative scores for a population (see “norm”).
Stereotype threat The phenomenon in which people are concerned that they will conform to a stereotype or that their performance does conform to that stereotype, especially in instances in which the stereotype is brought to their conscious awareness.
textbooks/socialsci/Psychology/Introductory_Psychology/Map%3A_Discover_Psychology_-_A_Brief_Introductory_Text_(Noba)/09%3A_COGNITION_LANGUAGE_AND_INTELLIGENCE/9.03%3A_Intelligence.txt
• 10.1: Drive States Our thoughts and behaviors are strongly influenced by affective experiences known as drive states. These drive states motivate us to fulfill goals that are beneficial to our survival and reproduction. This module provides an overview of key drive states, including information about their neurobiology and their psychological effects. • 10.2: Culture and Emotion How do people’s cultural ideas and practices shape their emotions (and other types of feelings)? In this module, we will discuss findings from studies comparing North American (United States, Canada) and East Asian (Chinese, Japanese, Korean) contexts. These studies reveal both cultural similarities and differences in various aspects of emotional life. Throughout, we will highlight the scientific and practical importance of these findings and conclude with recommendations for future research. • 10.3: Motives and Goals Your decisions and behaviors are often the result of a goal or motive you possess. This module provides an overview of the main theories and findings on goals and motivation. We address the origins, manifestations, and types of goals, and the various factors that influence motivation in goal pursuit. We further address goal conflict and, specifically, the exercise of self-control in protecting long-term goals from momentary temptations. 10: EMOTION AND MOTIVATION By Sudeep Bhatia and George Loewenstein Carnegie Mellon University Our thoughts and behaviors are strongly influenced by affective experiences known as drive states. These drive states motivate us to fulfill goals that are beneficial to our survival and reproduction. This module provides an overview of key drive states, including information about their neurobiology and their psychological effects. learning objectives • Identify the key properties of drive states • Describe biological goals accomplished by drive states • Give examples of drive states • Outline the neurobiological basis of drive states such as hunger and arousal • Discuss the main moderators and determinants of drive states such as hunger and arousal Introduction What is the longest you’ve ever gone without eating? A couple of hours? An entire day? How did it feel? Humans rely critically on food for nutrition and energy, and the absence of food can create drastic changes, not only in physical appearance, but in thoughts and behaviors. If you’ve ever fasted for a day, you probably noticed how hunger can take over your mind, directing your attention to foods you could be eating (a cheesy slice of pizza, or perhaps some sweet, cold ice cream), and motivating you to obtain and consume these foods. And once you have eaten and your hunger has been satisfied, your thoughts and behaviors return to normal. Hunger is a drive state, an affective experience (something you feel, like the sensation of being tired or hungry) that motivates organisms to fulfill goals that are generally beneficial to their survival and reproduction. Like other drive states, such as thirst or sexual arousal, hunger has a profound impact on the functioning of the mind. It affects psychological processes, such as perception, attention, emotion, and motivation, and influences the behaviors that these processes generate. Key Properties of Drive States Drive states differ from other affective or emotional states in terms of the biological functions they accomplish. 
Whereas all affective states possess valence (i.e., they are positive or negative) and serve to motivate approach or avoidance behaviors (Zajonc, 1998), drive states are unique in that they generate behaviors that result in specific benefits for the body. For example, hunger directs individuals to eat foods that increase blood sugar levels in the body, while thirst causes individuals to drink fluids that increase water levels in the body. Different drive states have different triggers. Most drive states respond to both internal and external cues, but the combinations of internal and external cues, and the specific types of cues, differ between drives. Hunger, for example, depends on internal, visceral signals as well as sensory signals, such as the sight or smell of tasty food. Different drive states also result in different cognitive and emotional states, and are associated with different behaviors. Yet despite these differences, there are a number of properties common to all drive states. Homeostasis Humans, like all organisms, need to maintain a stable state in their various physiological systems. For example, the excessive loss of body water results in dehydration, a dangerous and potentially fatal state. However, too much water can be damaging as well. Thus, a moderate and stable level of body fluid is ideal. The tendency of an organism to maintain this stability across all the different physiological systems in the body is called homeostasis. Homeostasis is maintained via two key factors. First, the state of the system being regulated must be monitored and compared to an ideal level, or a set point. Second, there need to be mechanisms for moving the system back to this set point—that is, to restore homeostasis when deviations from it are detected. To better understand this, think of the thermostat in your own home. It detects when the current temperature in the house is different than the temperature you have it set at (i.e., the set point). Once the thermostat recognizes the difference, the heating or air conditioning turns on to bring the overall temperature back to the designated level. Many homeostatic mechanisms, such as blood circulation and immune responses, are automatic and nonconscious. Others, however, involve deliberate action. Most drive states motivate action to restore homeostasis using both “punishments” and “rewards.” Imagine that these homeostatic mechanisms are like molecular parents. When you behave poorly by departing from the set point (such as not eating or being somewhere too cold), they raise their voice at you. You experience this as the bad feelings, or “punishments,” of hunger, thirst, or feeling too cold or too hot. However, when you behave well (such as eating nutritious foods when hungry), these homeostatic parents reward you with the pleasure that comes from any activity that moves the system back toward the set point. For example, when body temperature declines below the set point, any activity that helps to restore homeostasis (such as putting one’s hand in warm water) feels pleasurable; and likewise, when body temperature rises above the set point, anything that cools it feels pleasurable. The Narrowing of Attention As drive states intensify, they direct attention toward elements, activities, and forms of consumption that satisfy the biological needs associated with the drive. Hunger, for example, draws attention toward food. Outcomes and objects that are not related to satisfying hunger lose their value (Easterbrook, 1959). 
For instance, has anyone ever invited you to do a fun activity while you were hungry? Likely your response was something like: “I’m not doing anything until I eat first.” Indeed, at a sufficient level of intensity, individuals will sacrifice almost any quantity of goods that do not address the needs signaled by the drive state. For example, cocaine addicts, according to Gawin (1991:1581), “report that virtually all thoughts are focused on cocaine during binges; nourishment, sleep, money, loved ones, responsibility, and survival lose all significance.” Drive states also produce a second form of attention-narrowing: a collapsing of time-perspective toward the present. That is, they make us impatient. While this form of attention-narrowing is particularly pronounced for the outcomes and behaviors directly related to the biological function being served by the drive state at issue (e.g., “I need food now”), it applies to general concerns for the future as well. Ariely and Loewenstein (2006), for example, investigated the impact of sexual arousal on the thoughts and behaviors of a sample of male undergraduates. These undergraduates were lent laptop computers that they took to their private residences, where they answered a series of questions, both in normal states and in states of high sexual arousal. Ariely and Loewenstein found that being sexually aroused made people extremely impatient for both sexual outcomes and for outcomes in other domains, such as those involving money. In another study Giordano et al. (2002) found that heroin addicts were more impatient with respect to heroin when they were craving it than when they were not. More surprisingly, they were also more impatient toward money (they valued delayed money less) when they were actively craving heroin. Yet a third form of attention-narrowing involves thoughts and outcomes related to the self versus others. Intense drive states tend to narrow one’s focus inwardly and to undermine altruism—or the desire to do good for others. People who are hungry, in pain, or craving drugs tend to be selfish. Indeed, popular interrogation methods involve depriving individuals of sleep, food, or water, so as to trigger intense drive states leading the subject of the interrogation to divulge information that may betray comrades, friends, and family (Biderman, 1960). Two Illustrative Drive States Thus far we have considered drive states abstractly. We have discussed the ways in which they relate to other affective and motivational mechanisms, as well as their main biological purpose and general effects on thought and behavior. Yet, despite serving the same broader goals, different drive states are often remarkably different in terms of their specific properties. To understand some of these specific properties, we will explore two different drive states that play very important roles in determining behavior, and in ensuring human survival: hunger and sexual arousal. Hunger Hunger is a classic example of a drive state, one that results in thoughts and behaviors related to the consumption of food. Hunger is generally triggered by low glucose levels in the blood (Rolls, 2000), and behaviors resulting from hunger aim to restore homeostasis regarding those glucose levels. Various other internal and external cues can also cause hunger. For example, when fats are broken down in the body for energy, this initiates a chemical cue that the body should search for food (Greenberg, Smith, & Gibbs, 1990). 
External cues include the time of day, estimated time until the next feeding (hunger increases immediately prior to food consumption), and the sight, smell, taste, and even touch of food and food-related stimuli. Note that while hunger is a generic feeling, it has nuances that can provoke the eating of specific foods that correct for nutritional imbalances we may not even be conscious of. For example, a couple who were adrift at sea found that they inexplicably began to crave the eyes of fish. Only later, after they had been rescued, did they learn that fish eyes are rich in vitamin C—a very important nutrient of which they had become depleted while lost in the ocean (Walker, 2014). The hypothalamus (located in the lower, central part of the brain) plays a very important role in eating behavior. It is responsible for synthesizing and secreting various hormones. The lateral hypothalamus (LH) is concerned largely with hunger and, in fact, lesions (i.e., damage) of the LH can eliminate the desire for eating entirely—to the point that animals starve themselves to death unless kept alive by force-feeding (Anand & Brobeck, 1951). Additionally, artificially stimulating the LH, using electrical currents, can generate eating behavior if food is available (Andersson, 1951). Activation of the LH can not only increase the desirability of food but can also reduce the desirability of nonfood-related items. For example, Brendl, Markman, and Messner (2003) found that participants who were given a handful of popcorn to trigger hunger not only had higher ratings of food products, but also had lower ratings of nonfood products—compared with participants whose appetites were not similarly primed. That is, because eating had become more important, other non-food products lost some of their value. Hunger is only part of the story of when and why we eat. A related process, satiation, refers to the decline of hunger and the eventual termination of eating behavior. Whereas the feeling of hunger gets you to start eating, the feeling of satiation gets you to stop. Perhaps surprisingly, hunger and satiation are two distinct processes, controlled by different circuits in the brain and triggered by different cues. Distinct from the LH, which plays an important role in hunger, the ventromedial hypothalamus (VMH) plays an important role in satiety. Though lesions of the VMH can cause an animal to overeat to the point of obesity, the relationship between the LH and the VMH is quite complicated. Rats with VMH lesions can also be quite finicky about their food (Teitelbaum, 1955).
The hungrier you are, the greater the reward value of the food. Neurons in the areas where reward values are processed, such as the orbitofrontal cortex, fire more rapidly at the sight or taste of food when the organism is hungry relative to if it is satiated. Sexual Arousal A second drive state, especially critical to reproduction, is sexual arousal. Sexual arousal results in thoughts and behaviors related to sexual activity. As with hunger, it is generated by a large range of internal and external mechanisms that are triggered either after the extended absence of sexual activity or by the immediate presence and possibility of sexual activity (or by cues commonly associated with such possibilities). Unlike hunger, however, these mechanisms can differ substantially between males and females, indicating important evolutionary differences in the biological functions that sexual arousal serves for different sexes. Sexual arousal and pleasure in males, for example, is strongly related to the preoptic area, a region in the anterior hypothalamus (or the front of the hypothalamus). If the preoptic area is damaged, male sexual behavior is severely impaired. For example, rats that have had prior sexual experiences will still seek out sexual partners after their preoptic area is lesioned. However, once having secured a sexual partner, rats with lesioned preoptic areas will show no further inclination to actually initiate sex. For females, though, the preoptic area fulfills different roles, such as functions involved with eating behaviors. Instead, there is a different region of the brain, the ventromedial hypothalamus (the lower, central part) that plays a similar role for females as the preoptic area does for males. Neurons in the ventromedial hypothalamus determine the excretion of estradiol, an estrogen hormone that regulates sexual receptivity (or the willingness to accept a sexual partner). In many mammals, these neurons send impulses to the periaqueductal gray (a region in the midbrain) which is responsible for defensive behaviors, such as freezing immobility, running, increases in blood pressure, and other motor responses. Typically, these defensive responses might keep the female rat from interacting with the male one. However, during sexual arousal, these defensive responses are weakened and lordosis behavior, a physical sexual posture that serves as an invitation to mate, is initiated (Kow and Pfaff, 1998). Thus, while the preoptic area encourages males to engage in sexual activity, the ventromedial hypothalamus fulfills that role for females. Other differences between males and females involve overlapping functions of neural modules. These neural modules often provide clues about the biological roles played by sexual arousal and sexual activity in males and females. Areas of the brain that are important for male sexuality overlap to a great extent with areas that are also associated with aggression. In contrast, areas important for female sexuality overlap extensively with those that are also connected to nurturance (Panksepp, 2004). One region of the brain that seems to play an important role in sexual pleasure for both males and females is the septal nucleus, an area that receives reciprocal connections from many other brain regions, including the hypothalamus and the amygdala (a region of the brain primarily involved with emotions). This region shows considerable activity, in terms of rhythmic spiking, during sexual orgasm. 
It is also one of the brain regions that rats will most reliably and voluntarily self-stimulate (Olds & Milner, 1954). In humans, placing a small amount of acetylcholine into this region, or stimulating it electrically, has been reported to produce a feeling of imminent orgasm (Heath, 1964). Conclusion Drive states are evolved motivational mechanisms designed to ensure that organisms take self-beneficial actions. In this module, we have reviewed key properties of drive states, such as homeostasis and the narrowing of attention. We have also discussed, in some detail, two important drive states—hunger and sexual arousal—and explored their underlying neurobiology and the ways in which various environmental and biological factors affect their properties. There are many drive states besides hunger and sexual arousal that affect humans on a daily basis. Fear, thirst, exhaustion, exploratory and maternal drives, and drug cravings are all drive states that have been studied by researchers (see, e.g., Buck, 1999; Van Boven & Loewenstein, 2003). Although these drive states share some of the properties discussed in this module, each also has unique features that allow it to effectively fulfill its evolutionary function. One key difference between drive states is the extent to which they are triggered by internal as opposed to external stimuli. Thirst, for example, is induced both by decreased fluid levels and an increased concentration of salt in the body. Fear, on the other hand, is induced by perceived threats in the external environment. Drug cravings are triggered both by internal homeostatic mechanisms and by external visual, olfactory, and contextual cues. Other drive states, such as those pertaining to maternity, are triggered by specific events in the organism’s life. Differences such as these make the study of drive states a scientifically interesting and important endeavor. Drive states are rich in their diversity, and many questions involving their neurocognitive underpinnings, environmental determinants, and behavioral effects have yet to be answered. One final thing to consider, not discussed in this module, relates to the real-world consequences of drive states. Hunger, sexual arousal, and other drive states are all psychological mechanisms that have evolved gradually over millions of years. We share these drive states not only with our human ancestors but with other animals, such as monkeys, dogs, and rats. It is not surprising, then, that these drive states, at times, lead us to behave in ways that are ill-suited to our modern lives. Consider, for example, the obesity epidemic that is affecting countries around the world. Like other diseases of affluence, obesity is a product of drive states that are too easily fulfilled: homeostatic mechanisms that once worked well when food was scarce now backfire when meals rich in fat and sugar are readily available. Unrestricted sexual arousal can have similarly perverse effects on our well-being. Countless politicians have sacrificed their entire life’s work (not to mention their marriages) by indulging adulterous sexual impulses toward colleagues, staffers, prostitutes, and others over whom they have social or financial power. It is not an overstatement to say that many problems of the 21st century, from school massacres to obesity to drug addiction, are influenced by the mismatch between our drive states and our uniquely modern ability to fulfill them at a moment’s notice.
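Looking back at the homeostasis discussion earlier in the module, the thermostat analogy amounts to a negative-feedback loop: monitor a regulated variable, compare it with a set point, and act to reduce any deviation. The sketch below is a minimal illustration of that logic; the variable names, the tolerance value, and the crude "feels bad / feels better" signal are invented for this example rather than taken from the module.

```python
def homeostatic_step(current, set_point, tolerance=0.5):
    """One cycle of a simple negative-feedback loop, in the spirit of the
    thermostat analogy: detect a deviation from the set point and return
    a corrective action plus a rough affective signal."""
    error = current - set_point
    if abs(error) <= tolerance:
        return "no action", "comfortable"
    action = "cool down" if error > 0 else "warm up"
    # Deviations feel aversive ("punishment"); behavior that moves the
    # system back toward the set point feels pleasurable ("reward").
    return action, "uncomfortable -> corrective behavior feels good"

print(homeostatic_step(current=36.2, set_point=37.0))  # ('warm up', ...)
print(homeostatic_step(current=37.1, set_point=37.0))  # ('no action', 'comfortable')
```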
Outside Resources Web: An open textbook chapter on homeostasis http://en.wikibooks.org/wiki/Human_P...gy/Homeostasis Web: Motivation and emotion in psychology http://allpsych.com/psychology101/mo...n_emotion.html Web: The science of sexual arousal http://www.apa.org/monitor/apr03/arousal.aspx Discussion Questions 1. The ability to maintain homeostasis is important for an organism’s survival. What are the ways in which homeostasis ensures survival? Do different drive states accomplish homeostatic goals differently? 2. Drive states result in the narrowing of attention toward the present and toward the self. Which drive states lead to the most pronounced narrowing of attention toward the present? Which drive states lead to the most pronounced narrowing of attention toward the self? 3. What are important differences between hunger and sexual arousal, and in what ways do these differences reflect the biological needs that hunger and sexual arousal have evolved to address? 4. Some of the properties of sexual arousal vary across males and females. What other drive states affect males and females differently? Are there drive states that vary with other differences in humans (e.g., age)? Vocabulary Drive state Affective experiences that motivate organisms to fulfill goals that are generally beneficial to their survival and reproduction. Homeostasis The tendency of an organism to maintain a stable state across all the different physiological systems in the body. Homeostatic set point An ideal level against which the system being regulated is monitored and compared. Hypothalamus A portion of the brain involved in a variety of functions, including the secretion of various hormones and the regulation of hunger and sexual arousal. Lordosis A physical sexual posture in females that serves as an invitation to mate. Preoptic area A region in the anterior hypothalamus involved in generating and regulating male sexual behavior. Reward value A neuropsychological measure of an outcome’s affective importance to an organism. Satiation The state of being full to satisfaction and no longer desiring to take on more.
textbooks/socialsci/Psychology/Introductory_Psychology/Map%3A_Discover_Psychology_-_A_Brief_Introductory_Text_(Noba)/10%3A_EMOTION_AND_MOTIVATION/10.01%3A_Drive_States.txt
By Jeanne Tsai Stanford University How do people’s cultural ideas and practices shape their emotions (and other types of feelings)? In this module, we will discuss findings from studies comparing North American (United States, Canada) and East Asian (Chinese, Japanese, Korean) contexts. These studies reveal both cultural similarities and differences in various aspects of emotional life. Throughout, we will highlight the scientific and practical importance of these findings and conclude with recommendations for future research. Learning Objectives • Review the history of cross-cultural studies of emotion • Learn about recent empirical findings and theories of culture and emotion • Understand why cultural differences in emotion matter • Explore current and future directions in culture and emotion research Take a moment and imagine you are traveling in a country you’ve never been to before. Everything—the sights, the smells, the sounds—seems strange. People are speaking a language you don’t understand and wearing clothes unlike yours. But they greet you with a smile and you sense that, despite the differences you observe, deep down inside these people have the same feelings as you. But is this true? Do people from opposite ends of the world really feel the same emotions? While most scholars agree that members of different cultures may vary in the foods they eat, the languages they speak, and the holidays they celebrate, there is disagreement about the extent to which culture shapes people’s emotions and feelings—including what people feel, what they express, and what they do during an emotional event. Understanding how culture shapes people’s emotional lives and what impact emotion has on psychological health and well-being in different cultures will not only advance the study of human behavior but will also benefit multicultural societies. Across a variety of settings—academic, business, medical—people worldwide are coming into more contact with people from foreign cultures. In order to communicate and function effectively in such situations, we must understand the ways cultural ideas and practices shape our emotions. Historical Background In the 1950s and 1960s, social scientists tended to fall into either one of two camps. The universalist camp claimed that, despite cultural differences in customs and traditions, at a fundamental level all humans feel similarly. These universalists believed that emotions evolved as a response to the environments of our primordial ancestors, so they are the same across all cultures. Indeed, people often describe their emotions as “automatic,” “natural,” “physiological,” and “instinctual,” supporting the view that emotions are hard-wired and universal. The social constructivist camp, however, claimed that despite a common evolutionary heritage, different groups of humans evolved to adapt to their distinctive environments. And because human environments vary so widely, people’s emotions are also culturally variable. For instance, Lutz (1988) argued that many Western views of emotion assume that emotions are “singular events situated within individuals.” However, people from Ifaluk (a small island near Micronesia) view emotions as “exchanges between individuals” (p. 212). Social constructivists contended that because cultural ideas and practices are all-encompassing, people are often unaware of how their feelings are shaped by their culture. 
Therefore emotions can feel automatic, natural, physiological, and instinctual, and yet still be primarily culturally shaped. In the 1970s, Paul Ekman conducted one of the first scientific studies to address the universalist–social constructivist debate. He and Wallace Friesen devised a system to measure people’s facial muscle activity, called the Facial Action Coding System (FACS; Ekman & Friesen, 1978). Using FACS, Ekman and Friesen analyzed people’s facial expressions and identified specific facial muscle configurations associated with specific emotions, such as happiness, anger, sadness, fear, disgust. Ekman and Friesen then took photos of people posing with these different expressions (Figure 1). With the help of colleagues at different universities around the world, Ekman and Friesen showed these pictures to members of vastly different cultures, gave them a list of emotion words (translated into the relevant languages), and asked them to match the facial expressions in the photos with their corresponding emotion words on the list (Ekman & Friesen, 1971; Ekman et al., 1987). Across cultures, participants “recognized” the emotional facial expressions, matching each picture with its “correct” emotion word at levels greater than chance. This led Ekman and his colleagues to conclude that there are universally recognized emotional facial expressions. At the same time, though, they found considerable variability across cultures in recognition rates. For instance, whereas 95% of U.S. participants associated a smile with “happiness,” only 69% of Sumatran participants did. Similarly, 86% of U.S. participants associated wrinkling of the nose with “disgust,” but only 60% of Japanese did (Ekman et al., 1987). Ekman and colleagues interpreted this variation as demonstrating cultural differences in “display rules,” or rules about what emotions are appropriate to show in a given situation (Ekman, 1972). Indeed, since this initial work, Matsumoto and his colleagues have demonstrated widespread cultural differences in display rules (Safdar et al., 2009). One prominent example of such differences is biting one’s tongue. In India, this signals embarrassment; however, in the U.S. this expression has no such meaning (Haidt & Keltner, 1999). These findings suggest both cultural similarities and differences in the recognition of emotional facial expressions (although see Russell, 1994, for criticism of this work). Interestingly, since the mid-2000s, increasing research has demonstrated cultural differences not only in display rules, but also the degree to which people focus on the face (versus other aspects of the social context; Masuda, Ellsworth, Mesquita, Leu, Tanida, & Van de Veerdonk, 2008), and on different features of the face (Yuki, Maddux, & Matsuda, 2007) when perceiving others’ emotions. For example, people from the United States tend to focus on the mouth when interpreting others’ emotions, whereas people from Japan tend to focus on the eyes. But how does culture shape other aspects of emotional life—such as how people emotionally respond to different situations, how they want to feel generally, and what makes them happy? Today, most scholars agree that emotions and other related states are multifaceted, and that cultural similarities and differences exist for each facet. Thus, rather than classifying emotions as either universal or socially-constructed, scholars are now attempting to identify the specific similarities and differences of emotional life across cultures. 
These endeavors are yielding new insights into the effects of culture on emotion. Current Research and Theory Given the wide range of cultures and facets of emotion in the world, for the remainder of the module we will limit our scope to the two cultural contexts that have received the most empirical attention from social scientists: North America (United States, Canada) and East Asia (China, Japan, and Korea). Social scientists have focused on North American and East Asian contexts because they differ in obvious ways, including their geographical locations, histories, languages, and religions. Moreover, since the 1980s, large-scale studies have revealed that North American and East Asian contexts differ in their overall values and attitudes, such as the prioritization of personal vs. group needs (individualism vs. collectivism; Hofstede, 2001). Whereas North American contexts encourage members to prioritize personal over group needs (to be “individualistic”), East Asian contexts encourage members to prioritize group over personal needs (to be “collectivistic”). Cultural Models of Self in North American and East Asian Contexts In a landmark paper, cultural psychologists Markus and Kitayama (1991) proposed that previously observed differences in individualism and collectivism translated into different models of the self—or one’s personal concept of who s/he is as a person. Specifically, the researchers argued that in North American contexts, the dominant model of the self is an independent one, in which being a person means being distinct from others and behaving accordingly across situations. In East Asian contexts, however, the dominant model of the self is an interdependent one, in which being a person means being fundamentally connected to others and being responsive to situational demands. For example, in a classic study (Cousins, 1989), American and Japanese students were administered the Twenty Statements Test, in which they were asked to complete the sentence stem, “I am ______,” twenty times. U.S. participants were more likely than Japanese participants to complete the stem with psychological attributes (e.g., friendly, cheerful); Japanese participants, on the other hand, were more likely to complete the stem with references to social roles and responsibilities (e.g., a daughter, a student) (Cousins, 1989). These different models of the self result in different principles for interacting with others. An independent model of self teaches people to express themselves and try to influence others (i.e., change their environments to be consistent with their own beliefs and desires). In contrast, an interdependent model of self teaches people to suppress their own beliefs and desires and adjust to others’ (i.e., fit in with their environment) (Heine, Lehman, Markus, & Kitayama, 1999; Morling, Kitayama, & Miyamoto, 2002; Weisz, Rothbaum, & Blackburn, 1984). Markus and Kitayama (1991) argue that these different models of self have significant implications for how people in Western and East Asian contexts feel. Cultural Similarities and Differences in Emotion: Comparisons of North American and East Asian Contexts A considerable body of empirical research suggests that these different models of self shape various aspects of emotional dynamics. Next, we will discuss several ways culture shapes emotion, starting with emotional response.
People’s Physiological Responses to Emotional Events Are Similar Across Cultures, but Culture Influences People’s Facial Expressive Behavior How does culture influence people’s responses to emotional events? Studies of emotional response tend to focus on three components: physiology (e.g., how fast one’s heart beats), subjective experience (e.g., feeling intensely happy or sad), and facial expressive behavior (e.g., smiling or frowning). Although only a few studies have simultaneously measured these different aspects of emotional response, those that do tend to observe more similarities than differences in physiological responses between cultures. That is, regardless of culture, people tend to respond similarly in terms of physiological (or bodily) expression. For instance, in one study, European American and Hmong (pronounced “muhng”) American participants were asked to relive various emotional episodes in their lives (e.g., when they lost something or someone they loved; when something good happened) (Tsai, Chentsova-Dutton, Freire-Bebeau, & Przymus, 2002). At the level of physiological arousal (e.g., heart rate), there were no differences in how the participants responded. However, their facial expressive behavior told a different story. When reliving events that elicited happiness, pride, and love, European Americans smiled more frequently and more intensely than did their Hmong counterparts—though all participants reported feeling happy, proud, and in love at similar levels of intensity. And similar patterns have emerged in studies comparing European Americans with Chinese Americans during different emotion-eliciting tasks (Tsai et al., 2002; Tsai, Levenson, & McCoy, 2006; Tsai, Levenson, & Carstensen, 2000). Thus, while the physiological aspects of emotional responses appear to be similar across cultures, their accompanying facial expressions are more culturally distinctive. Again, these differences in facial expressions during positive emotional events are consistent with findings from cross-cultural studies of display rules, and stem from the models of self-description discussed above: In North American contexts that promote an independent self , individuals tend to express their emotions to influence others. Conversely, in East Asian contexts that promote an interdependent self, individuals tend to control and suppress their emotions to adjust to others. People Suppress Their Emotions Across Cultures, but Culture Influences the Consequences of Suppression for Psychological Well-Being If the cultural ideal in North American contexts is to express oneself, then suppressing emotions (not showing how one feels) should have negative consequences. This is the assumption underlying hydraulic models of emotion: the idea that emotional suppression and repression impair psychological functioning (Freud, 1910). Indeed, significant empirical research shows that suppressing emotions can have negative consequences for psychological well-being in North American contexts (Gross, 1998). However, Soto and colleagues (2011) find that the relationship between suppression and psychological well-being varies by culture. True, with European Americans, emotional suppression is associated with higher levels of depression and lower levels of life satisfaction. (Remember, in these individualistic societies, the expression of emotion is a fundamental aspect of positive interactions with others.) 
Among Hong Kong Chinese, on the other hand, emotional suppression is needed to adjust to others (in this interdependent community, suppressing emotions is how one appropriately interacts with others), so it is simply a part of normal life and is not associated with depression or life satisfaction. These findings are consistent with research suggesting that factors related to clinical depression vary between European Americans and Asian Americans. European Americans diagnosed with depression show dampened or muted emotional responses (Bylsma, Morris, & Rottenberg, 2008). For instance, when shown sad or amusing film clips, depressed European Americans respond less intensely than their nondepressed counterparts. However, other studies have shown that depressed East Asian Americans (i.e., people of East Asian descent who live in the United States) demonstrate similar or increased emotional responses compared with their nondepressed counterparts (Chentsova-Dutton et al., 2007; Chentsova-Dutton, Tsai, & Gotlib, 2010). In other words, depressed European Americans show reduced emotional expressions, but depressed East Asian Americans do not—and, in fact, may express more emotion. Thus, muted responses (which resemble suppression) are associated with depression in European American contexts, but not in East Asian contexts. People Feel Good During Positive Events, but Culture Influences Whether People Feel Bad During Positive Events What about people’s subjective emotional experiences? Do people across cultures feel the same emotions in similar situations, despite how they show them? Recent studies indicate that culture affects whether people are likely to feel bad during good events. In North American contexts, people rarely feel bad after good experiences. However, a number of research teams have observed that, compared with people in North American contexts, people in East Asian contexts are more likely to feel bad and good (“mixed” emotions) during positive events (e.g., feeling worried after winning an important competition; Miyamoto, Uchida, & Ellsworth, 2010). This may be because, compared with North Americans, East Asians engage in more dialectical thinking (i.e., they are more tolerant of contradiction and change). Therefore, they accept that positive and negative feelings can occur simultaneously. In addition, whereas North Americans value maximizing positive states and minimizing negative ones, East Asians value a greater balance between the two (Sims, Tsai, Wang, Fung, & Zhang, 2013). To better understand this, think about how you would feel after getting the top score on a test that’s graded on a curve. In North American contexts, such success is considered an individual achievement and worth celebrating. But what about the other students who will now receive a lower grade because you “raised the curve” with your good grade? In East Asian contexts, not only would students be more thoughtful of the overall group’s success, but they would also be more comfortable acknowledging both the positive (their own success on the test) and the negative (their classmates’ lower grades). Again, these differences can be linked to cultural differences in models of the self. An interdependent model encourages people to think about how their accomplishments might affect others (e.g., make others feel bad or jealous). Thus, awareness of negative emotions during positive events may discourage people from expressing their excitement and standing out (as in East Asian contexts).
Such emotional suppression helps individuals feel in sync with those around them. An independent model, however, encourages people to express themselves and stand out, so when something good happens, they have no reason to feel bad. So far, we have reviewed research that demonstrates cultural similarities in physiological responses and in the ability to suppress emotions. We have also discussed the cultural differences in facial expressive behavior and the likelihood of experiencing negative feelings during positive events. Next, we will explore how culture shapes people’s ideal or desired states. People Want to Feel Good Across Cultures, but Culture Influences the Specific Good States People Want to Feel (Their “Ideal Affect”) Everyone welcomes positive feelings, but cultures vary in the specific types of positive affective states (see Figure 4.8.2) their people favor. An affective state can be described along two dimensions: valence, which ranges from pleasant to unpleasant (e.g., happy to sad), and arousal, which ranges from high to low (e.g., energetic to passive). Although people of all cultures experience this range of affective states, they can vary in their preferences for each. For example, people in North American contexts lean toward feeling excited, enthusiastic, energetic, and other “high arousal positive” states. People in East Asian contexts, however, generally prefer feeling calm, peaceful, and other “low arousal positive” states (Tsai, Knutson, & Fung, 2006). These cultural differences have been observed in young children between the ages of 3 and 5, college students, and adults between the ages of 60 and 80 (Tsai, Louie, Chen, & Uchida, 2007; Tsai, Sims, Thomas, & Fung, 2013), and are reflected in widely distributed cultural products. For example, wherever you look in American contexts—women’s magazines, children’s storybooks, company websites, and even Facebook profiles (Figure 3)—you will find more open, excited smiles and fewer closed, calm smiles compared to Chinese contexts (Chim, Moon, Ang, & Tsai, 2013; Tsai, 2007; Tsai, Louie, et al., 2007). Again, these differences in ideal affect (i.e., the emotional states that people believe are best) correspond to the independent and interdependent models described earlier: Independent selves want to influence others, which requires action (doing something), and action involves high arousal states. Conversely, interdependent selves want to adjust to others, which requires suspending action and attending to others—both of which involve low arousal states. Thus, the more that individuals and cultures want to influence others (as in North American contexts), the more they value excitement, enthusiasm, and other high arousal positive states. And, the more that individuals and cultures want to adjust to others (as in East Asian contexts), the more they value calm, peacefulness, and other low arousal positive states (Tsai, Miao, Seppala, Fung, & Yeung, 2007). Because one’s ideal affect functions as a guide for behavior and a way of evaluating one’s emotional states, cultural differences in ideal affect can result in different emotional lives. For example, several studies have shown that people engage in activities (e.g., recreational pastimes, musical styles) consistent with their cultural ideal affect.
That is, people from North American contexts (who value high arousal affective states) tend to prefer thrilling activities like skydiving, whereas people from East Asian contexts (who value low arousal affective states) prefer tranquil activities like lounging on the beach (Tsai, 2007). In addition, people base their conceptions of well-being and happiness on their ideal affect. Therefore, European Americans are more likely to define well-being in terms of excitement, whereas Hong Kong Chinese are more likely to define well-being in terms of calmness. Indeed, among European Americans, the less people experience high arousal positive states, the more depressed they are. But, among Hong Kong Chinese—you guessed it!—the less people experience low arousal positive states, the more depressed they are (Tsai, Knutson, & Fung, 2006). People Base Their Happiness on Similar Factors Across Cultures, but Culture Influences the Weight Placed on Each Factor What factors make people happy or satisfied with their lives? We have seen that discrepancies between how people actually feel (actual affect) and how they want to feel (ideal affect)—as well as people’s suppression of their ideal affect—are associated with depression. But happiness is based on other factors as well. For instance, Kwan, Bond, & Singelis (1997) found that while European Americans and Hong Kong Chinese subjects both based life satisfaction on how they felt about themselves (self-esteem) and their relationships (relationship harmony), their weighting of each factor was different. That is, European Americans based their life satisfaction primarily on self-esteem, whereas Hong Kong Chinese based their life satisfaction equally on self-esteem and relationship harmony. Consistent with these findings, Oishi and colleagues (1999) found in a study of 39 nations that self-esteem was more strongly correlated with life satisfaction in more individualistic nations compared to more collectivistic ones. Researchers also found that in individualistic cultures people rated life satisfaction based on their emotions more so than on social definitions (or norms). In other words, rather than using social norms as a guideline for what constitutes an ideal life, people in individualistic cultures tend to evaluate their satisfaction according to how they feel emotionally. In collectivistic cultures, however, people’s life satisfaction tends to be based on a balance between their emotions and norms (Suh, Diener, Oishi, & Triandis, 1998). Similarly, other researchers have recently found that people in North American contexts are more likely to feel negative when they have poor mental and physical health, while people in Japanese contexts don’t have this association (Curhan et al., 2013). Again, these findings are consistent with cultural differences in models of the self. In North American, independent contexts, feelings about the self matter more, whereas in East Asian, interdependent contexts, feelings about others matter as much as or even more than feelings about the self. Why Do Cultural Similarities And Differences In Emotion Matter? Understanding cultural similarities and differences in emotion is obviously critical to understanding emotions in general, and the flexibility of emotional processes more specifically. Given the central role that emotions play in our interaction, understanding cultural similarities and differences is especially critical to preventing potentially harmful miscommunications. 
Although misunderstandings are unintentional, they can result in negative consequences—as we’ve seen historically for ethnic minorities in many cultures. For instance, across a variety of North American settings, Asian Americans are often characterized as too “quiet” and “reserved,” and these low arousal states are often misinterpreted as expressions of disengagement or boredom—rather than expressions of the ideal of calmness. Consequently, Asian Americans may be perceived as “cold,” “stoic,” and “unfriendly,” fostering stereotypes of Asian Americans as “perpetual foreigners” (Cheryan & Monin, 2005). Indeed, this may be one reason Asian Americans are often overlooked for top leadership positions (Hyun, 2005). In addition to averting cultural miscommunications, recognizing cultural similarities and differences in emotion may provide insights into other paths to psychological health and well-being. For instance, findings from a recent series of studies suggest that calm states are easier to elicit than excited states, suggesting that one way of increasing happiness in cultures that value excitement may be to increase the value placed on calm states (Chim, Tsai, Hogan, & Fung, 2013). Current Directions In Culture And Emotion Research What About Other Cultures? In this brief review, we’ve focused primarily on comparisons between North American and East Asian contexts because most of the research in cultural psychology has focused on these comparisons. However, there are obviously a multitude of other cultural contexts in which emotional differences likely exist. For example, although Western contexts are similar in many ways, specific Western contexts (e.g., American vs. German) also differ from each other in substantive ways related to emotion (Koopmann-Holm & Matsumoto, 2011). Thus, future research examining other cultural contexts is needed. Such studies may also reveal additional, uninvestigated dimensions or models that have broad implications for emotion. In addition, because more and more people are being raised within multiple cultural contexts (e.g., for many Chinese Americans, a Chinese immigrant culture at home and mainstream American culture at school), more research is needed to examine how people negotiate and integrate these different cultures in their emotional lives (for examples, see De Leersnyder, Mesquita, & Kim, 2011; Perunovic, Heller, & Rafaeli, 2007). How Are Cultural Differences in Beliefs About Emotion Transmitted? According to Kroeber and Kluckhohn (1952), cultural ideas are reflected in and reinforced by practices, institutions, and products. As an example of this phenomenon—and illustrating the point regarding cultural differences in ideal affect—bestselling children’s storybooks in the United States often contain more exciting and less calm content (smiles and activities) than do bestselling children’s storybooks in Taiwan (Tsai, Louie, et al., 2007). To investigate this further, the researchers randomly assigned European American, Asian American, and Taiwanese Chinese preschoolers to be read either stories with exciting content or stories with calm content. Across all of these cultures, the kids who were read stories with exciting content were afterward more likely to value excited states, whereas those who were read stories with calm content were more likely to value calm states. As a test, after hearing the stories, the kids were shown a list of toys and asked to select their favorites. 
Those who heard the exciting stories wanted to play with more arousing toys (like a drum that beats loud and fast), whereas those who heard the calm stories wanted to play with less arousing toys (like a drum that beats quiet and slow). These findings suggest that regardless of ethnic background, direct exposure to storybook content alters children’s ideal affect. More studies are needed to assess whether a similar process occurs when children and adults are chronically exposed to various types of cultural products. As well, future studies should examine other ways cultural ideas regarding emotion are transmitted (e.g., via interactions with parents and teachers). Could These Cultural Differences Be Due to Temperament? An alternative explanation for cultural differences in emotion is that they are due to temperamental factors—that is, biological predispositions to respond in certain ways. (Might European Americans just be more emotional than East Asians because of genetics?) Indeed, most models of emotion acknowledge that both culture and temperament play roles in emotional life, yet few if any models indicate how. Nevertheless, most researchers believe that despite genetic differences in founder populations (i.e., the migrants from a population who leave to create their own societies), culture has a greater impact on emotions. For instance, one theoretical framework, Affect Valuation Theory, proposes that cultural factors shape how people want to feel (“ideal affect”) more than how they actually feel (“actual affect”); conversely, temperamental factors influence how people actually feel more than how they want to feel (Tsai, 2007) (see Figure 4.8.4). To test this hypothesis, European American, Asian American, and Hong Kong Chinese participants completed measures of temperament (i.e., stable dispositions, such as neuroticism or extraversion), actual affect (i.e., how people actually feel in given situations), ideal affect (i.e., how people would like to feel in given situations), and influential cultural values (i.e., personal beliefs transmitted through culture). When researchers analyzed the participants’ responses, they found that differences in ideal affect between cultures were associated more with cultural factors than with temperamental factors (Tsai, Knutson, & Fung, 2006). However, when researchers examined actual affect, they found the reverse: actual affect was more strongly associated with temperamental factors than with cultural factors. Not all of the studies described above have ruled out a temperamental explanation, though, and more studies are needed to rule out the possibility that the observed group differences are due to genetic factors instead of, or in addition to, cultural factors. Moreover, future studies should examine whether the links between temperament and emotions might vary across cultures, and how cultural and temperamental factors work together to shape emotion. Summary Based on studies comparing North American and East Asian contexts, there is clear evidence for cultural similarities and differences in emotions, and most of the differences can be traced to different cultural models of the self. Consider your own concept of self for a moment. What kinds of pastimes do you prefer—activities that make you excited, or ones that make you calm? What kinds of feelings do you strive for? What is your ideal affect?
Because emotions seem and feel so instinctual to us, it’s hard to imagine that the way we experience them and the ones we desire are anything other than biologically programmed into us. However, as current research has shown (and as future research will continue to explore), there are myriad ways in which culture, both consciously and unconsciously, shapes people’s emotional lives. Outside Resources Audio Interview: The Really Big Questions “What Are Emotions?” Interview with Paul Ekman, Martha Nussbaum, Dominique Moisi, and William Reddy http://www.trbq.org/index.php?option...d=16&Itemid=43 Book: Ed Diener and Robert Biswas-Diener: Happiness: Unlocking the Mysteries of Psychological Wealth Book: Eric Weiner: The Geography of Bliss Book: Eva Hoffmann: Lost in Translation: Life in a New Language Book: Hazel Markus: Clash: 8 Cultural Conflicts That Make Us Who We Are Video: Social Psychology Alive psychology.stanford.edu/~tsai...psychalive.wmv Video: The Really Big Questions “Culture and Emotion,” Dr. Jeanne Tsai Video: Tsai’s description of cultural differences in emotion Web: Acculturation and Culture Collaborative at Leuven http://ppw.kuleuven.be/home/english/...p/acc-research Web: Culture and Cognition at the University of Michigan culturecognition.isr.umich.edu/ Web: Experts In Emotion Series, Dr. June Gruber, Department of Psychology, Yale University www.yalepeplab.com/teaching/p...pertseries.php Web: Georgetown Culture and Emotion Lab http://georgetownculturelab.wordpress.com/ Web: Paul Ekman’s website http://www.paulekman.com Web: Penn State Culture, Health, and Emotion Lab http://www.personal.psu.edu/users/m/...m280/sotosite/ Web: Stanford Culture and Emotion Lab www-psych.stanford.edu/~tsailab/index.htm Web: Wesleyan Culture and Emotion Lab http://culture-and-emotion.research.wesleyan.edu/ Discussion Questions 1. What cultural ideas and practices related to emotion were you exposed to when you were a child? What cultural ideas and practices related to emotion are you currently exposed to as an adult? How do you think they shape your emotional experiences and expressions? 2. How can researchers avoid inserting their own beliefs about emotion in their research? 3. Most of the studies described above are based on self-report measures. What are some of the advantages and disadvantages of using self-report measures to understand the cultural shaping of emotion? How might the use of other behavioral methods (e.g., neuroimaging) address some of these limitations? 4. Do the empirical findings described above change your beliefs about emotion? How? 5. Imagine you are a manager of a large American company that is beginning to do work in China and Japan. How will you apply your current knowledge about culture and emotion to prevent misunderstandings between you and your Chinese and Japanese employees? Vocabulary Affect Feelings that can be described in terms of two dimensions, the dimensions of arousal and valence (Figure 2). For example, high arousal positive states refer to excitement, elation, and enthusiasm. Low arousal positive states refer to calm, peacefulness, and relaxation. Whereas “actual affect” refers to the states that people actually feel, “ideal affect” refers to the states that people ideally want to feel. Culture Shared, socially transmitted ideas (e.g., values, beliefs, attitudes) that are reflected in and reinforced by institutions, products, and rituals. Emotions Changes in subjective experience, physiological responding, and behavior in response to a meaningful event. 
Emotions tend to occur on the order of seconds (in contrast to moods, which may last for days). Feelings A general term used to describe a wide range of states that include emotions, moods, and traits, and that typically involve changes in subjective experience, physiological responding, and behavior in response to a meaningful event. Emotions typically occur on the order of seconds, whereas moods may last for days, and traits are tendencies to respond a certain way across various situations. Independent self A model or view of the self as distinct from others and as stable across different situations. The goal of the independent self is to express and assert the self, and to influence others. This model of self is prevalent in many individualistic, Western contexts (e.g., the United States, Australia, Western Europe). Interdependent self A model or view of the self as connected to others and as changing in response to different situations. The goal of the interdependent self is to suppress personal preferences and desires, and to adjust to others. This model of self is prevalent in many collectivistic, East Asian contexts (e.g., China, Japan, Korea). Social constructivism Social constructivism proposes that knowledge is first created and learned within a social context and is then adopted by individuals. Universalism Universalism proposes that there are single objective standards, independent of culture, in basic domains such as learning, reasoning, and emotion that are a part of all human experience.
textbooks/socialsci/Psychology/Introductory_Psychology/Map%3A_Discover_Psychology_-_A_Brief_Introductory_Text_(Noba)/10%3A_EMOTION_AND_MOTIVATION/10.02%3A_Culture_and_Emotion.txt
By Ayelet Fishbach and Maferima Touré-Tillery University of Chicago, Northwestern University Your decisions and behaviors are often the result of a goal or motive you possess. This module provides an overview of the main theories and findings on goals and motivation. We address the origins, manifestations, and types of goals, and the various factors that influence motivation in goal pursuit. We further address goal conflict and, specifically, the exercise of self-control in protecting long-term goals from momentary temptations. learning objectives • Define the basic terminology related to goals, motivation, self-regulation, and self-control. • Describe the antecedents and consequences of goal activation. • Describe the factors that influence motivation in the course of goal pursuit. • Explain the process underlying goal activation, self-regulation, and self-control. • Give examples of goal activation effects, self-regulation processes, and self-control processes. Introduction Every New Year, many people make resolutions—or goals—that go unsatisfied: eat healthier; pay better attention in class; lose weight. As much as we know our lives would improve if we actually achieved these goals, people quite often don’t follow through. But what if that didn’t have to be the case? What if every time we made a goal, we actually accomplished it? Each day, our behavior is the result of countless goals—maybe not goals in the way we think of them, like getting that beach body or being the first person to land on Mars. But even with “mundane” goals, like getting food from the grocery store, or showing up to work on time, we are often enacting the same psychological processes involved with achieving loftier dreams. To understand how we can better attain our goals, let’s begin with defining what a goal is and what underlies it, psychologically. A goal is the cognitive representation of a desired state, or, in other words, our mental idea of how we’d like things to turn out (Fishbach & Ferguson 2007; Kruglanski, 1996). This desired end state of a goal can be clearly defined (e.g., stepping on the surface of Mars), or it can be more abstract and represent a state that is never fully completed (e.g., eating healthy). Underlying all of these goals, though, is motivation, or the psychological driving force that enables action in the pursuit of that goal (Lewin, 1935). Motivation can stem from two places. First, it can come from the benefits associated with the process of pursuing a goal (intrinsic motivation). For example, you might be driven by the desire to have a fulfilling experience while working on your Mars mission. Second, motivation can also come from the benefits associated with achieving a goal (extrinsic motivation), such as the fame and fortune that come with being the first person on Mars (Deci & Ryan, 1985). One easy way to consider intrinsic and extrinsic motivation is through the eyes of a student. Does the student work hard on assignments because the act of learning is pleasing (intrinsic motivation)? Or does the student work hard to get good grades, which will help land a good job (extrinsic motivation)? Social psychologists recognize that goal pursuit and the motivations that underlie it do not depend solely on an individual’s personality. Rather, they are products of personal characteristics and situational factors. Indeed, cues in a person’s immediate environment—including images, words, sounds, and the presence of other people—can activate, or prime, a goal. 
This activation can be conscious, such that the person is aware of the environmental cues influencing his/her pursuit of a goal. However, this activation can also occur outside a person’s awareness, and lead to nonconscious goal pursuit. In this case, the person is unaware of why s/he is pursuing a goal and may not even realize that s/he is pursuing it. In this module, we review key aspects of goals and motivation. First, we discuss the origins and manifestation of goals. Then, we review factors that influence individuals’ motivation in the course of pursuing a goal (self-regulation). Finally, we discuss what motivates individuals to keep following their goals when faced with other conflicting desires—for example, when a tempting opportunity to socialize on Facebook presents itself in the course of studying for an exam (self-control). The Origins and Manifestation of Goals Goal Adoption What makes us commit to a goal? Researchers tend to agree that commitment stems from the sense that a goal is both valuable and attainable, and that we adopt goals that are highly likely to bring positive outcomes (i.e., one’s commitment = the value of the goal × the expectancy it will be achieved) (Fishbein & Ajzen, 1974; Liberman & Förster, 2008). This process of committing to a goal can occur without much conscious deliberation. For example, people infer value and attainability, and will nonconsciously determine their commitment based on those factors, as well as the outcomes of past goals. Indeed, people often learn about themselves the same way they learn about other people—by observing their behaviors (in this case, their own) and drawing inferences about their preferences. For example, after taking a kickboxing class, you might infer from your efforts that you are indeed committed to staying physically fit (Fishbach, Zhang, & Koo, 2009). Goal Priming We don’t always act on our goals in every context. For instance, sometimes we’ll order a salad for lunch, in keeping with our dietary goals, while other times we’ll order only dessert. So, what makes people adhere to a goal in any given context? Cues in the immediate environment (e.g., objects, images, sounds—anything that primes a goal) can have a remarkable influence on the pursuit of goals to which people are already committed (Bargh, 1990; Custers, Aarts, Oikawa, & Elliot, 2009; Förster, Liberman, & Friedman, 2007). How do these cues work? In memory, goals are organized in associative networks. That is, each goal is connected to other goals, concepts, and behaviors. Particularly, each goal is connected to corresponding means—activities and objects that help us attain the goal (Kruglanski et al., 2002). For example, the goal to stay physically fit may be associated with several means, including a nearby gym, one’s bicycle, or even a training partner. Cues related to the goal or means (e.g., an ad for running shoes, a comment about weight loss) can activate or prime the pursuit of that goal. For example, the presence of one’s training partner, or even seeing the word “workout” in a puzzle, can activate the goal of staying physically fit and, hence, increase a person’s motivation to exercise. Soon after goal priming, the motivation to act on the goal peaks then slowly declines, after some delay, as the person moves away from the primer or after s/he pursues the goal (Bargh, Gollwitzer, Lee-Chai, Barndollar, & Trotschel, 2001). 
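To make the expectancy-value rule for goal adoption described above (commitment = value × expectancy) more concrete, here is a minimal, purely illustrative Python sketch. The goals, the numeric values, and the helper function are made-up examples for illustration, not a model from Fishbach and Touré-Tillery.

# A minimal sketch of the expectancy-value rule described above.
# All numbers below are hypothetical and chosen only for illustration.

def commitment(value, expectancy):
    """Return a rough commitment index: value of the goal x expectancy of achieving it."""
    return value * expectancy

candidate_goals = {
    "be the first person on Mars": (10, 0.01),  # very valuable, very unlikely
    "get to work on time":         (4,  0.90),  # modest value, very attainable
    "eat healthier this semester": (7,  0.60),
}

for goal, (value, expectancy) in candidate_goals.items():
    print(f"{goal}: commitment index = {commitment(value, expectancy):.2f}")

Note how, under this multiplicative rule, a highly valued but seemingly unattainable goal can yield a lower commitment index than a modest goal one fully expects to achieve, which is consistent with the idea that commitment requires a goal to feel both valuable and attainable.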
Consequences of Goal Activation The activation of a goal and the accompanying increase in motivation can influence many aspects of behavior and judgment, including how people perceive, evaluate, and feel about the world around them. Indeed, motivational states can even alter something as fundamental as visual perception. For example, Balcetis and Dunning (2006) showed participants an ambiguous figure (e.g., “I3”) and asked them whether they saw the letter B or the number 13. The researchers found that when participants had the goal of seeing a letter (e.g., because seeing a number required the participants to drink a gross tasting juice), they in fact saw a B. It wasn’t that the participants were simply lying, either; their goal literally changed how they perceived the world! Goals can also exert a strong influence on how people evaluate the objects (and people) around them. When pursuing a goal such as quenching one’s thirst, people evaluate goal-relevant objects (e.g., a glass) more positively than objects that are not relevant to the goal (e.g., a pencil). Furthermore, those with the goal of quenching their thirst rate the glass more positively than people who are not pursuing the goal (Ferguson & Bargh, 2004). As discussed earlier, priming a goal can lead to behaviors like this (consistent with the goal), even though the person isn’t necessarily aware of why (i.e., the source of the motivation). For example, after research participants saw words related to achievement (in the context of solving a word search), they automatically performed better on a subsequent achievement test—without being at all aware that the achievement words had influenced them (Bargh & Chartrand, 1999; Srull & Wyer, 1979). Self-Regulation in Goal Pursuit Many of the behaviors we like to engage in are inconsistent with achieving our goals. For example, you may want to be physically fit, but you may also really like German chocolate cake. Self-regulation refers to the process through which individuals alter their perceptions, feelings, and actions in the pursuit of a goal. For example, filling up on fruits at a dessert party is one way someone might alter his or her actions to help with goal attainment. In the following section, we review the main theories and findings on self-regulation. From Deliberation to Implementation Self-regulation involves two basic stages, each with its own distinct mindset. First, a person must decide which of many potential goals to pursue at a given point in time (deliberative phase). While in the deliberative phase, a person often has a mindset that fosters an effective assessment of goals. That is, one tends to be open-minded and realistic about available goals to pursue. However, such scrutiny of one’s choices sometimes hinders action. For example, in the deliberative phase about how to spend time, someone might consider improving health, academic performance, or developing a hobby. At the same time, though, this deliberation involves considering realistic obstacles, such as one’s busy schedule, which may discourage the person from believing the goals can likely be achieved (and thus, doesn’t work toward any of them). However, after deciding which goal to follow, the second stage involves planning specific actions related to the goal (implemental phase). In the implemental phase, a person tends to have a mindset conducive to the effective implementation of a goal through immediate action—i.e., with the planning done, we’re ready to jump right into attaining our goal. 
Unfortunately, though, this mindset often leads to closed-mindedness and unrealistically positive expectations about the chosen goal (Gollwitzer, Heckhausen, & Steller, 1990; Kruglanski et al., 2000; Thaler & Shefrin, 1981). For example, in order to follow a health goal, a person might register for a gym membership and start exercising. In doing so, s/he assumes this is all that’s needed to achieve the goal (closed-mindedness), and after a few weeks, it should be accomplished (unrealistic expectations). Regulation of Ought- and Ideals-Goals In addition to two phases in goal pursuit, research also distinguishes between two distinct self-regulatory orientations (or perceptions of effectiveness) in pursuing a goal: prevention and promotion. A prevention focus emphasizes safety, responsibility, and security needs, and views goals as “oughts.” That is, for those who are prevention-oriented, a goal is viewed as something they should be doing, and they tend to focus on avoiding potential problems (e.g., exercising to avoid health threats). This self-regulatory focus leads to a vigilant strategy aimed at avoiding losses (the presence of negatives) and approaching non-losses (the absence of negatives). On the other hand, a promotion focus views goals as “ideals,” and emphasizes hopes, accomplishments, and advancement needs. Here, people view their goals as something they want to do that will bring them added pleasure (e.g., exercising because being healthy allows them to do more activities). This type of orientation leads to the adoption of an eager strategy concerned with approaching gains (the presence of positives) and avoiding non-gains (the absence of positives). To compare these two strategies, consider the goal of saving money. Prevention-focused people will save money because they believe it’s what they should be doing (an ought), and because they’re concerned about not having any money (avoiding a harm). Promotion-focused people, on the other hand, will save money because they want to have extra funds (a desire) so they can do new and fun activities (attaining an advancement). Although these two strategies result in very similar behaviors, emphasizing potential losses will motivate individuals with a prevention focus, whereas emphasizing potential gains will motivate individuals with a promotion focus. And these orientations—responding better to either a prevention or promotion focus— differ across individuals (chronic regulatory focus) and situations (momentary regulatory focus; Higgins, 1997). A Cybernetic Process of Self-Regulation Self-regulation depends on feelings that arise from comparing actual progress to expected progress. During goal pursuit, individuals calculate the discrepancy between their current state (i.e., all goal-related actions completed so far) and their desired end state (i.e., what they view as “achieving the goal”). After determining this difference, the person then acts to close that gap (Miller, Galanter, & Pribram, 1960; Powers, 1973). In this cybernetic process of self-regulation (or, internal system directing how a person should control behavior), a higher-than-expected rate of closing the discrepancy creates a signal in the form of positive feelings. For example, if you’re nearly finished with a class project (i.e., a low discrepancy between your progress and what it will take to completely finish), you feel good about yourself. 
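The comparison at the heart of this cybernetic process can be sketched in a few lines of Python. The function name, the rates, and the printed labels are hypothetical illustrations of the logic described above (comparing the actual rate of closing the goal discrepancy to the expected rate), not the authors' model.

# A minimal sketch, with made-up numbers, of the cybernetic comparison described above.
# The returned string stands in for the feeling signal that arises from the comparison.

def affect_signal(progress_rate, expected_rate):
    """Compare actual progress toward closing the discrepancy with expected progress."""
    if progress_rate > expected_rate:
        return "positive feelings (closing the gap faster than expected)"
    elif progress_rate < expected_rate:
        return "negative feelings (closing the gap more slowly than expected)"
    return "neutral (on schedule)"

# Example: a class project that should be about half done by today
expected_rate = 0.50
print(affect_signal(progress_rate=0.80, expected_rate=expected_rate))  # ahead of schedule
print(affect_signal(progress_rate=0.10, expected_rate=expected_rate))  # behind schedule

This feeling signal is what then guides behavior: as described next, positive signals tend to relax effort on the focal goal, whereas negative signals intensify it.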
However, these positive feelings tend to make individuals “coast,” or reduce their efforts on the focal goal, and shift their focus to other goals (e.g., you’re almost done with your project for one class, so you start working on a paper for another). By contrast, a lower-than-expected rate of closing the gap elicits negative feelings, which leads to greater effort investment on the focal goal (Carver & Scheier, 1998). If it is the day before a project is due and you’ve hardly started it, you will likely feel anxious and stop all other activities to make progress on your project. Highlighting One Goal or Balancing Between Goals When we’ve completed steps toward achieving our goal, looking back on the behaviors or actions that helped us make such progress can have implications for future behaviors and actions (see The Dynamics of Self-Regulation framework; Fishbach et al., 2009). Remember, commitment results from the perceived value and attainability of a goal, whereas progress describes the perception of a reduced discrepancy between the current state and desired end state (i.e., the cybernetic process). After achieving a goal, when people interpret their previous actions as a sign of commitment to it, they tend to highlight the pursuit of that goal, prioritizing it and putting more effort toward it. However, when people interpret their previous actions as a sign of progress, they tend to balance between the goal and other goals, putting less effort into the focal goal. For example, if buying a product on sale reinforces your commitment to the goal of saving money, you will continue to behave financially responsibly. However, if you perceive the same action (buying the sale item) as evidence of progress toward the goal of saving money, you might feel like you can “take a break” from your goal, justifying splurging on a subsequent purchase. Several factors can influence the meanings people assign to previous goal actions. For example, the more confident a person is about a commitment to a goal, the more likely s/he is to infer progress rather than commitment from his/her actions (Koo & Fishbach, 2008). Conflicting Goals and Self-Control In the pursuit of our ordinary and extraordinary goals (e.g., staying physically or financially healthy, landing on Mars), we inevitably come across other goals (e.g., eating delicious food, exploring Earth) that might get in the way of our lofty ambitions. In such situations, we must exercise self-control to stay on course. Self-control is the capacity to control impulses, emotions, desires, and actions in order to resist a temptation (e.g., going on a shopping spree) and protect a valued goal (e.g., stay financially sound). As such, self-control is a process of self-regulation in contexts involving a clear trade-off between long-term interests (e.g., health, financial, or Martian) and some form of immediate gratification (Fishbach & Converse, 2010; Rachlin, 2000; Read, Loewenstein, & Rabin, 1999; Thaler & Shefrin, 1981). For example, whereas reading each page of a textbook requires self-regulation, doing so while resisting the tempting sounds of friends socializing in the next room requires self-control. And although you may tend to believe self-control is just a personal characteristic that varies across individuals, it is like a muscle, in that it becomes drained by being used but is also strengthened in the process.
Self-Control as an Innate Ability Mischel, Shoda, and Rodriguez (1989) identified enduring individual differences in self-control and found that the persistent capacity to postpone immediate gratification for the sake of future interests leads to greater cognitive and social competence over the course of a lifetime. In a famous series of lab experiments (first conducted by Mischel & Baker, 1975), preschoolers 3–5 years old were asked to choose between getting a smaller treat immediately (e.g., a single marshmallow) or waiting as long as 15 minutes to get a better one (e.g., two marshmallows). Some children were better-able to exercise self-control than others, resisting the temptation to take the available treat and waiting for the better one. Following up with these preschoolers ten years later, the researchers found that the children who were able to wait longer in the experiment for the second marshmallow (vs. those who more quickly ate the single marshmallow) performed better academically and socially, and had better psychological coping skills as adolescents. Self-Control as a Limited Resource Beyond personal characteristics, the ability to exercise self-control can fluctuate from one context to the next. In particular, previous exertion of self-control (e.g., choosing not to eat a donut) drains individuals of the limited physiological and psychological resources required to continue the pursuit of a goal (e.g., later in the day, again resisting a sugary treat). Ego-depletion refers to this exhaustion of resources from resisting a temptation. That is, just like bicycling for two hours would exhaust someone before a basketball game, exerting self-control reduces individuals’ capacity to exert more self-control in a consequent task—whether that task is in the same domain (e.g., resisting a donut and then continuing to eat healthy) or a different one (e.g., resisting a donut and then continuing to be financially responsible; Baumeister, Bratslavsky, Muraven, & Tice, 1998; Vohs & Heatherton, 2000). For example, in a study by Baumeister et al. (1998), research participants who forced themselves to eat radishes instead of tempting chocolates were subsequently less persistent (i.e., gave up sooner) at attempting an unsolvable puzzle task compared to the participants who had not exerted self-control to resist the chocolates. A Prerequisite to Self-Control: Identification Although factors such as resources and personal characteristics contribute to the successful exercise of self-control, identifying the self-control conflict inherent to a particular situation is an important—and often overlooked—prerequisite. For example, if you have a long-term goal of getting better sleep but don’t perceive that staying up late on a Friday night is inconsistent with this goal, you won’t have a self-control conflict. The successful pursuit of a goal in the face of temptation requires that individuals first identify they are having impulses that need to be controlled. However, individuals often fail to identify self-control conflicts because many everyday temptations seem to have very minimal negative consequences: one bowl of ice cream is unlikely to destroy a person’s health, but what about 200 bowls of ice cream over the course of a few months? People are more likely to identify a self-control conflict, and exercise self-control, when they think of a choice as part of a broader pattern of repeated behavior rather than as an isolated choice. 
For example, rather than seeing one bowl of ice cream as an isolated behavioral decision, the person should try to recognize that this “one bowl of ice cream” is actually part of a nightly routine. Indeed, when considering broader decision patterns, consistent temptations become more problematic for long-term interests (Rachlin, 2000; Read, Loewenstein, & Kalyanaraman, 1999). Moreover, conflict identification is more likely if people see their current choices as similar to their future choices. Self-Control Processes: Counteracting Temptation The protection of a valued goal involves several cognitive and behavioral strategies ultimately aimed at “counteracting” the pull of temptations and pushing oneself toward goal-related alternatives (Fishbach & Trope, 2007). One such cognitive process involves decreasing the value of temptations and increasing the value of goal-consistent objects or actions. For example, health-conscious individuals might tell themselves a sugary treat is less appealing than a piece of fruit in order to direct their choice toward the latter. Other behavioral strategies include a precommitment to pursue goals and forgo temptation (e.g., leaving one’s credit card at home before going to the mall), establishing rewards for goals and penalties for temptations, or physically approaching goals and distancing oneself from temptations (e.g., pushing away a dessert plate). These self-control processes can benefit individuals’ long-term interests, either consciously or without conscious awareness. Thus, at times, individuals automatically activate goal-related thoughts in response to temptation, and inhibit temptation-related thoughts in the presence of goal cues (Fishbach, Friedman, & Kruglanski, 2003). Conclusion People often make New Year’s resolutions with the idea that attaining one’s goals is simple: “I just have to choose to eat healthier, right?” However, after going through this module and learning a social-cognitive approach to the main theories and findings on goals and motivation, we see that even the most basic decisions take place within a much larger and more complex mental framework. From the principles of goal priming and how goals influence perceptions, feelings, and actions, to the factors of self-regulation and self-control, we have learned the phases, orientations, and fluctuations involved in the course of everyday goal pursuit. Looking back on prior goal failures, it may seem impossible to achieve some of our desires. But, through understanding our own mental representation of our goals (i.e., the values and expectancies behind them), we can help cognitively modify our behavior to achieve our dreams. If you do, who knows?—maybe you will be the first person to step on Mars. Discussion Questions 1. What is the difference between goal and motivation? 2. What is the difference between self-regulation and self-control? 3. How do positive and negative feelings inform goal pursuit in a cybernetic self-regulation process? 4. Describe the characteristics of the deliberative mindset that allows individuals to decide between different goals. How might these characteristics hinder the implemental phase of self-regulation? 5. You just read a module on “Goals and Motivation,” and you believe it is a sign of commitment to the goal of learning about social psychology. Define commitment in this context. How would interpreting your efforts as a sign of commitment influence your motivation to read more about social psychology? 
By contrast, how would interpreting your efforts as a sign of progress influence your motivation to read more? 6. Mel and Alex are friends. Mel has a prevention focus self-regulatory orientation, whereas Alex has a promotion focus. They are both training for a marathon and are looking for motivational posters to hang in their respective apartments. While shopping, they find a poster with the following Confucius quote: “The will to win, the desire to succeed, the urge to reach your full potential ... . These are the keys that will unlock the door to personal excellence.” Who is this poster more likely to help stay motivated for the marathon (Mel or Alex)? Why? Find or write a quote that might help the other friend. 7. Give an example in which an individual fails to exercise self-control. What are some factors that can cause such a self-control failure? Vocabulary Balancing between goals Shifting between a focal goal and other goals or temptations by putting less effort into the focal goal—usually with the intention of coming back to the focal goal at a later point in time. Commitment The sense that a goal is both valuable and attainable Conscious goal activation When a person is fully aware of contextual influences and resulting goal-directed behavior. Deliberative phase The first of the two basic stages of self-regulation in which individuals decide which of many potential goals to pursue at a given point in time. Ego-depletion The exhaustion of physiological and/or psychological resources following the completion of effortful self-control tasks, which subsequently leads to reduction in the capacity to exert more self-control. Extrinsic motivation Motivation stemming from the benefits associated with achieving a goal such as obtaining a monetary reward. Goal The cognitive representation of a desired state (outcome). Goal priming The activation of a goal following exposure to cues in the immediate environment related to the goal or its corresponding means (e.g., images, words, sounds). Highlighting a goal Prioritizing a focal goal over other goals or temptations by putting more effort into the focal goal. Implemental phase The second of the two basic stages of self-regulation in which individuals plan specific actions related to their selected goal. Intrinsic motivation Motivation stemming from the benefits associated with the process of pursuing a goal such as having a fulfilling experience. Means Activities or objects that contribute to goal attainment. Motivation The psychological driving force that enables action in the course of goal pursuit. Nonconscious goal activation When activation occurs outside a person’s awareness, such that the person is unaware of the reasons behind her goal-directed thoughts and behaviors. Prevention focus One of two self-regulatory orientations emphasizing safety, responsibility, and security needs, and viewing goals as “oughts.” This self-regulatory focus seeks to avoid losses (the presence of negatives) and approach non-losses (the absence of negatives). Progress The perception of reducing the discrepancy between one’s current state and one’s desired state in goal pursuit. Promotion focus One of two self-regulatory orientations emphasizing hopes, accomplishments, and advancement needs, and viewing goals as “ideals.” This self-regulatory focus seeks to approach gains (the presence of positives) and avoid non-gains (the absence of positives). 
Self-control The capacity to control impulses, emotions, desires, and actions in order to resist a temptation and adhere to a valued goal. Self-regulation The processes through which individuals alter their emotions, desires, and actions in the course of pursuing a goal.
textbooks/socialsci/Psychology/Introductory_Psychology/Map%3A_Discover_Psychology_-_A_Brief_Introductory_Text_(Noba)/10%3A_EMOTION_AND_MOTIVATION/10.03%3A_Motives_and_Goals.txt
• 11.1: Personality Traits Personality traits reflect people’s characteristic patterns of thoughts, feelings, and behaviors. Personality traits imply consistency and stability—someone who scores high on a specific trait like Extraversion is expected to be sociable in different situations and over time. Thus, trait psychology rests on the idea that people differ from one another in terms of where they stand on a set of basic trait dimensions that persist over time and across situations. • 11.2: Personality Assessment This module provides a basic overview to the assessment of personality. It discusses objective personality tests (based on both self-report and informant ratings), projective and implicit tests, and behavioral/performance measures. It describes the basic features of each method, as well as reviewing the strengths, weaknesses, and overall validity of each approach. • 11.3: Self and Identity Psychologists have approached the study of self in many different ways, but three central metaphors for the self repeatedly emerge. First, the self may be seen as a social actor, who enacts roles and displays traits by performing behaviors in the presence of others. Second, the self is a motivated agent, who acts upon inner desires and formulates goals, values, and plans to guide behavior in the future. Third, the self eventually becomes an autobiographical author. 11: PERSONALITY By Edward Diener and Richard E. Lucas University of Utah, University of Virginia, Michigan State University Personality traits reflect people’s characteristic patterns of thoughts, feelings, and behaviors. Personality traits imply consistency and stability—someone who scores high on a specific trait like Extraversion is expected to be sociable in different situations and over time. Thus, trait psychology rests on the idea that people differ from one another in terms of where they stand on a set of basic trait dimensions that persist over time and across situations. The most widely used system of traits is called the Five-Factor Model. This system includes five broad traits that can be remembered with the acronym OCEAN: Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. Each of the major traits from the Big Five can be divided into facets to give a more fine-grained analysis of someone's personality. In addition, some trait theorists argue that there are other traits that cannot be completely captured by the Five-Factor Model. Critics of the trait concept argue that people do not act consistently from one situation to the next and that people are very influenced by situational forces. Thus, one major debate in the field concerns the relative power of people’s traits versus the situations in which they find themselves as predictors of their behavior. learning objectives • List and describe the “Big Five” (“OCEAN”) personality traits that comprise the Five-Factor Model of personality. • Describe how the facet approach extends broad personality traits. • Explain a critique of the personality-trait concept. • Describe in what ways personality traits may be manifested in everyday behavior. • Describe each of the Big Five personality traits, and the low and high end of the dimension. • Give examples of each of the Big Five personality traits, including both a low and high example. • Describe how traits and social learning combine to predict your social activities. • Describe your theory of how personality traits get refined by social learning. 
Introduction When we observe people around us, one of the first things that strikes us is how different people are from one another. Some people are very talkative while others are very quiet. Some are active whereas others are couch potatoes. Some worry a lot, others almost never seem anxious. Each time we use one of these words, words like “talkative,” “quiet,” “active,” or “anxious,” to describe those around us, we are talking about a person’s personality: the characteristic ways that people differ from one another. Personality psychologists try to describe and understand these differences. Although there are many ways to think about the personalities that people have, Gordon Allport and other “personologists” claimed that we can best understand the differences between individuals by understanding their personality traits. Personality traits reflect basic dimensions on which people differ (Matthews, Deary, & Whiteman, 2003). According to trait psychologists, there are a limited number of these dimensions (dimensions like Extraversion, Conscientiousness, or Agreeableness), and each individual falls somewhere on each dimension, meaning that they could be low, medium, or high on any specific trait. An important feature of personality traits is that they reflect continuous distributions rather than distinct personality types. This means that when personality psychologists talk about Introverts and Extraverts, they are not really talking about two distinct types of people who are completely and qualitatively different from one another. Instead, they are talking about people who score relatively low or relatively high along a continuous distribution. In fact, when personality psychologists measure traits like Extraversion, they typically find that most people score somewhere in the middle, with smaller numbers showing more extreme levels. The figure below shows the distribution of Extraversion scores from a survey of thousands of people. As you can see, most people report being moderately, but not extremely, extraverted, with fewer people reporting very high or very low scores. There are three criteria that characterize personality traits: (1) consistency, (2) stability, and (3) individual differences. 1. To have a personality trait, individuals must be somewhat consistent across situations in their behaviors related to the trait. For example, if they are talkative at home, they tend also to be talkative at work. 2. Individuals with a trait are also somewhat stable over time in behaviors related to the trait. If they are talkative, for example, at age 30, they will also tend to be talkative at age 40. 3. People differ from one another on behaviors related to the trait. Using speech is not a personality trait and neither is walking on two feet—virtually all individuals do these activities, and there are almost no individual differences. But people differ on how frequently they talk and how active they are, and thus personality traits such as Talkativeness and Activity Level do exist. A challenge of the trait approach was to discover the major traits on which all people differ. Scientists for many decades generated hundreds of new traits, so that it was soon difficult to keep track and make sense of them.
For instance, one psychologist might focus on individual differences in “friendliness,” whereas another might focus on the highly related concept of “sociability.” Scientists began seeking ways to reduce the number of traits in some systematic way and to discover the basic traits that describe most of the differences between people. The way that Gordon Allport and his colleague Henry Odbert approached this was to search the dictionary for all descriptors of personality (Allport & Odbert, 1936). Their approach was guided by the lexical hypothesis, which states that all important personality characteristics should be reflected in the language that we use to describe other people. Therefore, if we want to understand the fundamental ways in which people differ from one another, we can turn to the words that people use to describe one another. So if we want to know what words people use to describe one another, where should we look? Allport and Odbert looked in the most obvious place—the dictionary. Specifically, they took all the personality descriptors that they could find in the dictionary (they started with almost 18,000 words but quickly reduced that list to a more manageable number) and then used statistical techniques to determine which words “went together.” In other words, if everyone who said that they were “friendly” also said that they were “sociable,” then this might mean that personality psychologists would only need a single trait to capture individual differences in these characteristics. Statistical techniques were used to determine whether a small number of dimensions might underlie all of the thousands of words we use to describe people. The Five-Factor Model of Personality Research that used the lexical approach showed that many of the personality descriptors found in the dictionary do indeed overlap. In other words, many of the words that we use to describe people are synonyms. Thus, if we want to know what a person is like, we do not necessarily need to ask how sociable they are, how friendly they are, and how gregarious they are. Instead, because sociable people tend to be friendly and gregarious, we can summarize this personality dimension with a single term. Someone who is sociable, friendly, and gregarious would typically be described as an “Extravert.” Once we know she is an extravert, we can assume that she is sociable, friendly, and gregarious. Statistical methods (specifically, a technique called factor analysis) helped to determine whether a small number of dimensions underlie the diversity of words that people like Allport and Odbert identified. The most widely accepted system to emerge from this approach was “The Big Five” or “Five-Factor Model” (Goldberg, 1990; McCrae & John, 1992; McCrae & Costa, 1987). The Big Five comprises five major traits shown in the Figure 3.2.2 below. A way to remember these five is with the acronym OCEAN (O is for Openness; C is for Conscientiousness; E is for Extraversion; A is for Agreeableness; N is for Neuroticism). Figure 3.2.3 provides descriptions of people who would score high and low on each of these traits. Scores on the Big Five traits are mostly independent. That means that a person’s standing on one trait tells very little about their standing on the other traits of the Big Five. For example, a person can be extremely high in Extraversion and be either high or low on Neuroticism. Similarly, a person can be low in Agreeableness and be either high or low in Conscientiousness. 
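To make the statistical idea in the preceding paragraphs more concrete, the short Python sketch below (an illustration added for this edition, not part of the original module; all adjective names and numbers are invented) simulates ratings on six trait descriptors that are generated from just two underlying dimensions, and then inspects the eigenvalues of the correlation matrix. This is the core logic behind factor analysis: when a few eigenvalues are much larger than the rest, a small number of dimensions accounts for most of the overlap among many descriptors.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people = 1000

# Two hypothetical latent dimensions (say, an Extraversion-like and an
# Agreeableness-like factor); values are invented for illustration.
latent = rng.normal(size=(n_people, 2))

# Six adjective ratings: the first three load mainly on factor 1,
# the last three mainly on factor 2, plus item-specific noise.
loadings = np.array([
    [0.8, 0.1],   # "talkative"
    [0.7, 0.0],   # "sociable"
    [0.9, 0.1],   # "gregarious"
    [0.1, 0.8],   # "kind"
    [0.0, 0.7],   # "cooperative"
    [0.1, 0.9],   # "warm"
])
ratings = latent @ loadings.T + 0.5 * rng.normal(size=(n_people, 6))

# Eigenvalues of the correlation matrix: in factor-analytic work, a few
# large eigenvalues suggest that a few dimensions underlie many items.
corr = np.corrcoef(ratings, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]
print(np.round(eigenvalues, 2))  # expect two eigenvalues well above the rest
```

Real lexical studies use dedicated factor-analysis routines, much larger item pools, and rotation methods, but the reasoning is the same: many overlapping descriptors collapse onto a handful of broad dimensions, and scores on those broad dimensions are largely independent of one another.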
Thus, in the Five-Factor Model, you need five scores to describe most of an individual's personality. In the Appendix to this module, we present a short scale to assess the Five-Factor Model of personality (Donnellan, Oswald, Baird, & Lucas, 2006). You can take this test to see where you stand in terms of your Big Five scores. John Johnson has also created a helpful website with personality scales that members of the general public can take: http://www.personal.psu.edu/j5j/IPIP/ipipneo120.htm After seeing your scores, you can judge for yourself whether you think such tests are valid.

Traits are important and interesting because they describe stable patterns of behavior that persist for long periods of time (Caspi, Roberts, & Shiner, 2005). Importantly, these stable patterns can have broad-ranging consequences for many areas of our lives (Roberts, Kuncel, Shiner, Caspi, & Goldberg, 2007). For instance, think about the factors that determine success in college. If you were asked to guess what factors predict good grades in college, you might guess something like intelligence. This guess would be correct, but we know much more about who is likely to do well. Specifically, personality researchers have also found that personality traits like Conscientiousness play an important role in college and beyond, probably because highly conscientious individuals study hard, get their work done on time, and are less distracted by nonessential activities that take time away from school work. In addition, highly conscientious people are often healthier than people low in conscientiousness because they are more likely to maintain healthy diets, to exercise, and to follow basic safety procedures like wearing seat belts or bicycle helmets. Over the long term, this consistent pattern of behaviors can add up to meaningful differences in health and longevity. Thus, personality traits are not just a useful way to describe people you know; they actually help psychologists predict how good a worker someone will be, how long he or she will live, and the types of jobs and activities the person will enjoy. As a result, there is growing interest in personality psychology among psychologists who work in applied settings, such as health psychology or organizational psychology.

Facets of Traits (Subtraits)

So how does it feel to be told that your entire personality can be summarized with scores on just five personality traits? Do you think these five scores capture the complexity of your own and others' characteristic patterns of thoughts, feelings, and behaviors? Most people would probably say no, pointing to some exception in their behavior that goes against the general pattern that others might see. For instance, you may know people who are warm and friendly and find it easy to talk with strangers at a party yet are terrified if they have to perform in front of others or speak to large groups of people. The fact that there are different ways of being extraverted or conscientious shows that there is value in considering lower-level units of personality that are more specific than the Big Five traits. These more specific, lower-level units of personality are often called facets. To give you a sense of what these narrow units are like, Figure 3.2.4 shows facets for each of the Big Five traits. It is important to note that although personality researchers generally agree about the value of the Big Five traits as a way to summarize one's personality, there is no widely accepted list of facets that should be studied.
The list seen here, based on work by researchers Paul Costa and Jeff McCrae, thus reflects just one possible list among many. It should, however, give you an idea of some of the facets making up each of the Five-Factor Model traits.

Facets can be useful because they provide more specific descriptions of what a person is like. For instance, if we take our friend who loves parties but hates public speaking, we might say that this person scores high on the "gregariousness" and "warmth" facets of extraversion, while scoring lower on facets such as "assertiveness" or "excitement-seeking." This precise profile of facet scores not only provides a better description, it might also allow us to better predict how this friend will do in a variety of different jobs (for example, jobs that require public speaking versus jobs that involve one-on-one interactions with customers; Paunonen & Ashton, 2001). Because different facets within a broad, global trait like extraversion tend to go together (those who are gregarious are often but not always assertive), the broad trait often provides a useful summary of what a person is like. But when we really want to know a person, facet scores add to our knowledge in important ways.

Other Traits Beyond the Five-Factor Model

Despite the popularity of the Five-Factor Model, it is certainly not the only model that exists. Some suggest that there are more than five major traits, or perhaps even fewer. For example, in one of the first comprehensive models to be proposed, Hans Eysenck suggested that Extraversion and Neuroticism are most important. Eysenck believed that by combining people's standing on these two major traits, we could account for many of the differences in personality that we see in people (Eysenck, 1981). So for instance, a neurotic introvert would be shy and nervous, while a stable introvert might avoid social situations and prefer solitary activities, but he may do so with a calm, steady attitude and little anxiety or emotion. Interestingly, Eysenck attempted to link these two major dimensions to underlying differences in people's biology. For instance, he suggested that introverts experienced too much sensory stimulation and arousal, which made them want to seek out quiet settings and less stimulating environments. More recently, Jeffrey Gray suggested that these two broad traits are related to fundamental reward and avoidance systems in the brain—extraverts might be motivated to seek reward and thus exhibit assertive, reward-seeking behavior, whereas people high in neuroticism might be motivated to avoid punishment and thus may experience anxiety as a result of their heightened awareness of the threats in the world around them (Gray, 1981; this model has since been updated, see Gray & McNaughton, 2000). These early theories have led to a burgeoning interest in identifying the physiological underpinnings of the individual differences that we observe.

Another revision of the Big Five is the HEXACO model of traits (Ashton & Lee, 2007). This model is similar to the Big Five, but it posits slightly different versions of some of the traits, and its proponents argue that one important class of individual differences was omitted from the Five-Factor Model. The HEXACO adds Honesty-Humility as a sixth dimension of personality. People high in this trait are sincere, fair, and modest, whereas those low in the trait are manipulative, narcissistic, and self-centered.
Thus, trait theorists agree that personality traits are important in understanding behavior, but there are still debates on the exact number and composition of the traits that are most important. There are other important traits that are not included in comprehensive models like the Big Five. Although the five factors capture much that is important about personality, researchers have suggested other traits that capture interesting aspects of our behavior. In Figure 5 below we present just a few, out of hundreds, of the other traits that have been studied by personologists. Not all of these traits are currently popular with scientists, yet each of them has experienced popularity in the past. Although the Five-Factor Model has been the target of more rigorous research than some of the traits above, these additional personality characteristics give a good idea of the wide range of behaviors and attitudes that traits can cover.

The Person-Situation Debate and Alternatives to the Trait Perspective

The ideas described in this module should probably seem familiar, if not obvious to you. When asked to think about what our friends, enemies, family members, and colleagues are like, some of the first things that come to mind are their personality characteristics. We might think about how warm and helpful our first teacher was, how irresponsible and careless our brother is, or how demanding and insulting our first boss was. Each of these descriptors reflects a personality trait, and most of us generally think that the descriptions that we use for individuals accurately reflect their "characteristic pattern of thoughts, feelings, and behaviors," or in other words, their personality.

But what if this idea were wrong? What if our belief in personality traits were an illusion and people are not consistent from one situation to the next? This was a possibility that shook the foundation of personality psychology in the late 1960s when Walter Mischel published a book called Personality and Assessment (1968). In this book, Mischel suggested that if one looks closely at people's behavior across many different situations, the consistency is really not that impressive. For example, children who cheat on tests at school may steadfastly follow all rules when playing games and may never tell a lie to their parents. In other words, he suggested, there may not be any general trait of honesty that links these seemingly related behaviors. Furthermore, Mischel suggested that observers may believe that broad personality traits like honesty exist, when in fact, this belief is an illusion. The debate that followed the publication of Mischel's book was called the person-situation debate because it pitted the power of personality against the power of situational factors as determinants of the behavior that people exhibit.

Because of the findings that Mischel emphasized, many psychologists focused on an alternative to the trait perspective. Instead of studying broad, context-free descriptions, like the trait terms we've described so far, Mischel thought that psychologists should focus on people's distinctive reactions to specific situations. For instance, although there may not be a broad and general trait of honesty, some children may be especially likely to cheat on a test when the risk of being caught is low and the rewards for cheating are high. Others might be motivated by the sense of risk involved in cheating and may do so even when the rewards are not very high.
Thus, the behavior itself results from the child's unique evaluation of the risks and rewards present at that moment, along with her evaluation of her abilities and values. Because of this, the same child might act very differently in different situations. Thus, Mischel thought that specific behaviors were driven by the interaction between very specific, psychologically meaningful features of the situation in which people found themselves, the person's unique way of perceiving that situation, and his or her abilities for dealing with it. Mischel and others argued that it was these social-cognitive processes that underlie people's reactions to specific situations that provide some consistency when situational features are the same. If so, then studying these context-specific reactions, and the social-cognitive processes behind them, might be more fruitful than cataloging and measuring broad, context-free traits like Extraversion or Neuroticism.

In the years after the publication of Mischel's (1968) book, debates raged about whether personality truly exists, and if so, how it should be studied. And, as is often the case, it turns out that a more moderate middle ground than the one the situationists proposed could be reached. It is certainly true, as Mischel pointed out, that a person's behavior in one specific situation is not a good guide to how that person will behave in a very different specific situation. Someone who is extremely talkative at one specific party may sometimes be reticent to speak up during class and may even act like a wallflower at a different party. But this does not mean that personality does not exist, nor does it mean that people's behavior is completely determined by situational factors. Indeed, research conducted after the person-situation debate shows that on average, the effect of the "situation" is about as large as that of personality traits. However, it is also true that if psychologists assess a broad range of behaviors across many different situations, there are general tendencies that emerge. Personality traits give an indication about how people will act on average, but frequently they are not so good at predicting how a person will act in a specific situation at a certain moment in time. Thus, to best capture broad traits, one must assess aggregate behaviors, averaged over time and across many different types of situations. Most modern personality researchers agree that there is a place for broad personality traits and for the narrower units such as those studied by Walter Mischel.

Appendix

The Mini-IPIP Scale (Donnellan, Oswald, Baird, & Lucas, 2006)

Instructions: Below are phrases describing people's behaviors. Please use the rating scale below to describe how accurately each statement describes you. Describe yourself as you generally are now, not as you wish to be in the future. Describe yourself as you honestly see yourself, in relation to other people you know of the same sex as you are, and roughly your same age. Please read each statement carefully, and put a number from 1 to 5 next to it to describe how accurately the statement describes you.

1 = Very inaccurate
2 = Moderately inaccurate
3 = Neither inaccurate nor accurate
4 = Moderately accurate
5 = Very accurate

1. _______ Am the life of the party (E)
2. _______ Sympathize with others' feelings (A)
3. _______ Get chores done right away (C)
4. _______ Have frequent mood swings (N)
5. _______ Have a vivid imagination (O)
6. _______ Don't talk a lot (E)
7. _______ Am not interested in other people's problems (A)
8. _______ Often forget to put things back in their proper place (C)
9. _______ Am relaxed most of the time (N)
10. ______ Am not interested in abstract ideas (O)
11. ______ Talk to a lot of different people at parties (E)
12. ______ Feel others' emotions (A)
13. ______ Like order (C)
14. ______ Get upset easily (N)
15. ______ Have difficulty understanding abstract ideas (O)
16. ______ Keep in the background (E)
17. ______ Am not really interested in others (A)
18. ______ Make a mess of things (C)
19. ______ Seldom feel blue (N)
20. ______ Do not have a good imagination (O)

Scoring: The first thing you must do is to reverse the items that are worded in the opposite direction. In order to do this, subtract the number you put for that item from 6. So if you put a 4, for instance, it will become a 2. Cross out the score you originally gave and write the reversed number in its place. Items to be reversed in this way: 6, 7, 8, 9, 10, 15, 16, 17, 18, 19, 20.

Next, you need to add up the scores for each of the five OCEAN scales (including the reversed numbers where relevant). Each OCEAN score will be the sum of four items. Place the sum next to each scale below. (A short code sketch of this scoring procedure appears after the Vocabulary list at the end of this module.)

__________ Openness: Add items 5, 10, 15, 20
__________ Conscientiousness: Add items 3, 8, 13, 18
__________ Extraversion: Add items 1, 6, 11, 16
__________ Agreeableness: Add items 2, 7, 12, 17
__________ Neuroticism: Add items 4, 9, 14, 19

Compare your scores to the norms below to see where you stand on each scale. If you are low on a trait, it means you are the opposite of the trait label. For example, low on Extraversion is Introversion, low on Openness is Conventional, and low on Agreeableness is Assertive.

19–20 Extremely High, 17–18 Very High, 14–16 High, 11–13 Neither high nor low; in the middle, 8–10 Low, 6–7 Very low, 4–5 Extremely low

Outside Resources

Video 1: Gabriela Cintron's – 5 Factors of Personality (OCEAN Song). This is a student-made video which cleverly describes, through song, common behavioral characteristics of the Big 5 personality traits. It was one of the winning entries in the 2016-17 Noba + Psi Chi Student Video Award.

Video 2: Michael Harris' – Personality Traits: The Big 5 and More. This is a student-made video that looks at characteristics of the OCEAN traits through a series of funny vignettes. It also covers the person-situation debate. It was one of the winning entries in the 2016-17 Noba + Psi Chi Student Video Award.

Video 3: David M. Cole's – Grouchy with a Chance of Stomping. This is a student-made video that makes a very important point about the relationship between personality traits and behavior using a handy weather analogy. It was one of the winning entries in the 2016-17 Noba + Psi Chi Student Video Award.

Web: International Personality Item Pool http://ipip.ori.org/
Web: John Johnson personality scales http://www.personal.psu.edu/j5j/IPIP/ipipneo120.htm
Web: Personality trait systems compared http://www.personalityresearch.org/bigfive/goldberg.html
Web: Sam Gosling website homepage.psy.utexas.edu/homep...samgosling.htm

Discussion Questions

1. Consider different combinations of the Big Five, such as O (Low), C (High), E (Low), A (High), and N (Low). What would this person be like? Do you know anyone who is like this? Can you select politicians, movie stars, and other famous people and rate them on the Big Five?
2. How do you think learning and inherited personality traits get combined in adult personality?
3.
Can you think of instances where people do not act consistently—where their personality traits are not good predictors of their behavior? 4. Has your personality changed over time, and in what ways? 5. Can you think of a personality trait not mentioned in this module that describes how people differ from one another? 6. When do extremes in personality traits become harmful, and when are they unusual but productive of good outcomes? Vocabulary Agreeableness A personality trait that reflects a person’s tendency to be compassionate, cooperative, warm, and caring to others. People low in agreeableness tend to be rude, hostile, and to pursue their own interests over those of others. Conscientiousness A personality trait that reflects a person’s tendency to be careful, organized, hardworking, and to follow rules. Continuous distributions Characteristics can go from low to high, with all different intermediate values possible. One does not simply have the trait or not have it, but can possess varying amounts of it. Extraversion A personality trait that reflects a person’s tendency to be sociable, outgoing, active, and assertive. Facets Broad personality traits can be broken down into narrower facets or aspects of the trait. For example, extraversion has several facets, such as sociability, dominance, risk-taking and so forth. Factor analysis A statistical technique for grouping similar things together according to how highly they are associated. Five-Factor Model (also called the Big Five) The Five-Factor Model is a widely accepted model of personality traits. Advocates of the model believe that much of the variability in people’s thoughts, feelings, and behaviors can be summarized with five broad traits. These five traits are Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. HEXACO model The HEXACO model is an alternative to the Five-Factor Model. The HEXACO model includes six traits, five of which are variants of the traits included in the Big Five (Emotionality [E], Extraversion [X], Agreeableness [A], Conscientiousness [C], and Openness [O]). The sixth factor, Honesty-Humility [H], is unique to this model. Independent Two characteristics or traits are separate from one another-- a person can be high on one and low on the other, or vice-versa. Some correlated traits are relatively independent in that although there is a tendency for a person high on one to also be high on the other, this is not always the case. Lexical hypothesis The lexical hypothesis is the idea that the most important differences between people will be encoded in the language that we use to describe people. Therefore, if we want to know which personality traits are most important, we can look to the language that people use to describe themselves and others. Neuroticism A personality trait that reflects the tendency to be interpersonally sensitive and the tendency to experience negative emotions like anxiety, fear, sadness, and anger. Openness to Experience A personality trait that reflects a person’s tendency to seek out and to appreciate new things, including thoughts, feelings, values, and experiences. Personality Enduring predispositions that characterize a person, such as styles of thought, feelings and behavior. Personality traits Enduring dispositions in behavior that show differences across individuals, and which tend to characterize the person across varying types of situations. 
Person-situation debate The person-situation debate is a historical debate about the relative power of personality traits as compared to situational influences on behavior. The situationist critique, which started the person-situation debate, suggested that people overestimate the extent to which personality traits are consistent across situations.
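The Appendix above describes how to reverse-key and sum the Mini-IPIP items by hand. The sketch below expresses the same procedure in a few lines of Python; it is an illustration added for this edition rather than part of the original scale materials. The reverse-keyed items, the subtract-from-6 rule, and the item-to-scale assignments come directly from the scoring instructions above, while the example responses are invented.

```python
# Minimal sketch of Mini-IPIP scoring (Donnellan et al., 2006), as described
# in the Appendix: responses range from 1 to 5, reverse-keyed items are
# subtracted from 6, and each OCEAN scale is the sum of four items.

REVERSED = {6, 7, 8, 9, 10, 15, 16, 17, 18, 19, 20}
SCALES = {
    "Openness":          [5, 10, 15, 20],
    "Conscientiousness": [3, 8, 13, 18],
    "Extraversion":      [1, 6, 11, 16],
    "Agreeableness":     [2, 7, 12, 17],
    "Neuroticism":       [4, 9, 14, 19],
}

def score_mini_ipip(responses):
    """responses: dict mapping item number (1-20) to a rating from 1 to 5."""
    keyed = {item: (6 - r if item in REVERSED else r)
             for item, r in responses.items()}
    return {scale: sum(keyed[item] for item in items)
            for scale, items in SCALES.items()}

# Invented example: a respondent who answered "3" to every item.
example = {item: 3 for item in range(1, 21)}
print(score_mini_ipip(example))  # every scale sums to 12: "neither high nor low"
```

The resulting sums can then be compared with the norms listed above (for example, a total of 14–16 on a scale counts as High).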
textbooks/socialsci/Psychology/Introductory_Psychology/Map%3A_Discover_Psychology_-_A_Brief_Introductory_Text_(Noba)/11%3A_PERSONALITY/11.01%3A_Personality_Traits.txt
By David Watson University of Notre Dame This module provides a basic overview to the assessment of personality. It discusses objective personality tests (based on both self-report and informant ratings), projective and implicit tests, and behavioral/performance measures. It describes the basic features of each method, as well as reviewing the strengths, weaknesses, and overall validity of each approach. learning objectives • Appreciate the diversity of methods that are used to measure personality characteristics. • Understand the logic, strengths and weaknesses of each approach. • Gain a better sense of the overall validity and range of applications of personality tests. Introduction Personality is the field within psychology that studies the thoughts, feelings, behaviors, goals, and interests of normal individuals. It therefore covers a very wide range of important psychological characteristics. Moreover, different theoretical models have generated very different strategies for measuring these characteristics. For example, humanistically oriented models argue that people have clear, well-defined goals and are actively striving to achieve them (McGregor, McAdams, & Little, 2006). It, therefore, makes sense to ask them directly about themselves and their goals. In contrast, psychodynamically oriented theories propose that people lack insight into their feelings and motives, such that their behavior is influenced by processes that operate outside of their awareness (e.g., McClelland, Koestner, & Weinberger, 1989; Meyer & Kurtz, 2006). Given that people are unaware of these processes, it does not make sense to ask directly about them. One, therefore, needs to adopt an entirely different approach to identify these nonconscious factors. Not surprisingly, researchers have adopted a wide range of approaches to measure important personality characteristics. The most widely used strategies will be summarized in the following sections. Objective Tests Definition Objective tests (Loevinger, 1957; Meyer & Kurtz, 2006) represent the most familiar and widely used approach to assessing personality. Objective tests involve administering a standard set of items, each of which is answered using a limited set of response options (e.g., true or false; strongly disagree, slightly disagree, slightly agree, strongly agree). Responses to these items then are scored in a standardized, predetermined way. For example, self-ratings on items assessing talkativeness, assertiveness, sociability, adventurousness, and energy can be summed up to create an overall score on the personality trait of extraversion. It must be emphasized that the term “objective” refers to the method that is used to score a person’s responses, rather than to the responses themselves. As noted by Meyer and Kurtz (2006, p. 233), “What is objective about such a procedure is that the psychologist administering the test does not need to rely on judgment to classify or interpret the test-taker’s response; the intended response is clearly indicated and scored according to a pre-existing key.” In fact, as we will see, a person’s test responses may be highly subjective and can be influenced by a number of different rating biases. Basic Types of Objective Tests Self-report measures Objective personality tests can be further subdivided into two basic types. The first type—which easily is the most widely used in modern personality research—asks people to describe themselves. This approach offers two key advantages. 
First, self-raters have access to an unparalleled wealth of information: After all, who knows more about you than you yourself? In particular, self-raters have direct access to their own thoughts, feelings, and motives, which may not be readily available to others (Oh, Wang, & Mount, 2011; Watson, Hubbard, & Weise, 2000). Second, asking people to describe themselves is the simplest, easiest, and most cost-effective approach to assessing personality. Countless studies, for instance, have involved administering self-report measures to college students, who are provided some relatively simple incentive (e.g., extra course credit) to participate. The items included in self-report measures may consist of single words (e.g., assertive), short phrases (e.g., am full of energy), or complete sentences (e.g., I like to spend time with others). Table 1 presents a sample self-report measure assessing the general traits comprising the influential five-factor model (FFM) of personality: neuroticism, extraversion, openness, agreeableness, and conscientiousness (John & Srivastava, 1999; McCrae, Costa, & Martin, 2005). The sentences shown in Table 1 are modified versions of items included in the International Personality Item Pool (IPIP) (Goldberg et al., 2006), which is a rich source of personality-related content in the public domain (for more information about IPIP, go to: http://ipip.ori.org/). Self-report personality tests show impressive validity in relation to a wide range of important outcomes. For example, self-ratings of conscientiousness are significant predictors of both overall academic performance (e.g., cumulative grade point average; Poropat, 2009) and job performance (Oh, Wang, and Mount, 2011). Roberts, Kuncel, Shiner, Caspi, and Goldberg (2007) reported that self-rated personality predicted occupational attainment, divorce, and mortality. Similarly, Friedman, Kern, and Reynolds (2010) showed that personality ratings collected early in life were related to happiness/well-being, physical health, and mortality risk assessed several decades later. Finally, self-reported personality has important and pervasive links to psychopathology. Most notably, self-ratings of neuroticism are associated with a wide array of clinical syndromes, including anxiety disorders, depressive disorders, substance use disorders, somatoform disorders, eating disorders, personality and conduct disorders, and schizophrenia/schizotypy (Kotov, Gamez, Schmidt, & Watson, 2010; Mineka, Watson, & Clark, 1998). At the same time, however, it is clear that this method is limited in a number of ways. First, raters may be motivated to present themselves in an overly favorable, socially desirable way (Paunonen & LeBel, 2012). This is a particular concern in “high-stakes testing,” that is, situations in which test scores are used to make important decisions about individuals (e.g., when applying for a job). Second, personality ratings reflect a self-enhancement bias (Vazire & Carlson, 2011); in other words, people are motivated to ignore (or at least downplay) some of their less desirable characteristics and to focus instead on their more positive attributes. Third, self-ratings are subject to the reference group effect (Heine, Buchtel, & Norenzayan, 2008); that is, we base our self-perceptions, in part, on how we compare to others in our sociocultural reference group. 
For instance, if you tend to work harder than most of your friends, you will see yourself as someone who is relatively conscientious, even if you are not particularly conscientious in any absolute sense.

Informant ratings

Another approach is to ask someone who knows a person well to describe his or her personality characteristics. In the case of children or adolescents, the informant is most likely to be a parent or teacher. In studies of older participants, informants may be friends, roommates, dating partners, spouses, children, or bosses (Oh et al., 2011; Vazire & Carlson, 2011; Watson et al., 2000). Generally speaking, informant ratings are similar in format to self-ratings. As was the case with self-report, items may consist of single words, short phrases, or complete sentences. Indeed, many popular instruments include parallel self- and informant-rating versions, and it often is relatively easy to convert a self-report measure so that it can be used to obtain informant ratings. Table 2 illustrates how the self-report instrument shown in Table 1 can be converted to obtain spouse-ratings (in this case, having a husband describe the personality characteristics of his wife). Informant ratings are particularly valuable when self-ratings are impossible to collect (e.g., when studying young children or cognitively impaired adults) or when their validity is suspect (e.g., as noted earlier, people may not be entirely honest in high-stakes testing situations). They also may be combined with self-ratings of the same characteristics to produce more reliable and valid measures of these attributes (McCrae, 1994). Informant ratings offer several advantages in comparison to other approaches to assessing personality. A well-acquainted informant presumably has had the opportunity to observe large samples of behavior in the person he or she is rating. Moreover, these judgments presumably are not subject to the types of defensiveness that potentially can distort self-ratings (Vazire & Carlson, 2011). Indeed, informants typically have strong incentives for being accurate in their judgments. As Funder and Dobroth (1987, p. 409) put it, "Evaluations of the people in our social environment are central to our decisions about who to befriend and avoid, trust and distrust, hire and fire, and so on." Informant personality ratings have demonstrated a level of validity in relation to important life outcomes that is comparable to that discussed earlier for self-ratings. Indeed, they outperform self-ratings in certain circumstances, particularly when the assessed traits are highly evaluative in nature (e.g., intelligence, charm, creativity; see Vazire & Carlson, 2011). For example, Oh et al. (2011) found that informant ratings were more strongly related to job performance than were self-ratings. Similarly, Oltmanns and Turkheimer (2009) summarized evidence indicating that informant ratings of Air Force cadets predicted early, involuntary discharge from the military better than self-ratings. Nevertheless, informant ratings also are subject to certain problems and limitations. One general issue is the level of relevant information that is available to the rater (Funder, 2012). For instance, even under the best of circumstances, informants lack full access to the thoughts, feelings, and motives of the person they are rating. This problem is magnified when the informant does not know the person particularly well and/or only sees him or her in a limited range of situations (Funder, 2012; Beer & Watson, 2010).
Informant ratings also are subject to some of the same response biases noted earlier for self-ratings. For instance, they are not immune to the reference group effect. Indeed, it is well-established that parent ratings often are subject to a sibling contrast effect, such that parents exaggerate the true magnitude of differences between their children (Pinto, Rijsdijk, Frazier-Wood, Asherson, & Kuntsi, 2012). Furthermore, in many studies, individuals are allowed to nominate (or even recruit) the informants who will rate them. Because of this, it most often is the case that informants (who, as noted earlier, may be friends, relatives, or romantic partners) like the people they are rating. This, in turn, means that informants may produce overly favorable personality ratings. Indeed, their ratings actually can be more favorable than the corresponding self-ratings (Watson & Humrichouse, 2006). This tendency for informants to produce unrealistically positive ratings has been termed the letter of recommendation effect (Leising, Erbs, & Fritz, 2010) and the honeymoon effect when applied to newlyweds (Watson & Humrichouse, 2006). Other Ways of Classifying Objective Tests Comprehensiveness In addition to the source of the scores, there are at least two other important dimensions on which personality tests differ. The first such dimension concerns the extent to which an instrument seeks to assess personality in a reasonably comprehensive manner. At one extreme, many widely used measures are designed to assess a single core attribute. Examples of these types of measures include the Toronto Alexithymia Scale (Bagby, Parker, & Taylor, 1994), the Rosenberg Self-Esteem Scale (Rosenberg, 1965), and the Multidimensional Experiential Avoidance Questionnaire (Gamez, Chmielewski, Kotov, Ruggero, & Watson, 2011). At the other extreme, a number of omnibus inventories contain a large number of specific scales and purport to measure personality in a reasonably comprehensive manner. These instruments include the California Psychological Inventory (Gough, 1987), the Revised HEXACO Personality Inventory (HEXACO-PI-R) (Lee & Ashton, 2006), the Multidimensional Personality Questionnaire (Patrick, Curtin, & Tellegen, 2002), the NEO Personality Inventory-3 (NEO-PI-3) (McCrae et al., 2005), the Personality Research Form (Jackson, 1984), and the Sixteen Personality Factor Questionnaire (Cattell, Eber, & Tatsuoka, 1980). Breadth of the target characteristics Second, personality characteristics can be classified at different levels of breadth or generality. For example, many models emphasize broad, “big” traits such as neuroticism and extraversion. These general dimensions can be divided up into several distinct yet empirically correlated component traits. For example, the broad dimension of extraversion contains such specific component traits as dominance (extraverts are assertive, persuasive, and exhibitionistic), sociability (extraverts seek out and enjoy the company of others), positive emotionality (extraverts are active, energetic, cheerful, and enthusiastic), and adventurousness (extraverts enjoy intense, exciting experiences). Some popular personality instruments are designed to assess only the broad, general traits. For example, similar to the sample instrument displayed in Table 1, the Big Five Inventory (John & Srivastava, 1999) contains brief scales assessing the broad traits of neuroticism, extraversion, openness, agreeableness, and conscientiousness. 
In contrast, many instruments—including several of the omnibus inventories mentioned earlier—were designed primarily to assess a large number of more specific characteristics. Finally, some inventories—including the HEXACO-PI-R and the NEO-PI-3—were explicitly designed to provide coverage of both general and specific trait characteristics. For instance, the NEO-PI-3 contains six specific facet scales (e.g., Gregariousness, Assertiveness, Positive Emotions, Excitement Seeking) that then can be combined to assess the broad trait of extraversion. Projective and Implicit Tests Projective Tests As noted earlier, some approaches to personality assessment are based on the belief that important thoughts, feelings, and motives operate outside of conscious awareness. Projective tests represent influential early examples of this approach. Projective tests originally were based on the projective hypothesis (Frank, 1939; Lilienfeld, Wood, & Garb, 2000): If a person is asked to describe or interpret ambiguous stimuli—that is, things that can be understood in a number of different ways—their responses will be influenced by nonconscious needs, feelings, and experiences (note, however, that the theoretical rationale underlying these measures has evolved over time) (see, for example, Spangler, 1992). Two prominent examples of projective tests are the Rorschach Inkblot Test (Rorschach, 1921) and the Thematic Apperception Test (TAT) (Morgan & Murray, 1935). The former asks respondents to interpret symmetrical blots of ink, whereas the latter asks them to generate stories about a series of pictures. For instance, one TAT picture depicts an elderly woman with her back turned to a young man; the latter looks downward with a somewhat perplexed expression. Another picture displays a man clutched from behind by three mysterious hands. What stories could you generate in response to these pictures? In comparison to objective tests, projective tests tend to be somewhat cumbersome and labor intensive to administer. The biggest challenge, however, has been to develop a reliable and valid scheme to score the extensive set of responses generated by each respondent. The most widely used Rorschach scoring scheme is the Comprehensive System developed by Exner (2003). The most influential TAT scoring system was developed by McClelland, Atkinson and colleagues between 1947 and 1953 (McClelland et al., 1989; see also Winter, 1998), which can be used to assess motives such as the need for achievement. The validity of the Rorschach has been a matter of considerable controversy (Lilienfeld et al., 2000; Mihura, Meyer, Dumitrascu, & Bombel, 2012; Society for Personality Assessment, 2005). Most reviews acknowledge that Rorschach scores do show some ability to predict important outcomes. Its critics, however, argue that it fails to provide important incremental information beyond other, more easily acquired information, such as that obtained from standard self-report measures (Lilienfeld et al., 2000). Validity evidence is more impressive for the TAT. In particular, reviews have concluded that TAT-based measures of the need for achievement (a) show significant validity to predict important criteria and (b) provide important information beyond that obtained from objective measures of this motive (McClelland et al., 1989; Spangler, 1992). Furthermore, given the relatively weak associations between objective and projective measures of motives, McClelland et al. 
(1989) argue that they tap somewhat different processes, with the latter assessing implicit motives (Schultheiss, 2008).

Implicit Tests

In recent years, researchers have begun to use implicit measures of personality (Back, Schmuckle, & Egloff, 2009; Vazire & Carlson, 2011). These tests are based on the assumption that people form automatic or implicit associations between certain concepts based on their previous experience and behavior. If two concepts (e.g., me and assertive) are strongly associated with each other, then they should be sorted together more quickly and easily than two concepts (e.g., me and shy) that are less strongly associated. Although validity evidence for these measures still is relatively sparse, the results to date are encouraging: Back et al. (2009), for example, showed that implicit measures of the FFM personality traits predicted behavior even after controlling for scores on objective measures of these same characteristics.

Behavioral and Performance Measures

A final approach is to infer important personality characteristics from direct samples of behavior. For example, Funder and Colvin (1988) brought opposite-sex pairs of participants into the laboratory and had them engage in a five-minute "getting acquainted" conversation; raters watched videotapes of these interactions and then scored the participants on various personality characteristics. Mehl, Gosling, and Pennebaker (2006) used the electronically activated recorder (EAR) to obtain samples of ambient sounds in participants' natural environments over a period of two days; EAR-based scores then were related to self- and observer-rated measures of personality. For instance, more frequent talking over this two-day period was significantly related to both self- and observer-ratings of extraversion. As a final example, Gosling, Ko, Mannarelli, and Morris (2002) sent observers into college students' bedrooms and then had them rate the students' personality characteristics on the Big Five traits. The averaged observer ratings correlated significantly with participants' self-ratings on all five traits. Follow-up analyses indicated that conscientious students had neater rooms, whereas those who were high in openness to experience had a wider variety of books and magazines. Behavioral measures offer several advantages over other approaches to assessing personality. First, because behavior is sampled directly, this approach is not subject to the types of response biases (e.g., self-enhancement bias, reference group effect) that can distort scores on objective tests. Second, as is illustrated by the Mehl et al. (2006) and Gosling et al. (2002) studies, this approach allows people to be studied in their daily lives and in their natural environments, thereby avoiding the artificiality of other methods (Mehl et al., 2006). Finally, this is the only approach that actually assesses what people do, as opposed to what they think or feel (see Baumeister, Vohs, & Funder, 2007). At the same time, however, this approach also has some disadvantages. This assessment strategy clearly is much more cumbersome and labor intensive than using objective tests, particularly self-report. Moreover, similar to projective tests, behavioral measures generate a rich set of data that then need to be scored in a reliable and valid way. Finally, even the most ambitious study only obtains relatively small samples of behavior that may provide a somewhat distorted view of a person's true characteristics.
For example, your behavior during a "getting acquainted" conversation on a single given day inevitably will reflect a number of transient influences (e.g., level of stress, quality of sleep the previous night) that are idiosyncratic to that day.

Conclusion

No single method of assessing personality is perfect or infallible; each of the major methods has both strengths and limitations. By using a diversity of approaches, researchers can overcome the limitations of any single method and develop a more complete and integrative view of personality.

Discussion Questions

1. Under what conditions would you expect self-ratings to be most similar to informant ratings? When would you expect these two sets of ratings to be most different from each other?
2. The findings of Gosling et al. (2002) demonstrate that we can obtain important clues about students' personalities from their dorm rooms. What other aspects of people's lives might give us important information about their personalities?
3. Suppose that you were planning to conduct a study examining the personality trait of honesty. What method or methods might you use to measure it?

Vocabulary

Big Five Five broad, general traits that are included in many prominent models of personality. The five traits are neuroticism (those high on this trait are prone to feeling sad, worried, anxious, and dissatisfied with themselves), extraversion (high scorers are friendly, assertive, outgoing, cheerful, and energetic), openness to experience (those high on this trait are tolerant, intellectually curious, imaginative, and artistic), agreeableness (high scorers are polite, considerate, cooperative, honest, and trusting), and conscientiousness (those high on this trait are responsible, cautious, organized, disciplined, and achievement-oriented). High-stakes testing Settings in which test scores are used to make important decisions about individuals. For example, test scores may be used to determine which individuals are admitted into a college or graduate school, or who should be hired for a job. Tests also are used in forensic settings to help determine whether a person is competent to stand trial or fits the legal definition of sanity. Honeymoon effect The tendency for newly married individuals to rate their spouses in an unrealistically positive manner. This represents a specific manifestation of the letter of recommendation effect when applied to ratings made by current romantic partners. Moreover, it illustrates the very important role played by relationship satisfaction in ratings made by romantic partners: As marital satisfaction declines (i.e., when the "honeymoon is over"), this effect disappears. Implicit motives These are goals that are important to a person, but that he/she cannot consciously express. Because the individual cannot verbalize these goals directly, they cannot be easily assessed via self-report. However, they can be measured using projective devices such as the Thematic Apperception Test (TAT). Letter of recommendation effect The general tendency for informants in personality studies to rate others in an unrealistically positive manner. This tendency is due to a pervasive bias in personality assessment: In the large majority of published studies, informants are individuals who like the person they are rating (e.g., they often are friends or family members) and, therefore, are motivated to depict them in a socially desirable way.
The term reflects a similar tendency for academic letters of recommendation to be overly positive and to present the referent in an unrealistically desirable manner. Projective hypothesis The theory that when people are confronted with ambiguous stimuli (that is, stimuli that can be interpreted in more than one way), their responses will be influenced by their unconscious thoughts, needs, wishes, and impulses. This, in turn, is based on the Freudian notion of projection, which is the idea that people attribute their own undesirable/unacceptable characteristics to other people or objects. Reference group effect The tendency of people to base their self-concept on comparisons with others. For example, if your friends tend to be very smart and successful, you may come to see yourself as less intelligent and successful than you actually are. Informants also are prone to these types of effects. For instance, the sibling contrast effect refers to the tendency of parents to exaggerate the true extent of differences between their children. Reliability The consistency of test scores across repeated assessments. For example, test-retest reliability examines the extent to which scores change over time. Self-enhancement bias The tendency for people to see and/or present themselves in an overly favorable way. This tendency can take two basic forms: defensiveness (when individuals actually believe they are better than they really are) and impression management (when people intentionally distort their responses to try to convince others that they are better than they really are). Informants also can show enhancement biases. The general form of this bias has been called the letter-of-recommendation effect, which is the tendency of informants who like the person they are rating (e.g., friends, relatives, romantic partners) to describe them in an overly favorable way. In the case of newlyweds, this tendency has been termed the honeymoon effect. Sibling contrast effect The tendency of parents to use their perceptions of all of their children as a frame of reference for rating the characteristics of each of them. For example, suppose that a mother has three children; two of these children are very sociable and outgoing, whereas the third is relatively average in sociability. Because of the operation of this effect, the mother will rate this third child as less sociable and outgoing than he/she actually is. More generally, this effect causes parents to exaggerate the true extent of differences between their children. This effect represents a specific manifestation of the more general reference group effect when applied to ratings made by parents. Validity Evidence related to the interpretation and use of test scores. A particularly important type of evidence is criterion validity, which involves the ability of a test to predict theoretically relevant outcomes. For example, a presumed measure of conscientiousness should be related to academic achievement (such as overall grade point average).
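To pull several of these ideas together (combining self- and informant-ratings, reliability, and criterion validity), the brief Python simulation below was added for illustration; the data-generating model and all numbers are invented and are not taken from any of the studies cited above. It mimics the logic described in this module: each rating source reflects the same underlying trait plus its own measurement error, so averaging the two sources yields a composite that tends to correlate more strongly with a criterion such as grade point average.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000  # simulated respondents

# Hypothetical data-generating model (invented for illustration):
# a "true" conscientiousness level drives both rating sources and,
# more weakly, a GPA-like criterion.
true_c = rng.normal(size=n)
self_rating = true_c + rng.normal(scale=1.0, size=n)       # self-report plus error
informant_rating = true_c + rng.normal(scale=1.0, size=n)  # informant plus error
gpa = 0.6 * true_c + rng.normal(scale=0.8, size=n)         # criterion measure

composite = (self_rating + informant_rating) / 2  # combined measure

def r(x, y):
    """Pearson correlation between two arrays."""
    return np.corrcoef(x, y)[0, 1]

print("self-rating vs GPA:      ", round(r(self_rating, gpa), 2))
print("informant rating vs GPA: ", round(r(informant_rating, gpa), 2))
print("composite vs GPA:        ", round(r(composite, gpa), 2))
# The composite typically shows the strongest criterion correlation,
# illustrating why combining rating sources can improve reliability
# and, in turn, criterion validity.
```

This is a toy model rather than a real study, but it illustrates the rationale, noted earlier in this module, for combining self- and informant-ratings when both are available.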
textbooks/socialsci/Psychology/Introductory_Psychology/Map%3A_Discover_Psychology_-_A_Brief_Introductory_Text_(Noba)/11%3A_PERSONALITY/11.02%3A_Personality_Assessment.txt
By Dan P. McAdams Northwestern University For human beings, the self is what happens when "I" encounters "Me." The central psychological question of selfhood, then, is this: How does a person apprehend and understand who he or she is? Over the past 100 years, psychologists have approached the study of self (and the related concept of identity) in many different ways, but three central metaphors for the self repeatedly emerge. First, the self may be seen as a social actor, who enacts roles and displays traits by performing behaviors in the presence of others. Second, the self is a motivated agent, who acts upon inner desires and formulates goals, values, and plans to guide behavior in the future. Third, the self eventually becomes an autobiographical author, too, who takes stock of life — past, present, and future — to create a story about who I am, how I came to be, and where my life may be going. This module briefly reviews central ideas and research findings on the self as an actor, an agent, and an author, with an emphasis on how these features of selfhood develop over the human life course. learning objectives • Explain the basic idea of reflexivity in human selfhood—how the "I" encounters and makes sense of itself (the "Me"). • Describe fundamental distinctions between three different perspectives on the self: the self as actor, agent, and author. • Describe how a sense of self as a social actor emerges around the age of 2 years and how it develops going forward. • Describe the development of the self's sense of motivated agency from the emergence of the child's theory of mind to the articulation of life goals and values in adolescence and beyond. • Define the term narrative identity, and explain what psychological and cultural functions narrative identity serves. Introduction In the Temple of Apollo at Delphi, the ancient Greeks inscribed the words: "Know thyself." For at least 2,500 years, and probably longer, human beings have pondered the meaning of the ancient aphorism. Over the past century, psychological scientists have joined the effort. They have formulated many theories and tested countless hypotheses that speak to the central question of human selfhood: How does a person know who he or she is? The ancient Greeks seemed to realize that the self is inherently reflexive—it reflects back on itself. In the disarmingly simple idea made famous by the great psychologist William James (1892/1963), the self is what happens when "I" reflects back upon "Me." The self is both the I and the Me—it is the knower, and it is what the knower knows when the knower reflects upon itself. When you look back at yourself, what do you see? When you look inside, what do you find? Moreover, when you try to change your self in some way, what is it that you are trying to change? The philosopher Charles Taylor (1989) describes the self as a reflexive project. In modern life, Taylor argues, we often try to manage, discipline, refine, improve, or develop the self. We work on our selves, as we might work on any other interesting project. But what exactly is it that we work on? Imagine for a moment that you have decided to improve yourself. You might, say, go on a diet to improve your appearance. Or you might decide to be nicer to your mother, in order to improve that important social role. Or maybe the problem is at work—you need to find a better job or go back to school to prepare for a different career. Perhaps you just need to work harder. Or get organized. Or recommit yourself to religion.
Or maybe the key is to begin thinking about your whole life story in a completely different way, in a way that you hope will bring you more happiness, fulfillment, peace, or excitement. Although there are many different ways you might reflect upon and try to improve the self, it turns out that many, if not most, of them fall roughly into three broad psychological categories (McAdams & Cox, 2010). The I may encounter the Me as (a) a social actor, (b) a motivated agent, or (c) an autobiographical author. The Social Actor Shakespeare tapped into a deep truth about human nature when he famously wrote, “All the world’s a stage, and all the men and women merely players.” He was wrong about the “merely,” however, for there is nothing more important for human adaptation than the manner in which we perform our roles as actors in the everyday theatre of social life. What Shakespeare may have sensed but could not have fully understood is that human beings evolved to live in social groups. Beginning with Darwin (1872/1965) and running through contemporary conceptions of human evolution, scientists have portrayed human nature as profoundly social (Wilson, 2012). For a few million years, Homo sapiens and their evolutionary forerunners have survived and flourished by virtue of their ability to live and work together in complex social groups, cooperating with each other to solve problems and overcome threats and competing with each other in the face of limited resources. As social animals, human beings strive to get along and get ahead in the presence of each other (Hogan, 1982). Evolution has prepared us to care deeply about social acceptance and social status, for those unfortunate individuals who do not get along well in social groups or who fail to attain a requisite status among their peers have typically been severely compromised when it comes to survival and reproduction. It makes consummate evolutionary sense, therefore, that the human "I" should apprehend the "Me" first and foremost as a social actor. For human beings, the sense of the self as a social actor begins to emerge around the age of 18 months. Numerous studies have shown that by the time they reach their second birthday most toddlers recognize themselves in mirrors and other reflecting devices (Lewis & Brooks-Gunn, 1979; Rochat, 2003). What they see is an embodied actor who moves through space and time. Many children begin to use words such as “me” and “mine” in the second year of life, suggesting that the I now has linguistic labels that can be applied reflexively to itself: I call myself “me.” Around the same time, children also begin to express social emotions such as embarrassment, shame, guilt, and pride (Tangney, Stuewig, & Mashek, 2007). These emotions tell the social actor how well he or she is performing in the group. When I do things that win the approval of others, I feel proud of myself. When I fail in the presence of others, I may feel embarrassment or shame. When I violate a social rule, I may experience guilt, which may motivate me to make amends. Many of the classic psychological theories of human selfhood point to the second year of life as a key developmental period. For example, Freud (1923/1961) and his followers in the psychoanalytic tradition traced the emergence of an autonomous ego back to the second year. Freud used the term “ego” (in German das Ich, which also translates into “the I”) to refer to an executive self in the personality. 
Erikson (1963) argued that experiences of trust and interpersonal attachment in the first year of life help to consolidate the autonomy of the ego in the second. Coming from a more sociological perspective, Mead (1934) suggested that the I comes to know the Me through reflection, which may begin quite literally with mirrors but later involves the reflected appraisals of others. I come to know who I am as a social actor, Mead argued, by noting how other people in my social world react to my performances. In the development of the self as a social actor, other people function like mirrors—they reflect who I am back to me.

Research has shown that when young children begin to make attributions about themselves, they start simple (Harter, 2006). At age 4, Jessica knows that she has dark hair, knows that she lives in a white house, and describes herself to others in terms of simple behavioral traits. She may say that she is “nice,” or “helpful,” or that she is “a good girl most of the time.” By the time she hits fifth grade (age 10), Jessica sees herself in more complex ways, attributing traits to the self such as “honest,” “moody,” “outgoing,” “shy,” “hard-working,” “smart,” “good at math but not gym class,” or “nice except when I am around my annoying brother.” By late childhood and early adolescence, the personality traits that people attribute to themselves, as well as those attributed to them by others, tend to correlate with each other in ways that conform to a well-established taxonomy of five broad trait domains, repeatedly derived in studies of adult personality and often called the Big Five: (1) extraversion, (2) neuroticism, (3) agreeableness, (4) conscientiousness, and (5) openness to experience (Roberts, Wood, & Caspi, 2008). By late childhood, moreover, self-conceptions will likely also include important social roles: “I am a good student,” “I am the oldest daughter,” or “I am a good friend to Sarah.”

Traits and roles, and variations on these notions, are the main currency of the self as social actor (McAdams & Cox, 2010). Trait terms capture perceived consistencies in social performance. They convey what I reflexively perceive to be my overall acting style, based in part on how I think others see me as an actor in many different social situations. Roles capture the quality, as I perceive it, of important structured relationships in my life. Taken together, traits and roles make up the main features of my social reputation, as I apprehend it in my own mind (Hogan, 1982).

If you have ever tried hard to change yourself, you may have taken aim at your social reputation, targeting your central traits or your social roles. Maybe you woke up one day and decided that you must become a more optimistic and emotionally upbeat person. Taking into consideration the reflected appraisals of others, you realized that even your friends seem to avoid you because you bring them down. In addition, it feels bad to feel so bad all the time: Wouldn’t it be better to feel good, to have more energy and hope? In the language of traits, you have decided to “work on” your “neuroticism.” Or maybe instead, your problem is the trait of “conscientiousness”: You are undisciplined and don’t work hard enough, so you resolve to make changes in that area. Self-improvement efforts such as these—aimed at changing one’s traits to become a more effective social actor—are sometimes successful, but they are very hard—kind of like dieting.
Research suggests that broad traits tend to be stubborn, resistant to change, even with the aid of psychotherapy. However, people often have more success working directly on their social roles. To become a more effective social actor, you may want to take aim at the important roles you play in life. What can I do to become a better son or daughter? How can I find new and meaningful roles to perform at work, or in my family, or among my friends, or in my church and community? By doing concrete things that enrich your performances in important social roles, you may begin to see yourself in a new light, and others will notice the change, too. Social actors hold the potential to transform their performances across the human life course. Each time you walk out on stage, you have a chance to start anew. The Motivated Agent Whether we are talking literally about the theatrical stage or more figuratively, as I do in this module, about the everyday social environment for human behavior, observers can never fully know what is in the actor’s head, no matter how closely they watch. We can see actors act, but we cannot know for sure what they want or what they value, unless they tell us straightaway. As a social actor, a person may come across as friendly and compassionate, or cynical and mean-spirited, but in neither case can we infer their motivations from their traits or their roles. What does the friendly person want? What is the cynical father trying to achieve? Many broad psychological theories of the self prioritize the motivational qualities of human behavior—the inner needs, wants, desires, goals, values, plans, programs, fears, and aversions that seem to give behavior its direction and purpose (Bandura, 1989; Deci & Ryan, 1991; Markus & Nurius, 1986). These kinds of theories explicitly conceive of the self as a motivated agent. To be an agent is to act with direction and purpose, to move forward into the future in pursuit of self-chosen and valued goals. In a sense, human beings are agents even as infants, for babies can surely act in goal-directed ways. By age 1 year, moreover, infants show a strong preference for observing and imitating the goal-directed, intentional behavior of others, rather than random behaviors (Woodward, 2009). Still, it is one thing to act in goal-directed ways; it is quite another for the I to know itself (the Me) as an intentional and purposeful force who moves forward in life in pursuit of self-chosen goals, values, and other desired end states. In order to do so, the person must first realize that people indeed have desires and goals in their minds and that these inner desires and goals motivate (initiate, energize, put into motion) their behavior. According to a strong line of research in developmental psychology, attaining this kind of understanding means acquiring a theory of mind (Wellman, 1993), which occurs for most children by the age of 4. Once a child understands that other people’s behavior is often motivated by inner desires and goals, it is a small step to apprehend the self in similar terms. Building on theory of mind and other cognitive and social developments, children begin to construct the self as a motivated agent in the elementary school years, layered over their still-developing sense of themselves as social actors. 
Theory and research on what developmental psychologists call the age 5-to-7 shift converge to suggest that children become more planful, intentional, and systematic in their pursuit of valued goals during this time (Sameroff & Haith, 1996). Schooling reinforces the shift in that teachers and curricula place increasing demands on students to work hard, adhere to schedules, focus on goals, and achieve success in particular, well-defined task domains. Their relative success in achieving their most cherished goals, furthermore, goes a long way in determining children’s self-esteem (Robins, Tracy, & Trzesniewski, 2008). Motivated agents feel good about themselves to the extent they believe that they are making good progress in achieving their goals and advancing their most important values. Goals and values become even more important for the self in adolescence, as teenagers begin to confront what Erikson (1963) famously termed the developmental challenge of identity. For adolescents and young adults, establishing a psychologically efficacious identity involves exploring different options with respect to life goals, values, vocations, and intimate relationships and eventually committing to a motivational and ideological agenda for adult life—an integrated and realistic sense of what I want and value in life and how I plan to achieve it (Kroger & Marcia, 2011). Committing oneself to an integrated suite of life goals and values is perhaps the greatest achievement for the self as motivated agent. Establishing an adult identity has implications, as well, for how a person moves through life as a social actor, entailing new role commitments and, perhaps, a changing understanding of one’s basic dispositional traits. According to Erikson, however, identity achievement is always provisional, for adults continue to work on their identities as they move into midlife and beyond, often relinquishing old goals in favor of new ones, investing themselves in new projects and making new plans, exploring new relationships, and shifting their priorities in response to changing life circumstances (Freund & Riediger, 2006; Josselson, 1996). There is a sense whereby any time you try to change yourself, you are assuming the role of a motivated agent. After all, to strive to change something is inherently what an agent does. However, what particular feature of selfhood you try to change may correspond to your self as actor, agent, or author, or some combination. When you try to change your traits or roles, you take aim at the social actor. By contrast, when you try to change your values or life goals, you are focusing on yourself as a motivated agent. Adolescence and young adulthood are periods in the human life course when many of us focus attention on our values and life goals. Perhaps you grew up as a traditional Catholic, but now in college you believe that the values inculcated in your childhood no longer function so well for you. You no longer believe in the central tenets of the Catholic Church, say, and are now working to replace your old values with new ones. Or maybe you still want to be Catholic, but you feel that your new take on faith requires a different kind of personal ideology. In the realm of the motivated agent, moreover, changing values can influence life goals. If your new value system prioritizes alleviating the suffering of others, you may decide to pursue a degree in social work, or to become a public interest lawyer, or to live a simpler life that prioritizes people over material wealth. 
A great deal of the identity work we do in adolescence and young adulthood is about values and goals, as we strive to articulate a personal vision or dream for what we hope to accomplish in the future.

The Autobiographical Author

Even as the “I” continues to develop a sense of the “Me” as both a social actor and a motivated agent, a third standpoint for selfhood gradually emerges in the adolescent and early-adult years. The third perspective is a response to Erikson’s (1963) challenge of identity. According to Erikson, developing an identity involves more than the exploration of and commitment to life goals and values (the self as motivated agent), and more than committing to new roles and re-evaluating old traits (the self as social actor). It also involves achieving a sense of temporal continuity in life—a reflexive understanding of how I have come to be the person I am becoming, or put differently, how my past self has developed into my present self, and how my present self will, in turn, develop into an envisioned future self. In his analysis of identity formation in the life of the 16th-century Protestant reformer Martin Luther, Erikson (1958) describes the culmination of a young adult’s search for identity in this way:

"To be adult means among other things to see one’s own life in continuous perspective, both in retrospect and prospect. By accepting some definition of who he is, usually on the basis of a function in an economy, a place in the sequence of generations, and a status in the structure of society, the adult is able to selectively reconstruct his past in such a way that, step for step, it seems to have planned him, or better, he seems to have planned it. In this sense, psychologically we do choose our parents, our family history, and the history of our kings, heroes, and gods. By making them our own, we maneuver ourselves into the inner position of proprietors, of creators." -- (Erikson, 1958, pp. 111–112; emphasis added).

In this rich passage, Erikson intimates that the development of a mature identity in young adulthood involves the I’s ability to construct a retrospective and prospective story about the Me (McAdams, 1985). In their efforts to find a meaningful identity for life, young men and women begin “to selectively reconstruct” their past, as Erikson wrote, and imagine their future to create an integrative life story, or what psychologists today often call a narrative identity. A narrative identity is an internalized and evolving story of the self that reconstructs the past and anticipates the future in such a way as to provide a person’s life with some degree of unity, meaning, and purpose over time (McAdams, 2008; McLean, Pasupathi, & Pals, 2007). The self typically becomes an autobiographical author in the early-adult years, a way of being that is layered over the motivated agent, which is layered over the social actor. In order to provide life with the sense of temporal continuity and deep meaning that Erikson believed identity should confer, we must author a personalized life story that integrates our understanding of who we once were, who we are today, and who we may become in the future. The story helps to explain, for the author and for the author’s world, why the social actor does what it does and why the motivated agent wants what it wants, and how the person as a whole has developed over time, from the past’s reconstructed beginning to the future’s imagined ending.
By the time they are 5 or 6 years of age, children can tell well-formed stories about personal events in their lives (Fivush, 2011). By the end of childhood, they usually have a good sense of what a typical biography contains and how it is sequenced, from birth to death (Thomsen & Berntsen, 2008). But it is not until adolescence, research shows, that human beings express advanced storytelling skills and what psychologists call autobiographical reasoning (Habermas & Bluck, 2000; McLean & Fournier, 2008). In autobiographical reasoning, a narrator is able to derive substantive conclusions about the self from analyzing his or her own personal experiences. Adolescents may develop the ability to string together events into causal chains and inductively derive general themes about life from a sequence of chapters and scenes (Habermas & de Silveira, 2008).

For example, a 16-year-old may be able to explain to herself and to others how childhood experiences in her family have shaped her vocation in life. Her parents were divorced when she was 5 years old, the teenager recalls, and this caused a great deal of stress in her family. Her mother often seemed anxious and depressed, but she (the story’s protagonist, then a little girl) often tried to cheer her mother up, and her efforts seemed to work. In more recent years, the teenager notes that her friends often come to her with their boyfriend problems. She seems to be very adept at giving advice about love and relationships, which stems, the teenager now believes, from her early experiences with her mother. Carrying this causal narrative forward, the teenager now thinks that she would like to be a marriage counselor when she grows up.

Unlike children, then, adolescents can tell a full and convincing story about an entire human life, or at least a prominent line of causation within a full life, explaining continuity and change in the story’s protagonist over time. Once the cognitive skills are in place, young people seek interpersonal opportunities to share and refine their developing sense of themselves as storytellers (the I) who tell stories about themselves (the Me). Adolescents and young adults author a narrative sense of the self by telling stories about their experiences to other people, monitoring the feedback they receive from the tellings, editing their stories in light of the feedback, gaining new experiences and telling stories about those, and on and on, as selves create stories that, in turn, create new selves (McLean et al., 2007). Gradually, in fits and starts, through conversation and introspection, the I develops a convincing and coherent narrative about the Me.

Contemporary research on the self as autobiographical author emphasizes the strong effect of culture on narrative identity (Hammack, 2008). Culture provides a menu of favored plot lines, themes, and character types for the construction of self-defining life stories. Autobiographical authors sample selectively from the cultural menu, appropriating ideas that seem to resonate well with their own life experiences. As such, life stories reflect the culture wherein they are situated as much as they reflect the authorial efforts of the autobiographical I. As one example of the tight link between culture and narrative identity, McAdams (2013) and others (e.g., Kleinfeld, 2012) have highlighted the prominence of redemptive narratives in American culture.
Epitomized in such iconic cultural ideals as the American dream, Horatio Alger stories, and narratives of Christian atonement, redemptive stories track the move from suffering to an enhanced status or state, while scripting the development of a chosen protagonist who journeys forth into a dangerous and unredeemed world (McAdams, 2013). Hollywood movies often celebrate redemptive quests. Americans are exposed to similar narrative messages in self-help books, 12-step programs, Sunday sermons, and in the rhetoric of political campaigns. Over the past two decades, the world’s most influential spokesperson for the power of redemption in human lives may be Oprah Winfrey, who tells her own story of overcoming childhood adversity while encouraging others, through her media outlets and philanthropy, to tell similar kinds of stories for their own lives (McAdams, 2013). Research has demonstrated that American adults who enjoy high levels of mental health and civic engagement tend to construct their lives as narratives of redemption, tracking the move from sin to salvation, rags to riches, oppression to liberation, or sickness/abuse to health/recovery (McAdams, Diamond, de St. Aubin, & Mansfield, 1997; McAdams, Reynolds, Lewis, Patten, & Bowman, 2001; Walker & Frimer, 2007). In American society, these kinds of stories are often seen to be inspirational. At the same time, McAdams (2011, 2013) has pointed to shortcomings and limitations in the redemptive stories that many Americans tell, which mirror cultural biases and stereotypes in American culture and heritage. McAdams has argued that redemptive stories support happiness and societal engagement for some Americans, but the same stories can encourage moral righteousness and a naïve expectation that suffering will always be redeemed. For better and sometimes for worse, Americans seem to love stories of personal redemption and often aim to assimilate their autobiographical memories and aspirations to a redemptive form. Nonetheless, these same stories may not work so well in cultures that espouse different values and narrative ideals (Hammack, 2008). It is important to remember that every culture offers its own storehouse of favored narrative forms. It is also essential to know that no single narrative form captures all that is good (or bad) about a culture. In American society, the redemptive narrative is but one of many different kinds of stories that people commonly employ to make sense of their lives. What is your story? What kind of a narrative are you working on? As you look to the past and imagine the future, what threads of continuity, change, and meaning do you discern? For many people, the most dramatic and fulfilling efforts to change the self happen when the I works hard, as an autobiographical author, to construct and, ultimately, to tell a new story about the Me. Storytelling may be the most powerful form of self-transformation that human beings have ever invented. Changing one’s life story is at the heart of many forms of psychotherapy and counseling, as well as religious conversions, vocational epiphanies, and other dramatic transformations of the self that people often celebrate as turning points in their lives (Adler, 2012). Storytelling is often at the heart of the little changes, too, minor edits in the self that we make as we move through daily life, as we live and experience life, and as we later tell it to ourselves and to others. 
Conclusion

For human beings, selves begin as social actors, but they eventually become motivated agents and autobiographical authors, too. The I first sees itself as an embodied actor in social space; with development, however, it comes to appreciate itself also as a forward-looking source of self-determined goals and values, and later yet, as a storyteller of personal experience, oriented to the reconstructed past and the imagined future. To “know thyself” in mature adulthood, then, is to do three things: (a) to apprehend and to perform with social approval my self-ascribed traits and roles, (b) to pursue with vigor and (ideally) success my most valued goals and plans, and (c) to construct a story about life that conveys, with vividness and cultural resonance, how I became the person I am becoming, integrating my past as I remember it, my present as I am experiencing it, and my future as I hope it to be.

Outside Resources

Web: The website for the Foley Center for the Study of Lives, at Northwestern University. The site contains research materials, interview protocols, and coding manuals for conducting studies of narrative identity.
http://www.sesp.northwestern.edu/foley/

Discussion Questions

1. Back in the 1950s, Erik Erikson argued that many adolescents and young adults experience a tumultuous identity crisis. Do you think this is true today? What might an identity crisis look and feel like? And, how might it be resolved?
2. Many people believe that they have a true self buried inside of them. From this perspective, the development of self is about discovering a psychological truth deep inside. Do you believe this to be true? How does thinking about the self as an actor, agent, and author bear on this question?
3. Psychological research shows that when people are placed in front of mirrors they often behave in a more moral and conscientious manner, even though they sometimes experience this procedure as unpleasant. From the standpoint of the self as a social actor, how might we explain this phenomenon?
4. By the time they reach adulthood, does everybody have a narrative identity? Do some people simply never develop a story for their life?
5. What happens when the three perspectives on self—the self as actor, agent, and author—conflict with each other? Is it necessary for people’s self-ascribed traits and roles to line up well with their goals and their stories?
6. William James wrote that the self includes all things that the person considers to be “mine.” If we take James literally, a person’s self might extend to include his or her material possessions, pets, and friends and family. Does this make sense?
7. To what extent can we control the self? Are some features of selfhood easier to control than others?
8. What cultural differences may be observed in the construction of the self? How might gender, ethnicity, and class impact the development of the self as actor, as agent, and as author?

Vocabulary

Autobiographical reasoning
The ability, typically developed in adolescence, to derive substantive conclusions about the self from analyzing one’s own personal experiences.

Big Five
A broad taxonomy of personality trait domains repeatedly derived from studies of trait ratings in adulthood and encompassing the categories of (1) extraversion vs. introversion, (2) neuroticism vs. emotional stability, (3) agreeableness vs. disagreeableness, (4) conscientiousness vs. nonconscientiousness, and (5) openness to experience vs. conventionality.
By late childhood and early adolescence, people’s self-attributions of personality traits, as well as the trait attributions made about them by others, show patterns of intercorrelations that conform to the five-factor structure obtained in studies of adults.

Ego
Sigmund Freud’s conception of an executive self in the personality. Akin to this module’s notion of “the I,” Freud imagined the ego as observing outside reality, engaging in rational thought, and coping with the competing demands of inner desires and moral standards.

Identity
Sometimes used synonymously with the term “self,” identity means many different things in psychological science and in other fields (e.g., sociology). In this module, I adopt Erik Erikson’s conception of identity as a developmental task for late adolescence and young adulthood. Forming an identity in adolescence and young adulthood involves exploring alternative roles, values, goals, and relationships and eventually committing to a realistic agenda for life that productively situates a person in the adult world of work and love. In addition, identity formation entails commitments to new social roles and reevaluation of old traits, and importantly, it brings with it a sense of temporal continuity in life, achieved through the construction of an integrative life story.

Narrative identity
An internalized and evolving story of the self designed to provide life with some measure of temporal unity and purpose. Beginning in late adolescence, people craft self-defining stories that reconstruct the past and imagine the future to explain how the person came to be the person that he or she is becoming.

Redemptive narratives
Life stories that affirm the transformation from suffering to an enhanced status or state. In American culture, redemptive life stories are highly prized as models for the good self, as in classic narratives of atonement, upward mobility, liberation, and recovery.

Reflexivity
The idea that the self reflects back upon itself; that the I (the knower, the subject) encounters the Me (the known, the object). Reflexivity is a fundamental property of human selfhood.

Self as autobiographical author
The sense of the self as a storyteller who reconstructs the past and imagines the future in order to articulate an integrative narrative that provides life with some measure of temporal continuity and purpose.

Self as motivated agent
The sense of the self as an intentional force that strives to achieve goals, plans, values, projects, and the like.

Self as social actor
The sense of the self as an embodied actor whose social performances may be construed in terms of more or less consistent self-ascribed traits and social roles.

Self-esteem
The extent to which a person feels that he or she is worthy and good. The success or failure that the motivated agent experiences in pursuit of valued goals is a strong determinant of self-esteem.

Social reputation
The traits and social roles that others attribute to an actor. Actors also have their own conceptions of what they imagine their respective social reputations indeed are in the eyes of others.

The Age 5-to-7 Shift
Cognitive and social changes that occur in the early elementary school years that result in the child’s developing a more purposeful, planful, and goal-directed approach to life, setting the stage for the emergence of the self as a motivated agent.

The “I”
The self as knower, the sense of the self as a subject who encounters (knows, works on) itself (the Me).
The “Me”
The self as known, the sense of the self as the object or target of the I’s knowledge and work.

Theory of mind
Emerging around the age of 4, the child’s understanding that other people have minds that contain desires and beliefs, and that these desires and beliefs motivate behavior.
• 12.1: Anxiety and Related Disorders
Anxiety disorders develop out of a blend of biological (genetic) and psychological factors that, when combined with stress, may lead to the development of ailments. Primary anxiety-related diagnoses include generalized anxiety disorder, panic disorder, specific phobia, social anxiety disorder (social phobia), posttraumatic stress disorder, and obsessive-compulsive disorder. In this module, we summarize the main clinical features of each of these disorders and discuss their similarities and differences with everyday experiences of anxiety.
• 12.2: Mood Disorders
Mood disorders are extended periods of depressed, euphoric, or irritable moods that in combination with other symptoms cause the person significant distress and interfere with his or her daily life, often resulting in social and occupational difficulties. In this module, we describe major mood disorders, including their symptom presentations, general prevalence rates, and how and why the rates of these disorders tend to vary by age, gender, and race.
• 12.3: Schizophrenia Spectrum Disorders
In this module, we summarize the primary clinical features of these disorders, describe the known cognitive and neurobiological changes associated with schizophrenia, describe potential risk factors and/or causes for the development of schizophrenia, and describe currently available treatments for schizophrenia.
• 12.4: Personality Disorders
The purpose of this module is to define what is meant by a personality disorder, identify the five domains of general personality (i.e., neuroticism, extraversion, openness, agreeableness, and conscientiousness) and identify the six personality disorders proposed for retention in the 5th edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5).

12: PSYCHOLOGICAL DISORDERS

By David H. Barlow and Kristen K. Ellard, Boston University, Massachusetts General Hospital, Harvard Medical School

Anxiety is a natural part of life and, at normal levels, helps us to function at our best. However, for people with anxiety disorders, anxiety is overwhelming and hard to control. Anxiety disorders develop out of a blend of biological (genetic) and psychological factors that, when combined with stress, may lead to the development of ailments. Primary anxiety-related diagnoses include generalized anxiety disorder, panic disorder, specific phobia, social anxiety disorder (social phobia), posttraumatic stress disorder, and obsessive-compulsive disorder. In this module, we summarize the main clinical features of each of these disorders and discuss their similarities and differences with everyday experiences of anxiety.

learning objectives
• Understand the relationship between anxiety and anxiety disorders.
• Identify key vulnerabilities for developing anxiety and related disorders.
• Identify main diagnostic features of specific anxiety-related disorders.
• Differentiate between disordered and non-disordered functioning.

Introduction

What is anxiety? Most of us feel some anxiety almost every day of our lives. Maybe you have an important test coming up for school. Or maybe there’s that big game next Saturday, or that first date with someone new you are hoping to impress. Anxiety can be defined as a negative mood state that is accompanied by bodily symptoms such as increased heart rate, muscle tension, a sense of unease, and apprehension about the future (APA, 2013; Barlow, 2002). Anxiety is what motivates us to plan for the future, and in this sense, anxiety is actually a good thing.
It’s that nagging feeling that motivates us to study for that test, practice harder for that game, or be at our very best on that date. But some people experience anxiety so intensely that it is no longer helpful or useful. They may become so overwhelmed and distracted by anxiety that they actually fail their test, fumble the ball, or spend the whole date fidgeting and avoiding eye contact. If anxiety begins to interfere in the person’s life in a significant way, it is considered a disorder.

Anxiety and closely related disorders emerge from “triple vulnerabilities,” a combination of biological, psychological, and specific factors that increase our risk for developing a disorder (Barlow, 2002; Suárez, Bennett, Goldstein, & Barlow, 2009). Biological vulnerabilities refer to specific genetic and neurobiological factors that might predispose someone to develop anxiety disorders. No single gene directly causes anxiety or panic, but our genes may make us more susceptible to anxiety and influence how our brains react to stress (Drabant et al., 2012; Gelernter & Stein, 2009; Smoller, Block, & Young, 2009). Psychological vulnerabilities refer to the influences that our early experiences have on how we view the world. If we were confronted with unpredictable stressors or traumatic experiences at younger ages, we may come to view the world as unpredictable and uncontrollable, even dangerous (Chorpita & Barlow, 1998; Gunnar & Fisher, 2006). Specific vulnerabilities refer to how our experiences lead us to focus and channel our anxiety (Suárez et al., 2009). If we learned that physical illness is dangerous, maybe through witnessing our family’s reaction whenever anyone got sick, we may focus our anxiety on physical sensations. If we learned that disapproval from others has negative, even dangerous consequences, such as being yelled at or severely punished for even the slightest offense, we might focus our anxiety on social evaluation. If we learn that the “other shoe might drop” at any moment, we may focus our anxiety on worries about the future. None of these vulnerabilities directly causes anxiety disorders on its own—instead, when all of these vulnerabilities are present, and we experience some triggering life stress, an anxiety disorder may be the result (Barlow, 2002; Suárez et al., 2009). In the next sections, we will briefly explore each of the major anxiety-based disorders found in the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) (APA, 2013).

Generalized Anxiety Disorder

Most of us worry some of the time, and this worry can actually be useful in helping us to plan for the future or make sure we remember to do something important. Most of us can set aside our worries when we need to focus on other things or stop worrying altogether whenever a problem has passed. However, for someone with generalized anxiety disorder (GAD), these worries become difficult, or even impossible, to turn off. They may find themselves worrying excessively about a number of different things, both minor and catastrophic. Their worries also come with a host of other symptoms such as muscle tension, fatigue, agitation or restlessness, irritability, difficulties with sleep (either falling asleep, staying asleep, or both), or difficulty concentrating. The DSM-5 criteria specify that at least six months of excessive anxiety and worry of this type must be ongoing, happening more days than not for a good proportion of the day, to receive a diagnosis of GAD.
About 5.7% of the population has met criteria for GAD at some point during their lifetime (Kessler, Berglund, et al., 2005), making it one of the most common anxiety disorders (see Table 1). What makes a person with GAD worry more than the average person? Research shows that individuals with GAD are more sensitive and vigilant toward possible threats than people who are not anxious (Aikins & Craske, 2001; Barlow, 2002; Bradley, Mogg, White, Groom, & de Bono, 1999). This may be related to early stressful experiences, which can lead to a view of the world as an unpredictable, uncontrollable, and even dangerous place. Some have suggested that people with GAD worry as a way to gain some control over these otherwise uncontrollable or unpredictable experiences and against uncertain outcomes (Dugas, Gagnon, Ladouceur, & Freeston, 1998). By repeatedly going through all of the possible “What if?” scenarios in their mind, the person might feel like they are less vulnerable to an unexpected outcome, giving them the sense that they have some control over the situation (Wells, 2002). Others have suggested people with GAD worry as a way to avoid feeling distressed (Borkovec, Alcaine, & Behar, 2004). For example, Borkovec and Hu (1990) found that those who worried when confronted with a stressful situation had less physiological arousal than those who didn’t worry, maybe because the worry “distracted” them in some way. The problem is, all of this “what if?”-ing doesn’t get the person any closer to a solution or an answer and, in fact, might take them away from important things they should be paying attention to in the moment, such as finishing an important project. Many of the catastrophic outcomes people with GAD worry about are very unlikely to happen, so when the catastrophic event doesn’t materialize, the act of worrying gets reinforced (Borkovec, Hazlett-Stevens, & Diaz, 1999). For example, if a mother spends all night worrying about whether her teenage daughter will get home safe from a night out and the daughter returns home without incident, the mother could easily attribute her daughter’s safe return to her successful “vigil.” What the mother hasn’t learned is that her daughter would have returned home just as safe if she had been focusing on the movie she was watching with her husband, rather than being preoccupied with worries. In this way, the cycle of worry is perpetuated, and, subsequently, people with GAD often miss out on many otherwise enjoyable events in their lives. Panic Disorder and Agoraphobia Have you ever gotten into a near-accident or been taken by surprise in some way? You may have felt a flood of physical sensations, such as a racing heart, shortness of breath, or tingling sensations. This alarm reaction is called the “fight or flight” response (Cannon, 1929) and is your body’s natural reaction to fear, preparing you to either fight or escape in response to threat or danger. It’s likely you weren’t too concerned with these sensations, because you knew what was causing them. But imagine if this alarm reaction came “out of the blue,” for no apparent reason, or in a situation in which you didn’t expect to be anxious or fearful. This is called an “unexpected” panic attack or a false alarm. Because there is no apparent reason or cue for the alarm reaction, you might react to the sensations with intense fear, maybe thinking you are having a heart attack, or going crazy, or even dying. 
You might begin to associate the physical sensations you felt during this attack with this fear and may start to go out of your way to avoid having those sensations again. Unexpected panic attacks such as these are at the heart of panic disorder (PD). However, to receive a diagnosis of PD, the person must not only have unexpected panic attacks but also must experience continued intense anxiety and avoidance related to the attack for at least one month, causing significant distress or interference in their lives. People with panic disorder tend to interpret even normal physical sensations in a catastrophic way, which triggers more anxiety and, ironically, more physical sensations, creating a vicious cycle of panic (Clark, 1986, 1996). The person may begin to avoid a number of situations or activities that produce the same physiological arousal that was present during the beginnings of a panic attack. For example, someone who experienced a racing heart during a panic attack might avoid exercise or caffeine. Someone who experienced choking sensations might avoid wearing high-necked sweaters or necklaces. Avoidance of these internal bodily or somatic cues for panic has been termed interoceptive avoidance (Barlow & Craske, 2007; Brown, White, & Barlow, 2005; Craske & Barlow, 2008; Shear et al., 1997).

The individual may also have experienced an overwhelming urge to escape during the unexpected panic attack. This can lead to a sense that certain places or situations—particularly situations where escape might not be possible—are not “safe.” These situations become external cues for panic. If the person begins to avoid several places or situations, or still endures these situations but does so with a significant amount of apprehension and anxiety, then the person also has agoraphobia (Barlow, 2002; Craske & Barlow, 1988; Craske & Barlow, 2008). Agoraphobia can cause significant disruption to a person’s life, causing them to go out of their way to avoid situations, such as adding hours to a commute to avoid taking the train or only ordering take-out to avoid having to enter a grocery store. In one tragic case seen by our clinic, a woman suffering from agoraphobia had not left her apartment for 20 years and had spent the past 10 years confined to one small area of her apartment, away from the view of the outside. In some cases, agoraphobia develops in the absence of panic attacks and therefore is a separate disorder in DSM-5. But agoraphobia often accompanies panic disorder. About 4.7% of the population has met criteria for PD or agoraphobia over their lifetime (Kessler, Chiu, Demler, Merikangas, & Walters, 2005; Kessler et al., 2006) (see Table 1). In all of these cases of panic disorder, what was once an adaptive natural alarm reaction now becomes a learned, and much feared, false alarm.

Specific Phobia

The majority of us might have certain things we fear, such as bees, or needles, or heights (Myers et al., 1984). But what if this fear is so consuming that you can’t go out on a summer’s day, or get vaccines needed to go on a special trip, or visit your doctor in her new office on the 26th floor? To meet criteria for a diagnosis of specific phobia, there must be an irrational fear of a specific object or situation that substantially interferes with the person’s ability to function. For example, a patient at our clinic turned down a prestigious and coveted artist residency because it required spending time near a wooded area, bound to have insects.
Another patient purposely left her house two hours early each morning so she could walk past her neighbor’s fenced yard before they let their dog out in the morning. The list of possible phobias is staggering, but four major subtypes of specific phobia are recognized: blood-injury-injection (BII) type, situational type (such as planes, elevators, or enclosed places), natural environment type for events one may encounter in nature (for example, heights, storms, and water), and animal type. A fifth category “other” includes phobias that do not fit any of the four major subtypes (for example, fears of choking, vomiting, or contracting an illness). Most phobic reactions cause a surge of activity in the sympathetic nervous system and increased heart rate and blood pressure, maybe even a panic attack. However, people with BII type phobias usually experience a marked drop in heart rate and blood pressure and may even faint. In this way, those with BII phobias almost always differ in their physiological reaction from people with other types of phobia (Barlow & Liebowitz, 1995; Craske, Antony, & Barlow, 2006; Hofmann, Alpers, & Pauli, 2009; Ost, 1992). BII phobia also runs in families more strongly than any phobic disorder we know (Antony & Barlow, 2002; Page & Martin, 1998). Specific phobia is one of the most common psychological disorders in the United States, with 12.5% of the population reporting a lifetime history of fears significant enough to be considered a “phobia” (Arrindell et al., 2003; Kessler, Berglund, et al., 2005) (see Table 1). Most people who suffer from specific phobia tend to have multiple phobias of several types (Hofmann, Lehman, & Barlow, 1997). Social Anxiety Disorder (Social Phobia) Many people consider themselves shy, and most people find social evaluation uncomfortable at best, or giving a speech somewhat mortifying. Yet, only a small proportion of the population fear these types of situations significantly enough to merit a diagnosis of social anxiety disorder (SAD) (APA, 2013). SAD is more than exaggerated shyness (Bogels et al., 2010; Schneier et al., 1996). To receive a diagnosis of SAD, the fear and anxiety associated with social situations must be so strong that the person avoids them entirely, or if avoidance is not possible, the person endures them with a great deal of distress. Further, the fear and avoidance of social situations must get in the way of the person’s daily life, or seriously limit their academic or occupational functioning. For example, a patient at our clinic compromised her perfect 4.0 grade point average because she could not complete a required oral presentation in one of her classes, causing her to fail the course. Fears of negative evaluation might make someone repeatedly turn down invitations to social events or avoid having conversations with people, leading to greater and greater isolation. The specific social situations that trigger anxiety and fear range from one-on-one interactions, such as starting or maintaining a conversation; to performance-based situations, such as giving a speech or performing on stage; to assertiveness, such as asking someone to change disruptive or undesirable behaviors. Fear of social evaluation might even extend to such things as using public restrooms, eating in a restaurant, filling out forms in a public place, or even reading on a train. Any type of situation that could potentially draw attention to the person can become a feared social situation. 
For example, one patient of ours went out of her way to avoid any situation in which she might have to use a public restroom for fear that someone would hear her in the bathroom stall and think she was disgusting. If the fear is limited to performance-based situations, such as public speaking, a diagnosis of SAD performance only is assigned. What causes someone to fear social situations to such a large extent? The person may have learned growing up that social evaluation in particular can be dangerous, creating a specific psychological vulnerability to develop social anxiety (Bruch & Heimberg, 1994; Lieb et al., 2000; Rapee & Melville, 1997). For example, the person’s caregivers may have harshly criticized and punished them for even the smallest mistake, maybe even punishing them physically. Or, someone might have experienced a social trauma that had lasting effects, such as being bullied or humiliated. Interestingly, one group of researchers found that 92% of adults in their study sample with social phobia experienced severe teasing and bullying in childhood, compared with only 35% to 50% among people with other anxiety disorders (McCabe, Antony, Summerfeldt, Liss, & Swinson, 2003). Someone else might react so strongly to the anxiety provoked by a social situation that they have an unexpected panic attack. This panic attack then becomes associated (conditioned response) with the social situation, causing the person to fear they will panic the next time they are in that situation. This is not considered PD, however, because the person’s fear is more focused on social evaluation than having unexpected panic attacks, and the fear of having an attack is limited to social situations. As many as 12.1% of the general population suffer from social phobia at some point in their lives (Kessler, Berglund, et al., 2005), making it one of the most common anxiety disorders, second only to specific phobia (see Table 1). Posttraumatic Stress Disorder With stories of war, natural disasters, and physical and sexual assault dominating the news, it is clear that trauma is a reality for many people. Many individual traumas that occur every day never even make the headlines, such as a car accident, domestic abuse, or the death of a loved one. Yet, while many people face traumatic events, not everyone who faces a trauma develops a disorder. Some, with the help of family and friends, are able to recover and continue on with their lives (Friedman, 2009). For some, however, the months and years following a trauma are filled with intrusive reminders of the event, a sense of intense fear that another traumatic event might occur, or a sense of isolation and emotional numbing. They may engage in a host of behaviors intended to protect themselves from being vulnerable or unsafe, such as constantly scanning their surroundings to look for signs of potential danger, never sitting with their back to the door, or never allowing themselves to be anywhere alone. This lasting reaction to trauma is what characterizes posttraumatic stress disorder (PTSD). A diagnosis of PTSD begins with the traumatic event itself. An individual must have been exposed to an event that involves actual or threatened death, serious injury, or sexual violence. 
To receive a diagnosis of PTSD, exposure to the event must include either directly experiencing the event, witnessing the event happening to someone else, learning that the event occurred to a close relative or friend, or having repeated or extreme exposure to details of the event (such as in the case of first responders). The person subsequently re-experiences the event through both intrusive memories and nightmares. Some memories may come back so vividly that the person feels like they are experiencing the event all over again, what is known as having a flashback. The individual may avoid anything that reminds them of the trauma, including conversations, places, or even specific types of people. They may feel emotionally numb or restricted in their ability to feel, which may interfere in their interpersonal relationships. The person may not be able to remember certain aspects of what happened during the event. They may feel a sense of a foreshortened future, that they will never marry, have a family, or live a long, full life. They may be jumpy or easily startled, hypervigilant to their surroundings, and quick to anger. The prevalence of PTSD among the population as a whole is relatively low, with 6.8% having experienced PTSD at some point in their life (Kessler, Berglund, et al., 2005) (see Table 1). Combat and sexual assault are the most common precipitating traumas (Kessler, Sonnega, Bromet, Hughes, & Nelson, 1995). Whereas PTSD was previously categorized as an Anxiety Disorder, in the most recent version of the DSM (DSM-5; APA, 2013) it has been reclassified under the more specific category of Trauma- and Stressor-Related Disorders. A person with PTSD is particularly sensitive to both internal and external cues that serve as reminders of their traumatic experience. For example, as we saw in PD, the physical sensations of arousal present during the initial trauma can become threatening in and of themselves, becoming a powerful reminder of the event. Someone might avoid watching intense or emotional movies in order to prevent the experience of emotional arousal. Avoidance of conversations, reminders, or even of the experience of emotion itself may also be an attempt to avoid triggering internal cues. External stimuli that were present during the trauma can also become strong triggers. For example, if a woman is raped by a man wearing a red t-shirt, she may develop a strong alarm reaction to the sight of red shirts, or perhaps even more indiscriminately to anything with a similar color red. A combat veteran who experienced a strong smell of gasoline during a roadside bomb attack may have an intense alarm reaction when pumping gas back at home. Individuals with a psychological vulnerability toward viewing the world as uncontrollable and unpredictable may particularly struggle with the possibility of additional future, unpredictable traumatic events, fueling their need for hypervigilance and avoidance, and perpetuating the symptoms of PTSD. Obsessive-Compulsive Disorder Have you ever had a strange thought pop into your mind, such as picturing the stranger next to you naked? Or maybe you walked past a crooked picture on the wall and couldn’t resist straightening it. Most people have occasional strange thoughts and may even engage in some “compulsive” behaviors, especially when they are stressed (Boyer & Liénard, 2008; Fullana et al., 2009). But for most people, these thoughts are nothing more than a passing oddity, and the behaviors are done (or not done) without a second thought. 
For someone with obsessive-compulsive disorder (OCD), however, these thoughts and compulsive behaviors don’t just come and go. Instead, strange or unusual thoughts are taken to mean something much more important and real, maybe even something dangerous or frightening. The urge to engage in some behavior, such as straightening a picture, can become so intense that it is nearly impossible not to carry it out, or causes significant anxiety if it can’t be carried out. Further, someone with OCD might become preoccupied with the possibility that the behavior wasn’t carried out to completion and feel compelled to repeat the behavior again and again, maybe several times before they are “satisfied.”

To receive a diagnosis of OCD, a person must experience obsessive thoughts and/or compulsions that seem irrational or nonsensical, but that keep coming into their mind. Some examples of obsessions include doubting thoughts (such as doubting a door is locked or an appliance is turned off), thoughts of contamination (such as thinking that touching almost anything might give you cancer), or aggressive thoughts or images that are unprovoked or nonsensical. Compulsions may be carried out in an attempt to neutralize some of these thoughts, providing temporary relief from the anxiety the obsessions cause, or they may be nonsensical in and of themselves. Either way, compulsions are distinct in that they must be repetitive or excessive, the person feels “driven” to carry out the behavior, and the person feels a great deal of distress if they can’t engage in the behavior. Some examples of compulsive behaviors are repetitive washing (often in response to contamination obsessions), repetitive checking (locks, door handles, appliances, often in response to doubting obsessions), ordering and arranging things to ensure symmetry, or doing things according to a specific ritual or sequence (such as getting dressed or ready for bed in a specific order). To meet diagnostic criteria for OCD, engaging in obsessions and/or compulsions must take up a significant amount of the person’s time, at least an hour per day, and must cause significant distress or impairment in functioning. About 1.6% of the population has met criteria for OCD over the course of a lifetime (Kessler, Berglund, et al., 2005) (see Table 1). Whereas OCD was previously categorized as an Anxiety Disorder, in the most recent version of the DSM (DSM-5; APA, 2013) it has been reclassified under the more specific category of Obsessive-Compulsive and Related Disorders.

People with OCD often confuse having an intrusive thought with their potential for carrying out the thought. Whereas most people, when they have a strange or frightening thought, are able to let it go, a person with OCD may become “stuck” on the thought and be intensely afraid that they might somehow lose control and act on it. Or worse, they believe that having the thought is just as bad as doing it. This is called thought-action fusion. For example, one patient of ours was plagued by thoughts that she would cause harm to her young daughter. She experienced intrusive images of throwing hot coffee in her daughter’s face or pushing her face underwater when she was giving her a bath. These images were so terrifying to the patient that she would no longer allow herself any physical contact with her daughter and would leave her daughter in the care of a babysitter if her husband or another family member was not available to “supervise” her.
In reality, the last thing she wanted to do was harm her daughter, and she had no intention or desire to act on the aggressive thoughts and images (nor does anybody with OCD act on these thoughts). But these thoughts were so horrifying to her that she made every attempt to prevent herself from the potential of carrying them out, even if it meant not being able to hold, cradle, or cuddle her daughter. These are the types of struggles people with OCD face every day.

Many successful treatments for anxiety and related disorders have been developed over the years. Medications (anti-anxiety drugs and antidepressants) have been found to be beneficial for disorders other than specific phobia, but relapse rates are high once medications are stopped (Heimberg et al., 1998; Hollon et al., 2005), and some classes of medications (minor tranquilizers or benzodiazepines) can be habit forming. Exposure-based cognitive behavioral therapies (CBT) are effective psychosocial treatments for anxiety disorders, and many show greater treatment effects than medication in the long term (Barlow, Allen, & Basden, 2007; Barlow, Gorman, Shear, & Woods, 2000). In CBT, patients are taught skills to help identify and change problematic thought processes, beliefs, and behaviors that tend to worsen symptoms of anxiety, and practice applying these skills to real-life situations through exposure exercises. Patients learn how the automatic “appraisals” or thoughts they have about a situation affect both how they feel and how they behave. Similarly, patients learn how engaging in certain behaviors, such as avoiding situations, tends to strengthen the belief that the situation is something to be feared. A key aspect of CBT is exposure exercises, in which the patient learns to gradually approach situations they find fearful or distressing, in order to challenge their beliefs and learn new, less fearful associations about these situations. Typically 50% to 80% of patients receiving drugs or CBT will show a good initial response, with the effect of CBT more durable. Newer developments in the treatment of anxiety disorders are focusing on novel interventions, such as the use of certain medications to enhance learning during CBT (Otto et al., 2010), and transdiagnostic treatments targeting core, underlying vulnerabilities (Barlow et al., 2011). As we advance our understanding of anxiety and related disorders, so too will our treatments advance, with the hopes that for the many people suffering from these disorders, anxiety can once again become something useful and adaptive, rather than something debilitating.

[Table 1, the lifetime prevalence table cited throughout this module, is not reproduced here. Its sources: 1. Kessler et al. (2005). 2. Kessler, Chiu, Demler, Merikangas, & Walters (2005). 3. Kessler, Sonnega, Bromet, Hughes, & Nelson (1995). 4. Craske et al. (1996).]

Outside Resources

American Psychological Association (APA)
http://www.apa.org/topics/anxiety/index.aspx
National Institutes of Mental Health (NIMH)
http://www.nimh.nih.gov/health/topics/anxiety-disorders/index.shtml
Web: Anxiety and Depression Association of America (ADAA)
http://www.adaa.org/
Web: Center for Anxiety and Related Disorders (CARD)
http://www.bu.edu/card/

Discussion Questions

1. Name and describe the three main vulnerabilities contributing to the development of anxiety and related disorders. Do you think these disorders could develop out of biological factors alone? Could these disorders develop out of learning experiences alone?
2. Many of the symptoms in anxiety and related disorders overlap with experiences most people have.
What features differentiate someone with a disorder versus someone without? 3. What is an “alarm reaction?” If someone experiences an alarm reaction when they are about to give a speech in front of a room full of people, would you consider this a “true alarm” or a “false alarm?” 4. Many people are shy. What differentiates someone who is shy from someone with social anxiety disorder? Do you think shyness should be considered an anxiety disorder? 5. Is anxiety ever helpful? What about worry? Vocabulary Agoraphobia A sort of anxiety disorder distinguished by feelings that a place is uncomfortable or may be unsafe because it is significantly open or crowded. Anxiety A mood state characterized by negative affect, muscle tension, and physical arousal in which a person apprehensively anticipates future danger or misfortune. Biological vulnerability A specific genetic and neurobiological factor that might predispose someone to develop anxiety disorders. Conditioned response A learned reaction following classical conditioning, or the process by which an event that automatically elicits a response is repeatedly paired with another neutral stimulus (conditioned stimulus), resulting in the ability of the neutral stimulus to elicit the same response on its own. External cues Stimuli in the outside world that serve as triggers for anxiety or as reminders of past traumatic events. Fight or flight response A biological reaction to alarming stressors that prepares the body to resist or escape a threat. Flashback Sudden, intense re-experiencing of a previous event, usually trauma-related. Generalized anxiety disorder (GAD) Excessive worry about everyday things that is at a level that is out of proportion to the specific causes of worry. Internal bodily or somatic cues Physical sensations that serve as triggers for anxiety or as reminders of past traumatic events. Interoceptive avoidance Avoidance of situations or activities that produce sensations of physical arousal similar to those occurring during a panic attack or intense fear response. Obsessive-compulsive disorder (OCD) A disorder characterized by the desire to engage in certain behaviors excessively or compulsively in hopes of reducing anxiety. Behaviors include things such as cleaning, repeatedly opening and closing doors, hoarding, and obsessing over certain thoughts. Panic disorder (PD) A condition marked by regular strong panic attacks, and which may include significant levels of worry about future attacks. Posttraumatic stress disorder (PTSD) A sense of intense fear, triggered by memories of a past traumatic event, that another traumatic event might occur. PTSD may include feelings of isolation and emotional numbing. Psychological vulnerabilities Influences that our early experiences have on how we view the world. Reinforced response Following the process of operant conditioning, the strengthening of a response following either the delivery of a desired consequence (positive reinforcement) or escape from an aversive consequence. SAD performance only Social anxiety disorder which is limited to certain situations that the sufferer perceives as requiring some type of performance. Social anxiety disorder (SAD) A condition marked by acute fear of social situations which lead to worry and diminished day to day functioning. Specific vulnerabilities How our experiences lead us to focus and channel our anxiety. 
Thought-action fusion The tendency to overestimate the relationship between a thought and an action, such that one mistakenly believes a “bad” thought is the equivalent of a “bad” action.
textbooks/socialsci/Psychology/Introductory_Psychology/Map%3A_Discover_Psychology_-_A_Brief_Introductory_Text_(Noba)/12%3A_PSYCHOLOGICAL_DISORDERS/12.01%3A_Anxiety_and_Related_Disorders.txt
By Anda Gershon and Renee Thompson Stanford University, Washington University in St. Louis Everyone feels down or euphoric from time to time, but this is different from having a mood disorder such as major depressive disorder or bipolar disorder. Mood disorders are extended periods of depressed, euphoric, or irritable moods that in combination with other symptoms cause the person significant distress and interfere with his or her daily life, often resulting in social and occupational difficulties. In this module, we describe major mood disorders, including their symptom presentations, general prevalence rates, and how and why the rates of these disorders tend to vary by age, gender, and race. In addition, biological and environmental risk factors that have been implicated in the development and course of mood disorders, such as heritability and stressful life events, are reviewed. Finally, we provide an overview of treatments for mood disorders, covering treatments with demonstrated effectiveness, as well as new treatment options showing promise. learning objectives • Describe the diagnostic criteria for mood disorders. • Understand age, gender, and ethnic differences in prevalence rates of mood disorders. • Identify common risk factors for mood disorders. • Know effective treatments of mood disorders. The actress Brooke Shields published a memoir titled Down Came the Rain: My Journey through Postpartum Depression in which she described her struggles with depression following the birth of her daughter. Despite the fact that about one in 20 women experience depression after the birth of a baby (American Psychiatric Association [APA], 2013), postpartum depression—recently renamed “perinatal depression”—continues to be veiled by stigma, owing in part to a widely held expectation that motherhood should be a time of great joy. In an opinion piece in the New York Times, Shields revealed that entering motherhood was a profoundly overwhelming experience for her. She vividly describes experiencing a sense of “doom” and “dread” in response to her newborn baby. Because motherhood is conventionally thought of as a joyous event and not associated with sadness and hopelessness, responding to a newborn baby in this way can be shocking to the new mother as well as those close to her. It may also involve a great deal of shame for the mother, making her reluctant to divulge her experience to others, including her doctors and family. Feelings of shame are not unique to perinatal depression. Stigma applies to other types of depressive and bipolar disorders and contributes to people not always receiving the necessary support and treatment for these disorders. In fact, the World Health Organization ranks both major depressive disorder (MDD) and bipolar disorder (BD) among the top 10 leading causes of disability worldwide. Further, MDD and BD carry a high risk of suicide. It is estimated that 25%–50% of people diagnosed with BD will attempt suicide at least once in their lifetimes (Goodwin & Jamison, 2007). What Are Mood Disorders? Mood Episodes Everyone experiences brief periods of sadness, irritability, or euphoria. This is different than having a mood disorder, such as MDD or BD, which are characterized by a constellation of symptoms that causes people significant distress or impairs their everyday functioning. 
Major Depressive Episode A major depressive episode (MDE) refers to symptoms that co-occur for at least two weeks and cause significant distress or impairment in functioning, such as interfering with work, school, or relationships. Core symptoms include feeling down or depressed or experiencing anhedonia—loss of interest or pleasure in things that one typically enjoys. According to the fifth edition of the Diagnostic and Statistical Manual (DSM-5; APA, 2013), the criteria for an MDE require five or more of the following nine symptoms, including one or both of the first two symptoms, for most of the day, nearly every day: 1. depressed mood 2. diminished interest or pleasure in almost all activities 3. significant weight loss or gain or an increase or decrease in appetite 4. insomnia or hypersomnia 5. psychomotor agitation or retardation 6. fatigue or loss of energy 7. feeling worthless or excessive or inappropriate guilt 8. diminished ability to concentrate or indecisiveness 9. recurrent thoughts of death, suicidal ideation, or a suicide attempt These symptoms cannot be caused by physiological effects of a substance or a general medical condition (e.g., hypothyroidism). Manic or Hypomanic Episode The core criterion for a manic or hypomanic episode is a distinct period of abnormally and persistently euphoric, expansive, or irritable mood and persistently increased goal-directed activity or energy. The mood disturbance must be present for one week or longer in mania (unless hospitalization is required) or four days or longer in hypomania. Concurrently, at least three of the following symptoms must be present in the context of euphoric mood (or at least four in the context of irritable mood): 1. inflated self-esteem or grandiosity 2. increased goal-directed activity or psychomotor agitation 3. reduced need for sleep 4. racing thoughts or flight of ideas 5. distractibility 6. increased talkativeness 7. excessive involvement in risky behaviors Manic episodes are distinguished from hypomanic episodes by their duration and associated impairment; whereas manic episodes must last one week and are defined by a significant impairment in functioning, hypomanic episodes are shorter and not necessarily accompanied by impairment in functioning. Mood Disorders Unipolar Mood Disorders Two major types of unipolar disorders described by the DSM-5 (APA, 2013) are major depressive disorder and persistent depressive disorder (PDD; dysthymia). MDD is defined by one or more MDEs, but no history of manic or hypomanic episodes. Criteria for PDD are feeling depressed most of the day for more days than not, for at least two years. At least two of the following symptoms are also required to meet criteria for PDD: 1. poor appetite or overeating 2. insomnia or hypersomnia 3. low energy or fatigue 4. low self-esteem 5. poor concentration or difficulty making decisions 6. feelings of hopelessness Like MDD, these symptoms need to cause significant distress or impairment and cannot be due to the effects of a substance or a general medical condition. To meet criteria for PDD, a person cannot be without symptoms for more than two months at a time. PDD has overlapping symptoms with MDD. If someone meets criteria for an MDE during a PDD episode, the person will receive diagnoses of PDD and MDD. Bipolar Mood Disorders Three major types of BDs are described by the DSM-5 (APA, 2013). Bipolar I Disorder (BD I), which was previously known as manic-depression, is characterized by a single (or recurrent) manic episode. 
A depressive episode is not necessary for the diagnosis of BD I, but one is commonly present. Bipolar II Disorder is characterized by single (or recurrent) hypomanic episodes and depressive episodes. Another type of BD is cyclothymic disorder, characterized by numerous and alternating periods of hypomania and depression, lasting at least two years. To qualify for cyclothymic disorder, the periods of depression cannot meet full diagnostic criteria for an MDE; the person must experience symptoms at least half the time with no more than two consecutive symptom-free months; and the symptoms must cause significant distress or impairment. It is important to note that the DSM-5 was published in 2013, and findings based on the updated manual will be forthcoming. Consequently, the research presented below was largely based on a similar, but not identical, conceptualization of mood disorders drawn from the DSM-IV (APA, 2000). How Common Are Mood Disorders? Who Develops Mood Disorders? Depressive Disorders In a nationally representative sample, the lifetime prevalence rate for MDD is 16.6% (Kessler, Berglund, Demler, Jin, Merikangas, & Walters, 2005). This means that roughly one in six Americans will meet the criteria for MDD during their lifetime. The 12-month prevalence—the proportion of people who meet criteria for a disorder during a 12-month period—for PDD is approximately 0.5% (APA, 2013). Although the onset of MDD can occur at any time throughout the lifespan, the average age of onset is mid-20s, with the age of onset decreasing among people born more recently (APA, 2000). Prevalence of MDD among older adults is much lower than it is for younger cohorts (Kessler, Birnbaum, Bromet, Hwang, Sampson, & Shahly, 2010). The duration of MDEs varies widely. Recovery begins within three months for 40% of people with MDD and within 12 months for 80% (APA, 2013). MDD tends to be a recurrent disorder, with about 40%–50% of those who experience one MDE experiencing a second MDE (Monroe & Harkness, 2011). An earlier age of onset predicts a worse course. About 5%–10% of people who experience an MDE will later experience a manic episode (APA, 2000), thus no longer meeting criteria for MDD but instead meeting them for BD I. Diagnoses of other disorders across the lifetime are common for people with MDD: 59% experience an anxiety disorder, 32% experience an impulse control disorder, and 24% experience a substance use disorder (Kessler, Merikangas, & Wang, 2007). Women experience two to three times higher rates of MDD than do men (Nolen-Hoeksema & Hilt, 2009). This gender difference emerges during puberty (Conley & Rudolph, 2009). Before puberty, boys exhibit similar or higher prevalence rates of MDD than do girls (Twenge & Nolen-Hoeksema, 2002). MDD is inversely correlated with socioeconomic status (SES), a person’s economic and social position based on income, education, and occupation. Higher prevalence rates of MDD are associated with lower SES (Lorant, Deliege, Eaton, Robert, Philippot, & Ansseau, 2003), particularly for adults over 65 years old (Kessler et al., 2010). Independent of SES, results from a nationally representative sample found that European Americans had a higher prevalence rate of MDD than did African Americans and Hispanic Americans, whose rates were similar (Breslau, Aguilar-Gaxiola, Kendler, Su, Williams, & Kessler, 2006). The course of MDD for African Americans is often more severe and less often treated than it is for European Americans, however (Williams et al., 2007).
Native Americans have a higher prevalence rate than do European Americans, African Americans, or Hispanic Americans (Hasin, Goodwin, Stinson, & Grant, 2005). Depression is not limited to industrialized or Western cultures; it is found in all countries that have been examined, although the symptom presentation as well as prevalence rates vary across cultures (Chentsova-Dutton & Tsai, 2009). Bipolar Disorders The lifetime prevalence rate of bipolar spectrum disorders in the general U.S. population is estimated at approximately 4.4%, with BD I constituting about 1% of this rate (Merikangas et al., 2007). Prevalence estimates, however, are highly dependent on the diagnostic procedures used (e.g., interviews vs. self-report) and whether or not sub-threshold forms of the disorder are included in the estimate. BD often co-occurs with other psychiatric disorders. Approximately 65% of people with BD meet diagnostic criteria for at least one additional psychiatric disorder, most commonly anxiety disorders and substance use disorders (McElroy et al., 2001). The co-occurrence of BD with other psychiatric disorders is associated with poorer illness course, including higher rates of suicidality (Leverich et al., 2003). A recent cross-national study of more than 60,000 adults from 11 countries estimated the worldwide prevalence of BD at 2.4%, with BD I constituting 0.6% of this rate (Merikangas et al., 2011). In this study, the prevalence of BD varied somewhat by country. Whereas the United States had the highest lifetime prevalence (4.4%), India had the lowest (0.1%). Variation in prevalence rates was not necessarily related to SES, as in the case of Japan, a high-income country with a very low prevalence rate of BD (0.7%). With regard to ethnicity, data from studies not confounded by SES or inaccuracies in diagnosis are limited, but available reports suggest rates of BD among European Americans are similar to those found among African Americans (Blazer et al., 1985) and Hispanic Americans (Breslau, Kendler, Su, Gaxiola-Aguilar, & Kessler, 2005). Another large community-based study found that although prevalence rates of mood disorders were similar across ethnic groups, Hispanic Americans and African Americans with a mood disorder were more likely to remain persistently ill than European Americans (Breslau et al., 2005). Compared with European Americans with BD, African Americans tend to be underdiagnosed for BD (and over-diagnosed for schizophrenia) (Kilbourne, Haas, Mulsant, Bauer, & Pincus, 2004; Minsky, Vega, Miskimen, Gara, & Escobar, 2003), and Hispanic Americans with BD have been shown to receive fewer psychiatric medication prescriptions and specialty treatment visits (Gonzalez et al., 2007). Misdiagnosis of BD can result in the underutilization of treatment or the utilization of inappropriate treatment, and thus profoundly impact the course of illness. As with MDD, adolescence is known to be a significant risk period for BD; mood symptoms start by adolescence in roughly half of BD cases (Leverich et al., 2007; Perlis et al., 2004). Longitudinal studies show that those diagnosed with BD prior to adulthood experience a more pernicious course of illness relative to those with adult onset, including more episode recurrence, higher rates of suicidality, and profound social, occupational, and economic repercussions (e.g., Lewinsohn, Seeley, Buckley, & Klein, 2002). The prevalence of BD is substantially lower in older adults compared with younger adults (1% vs. 4%) (Merikangas et al., 2007).
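The lifetime and 12-month figures quoted in this section are simple proportions, but the two are easy to conflate. The short sketch below walks through the arithmetic with made-up survey counts chosen only to mirror the percentages reported above; the counts, the sample size, and the prevalence() helper are illustrative assumptions, not data or code from the cited studies.

```python
# Illustrative only: hypothetical counts chosen to mirror the percentages
# quoted in the text; these are not data from the cited studies.

def prevalence(cases: int, sample_size: int) -> float:
    """Proportion of a sample that meets diagnostic criteria."""
    return cases / sample_size

sample_size = 10_000  # hypothetical number of survey respondents

# Lifetime prevalence: respondents who have EVER met criteria for the disorder.
lifetime_mdd = prevalence(1_660, sample_size)   # mirrors the ~16.6% figure for MDD

# 12-month prevalence: respondents who met criteria during the past 12 months.
twelve_month_pdd = prevalence(50, sample_size)  # mirrors the ~0.5% figure for PDD

print(f"Lifetime MDD prevalence: {lifetime_mdd:.1%} (about 1 in {round(1 / lifetime_mdd)})")
print(f"12-month PDD prevalence: {twelve_month_pdd:.1%}")
```

Framed this way, the 16.6% lifetime estimate for MDD corresponds to roughly one person in six, while the 12-month window for PDD captures a much smaller slice of the same population.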
What Are Some of the Factors Implicated in the Development and Course of Mood Disorders? Mood disorders are complex disorders resulting from multiple factors. Causal explanations can be attempted at various levels, including biological and psychosocial levels. Below, several of the key factors that contribute to the onset and course of mood disorders are highlighted. Depressive Disorders Research across family and twin studies has provided support for the role of genetic factors in the development of MDD. Twin studies suggest that familial influence on MDD is mostly due to genetic effects and that individual-specific environmental effects (e.g., romantic relationships) play an important role, too. By contrast, the contribution of shared environmental effects among siblings is negligible (Sullivan, Neale, & Kendler, 2000). The mode of inheritance is not fully understood, although no single genetic variation has been found to increase the risk of MDD significantly. Instead, several genetic variants and environmental factors most likely contribute to the risk for MDD (Lohoff, 2010). One class of environmental stressors that has received much support in relation to MDD is stressful life events. In particular, severe stressful life events, meaning those that have long-term consequences and involve the loss of a significant relationship (e.g., divorce) or of economic stability (e.g., unemployment), are strongly related to depression (Brown & Harris, 1989; Monroe et al., 2009). Stressful life events are more likely to predict the first MDE than subsequent episodes (Lewinsohn, Allen, Seeley, & Gotlib, 1999). In contrast, minor events may play a larger role in subsequent episodes than in initial episodes (Monroe & Harkness, 2005). Depression research has not been limited to examining reactivity to stressful life events. Much research, particularly brain imaging research using functional magnetic resonance imaging (fMRI), has centered on examining neural circuitry—the interconnections that allow multiple brain regions to perceive, generate, and encode information in concert. A meta-analysis of neuroimaging studies showed that when viewing negative stimuli (e.g., picture of an angry face, picture of a car accident), compared with healthy control participants, participants with MDD have greater activation in brain regions involved in stress response and reduced activation of brain regions involved in positively motivated behaviors (Hamilton, Etkin, Furman, Lemus, Johnson, & Gotlib, 2012). Other environmental factors related to increased risk for MDD include experiencing early adversity (e.g., childhood abuse or neglect; Widom, DuMont, & Czaja, 2007), chronic stress (e.g., poverty), and interpersonal factors. For example, marital dissatisfaction predicts increases in depressive symptoms in both men and women. On the other hand, depressive symptoms also predict increases in marital dissatisfaction (Whisman & Uebelacker, 2009). Research has found that people with MDD generate some of their interpersonal stress (Hammen, 2005). People with MDD whose relatives or spouses can be described as critical and emotionally overinvolved have higher relapse rates than do those living with people who are less critical and emotionally overinvolved (Butzlaff & Hooley, 1998). People’s attributional styles, or their general ways of thinking, interpreting, and recalling information, have also been examined in the etiology of MDD (Gotlib & Joormann, 2010).
People with a pessimistic attributional style tend to make internal (versus external), global (versus specific), and stable (versus unstable) attributions to negative events, serving as a vulnerability to developing MDD. For example, someone who, after failing an exam, thinks that it was his fault (internal), that he is stupid (global), and that he will always do poorly (stable) has a pessimistic attributional style. Several influential theories of depression incorporate attributional styles (Abramson, Metalsky, & Alloy, 1989; Abramson, Seligman, & Teasdale, 1978). Bipolar Disorders Although there have been important advances in research on the etiology, course, and treatment of BD, there remains a need to understand the mechanisms that contribute to episode onset and relapse. There is compelling evidence for biological causes of BD, which is known to be highly heritable (McGuffin, Rijsdijk, Andrew, Sham, Katz, & Cardno, 2003). It may be argued that a high rate of heritability demonstrates that BD is fundamentally a biological phenomenon. However, there is much variability in the course of BD both within a person across time and across people (Johnson, 2005). The triggers that determine how and when this genetic vulnerability is expressed are not yet understood; however, there is evidence to suggest that psychosocial triggers may play an important role in BD risk (e.g., Johnson et al., 2008; Malkoff-Schwartz et al., 1998). In addition to the genetic contribution, biological explanations of BD have also focused on brain function. Many of the studies using fMRI techniques to characterize BD have focused on the processing of emotional stimuli based on the idea that BD is fundamentally a disorder of emotion (APA, 2000). Findings show that regions of the brain thought to be involved in emotional processing and regulation are activated differently in people with BD relative to healthy controls (e.g., Altshuler et al., 2008; Hassel et al., 2008; Lennox, Jacob, Calder, Lupson, & Bullmore, 2004). However, there is little consensus as to whether a particular brain region becomes more or less active in response to an emotional stimulus among people with BD compared with healthy controls. Mixed findings are in part due to samples consisting of participants who are at various phases of illness at the time of testing (manic, depressed, inter-episode). Sample sizes tend to be relatively small, making comparisons between subgroups difficult. Additionally, the use of a standardized stimulus (e.g., facial expression of anger) may not elicit a sufficiently strong response. Personally engaging stimuli, such as recalling a memory, may be more effective in inducing strong emotions (Isacowitz, Gershon, Allard, & Johnson, 2013). At the psychosocial level, research has focused on the environmental contributors to BD. A series of studies show that environmental stressors, particularly severe stressors (e.g., loss of a significant relationship), can adversely impact the course of BD. People with BD have substantially increased risk of relapse (Ellicott, Hammen, Gitlin, Brown, & Jamison, 1990) and suffer more depressive symptoms (Johnson, Winett, Meyer, Greenhouse, & Miller, 1999) following a severe life stressor. Interestingly, positive life events can also adversely impact the course of BD. People with BD suffer more manic symptoms after life events involving attainment of a desired goal (Johnson et al., 2008). Such findings suggest that people with BD may have a hypersensitivity to rewards.
Evidence from the life stress literature has also suggested that people with mood disorders may have a circadian vulnerability that renders them sensitive to stressors that disrupt their sleep or rhythms. According to social zeitgeber theory (Ehlers, Frank, & Kupfer, 1988; Frank et al., 1994), stressors that disrupt sleep, or that disrupt the daily routines that entrain the biological clock (e.g., meal times), can trigger episode relapse. Consistent with this theory, studies have shown that life events that involve a disruption in sleep and daily routines, such as overnight travel, can increase bipolar symptoms in people with BD (Malkoff-Schwartz et al., 1998). What Are Some of the Well-Supported Treatments for Mood Disorders? Depressive Disorders There are many treatment options available for people with MDD. First, a number of antidepressant medications are available, all of which target one or more of the neurotransmitters implicated in depression. The earliest antidepressant medications were monoamine oxidase inhibitors (MAOIs). MAOIs inhibit monoamine oxidase, an enzyme involved in deactivating dopamine, norepinephrine, and serotonin. Although effective in treating depression, MAOIs can have serious side effects. Patients taking MAOIs may develop dangerously high blood pressure if they take certain drugs (e.g., antihistamines) or eat foods containing tyramine, a compound derived from the amino acid tyrosine and commonly found in foods such as aged cheeses, wine, and soy sauce. Tricyclics, the second-oldest class of antidepressant medications, block the reabsorption of norepinephrine, serotonin, or dopamine at synapses, resulting in their increased availability. Tricyclics are most effective for treating vegetative and somatic symptoms of depression. Like MAOIs, they have serious side effects, the most concerning of which is cardiotoxicity. Selective serotonin reuptake inhibitors (SSRIs; e.g., fluoxetine) and serotonin and norepinephrine reuptake inhibitors (SNRIs; e.g., duloxetine) are the most recently introduced antidepressant medications. SSRIs, the most commonly prescribed class of antidepressant medications, block the reabsorption of serotonin, whereas SNRIs block the reabsorption of serotonin and norepinephrine. SSRIs and SNRIs have fewer serious side effects than do MAOIs and tricyclics. In particular, they are less cardiotoxic, less lethal in overdose, and produce fewer cognitive impairments. They are not, however, without their own side effects, which include but are not limited to difficulty having orgasms, gastrointestinal issues, and insomnia. Other biological treatments for people with depression include electroconvulsive therapy (ECT), transcranial magnetic stimulation (TMS), and deep brain stimulation. ECT involves inducing a seizure after a patient takes muscle relaxants and is under general anesthesia. ECT is a viable treatment for patients with severe depression or for those who show resistance to antidepressants, although the mechanisms through which it works remain unknown. Common side effects are confusion and memory loss, usually short-term (Schulze-Rauschenbach, Harms, Schlaepfer, Maier, Falkai, & Wagner, 2005). Repetitive TMS is a noninvasive technique administered while a patient is awake. Brief pulsating magnetic fields are delivered to the cortex, inducing electrical activity.
TMS has fewer side effects than ECT (Schulze-Rauschenbach et al., 2005), and while outcome studies are mixed, there is evidence that TMS is a promising treatment for patients with MDD who have shown resistance to other treatments (Rosa et al., 2006). Most recently, deep brain stimulation is being examined as a treatment option for patients who did not respond to more traditional treatments like those already described. Deep brain stimulation involves implanting an electrode in the brain. The electrode is connected to an implanted neurostimulator, which electrically stimulates that particular brain region. Although there is some evidence of its effectiveness (Mayberg et al., 2005), additional research is needed. Several psychosocial treatments have received strong empirical support, meaning that independent investigations have achieved similarly positive results—a high threshold for examining treatment outcomes. These treatments include but are not limited to behavior therapy, cognitive therapy, and interpersonal therapy. Behavior therapies focus on increasing the frequency and quality of experiences that are pleasant or help the patient achieve mastery. Cognitive therapies primarily focus on helping patients identify and change distorted automatic thoughts and assumptions (e.g., Beck, 1967). Cognitive-behavioral therapies are based on the rationale that thoughts, behaviors, and emotions affect and are affected by each other. Interpersonal Therapy for Depression focuses largely on improving interpersonal relationships by targeting problem areas, specifically unresolved grief, interpersonal role disputes, role transitions, and interpersonal deficits. Finally, there is also some support for the effectiveness of Short-Term Psychodynamic Therapy for Depression (Leichsenring, 2001). The short-term treatment focuses on a limited number of important issues, and the therapist tends to be more actively involved than in more traditional psychodynamic therapy. Bipolar Disorders Patients with BD are typically treated with pharmacotherapy. Antidepressants such as SSRIs and SNRIs are the primary choice of treatment for depression, whereas for BD, lithium is the first-line treatment choice. This is because SSRIs and SNRIs have the potential to induce mania or hypomania in patients with BD. Lithium acts on several neurotransmitter systems in the brain through complex mechanisms, including reducing excitatory (dopamine and glutamate) neurotransmission and increasing inhibitory (GABA) neurotransmission (Lenox & Hahn, 2000). Lithium has strong efficacy for the treatment of BD (Geddes, Burgess, Hawton, Jamison, & Goodwin, 2004). However, a number of side effects can make lithium treatment difficult for patients to tolerate. Side effects include impaired cognitive function (Wingo, Wingo, Harvey, & Baldessarini, 2009), as well as physical symptoms such as nausea, tremor, weight gain, and fatigue (Dunner, 2000). Some of these side effects can improve with continued use; however, medication noncompliance remains an ongoing concern in the treatment of patients with BD. Anticonvulsant medications (e.g., carbamazepine, valproate) are also commonly used to treat patients with BD, either alone or in conjunction with lithium. There are several adjunctive treatment options for people with BD.
Interpersonal and social rhythm therapy (IPSRT; Frank et al., 1994) is a psychosocial intervention focused on addressing the mechanism that social zeitgeber theory posits predisposes patients with BD to relapse, namely sleep disruption. A growing body of literature provides support for the central role of sleep dysregulation in BD (Harvey, 2008). Consistent with this literature, IPSRT aims to increase the rhythmicity of patients’ lives and encourage vigilance in maintaining a stable rhythm. The therapist and patient work to develop and maintain a healthy balance of activity and stimulation such that the patient does not become overly active (e.g., by taking on too many projects) or inactive (e.g., by avoiding social contact). The efficacy of IPSRT has been demonstrated in that patients who received this treatment show reduced risk of episode recurrence and are more likely to remain well (Frank et al., 2005). Conclusion Everyone feels down or euphoric from time to time. For some people, these feelings can last for long periods of time and can also co-occur with other symptoms that, in combination, interfere with their everyday lives. When people experience an MDE or a manic episode, they see the world differently. During an MDE, people often feel hopeless about the future, and may even experience suicidal thoughts. During a manic episode, people often behave in ways that are risky or place them in danger. They may spend money excessively or have unprotected sex, often expressing deep shame over these decisions after the episode. MDD and BD cause significant problems for people at school, at work, and in their relationships and affect people regardless of gender, age, nationality, race, religion, or sexual orientation. If you or someone you know is suffering from a mood disorder, it is important to seek help. Effective treatments are available and continually improving. If you have an interest in mood disorders, there are many ways to contribute to their understanding, prevention, and treatment, whether by engaging in research or clinical work. Outside Resources Books: Recommended memoirs include Darkness Visible: A Memoir of Madness by William Styron (MDD); The Noonday Demon: An Atlas of Depression by Andrew Solomon (MDD); and An Unquiet Mind: A Memoir of Moods and Madness by Kay Redfield Jamison (BD). Web: Visit the Association for Behavioral and Cognitive Therapies to find a list of recommended therapists and evidence-based treatments. http://www.abct.org Web: Visit the Depression and Bipolar Support Alliance for educational information and social support options. http://www.dbsalliance.org/ Discussion Questions 1. What factors might explain the large gender difference in the prevalence rates of MDD? 2. Why might American ethnic minority groups experience more persistent BD than European Americans? 3. Why might the age of onset for MDD be decreasing over time? 4. Why might overnight travel constitute a potential risk for a person with BD? 5. What are some reasons positive life events may precede the occurrence of a manic episode? Vocabulary Anhedonia Loss of interest or pleasure in activities one previously found enjoyable or rewarding. Attributional style The tendency by which a person infers the cause or meaning of behaviors or events. Chronic stress Discrete or related problematic events and conditions which persist over time and result in prolonged activation of the biological and/or psychological stress response (e.g., unemployment, ongoing health difficulties, marital discord).
Early adversity Single or multiple acute or chronic stressful events, which may be biological or psychological in nature (e.g., poverty, abuse, childhood illness or injury), occurring during childhood and resulting in a biological and/or psychological stress response. Grandiosity Inflated self-esteem or an exaggerated sense of self-importance and self-worth (e.g., believing one has special powers or superior abilities). Hypersomnia Excessive daytime sleepiness, including difficulty staying awake or napping, or prolonged sleep episodes. Psychomotor agitation Increased motor activity associated with restlessness, including physical actions (e.g., fidgeting, pacing, feet tapping, handwringing). Psychomotor retardation A slowing of physical activities in which routine activities (e.g., eating, brushing teeth) are performed in an unusually slow manner. Social zeitgeber Zeitgeber is German for “time giver.” Social zeitgebers are environmental cues, such as meal times and interactions with other people, that entrain biological rhythms and thus sleep-wake cycle regularity. Socioeconomic status (SES) A person’s economic and social position based on income, education, and occupation. Suicidal ideation Recurring thoughts about suicide, including considering or planning for suicide, or preoccupation with suicide.
textbooks/socialsci/Psychology/Introductory_Psychology/Map%3A_Discover_Psychology_-_A_Brief_Introductory_Text_(Noba)/12%3A_PSYCHOLOGICAL_DISORDERS/12.02%3A_Mood_Disorders.txt
By Deanna M. Barch Washington University in St. Louis Schizophrenia and the other psychotic disorders are some of the most impairing forms of psychopathology, frequently associated with a profound negative effect on the individual’s educational, occupational, and social function. Sadly, these disorders often manifest right at the time of the transition from adolescence to adulthood, just as young people should be evolving into independent young adults. The spectrum of psychotic disorders includes schizophrenia, schizoaffective disorder, delusional disorder, schizotypal personality disorder, schizophreniform disorder, brief psychotic disorder, as well as psychosis associated with substance use or medical conditions. In this module, we summarize the primary clinical features of these disorders, describe the known cognitive and neurobiological changes associated with schizophrenia, describe potential risk factors and/or causes for the development of schizophrenia, and describe currently available treatments for schizophrenia. learning objectives • Describe the signs and symptoms of schizophrenia and related psychotic disorders. • Describe the most well-replicated cognitive and neurobiological changes associated with schizophrenia. • Describe the potential risk factors for the development of schizophrenia. • Describe the controversies associated with “clinical high risk” approaches to identifying individuals at risk for the development of schizophrenia. • Describe the treatments that work for some of the symptoms of schizophrenia. Most of you have probably had the experience of walking down the street in a city and seeing a person you thought was acting oddly. They may have been dressed in an unusual way, perhaps disheveled or wearing an unusual collection of clothes, makeup, or jewelry that did not seem to fit any particular group or subculture. They may have been talking to themselves or yelling at someone you could not see. If you tried to speak to them, they may have been difficult to follow or understand, or they may have acted paranoid or started telling a bizarre story about the people who were plotting against them. If so, chances are that you have encountered an individual with schizophrenia or another type of psychotic disorder. If you have watched the movie A Beautiful Mind or The Fisher King, you have also seen a portrayal of someone thought to have schizophrenia. Sadly, a few of the individuals who have committed some of the recently highly publicized mass murders may have had schizophrenia, though most people who commit such crimes do not have schizophrenia. It is also likely that you have met people with schizophrenia without ever knowing it, as they may suffer in silence or stay isolated to protect themselves from the horrors they see, hear, or believe are operating in the outside world. As these examples begin to illustrate, psychotic disorders involve many different types of symptoms, including delusions, hallucinations, disorganized speech and behavior, abnormal motor behavior (including catatonia), and negative symptoms such as anhedonia/amotivation and blunted affect/reduced speech. Delusions are false beliefs that are often fixed, hard to change even when the person is presented with conflicting information, and are often culturally influenced in their content (e.g., delusions involving Jesus in Judeo-Christian cultures, delusions involving Allah in Muslim cultures).
They can be terrifying for the person, who may remain convinced that they are true even when loved ones and friends present them with clear information that they cannot be true. There are many different types or themes to delusions. The most common delusions are persecutory and involve the belief that individuals or groups are trying to hurt, harm, or plot against the person in some way. These can be people that the person knows (people at work, the neighbors, family members), or more abstract groups (the FBI, the CIA, aliens, etc.). Other types of delusions include grandiose delusions, where the person believes that they have some special power or ability (e.g., I am the new Buddha, I am a rock star); referential delusions, where the person believes that events or objects in the environment have special meaning for them (e.g., that song on the radio is being played specifically for me); or other types of delusions where the person may believe that others are controlling their thoughts and actions, their thoughts are being broadcast aloud, or that others can read their mind (or they can read other people’s minds). When you see a person on the street talking to themselves or shouting at other people, they may be experiencing hallucinations. These are perceptual experiences that occur even when there is no stimulus in the outside world generating the experiences. They can be auditory, visual, olfactory (smell), gustatory (taste), or somatic (touch). The most common hallucinations in psychosis (at least in adults) are auditory, and can involve one or more voices talking about the person, commenting on the person’s behavior, or giving them orders. The content of the hallucinations is frequently negative (“you are a loser,” “that drawing is stupid,” “you should go kill yourself”) and can be the voice of someone the person knows or a complete stranger. Sometimes the voices sound as if they are coming from outside the person’s head. Other times the voices seem to be coming from inside the person’s head, but are not experienced the same as the person’s inner thoughts or inner speech. Talking to someone with schizophrenia is sometimes difficult, as their speech may be difficult to follow, either because their answers do not clearly flow from your questions, or because one sentence does not logically follow from another. This is referred to as disorganized speech, and it can be present even when the person is writing. Disorganized behavior can include odd dress, odd makeup (e.g., lipstick outlining a mouth for 1 inch), or unusual rituals (e.g., repetitive hand gestures). Abnormal motor behavior can include catatonia, which refers to a variety of behaviors that seem to reflect a reduction in responsiveness to the external environment. This can include holding unusual postures for long periods of time, failing to respond to verbal or motor prompts from another person, or excessive and seemingly purposeless motor activity. Some of the most debilitating symptoms of schizophrenia are difficult for others to see. These include what people refer to as “negative symptoms,” or the absence of certain things we typically expect most people to have. For example, anhedonia or amotivation reflect a lack of apparent interest in or drive to engage in social or recreational activities. These symptoms can manifest as a great amount of time spent in physical immobility.
Importantly, anhedonia and amotivation do not seem to reflect a lack of enjoyment in pleasurable activities or events (Cohen & Minor, 2010; Kring & Moran, 2008; Llerena, Strauss, & Cohen, 2012) but rather a reduced drive or ability to take the steps necessary to obtain the potentially positive outcomes (Barch & Dowd, 2010). Flat affect and reduced speech (alogia) reflect a lack of showing emotions through facial expressions, gestures, and speech intonation, as well as a reduced amount of speech and increased pause frequency and duration. In many ways, the types of symptoms associated with psychosis are the most difficult for us to understand, as they may seem far outside the range of our normal experiences. Unlike depression or anxiety, many of us may not have had experiences that we think of as on the same continuum as psychosis. However, just like many of the other forms of psychopathology described in this book, the types of psychotic symptoms that characterize disorders like schizophrenia are on a continuum with “normal” mental experiences. For example, work by Jim van Os in the Netherlands has shown that a surprisingly large percentage of the general population (10%+) experience psychotic-like symptoms, though many fewer have multiple experiences and most will not continue to experience these symptoms in the long run (Verdoux & van Os, 2002). Similarly, work in a general population of adolescents and young adults in Kenya has also shown that a relatively high percentage of individuals experience one or more psychotic-like experiences (~19%) at some point in their lives (Mamah et al., 2012; Ndetei et al., 2012), though again most will not go on to develop a full-blown psychotic disorder. Schizophrenia is the primary disorder that comes to mind when we discuss “psychotic” disorders (see Table 1 for diagnostic criteria), though there are a number of other disorders that share one or more features with schizophrenia. In the remainder of this module, we will use the terms “psychosis” and “schizophrenia” somewhat interchangeably, given that most of the research has focused on schizophrenia. In addition to schizophrenia (see Table 1), other psychotic disorders include schizophreniform disorder (a briefer version of schizophrenia), schizoaffective disorder (a mixture of psychosis and depression/mania symptoms), delusional disorder (the experience of only delusions), and brief psychotic disorder (psychotic symptoms that last only a few days or weeks). The Cognitive Neuroscience of Schizophrenia As described above, when we think of the core symptoms of psychotic disorders such as schizophrenia, we think of people who hear voices, see visions, and have false beliefs about reality (i.e., delusions). However, problems in cognitive function are also a critical aspect of psychotic disorders and of schizophrenia in particular. This emphasis on cognition in schizophrenia is in part due to the growing body of research suggesting that cognitive problems in schizophrenia are a major source of disability and loss of functional capacity (Green, 2006; Nuechterlein et al., 2011). 
The cognitive deficits that are present in schizophrenia are widespread and can include problems with episodic memory (the ability to learn and retrieve new information or episodes in one’s life), working memory (the ability to maintain information over a short period of time, such as 30 seconds), and other tasks that require one to “control” or regulate one’s behavior (Barch & Ceaser, 2012; Bora, Yucel, & Pantelis, 2009a; Fioravanti, Carlone, Vitale, Cinti, & Clare, 2005; Forbes, Carrick, McIntosh, & Lawrie, 2009; Mesholam-Gately, Giuliano, Goff, Faraone, & Seidman, 2009). Individuals with schizophrenia also have difficulty with what is referred to as “processing speed” and are frequently slower than healthy individuals on almost all tasks. Importantly, these cognitive deficits are present prior to the onset of the illness (Fusar-Poli et al., 2007) and are also present, albeit in a milder form, in the first-degree relatives of people with schizophrenia (Snitz, Macdonald, & Carter, 2006). This suggests that cognitive impairments in schizophrenia reflect part of the risk for the development of psychosis, rather than being an outcome of developing psychosis. Further, people with schizophrenia who have more severe cognitive problems also tend to have more severe negative symptoms and more disorganized speech and behavior (Barch, Carter, & Cohen, 2003; Barch et al., 1999; Dominguez Mde, Viechtbauer, Simons, van Os, & Krabbendam, 2009; Ventura, Hellemann, Thames, Koellner, & Nuechterlein, 2009; Ventura, Thames, Wood, Guzik, & Hellemann, 2010). In addition, people with more cognitive problems have worse function in everyday life (Bowie et al., 2008; Bowie, Reichenberg, Patterson, Heaton, & Harvey, 2006; Fett et al., 2011). Some people with schizophrenia also show deficits in what is referred to as social cognition, though it is not clear whether such problems are separate from the cognitive problems described above or the result of them (Hoe, Nakagami, Green, & Brekke, 2012; Kerr & Neale, 1993; van Hooren et al., 2008). This includes problems with the recognition of emotional expressions on the faces of other individuals (Kohler, Walker, Martin, Healey, & Moberg, 2010) and problems inferring the intentions of other people (theory of mind) (Bora, Yucel, & Pantelis, 2009b). Individuals with schizophrenia who have more problems with social cognition also tend to have more negative and disorganized symptoms (Ventura, Wood, & Hellemann, 2011), as well as worse community function (Fett et al., 2011). The advent of neuroimaging techniques such as structural and functional magnetic resonance imaging and positron emission tomography opened up the ability to try to understand the brain mechanisms of the symptoms of schizophrenia as well as the cognitive impairments found in psychosis. For example, a number of studies have suggested that delusions in psychosis may be associated with problems in “salience” detection mechanisms supported by the ventral striatum (Jensen & Kapur, 2009; Jensen et al., 2008; Kapur, 2003; Kapur, Mizrahi, & Li, 2005; Murray et al., 2008) and the anterior prefrontal cortex (Corlett et al., 2006; Corlett, Honey, & Fletcher, 2007; Corlett, Murray, et al., 2007a, 2007b). These are regions of the brain that normally increase their activity when something important (aka “salient”) happens in the environment. If these brain regions misfire, it may lead individuals with psychosis to mistakenly attribute importance to irrelevant or unconnected events.
Further, there is good evidence that problems in working memory and cognitive control in schizophrenia are related to problems in the function of a region of the brain called the dorsolateral prefrontal cortex (DLPFC) (Minzenberg, Laird, Thelen, Carter, & Glahn, 2009; Ragland et al., 2009). These problems include changes in how the DLPFC works when people are doing working-memory or cognitive-control tasks, and problems with how this brain region is connected to other brain regions important for working memory and cognitive control, including the posterior parietal cortex (e.g., Karlsgodt et al., 2008; J. J. Kim et al., 2003; Schlosser et al., 2003), the anterior cingulate (Repovs & Barch, 2012), and temporal cortex (e.g., Fletcher et al., 1995; Meyer-Lindenberg et al., 2001). In terms of understanding episodic memory problems in schizophrenia, many researchers have focused on medial temporal lobe deficits, with a specific focus on the hippocampus (e.g., Heckers & Konradi, 2010). This is because there is much data from humans and animals showing that the hippocampus is important for the creation of new memories (Squire, 1992). However, it has become increasingly clear that problems with the DLPFC also make important contributions to episodic memory deficits in schizophrenia (Ragland et al., 2009), probably because this part of the brain is important for controlling our use of memory. In addition to problems with regions such as the DLPFC and medial temporal lobes in schizophrenia described above, magnetic resonance neuroimaging studies have also identified changes in cellular architecture, white matter connectivity, and gray matter volume in a variety of regions that include the prefrontal and temporal cortices (Bora et al., 2011). People with schizophrenia also show reduced overall brain volume, and reductions in brain volume as people get older may be larger in those with schizophrenia than in healthy people (Olabi et al., 2011). Taking antipsychotic medications or taking drugs such as marijuana, alcohol, and tobacco may cause some of these structural changes. However, these structural changes are not completely explained by medications or substance use alone. Further, both functional and structural brain changes are seen, again to a milder degree, in the first-degree relatives of people with schizophrenia (Boos, Aleman, Cahn, Pol, & Kahn, 2007; Brans et al., 2008; Fusar-Poli et al., 2007; MacDonald, Thermenos, Barch, & Seidman, 2009). This again suggests that the neural changes associated with schizophrenia are related to a genetic risk for this illness. Risk Factors for Developing Schizophrenia It is clear that there are important genetic contributions to the likelihood that someone will develop schizophrenia, with consistent evidence from family, twin, and adoption studies (Sullivan, Kendler, & Neale, 2003). However, there is no “schizophrenia gene” and it is likely that the genetic risk for schizophrenia reflects the summation of many different genes that each contribute something to the likelihood of developing psychosis (Gottesman & Shields, 1967; Owen, Craddock, & O'Donovan, 2010). Further, schizophrenia is a very heterogeneous disorder, which means that two different people with “schizophrenia” may each have very different symptoms (e.g., one has hallucinations and delusions, the other has disorganized speech and negative symptoms). This makes it even more challenging to identify specific genes associated with risk for psychosis.
Importantly, many studies also now suggest that at least some of the genes potentially associated with schizophrenia are also associated with other mental health conditions, including bipolar disorder, depression, and autism (Gejman, Sanders, & Kendler, 2011; Y. Kim, Zerwas, Trace, & Sullivan, 2011; Owen et al., 2010; Rutter, Kim-Cohen, & Maughan, 2006). There are also a number of environmental factors that are associated with an increased risk of developing schizophrenia. For example, problems during pregnancy such as increased stress, infection, malnutrition, and/or diabetes have been associated with increased risk of schizophrenia. In addition, complications that occur at the time of birth and which cause hypoxia (lack of oxygen) are also associated with an increased risk for developing schizophrenia (M. Cannon, Jones, & Murray, 2002; Miller et al., 2011). Children born to older fathers are also at a somewhat increased risk of developing schizophrenia. Further, using cannabis increases risk for developing psychosis, especially if you have other risk factors (Casadio, Fernandes, Murray, & Di Forti, 2011; Luzi, Morrison, Powell, di Forti, & Murray, 2008). The likelihood of developing schizophrenia is also higher for kids who grow up in urban settings (March et al., 2008) and for some minority ethnic groups (Bourque, van der Ven, & Malla, 2011). Both of these factors may reflect higher social and environmental stress in these settings. Unfortunately, none of these risk factors is specific enough to be particularly useful in a clinical setting, and most people with these “risk” factors do not develop schizophrenia. However, together they are beginning to give us clues as to the neurodevelopmental factors that may lead someone to be at an increased risk for developing this disease. An important research area on risk for psychosis has been work with individuals who may be at “clinical high risk.” These are individuals who are showing attenuated (milder) symptoms of psychosis that have developed recently and who are experiencing some distress or disability associated with these symptoms. When people with these types of symptoms are followed over time, about 35% of them develop a psychotic disorder (T. D. Cannon et al., 2008), most frequently schizophrenia (Fusar-Poli, McGuire, & Borgwardt, 2012). In order to identify these individuals, a new category of diagnosis, called “Attenuated Psychotic Syndrome,” was added to Section III (the section for disorders in need of further study) of the DSM-5 (see Table 1 for symptoms) (APA, 2013). However, adding this diagnostic category to the DSM-5 created a good deal of controversy (Batstra & Frances, 2012; Fusar-Poli & Yung, 2012). Many scientists and clinicians have been worried that including “risk” states in the DSM-5 would create mental disorders where none exist, that these individuals are often already seeking treatment for other problems, and that it is not clear that we have good treatments to stop these individuals from progressing to psychosis. However, the counterarguments have been that there is evidence that individuals with high-risk symptoms develop psychosis at a much higher rate than individuals with other types of psychiatric symptoms, and that the inclusion of Attenuated Psychotic Syndrome in Section III will spur important research that might have clinical benefits.
Further, there is some evidence that non-invasive treatments such as omega-3 fatty acids and intensive family intervention may help reduce the development of full-blown psychosis (Preti & Cella, 2010) in people who have high-risk symptoms. Treatment of Schizophrenia The currently available treatments for schizophrenia leave much to be desired, and the search for more effective treatments for both the psychotic symptoms of schizophrenia (e.g., hallucinations and delusions) and the cognitive deficits and negative symptoms is a highly active area of research. The first line of treatment for schizophrenia and other psychotic disorders is the use of antipsychotic medications. There are two primary types of antipsychotic medications, referred to as “typical” and “atypical.” The fact that “typical” antipsychotics helped some symptoms of schizophrenia was discovered serendipitously more than 60 years ago (Carpenter & Davis, 2012; Lopez-Munoz et al., 2005). These drugs all share the common feature of strongly blocking the D2-type dopamine receptor. Although these drugs can help reduce hallucinations, delusions, and disorganized speech, they do little to improve cognitive deficits or negative symptoms and can be associated with distressing motor side effects. The newer generation of antipsychotics is referred to as “atypical” antipsychotics. These drugs have more mixed mechanisms of action in terms of the receptor types that they influence, though most of them also influence D2 receptors. These newer antipsychotics are not necessarily more helpful for schizophrenia but have fewer motor side effects. However, many of the atypical antipsychotics are associated with side effects referred to as the “metabolic syndrome,” which includes weight gain and increased risk for cardiovascular illness, Type-2 diabetes, and mortality (Lieberman et al., 2005). The evidence that cognitive deficits also contribute to functional impairment in schizophrenia has led to an increased search for treatments that might enhance cognitive function in schizophrenia. Unfortunately, as of yet, there are no pharmacological treatments that work consistently to improve cognition in schizophrenia, though many new types of drugs are currently under exploration. However, there is a type of psychological intervention, referred to as cognitive remediation, which has shown some evidence of helping cognition and function in schizophrenia. In particular, a version of this treatment called Cognitive Enhancement Therapy (CET) has been shown to improve cognition, functional outcome, and social cognition, and to protect against gray matter loss (Eack et al., 2009; Eack, Greenwald, Hogarty, & Keshavan, 2010; Eack et al., 2010; Eack, Pogue-Geile, Greenwald, Hogarty, & Keshavan, 2010; Hogarty, Greenwald, & Eack, 2006) in young individuals with schizophrenia. The development of new treatments such as Cognitive Enhancement Therapy provides some hope that we will be able to develop new and better approaches to improving the lives of individuals with this serious mental health condition and potentially even to prevent it someday. Outside Resources Book: Ben Behind His Voices: One family’s journey from the chaos of schizophrenia to hope (2011). Randye Kaye. Rowman and Littlefield. Book: Conquering Schizophrenia: A father, his son, and a medical breakthrough (1997). Peter Wyden. Knopf. Book: Henry’s Demons: Living with schizophrenia, a father and son’s story (2011). Henry and Patrick Cockburn. Scribner Macmillan.
Book: My Mother's Keeper: A daughter's memoir of growing up in the shadow of schizophrenia (1997). Tara Elgin Holley. William Morrow Co. Book: Recovered, Not Cured: A journey through schizophrenia (2005). Richard McLean. Allen and Unwin. Book: The Center Cannot Hold: My journey through madness (2008). Elyn R. Saks. Hyperion. Book: The Quiet Room: A journey out of the torment of madness (1996). Lori Schiller. Grand Central Publishing. Book: Welcome Silence: My triumph over schizophrenia (2003). Carol North. CSS Publishing. Web: National Alliance for the Mentally Ill. This is an excellent site for learning more about advocacy for individuals with major mental illnesses such as schizophrenia. http://www.nami.org/ Web: National Institute of Mental Health. This website has information on NIMH-funded schizophrenia research. http://www.nimh.nih.gov/health/topics/schizophrenia/index.shtml Web: Schizophrenia Research Forum. This is an excellent website that contains a broad array of information about current research on schizophrenia. http://www.schizophreniaforum.org/ Discussion Questions 1. Describe the main differences between the major psychotic disorders. 2. How would one be able to tell when an individual is "delusional" versus having non-delusional beliefs that differ from the societal norm? How should cultural and sub-cultural variation be taken into account when assessing psychotic symptoms? 3. Why are cognitive impairments important to understanding schizophrenia? 4. Why has the inclusion of a new diagnosis (Attenuated Psychotic Syndrome) in Section III of the DSM-5 created controversy? 5. What are some of the factors associated with increased risk for developing schizophrenia? If we know whether or not someone has these risk factors, how well can we tell whether they will develop schizophrenia? 6. What brain changes are most consistent in schizophrenia? 7. Do antipsychotic medications work well for all symptoms of schizophrenia? If not, which symptoms respond better to antipsychotic medications? 8. Are there any treatments besides antipsychotic medications that help any of the symptoms of schizophrenia? If so, what are they? Vocabulary Alogia A reduction in the amount of speech and/or increased pausing before the initiation of speech. Anhedonia/amotivation A reduction in the drive or ability to take the steps or engage in actions necessary to obtain a potentially positive outcome. Catatonia Behaviors that seem to reflect a reduction in responsiveness to the external environment. This can include holding unusual postures for long periods of time, failing to respond to verbal or motor prompts from another person, or excessive and seemingly purposeless motor activity. Delusions False beliefs that are often fixed, hard to change even in the presence of conflicting information, and often culturally influenced in their content. Diagnostic criteria The specific criteria used to determine whether an individual has a specific type of psychiatric disorder. Commonly used diagnostic criteria are included in the Diagnostic and Statistical Manual of Mental Disorders, 5th Edition (DSM-5) and the International Classification of Diseases, 9th Revision (ICD-9). Disorganized behavior Behavior or dress that is outside the norm for almost all subcultures. This would include odd dress, odd makeup (e.g., lipstick outlining a mouth for 1 inch), or unusual rituals (e.g., repetitive hand gestures).
Disorganized speech Speech that is difficult to follow, either because answers do not clearly follow questions or because one sentence does not logically follow from another. Dopamine A neurotransmitter in the brain that is thought to play an important role in regulating the function of other neurotransmitters. Episodic memory The ability to learn and retrieve new information or episodes in one's life. Flat affect A reduction in the display of emotions through facial expressions, gestures, and speech intonation. Functional capacity The ability to engage in self-care (cook, clean, bathe), work, attend school, and/or engage in social relationships. Hallucinations Perceptual experiences that occur even when there is no stimulus in the outside world generating the experiences. They can be auditory, visual, olfactory (smell), gustatory (taste), or somatic (touch). Magnetic resonance imaging A set of techniques that uses strong magnets to measure either the structure of the brain (e.g., gray matter and white matter) or how the brain functions when a person performs cognitive tasks (e.g., working memory or episodic memory) or other types of tasks. Neurodevelopmental Processes that influence how the brain develops either in utero or as the child is growing up. Positron emission tomography A technique that uses radio-labelled ligands to measure the distribution of different neurotransmitter receptors in the brain or to measure how much of a certain type of neurotransmitter is released when a person is given a specific type of drug or performs a particular cognitive task. Processing speed The speed with which an individual can perceive auditory or visual information and respond to it. Psychopathology Illnesses or disorders that involve psychological or psychiatric symptoms. Working memory The ability to maintain information over a short period of time, such as 30 seconds or less.
By Cristina Crego and Thomas Widiger University of Kentucky The purpose of this module is to define what is meant by a personality disorder, identify the five domains of general personality (i.e., neuroticism, extraversion, openness, agreeableness, and conscientiousness), identify the six personality disorders proposed for retention in the 5th edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) (i.e., borderline, antisocial, schizotypal, avoidant, obsessive-compulsive, and narcissistic), summarize the etiology for antisocial and borderline personality disorder, and identify the treatment for borderline personality disorder (i.e., dialectical behavior therapy and mentalization therapy). Learning Objectives • Define what is meant by a personality disorder. • Identify the five domains of general personality. • Identify the six personality disorders proposed for retention in DSM-5. • Summarize the etiology for antisocial and borderline personality disorder. • Identify the treatment for borderline personality disorder. Introduction Everybody has their own unique personality; that is, their characteristic manner of thinking, feeling, behaving, and relating to others (John, Robins, & Pervin, 2008). Some people are typically introverted, quiet, and withdrawn; whereas others are more extraverted, active, and outgoing. Some individuals are invariably conscientious, dutiful, and efficient; whereas others might be characteristically undependable and negligent. Some individuals are consistently anxious, self-conscious, and apprehensive; whereas others are routinely relaxed, self-assured, and unconcerned. Personality traits refer to these characteristic, routine ways of thinking, feeling, and relating to others. There are signs or indicators of these traits in childhood, but they become particularly evident when the person is an adult. Personality traits are integral to each person's sense of self, as they involve what people value, how they think and feel about things, what they like to do, and, basically, what they are like most every day throughout much of their lives. There are literally hundreds of different personality traits. All of these traits can be organized into the broad dimensions referred to as the Five-Factor Model (John, Naumann, & Soto, 2008). These five broad domains are inclusive; there do not appear to be any personality traits that lie outside of the Five-Factor Model. This even applies to traits that you may use to describe yourself. Table I provides illustrative traits for both poles of the five domains of this model of personality. A number of the traits that you see in this table may describe you. If you can think of some other traits that describe yourself, you should be able to place them somewhere in this table. DSM-5 Personality Disorders When personality traits result in significant distress, social impairment, and/or occupational impairment, they are considered to be a personality disorder (American Psychiatric Association, 2013). The authoritative manual for what constitutes a personality disorder is provided by the American Psychiatric Association's (APA) Diagnostic and Statistical Manual of Mental Disorders (DSM), the current version of which is DSM-5 (APA, 2013). The DSM provides a common language and standard criteria for the classification and diagnosis of mental disorders. This manual is used by clinicians, researchers, health insurance companies, and policymakers.
DSM-5 includes 10 personality disorders: antisocial, avoidant, borderline, dependent, histrionic, narcissistic, obsessive-compulsive, paranoid, schizoid, and schizotypal. All 10 of these personality disorders were also included in the previous edition of the diagnostic manual, DSM-IV-TR. This list of 10, though, does not fully cover all of the different ways in which a personality can be maladaptive. DSM-5 also includes a "wastebasket" diagnosis of other specified personality disorder (OSPD) and unspecified personality disorder (UPD). This diagnosis is used when a clinician believes that a patient has a personality disorder but the traits that constitute this disorder are not well covered by one of the 10 existing diagnoses. OSPD and UPD (referred to in previous editions as PDNOS, or personality disorder not otherwise specified) are among the most frequently used diagnoses in clinical practice, suggesting that the current list of 10 is not adequately comprehensive (Widiger & Trull, 2007). Description Each of the 10 DSM-5 (and DSM-IV-TR) personality disorders is a constellation of maladaptive personality traits, rather than just one particular personality trait (Lynam & Widiger, 2001). In this regard, personality disorders are "syndromes." For example, avoidant personality disorder is a pervasive pattern of social inhibition, feelings of inadequacy, and hypersensitivity to negative evaluation (APA, 2013), which is a combination of traits from introversion (e.g., socially withdrawn, passive, and cautious) and neuroticism (e.g., self-consciousness, apprehensiveness, anxiousness, and worrisome). Dependent personality disorder includes submissiveness, clinging behavior, and fears of separation (APA, 2013), for the most part a combination of traits of neuroticism (anxious, uncertain, pessimistic, and helpless) and maladaptive agreeableness (e.g., gullible, guileless, meek, subservient, and self-effacing). Antisocial personality disorder is, for the most part, a combination of traits from antagonism (e.g., dishonest, manipulative, exploitative, callous, and merciless) and low conscientiousness (e.g., irresponsible, immoral, lax, hedonistic, and rash). See the 1967 movie, Bonnie and Clyde, starring Warren Beatty, for a nice portrayal of someone with antisocial personality disorder. Some of the DSM-5 personality disorders are confined largely to traits within one of the basic domains of personality. For example, obsessive-compulsive personality disorder is largely a disorder of maladaptive conscientiousness, including such traits as workaholism, perfectionism, punctiliousness, rumination, and doggedness; schizoid is confined largely to traits of introversion (e.g., withdrawn, cold, isolated, placid, and anhedonic); borderline personality disorder is largely a disorder of neuroticism, including such traits as emotionally unstable, vulnerable, overwhelmed, rageful, depressive, and self-destructive (watch the 1987 movie, Fatal Attraction, starring Glenn Close, for a nice portrayal of this personality disorder); and histrionic personality disorder is largely a disorder of maladaptive extraversion, including such traits as attention-seeking, seductiveness, melodramatic emotionality, and strong attachment needs (see the 1951 film adaptation of Tennessee Williams's play, A Streetcar Named Desire, starring Vivien Leigh, for a nice portrayal of this personality disorder).
It should be noted though that a complete description of each DSM-5 personality disorder would typically include at least some traits from other domains. For example, antisocial personality disorder (or psychopathy) also includes some traits from low neuroticism (e.g., fearlessness and glib charm) and extraversion (e.g., excitement-seeking and assertiveness); borderline includes some traits from antagonism (e.g., manipulative and oppositional) and low conscientiousness (e.g., rash); and histrionic includes some traits from antagonism (e.g., vanity) and low conscientiousness (e.g., impressionistic). Narcissistic personality disorder includes traits from neuroticism (e.g., reactive anger, reactive shame, and need for admiration), extraversion (e.g., exhibitionism and authoritativeness), antagonism (e.g., arrogance, entitlement, and lack of empathy), and conscientiousness (e.g., acclaim-seeking). Schizotypal personality disorder includes traits from neuroticism (e.g., social anxiousness and social discomfort), introversion (e.g., social withdrawal), unconventionality (e.g., odd, eccentric, peculiar, and aberrant ideas), and antagonism (e.g., suspiciousness). The APA currently conceptualizes personality disorders as qualitatively distinct conditions: distinct from each other and from normal personality functioning. However, included within an appendix to DSM-5 is an alternative view that personality disorders are simply extreme and/or maladaptive variants of normal personality traits, as suggested herein. Nevertheless, many leading personality disorder researchers do not hold this view (e.g., Gunderson, 2010; Hopwood, 2011; Shedler et al., 2010). They suggest that there is something qualitatively unique about persons suffering from a personality disorder, usually understood as a form of pathology in the sense of self and interpersonal relatedness that is considered to be distinct from personality traits (APA, 2012; Skodol, 2012). For example, it has been suggested that antisocial personality disorder includes impairments in identity (e.g., egocentrism), self-direction, empathy, and capacity for intimacy, which are said to be different from such traits as arrogance, impulsivity, and callousness (APA, 2012). Validity It is quite possible that in future revisions of the DSM some of the personality disorders included in DSM-5 and DSM-IV-TR will no longer be included. In fact, for DSM-5 it was originally proposed that four be deleted. The personality disorders that were slated for deletion were histrionic, schizoid, paranoid, and dependent (APA, 2012). The rationale for the proposed deletions was in large part that they are said to have less empirical support than the diagnoses that were at the time being retained (Skodol, 2012). There is agreement within the field with regard to the empirical support for the borderline, antisocial, and schizotypal personality disorders (Mullins-Sweat, Bernstein, & Widiger, 2012; Skodol, 2012). However, there is a difference of opinion with respect to the empirical support for the dependent personality disorder (Bornstein, 2012; Livesley, 2011; Miller, Widiger, & Campbell, 2010; Mullins-Sweat et al., 2012). Little is known about the specific etiology for most of the DSM-5 personality disorders. Because each personality disorder represents a constellation of personality traits, the etiology for the syndrome will involve a complex interaction of an array of different neurobiological vulnerabilities and dispositions with a variety of environmental, psychosocial events.
Antisocial personality disorder, for instance, is generally considered to be the result of an interaction of genetic dispositions for low anxiousness, aggressiveness, impulsivity, and/or callousness, with a tough, urban environment, inconsistent parenting, poor parental role modeling, and/or peer support (Hare, Neumann, & Widiger, 2012). Borderline personality disorder is generally considered to be the result of an interaction of a genetic disposition to negative affectivity with a malevolent, abusive, and/or invalidating family environment (Hooley, Cole, & Gironde, 2012). To the extent that one considers the DSM-5 personality disorders to be maladaptive variants of general personality structure, as described, for instance, within the Five-Factor Model, there would be a considerable body of research to support the validity of all of the personality disorders, including even the histrionic, schizoid, and paranoid. There is compelling multivariate behavior genetic support with respect to the precise structure of the Five-Factor Model (e.g., Yamagata et al., 2006), childhood antecedents (Caspi, Roberts, & Shiner, 2005), universality (Allik, 2005), temporal stability across the lifespan (Roberts & DelVecchio, 2000), ties with brain structure (DeYoung, Hirsh, Shane, Papademetris, Rajeevan, & Gray, 2010), and even molecular genetic support for neuroticism (Widiger, 2009). Treatment Personality disorders are somewhat unusual among mental disorders because they are often "ego-syntonic"; that is, most people are largely comfortable with their selves, with their characteristic manner of behaving, feeling, and relating to others. As a result, people rarely seek treatment for their antisocial, narcissistic, histrionic, paranoid, and/or schizoid personality disorder. People typically lack insight into the maladaptivity of their personality. One clear exception though is borderline personality disorder (and perhaps as well avoidant personality disorder). Neuroticism is the domain of general personality structure that concerns inherent feelings of emotional pain and suffering, including feelings of distress, anxiety, depression, self-consciousness, helplessness, and vulnerability. Persons who have very high elevations on neuroticism (i.e., persons with borderline personality disorder) experience life as one of pain and suffering, and they will seek treatment to alleviate this severe emotional distress. People with avoidant personality disorder may also seek treatment for their high levels of neuroticism (anxiousness and self-consciousness) and introversion (social isolation). In contrast, narcissistic individuals will rarely seek treatment to reduce their arrogance; paranoid persons rarely seek treatment to reduce their feelings of suspiciousness; and antisocial people rarely (or at least not willingly) seek treatment to reduce their disposition for criminality, aggression, and irresponsibility. Nevertheless, maladaptive personality traits will be evident in many individuals seeking treatment for other mental disorders, such as anxiety, mood, or substance use disorders. Many of the people with a substance use disorder will have antisocial personality traits; many of the people with a mood disorder will have borderline personality traits. The prevalence of personality disorders within clinical settings is estimated to be well above 50% (Torgersen, 2012). As many as 60% of inpatients within some clinical settings are diagnosed with borderline personality disorder (APA, 2000).
Antisocial personality disorder may be diagnosed in as many as 50% of inmates within a correctional setting (Hare et al., 2012). It is estimated that 10% to 15% of the general population meets criteria for at least one of the 10 DSM-IV-TR personality disorders (Torgersen, 2012), and quite a few more individuals are likely to have maladaptive personality traits not covered by one of the 10 DSM-5 diagnoses. The presence of a personality disorder will often have an impact on the treatment of other mental disorders, typically inhibiting or impairing responsivity. Antisocial persons will tend to be irresponsible and negligent; borderline persons can form intensely manipulative attachments to their therapists; paranoid patients will be unduly suspicious and accusatory; narcissistic patients can be dismissive and denigrating; and dependent patients can become overly attached to and feel helpless without their therapists. It is a misconception, though, to suggest that personality disorders cannot themselves be treated. Personality disorders are among the most difficult of disorders to treat because they involve well-established behaviors that can be integral to a client's self-image (Millon, 2011). Nevertheless, much has been written on the treatment of personality disorder (e.g., Beck, Freeman, Davis, & Associates, 1990; Gunderson & Gabbard, 2000), and there is empirical support for clinically and socially meaningful changes in response to psychosocial and pharmacologic treatments (Perry & Bond, 2000). The development of an ideal or fully healthy personality structure is unlikely to occur through the course of treatment, but given the considerable social, public health, and personal costs associated with some of the personality disorders, such as the antisocial and borderline, even just moderate adjustments in personality functioning can represent quite significant and meaningful change. Nevertheless, manualized and/or empirically validated treatment protocols have been developed for only one personality disorder, borderline (APA, 2001). Focus Topic: Treatment of Borderline Personality Disorder Dialectical behavior therapy (Lynch & Cuyper, 2012) and mentalization therapy (Bateman & Fonagy, 2012): Dialectical behavior therapy is a form of cognitive-behavior therapy that draws on principles from Zen Buddhism, dialectical philosophy, and behavioral science. The treatment has four components: individual therapy, group skills training, telephone coaching, and a therapist consultation team, and will typically last a full year. As such, it is a relatively expensive form of treatment, but research has indicated that its benefits far outweigh its costs, both financially and socially. It is unclear why specific and explicit treatment manuals have not been developed for the other personality disorders. This may reflect a regrettable assumption that personality disorders are unresponsive to treatment. It may also reflect the complexity of their treatment. As noted earlier, each DSM-5 disorder is a heterogeneous constellation of maladaptive personality traits. In fact, two persons can each meet diagnostic criteria for the same personality disorder (such as the antisocial, borderline, schizoid, schizotypal, narcissistic, or avoidant personality disorder) and yet have only one diagnostic criterion in common. For example, only five of nine features are necessary for the diagnosis of borderline personality disorder; therefore, two persons can meet criteria for this disorder and yet have only one feature in common.
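To make the arithmetic behind this heterogeneity concrete, the brief Python sketch below treats the nine borderline criteria as abstract numbers (the numbering is ours, purely for illustration, and does not reproduce DSM-5 wording) and shows two hypothetical patients who each clear the five-of-nine threshold while sharing only a single criterion.

# Illustrative only: nine borderline criteria represented as abstract numbers,
# not DSM-5 wording; the "patients" are hypothetical.
patient_a = {1, 2, 3, 4, 5}   # meets criteria 1 through 5
patient_b = {5, 6, 7, 8, 9}   # meets criteria 5 through 9

# Each patient meets the five-of-nine diagnostic threshold...
assert len(patient_a) >= 5 and len(patient_b) >= 5

# ...yet the two symptom pictures overlap on only one criterion.
print(patient_a & patient_b)   # prints {5}

Both hypothetical patients would receive the same diagnosis even though their presentations barely overlap.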
In addition, patients meeting diagnostic criteria for one personality disorder will often meet diagnostic criteria for another. This degree of diagnostic overlap and heterogeneity of membership greatly hinders any effort to identify a specific etiology, pathology, or treatment for a respective personality disorder, as there is so much variation within any particular group of patients sharing the same diagnosis (Smith & Zapolski, 2009). Of course, this diagnostic overlap and complexity did not prevent researchers and clinicians from developing dialectical behavior therapy and mentalization therapy. A further reason for the weak progress in treatment development is that, as noted earlier, persons rarely seek treatment for their personality disorder. It would be difficult to obtain a sufficiently large group of people with, for instance, narcissistic or obsessive-compulsive personality disorder to participate in a treatment outcome study, with one group receiving the manualized treatment protocol and the other receiving treatment as usual. Conclusions It is evident that all individuals have a personality, as indicated by their characteristic way of thinking, feeling, behaving, and relating to others. For some people, these traits result in a considerable degree of distress and/or impairment, constituting a personality disorder. A considerable body of research has accumulated to help understand the etiology, pathology, and/or treatment for some personality disorders (i.e., antisocial, schizotypal, borderline, dependent, and narcissistic), but not so much for others (e.g., histrionic, schizoid, and paranoid). However, researchers and clinicians are now shifting toward a more dimensional understanding of personality disorders, wherein each is understood as a maladaptive variant of general personality structure, thereby bringing to bear all that is known about general personality functioning to an understanding of these maladaptive variants. Outside Resources Structured Clinical Interview for DSM-5 (SCID-5) https://www.appi.org/products/structured-clinical-interview-for-dsm-5-scid-5 Web: DSM-5 website discussion of personality disorders http://www.dsm5.org/ProposedRevision...Disorders.aspx Discussion Questions 1. Do you think that any of the personality disorders, or some of their specific traits, are ever good or useful to have? 2. If someone with a personality disorder commits a crime, what is the right way for society to respond? For example, does or should meeting diagnostic criteria for antisocial personality disorder mitigate (lower) a person's responsibility for committing a crime? 3. Given what you know about personality disorders and the traits that comprise each one, would you say there is any personality disorder that is likely to be diagnosed in one gender more than the other? Why or why not? 4. Do you believe that personality disorders can be best understood as a constellation of maladaptive personality traits, or do you think that there is something more involved for individuals suffering from a personality disorder? 5. The authors suggested Clyde Barrow as an example of antisocial personality disorder and Blanche DuBois for histrionic personality disorder. Can you think of a person from the media or literature who would have at least some of the traits of narcissistic personality disorder? Vocabulary Antisocial A pervasive pattern of disregard and violation of the rights of others. These behaviors may be aggressive or destructive and may involve breaking laws or rules, deceit or theft.
Avoidant A pervasive pattern of social inhibition, feelings of inadequacy, and hypersensitivity to negative evaluation. Borderline A pervasive pattern of instability of interpersonal relationships, self-image, and affects, and marked impulsivity. Dependent A pervasive and excessive need to be taken care of that leads to submissive and clinging behavior and fears of separation. Five-Factor Model Five broad domains or dimensions that are used to describe human personality. Histrionic A pervasive pattern of excessive emotionality and attention seeking. Narcissistic A pervasive pattern of grandiosity (in fantasy or behavior), need for admiration, and lack of empathy. Obsessive-compulsive A pervasive pattern of preoccupation with orderliness, perfectionism, and mental and interpersonal control, at the expense of flexibility, openness, and efficiency. Paranoid A pervasive distrust and suspiciousness of others such that their motives are interpreted as malevolent. Personality Characteristic, routine ways of thinking, feeling, and relating to others. Personality disorders When personality traits result in significant distress, social impairment, and/or occupational impairment. Schizoid A pervasive pattern of detachment from social relationships and a restricted range of expression of emotions in interpersonal settings. Schizotypal A pervasive pattern of social and interpersonal deficits marked by acute discomfort with, and reduced capacity for, close relationships as well as perceptual distortions and eccentricities of behavior.
• 13.1: Therapeutic Orientations This module outlines some of the best-known therapeutic approaches and explains the history, techniques, advantages, and disadvantages associated with each. The most effective modern approach is cognitive behavioral therapy (CBT). We also discuss psychoanalytic therapy, person-centered therapy, and mindfulness-based approaches. Drug therapy and emerging new treatment strategies will also be briefly explored. • 13.2: Psychopharmacology Psychopharmacology is the study of how drugs affect behavior. If a drug changes your perception, or the way you feel or think, the drug exerts effects on your brain and nervous system. In this module, we will provide an overview of some of these topics as well as discuss some current controversial areas in the field of psychopharmacology. 13: THERAPIES By Hannah Boettcher, Stefan G. Hofmann, and Q. Jade Wu Boston University In the past century, a number of psychotherapeutic orientations have gained popularity for treating mental illnesses. This module outlines some of the best-known therapeutic approaches and explains the history, techniques, advantages, and disadvantages associated with each. The most effective modern approach is cognitive behavioral therapy (CBT). We also discuss psychoanalytic therapy, person-centered therapy, and mindfulness-based approaches. Drug therapy and emerging new treatment strategies will also be briefly explored. Learning Objectives • Become familiar with the most widely practiced approaches to psychotherapy. • For each therapeutic approach, consider: history, goals, key techniques, and empirical support. • Consider the impact of emerging treatment strategies in mental health. Introduction The history of mental illness can be traced as far back as 1500 BCE, when the ancient Egyptians noted cases of "distorted concentration" and "emotional distress in the heart or mind" (Nasser, 1987). Today, nearly half of all Americans will experience mental illness at some point in their lives, and mental health problems affect more than one-quarter of the population in any given year (Kessler et al., 2005). Fortunately, a range of psychotherapies exist to treat mental illnesses. This module provides an overview of some of the best-known schools of thought in psychotherapy. Currently, the most effective approach is called Cognitive Behavioral Therapy (CBT); however, other approaches, such as psychoanalytic therapy, person-centered therapy, and mindfulness-based therapies, are also used—though the effectiveness of these treatments isn't as clear as it is for CBT. Throughout this module, note the advantages and disadvantages of each approach, paying special attention to their support by empirical research. Psychoanalysis and Psychodynamic Therapy The earliest organized therapy for mental disorders was psychoanalysis. Made famous in the early 20th century by one of the best-known clinicians of all time, Sigmund Freud, this approach stresses that mental health problems are rooted in unconscious conflicts and desires. In order to resolve the mental illness, then, these unconscious struggles must be identified and addressed. Psychoanalysis often does this through exploring one's early childhood experiences that may have continuing repercussions on one's mental health in the present and later in life. Psychoanalysis is an intensive, long-term approach in which patients and therapists may meet multiple times per week, often for many years.
History of Psychoanalytic Therapy Freud initially suggested that mental health problems arise from efforts to push inappropriate sexual urges out of conscious awareness (Freud, 1895/1955). Later, Freud suggested more generally that psychiatric problems are the result of tension between different parts of the mind: the id, the superego, and the ego. In Freud’s structural model, the id represents pleasure-driven unconscious urges (e.g., our animalistic desires for sex and aggression), while the superego is the semi-conscious part of the mind where morals and societal judgment are internalized (e.g., the part of you that automatically knows how society expects you to behave). The ego—also partly conscious—mediates between the id and superego. Freud believed that bringing unconscious struggles like these (where the id demands one thing and the superego another) into conscious awareness would relieve the stress of the conflict (Freud, 1920/1955)—which became the goal of psychoanalytic therapy. Although psychoanalysis is still practiced today, it has largely been replaced by the more broadly defined psychodynamic therapy. This latter approach has the same basic tenets as psychoanalysis, but is briefer, makes more of an effort to put clients in their social and interpersonal context, and focuses more on relieving psychological distress than on changing the person. Techniques in Psychoanalysis Psychoanalysts and psychodynamic therapists employ several techniques to explore patients’ unconscious mind. One common technique is called free association. Here, the patient shares any and all thoughts that come to mind, without attempting to organize or censor them in any way. For example, if you took a pen and paper and just wrote down whatever came into your head, letting one thought lead to the next without allowing conscious criticism to shape what you were writing, you would be doing free association. The analyst then uses his or her expertise to discern patterns or underlying meaning in the patient’s thoughts. Sometimes, free association exercises are applied specifically to childhood recollections. That is, psychoanalysts believe a person’s childhood relationships with caregivers often determine the way that person relates to others, and predicts later psychiatric difficulties. Thus, exploring these childhood memories, through free association or otherwise, can provide therapists with insights into a patient’s psychological makeup. Because we don’t always have the ability to consciously recall these deep memories, psychoanalysts also discuss their patients’ dreams. In Freudian theory, dreams contain not only manifest (or literal) content, but also latent (or symbolic) content (Freud, 1900; 1955). For example, someone may have a dream that his/her teeth are falling out—the manifest or actual content of the dream. However, dreaming that one’s teeth are falling out could be a reflection of the person’s unconscious concern about losing his or her physical attractiveness—the latent or metaphorical content of the dream. It is the therapist’s job to help discover the latent content underlying one’s manifest content through dream analysis. In psychoanalytic and psychodynamic therapy, the therapist plays a receptive role—interpreting the patient’s thoughts and behavior based on clinical experience and psychoanalytic theory. For example, if during therapy a patient begins to express unjustified anger toward the therapist, the therapist may recognize this as an act of transference. 
That is, the patient may be displacing feelings for people in his or her life (e.g., anger toward a parent) onto the therapist. At the same time, though, the therapist has to be aware of his or her own thoughts and emotions, for, in a related process called countertransference, the therapist may displace his/her own emotions onto the patient. The key to psychoanalytic therapy is to have patients uncover the buried, conflicting content of their mind, and therapists use various tactics—such as seating patients to face away from them—to promote a freer self-disclosure. And, as a therapist spends more time with a patient, the therapist can come to view his or her relationship with the patient as another reflection of the patient's mind. Advantages and Disadvantages of Psychoanalytic Therapy Psychoanalysis was once the only type of psychotherapy available, but presently the number of therapists practicing this approach is decreasing around the world. Psychoanalysis is not appropriate for some types of patients, including those with severe psychopathology or intellectual disability. Further, psychoanalysis is often expensive because treatment usually lasts many years. Still, some patients and therapists find the prolonged and detailed analysis very rewarding. Perhaps the greatest disadvantage of psychoanalysis and related approaches is the lack of empirical support for their effectiveness. The limited research that has been conducted on these treatments suggests that they do not reliably lead to better mental health outcomes (e.g., Driessen et al., 2010). And, although there are some reviews that seem to indicate that long-term psychodynamic therapies might be beneficial (e.g., Leichsenring & Rabung, 2008), other researchers have questioned the validity of these reviews. Nevertheless, psychoanalytic theory was history's first attempt at formal treatment of mental illness, setting the stage for the more modern approaches used today. Humanistic and Person-Centered Therapy One of the next developments in therapy for mental illness, which arrived in the mid-20th century, is called humanistic or person-centered therapy (PCT). Here, the belief is that mental health problems result from an inconsistency between patients' behavior and their true personal identity. Thus, the goal of PCT is to create conditions under which patients can discover their self-worth, feel comfortable exploring their own identity, and alter their behavior to better reflect this identity. History of Person-Centered Therapy PCT was developed by a psychologist named Carl Rogers, during a time of significant growth in the movements of humanistic theory and human potential. These perspectives were based on the idea that humans have an inherent drive to realize and express their own capabilities and creativity. Rogers, in particular, believed that all people have the potential to change and improve, and that the role of therapists is to foster self-understanding in an environment where adaptive change is most likely to occur (Rogers, 1951). Rogers suggested that the therapist and patient must engage in a genuine, egalitarian relationship in which the therapist is nonjudgmental and empathetic. In PCT, the patient should experience both a vulnerability to anxiety, which motivates the desire to change, and an appreciation for the therapist's support. Techniques in Person-Centered Therapy Humanistic and person-centered therapy, like psychoanalysis, involves a largely unstructured conversation between the therapist and the patient.
Unlike psychoanalysis, though, a therapist using PCT takes a passive role, guiding the patient toward his or her own self-discovery. Rogers’s original name for PCT was non-directive therapy, and this notion is reflected in the flexibility found in PCT. Therapists do not try to change patients’ thoughts or behaviors directly. Rather, their role is to provide the therapeutic relationship as a platform for personal growth. In these kinds of sessions, the therapist tends only to ask questions and doesn’t provide any judgment or interpretation of what the patient says. Instead, the therapist is present to provide a safe and encouraging environment for the person to explore these issues for him- or herself. An important aspect of the PCT relationship is the therapist’s unconditional positive regard for the patient’s feelings and behaviors. That is, the therapist is never to condemn or criticize the patient for what s/he has done or thought; the therapist is only to express warmth and empathy. This creates an environment free of approval or disapproval, where patients come to appreciate their value and to behave in ways that are congruent with their own identity. Advantages and Disadvantages of Person-Centered Therapy One key advantage of person-centered therapy is that it is highly acceptable to patients. In other words, people tend to find the supportive, flexible environment of this approach very rewarding. Furthermore, some of the themes of PCT translate well to other therapeutic approaches. For example, most therapists of any orientation find that clients respond well to being treated with nonjudgmental empathy. The main disadvantage to PCT, however, is that findings about its effectiveness are mixed. One possibility for this could be that the treatment is primarily based on unspecific treatment factors. That is, rather than using therapeutic techniques that are specific to the patient and the mental problem (i.e., specific treatment factors), the therapy focuses on techniques that can be applied to anyone (e.g., establishing a good relationship with the patient) (Cuijpers et al., 2012; Friedli, King, Lloyd, & Horder, 1997). Similar to how “one-size-fits-all” doesn’t really fit every person, PCT uses the same practices for everyone, which may work for some people but not others. Further research is necessary to evaluate its utility as a therapeutic approach. Cognitive Behavioral Therapy Although both psychoanalysis and PCT are still used today, another therapy, cognitive-behavioral therapy (CBT), has gained more widespread support and practice. CBT refers to a family of therapeutic approaches whose goal is to alleviate psychological symptoms by changing their underlying cognitions and behaviors. The premise of CBT is that thoughts, behaviors, and emotions interact and contribute to various mental disorders. For example, let’s consider how a CBT therapist would view a patient who compulsively washes her hands for hours every day. First, the therapist would identify the patient’s maladaptive thought: “If I don’t wash my hands like this, I will get a disease and die.” The therapist then identifies how this maladaptive thought leads to a maladaptive emotion: the feeling of anxiety when her hands aren’t being washed. And finally, this maladaptive emotion leads to the maladaptive behavior: the patient washing her hands for hours every day. 
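For readers who find it helpful to see this thought-emotion-behavior chain laid out explicitly, here is a minimal Python sketch of the hand-washing case formulation just described. The field names are our own informal shorthand, not standard clinical notation, and the snippet is purely illustrative.

# Purely illustrative sketch of the CBT case formulation described above;
# the keys are informal labels, not clinical terminology.
case_formulation = {
    "maladaptive_thought": "If I don't wash my hands like this, I will get a disease and die.",
    "maladaptive_emotion": "anxiety whenever her hands are not being washed",
    "maladaptive_behavior": "washing her hands for hours every day",
}

# CBT intervenes at the first link: if the thought is replaced with a more
# adaptive one, the downstream emotion and behavior are expected to change too.
for component, content in case_formulation.items():
    print(f"{component}: {content}")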
CBT is a present-focused therapy (i.e., focused on the “now” rather than causes from the past, such as childhood relationships) that uses behavioral goals to improve one’s mental illness. Often, these behavioral goals involve between-session homework assignments. For example, the therapist may give the hand-washing patient a worksheet to take home; on this worksheet, the woman is to write down every time she feels the urge to wash her hands, how she deals with the urge, and what behavior she replaces that urge with. When the patient has her next therapy session, she and the therapist review her “homework” together. CBT is a relatively brief intervention of 12 to 16 weekly sessions, closely tailored to the nature of the psychopathology and treatment of the specific mental disorder. And, as the empirical data shows, CBT has proven to be highly efficacious for virtually all psychiatric illnesses (Hofmann, Asnaani, Vonk, Sawyer, & Fang, 2012). History of Cognitive Behavioral Therapy CBT developed from clinical work conducted in the mid-20th century by Dr. Aaron T. Beck, a psychiatrist, and Albert Ellis, a psychologist. Beck used the term automatic thoughts to refer to the thoughts depressed patients report experiencing spontaneously. He observed that these thoughts arise from three belief systems, or schemas: beliefs about the self, beliefs about the world, and beliefs about the future. In treatment, therapy initially focuses on identifying automatic thoughts (e.g., “If I don’t wash my hands constantly, I’ll get a disease”), testing their validity, and replacing maladaptive thoughts with more adaptive thoughts (e.g., “Washing my hands three times a day is sufficient to prevent a disease”). In later stages of treatment, the patient’s maladaptive schemas are examined and modified. Ellis (1957) took a comparable approach, in what he called rational-emotive-behavioral therapy (REBT), which also encourages patients to evaluate their own thoughts about situations. Techniques in CBT Beck and Ellis strove to help patients identify maladaptive appraisals, or the untrue judgments and evaluations of certain thoughts. For example, if it’s your first time meeting new people, you may have the automatic thought, “These people won’t like me because I have nothing interesting to share.” That thought itself is not what’s troublesome; the appraisal (or evaluation) that it might have merit is what’s troublesome. The goal of CBT is to help people make adaptive, instead of maladaptive, appraisals (e.g., “I do know interesting things!”). This technique of reappraisal, or cognitive restructuring, is a fundamental aspect of CBT. With cognitive restructuring, it is the therapist’s job to help point out when a person has an inaccurate or maladaptive thought, so that the patient can either eliminate it or modify it to be more adaptive. In addition to thoughts, though, another important treatment target of CBT is maladaptive behavior. Every time a person engages in maladaptive behavior (e.g., never speaking to someone in new situations), he or she reinforces the validity of the maladaptive thought, thus maintaining or perpetuating the psychological illness. In treatment, the therapist and patient work together to develop healthy behavioral habits (often tracked with worksheet-like homework), so that the patient can break this cycle of maladaptive thoughts and behaviors. For many mental health problems, especially anxiety disorders, CBT incorporates what is known as exposure therapy. 
During exposure therapy, a patient confronts a problematic situation and fully engages in the experience instead of avoiding it. For example, imagine a man who is terrified of spiders. Whenever he encounters one, he immediately screams and panics. In exposure therapy, the man would gradually confront and interact with spiders, rather than simply avoiding them as he usually does. The goal is to reduce the fear associated with the situation through extinction learning, a neurobiological and cognitive process by which the patient "unlearns" the irrational fear. For example, exposure therapy for someone terrified of spiders might begin with him looking at a cartoon of a spider, followed by him looking at pictures of real spiders, and later, him handling a plastic spider. After weeks of this incremental exposure, the patient may even be able to hold a live spider. After repeated exposure (starting small and building one's way up), the patient experiences less physiological fear and fewer maladaptive thoughts about spiders, breaking his tendency for anxiety and subsequent avoidance. Advantages and Disadvantages of CBT CBT interventions tend to be relatively brief, making them cost-effective for the average consumer. In addition, CBT is an intuitive treatment that makes logical sense to patients. It can also be adapted to suit the needs of many different populations. One disadvantage, however, is that CBT does involve significant effort on the patient's part, because the patient is an active participant in treatment. Therapists often assign "homework" (e.g., worksheets for recording one's thoughts and behaviors) between sessions to maintain the cognitive and behavioral habits the patient is working on. The greatest strength of CBT is the abundance of empirical support for its effectiveness. Studies have consistently found CBT to be equally or more effective than other forms of treatment, including medication and other therapies (Butler, Chapman, Forman, & Beck, 2006; Hofmann et al., 2012). For this reason, CBT is considered a first-line treatment for many mental disorders. Focus Topic: Pioneers of CBT The central notion of CBT is the idea that a person's behavioral and emotional responses are causally influenced by one's thinking. The Stoic Greek philosopher Epictetus is quoted as saying, "men are not moved by things, but by the view they take of them." Meaning, it is not the event per se, but rather one's assumptions (including interpretations and perceptions) of the event that are responsible for one's emotional response to it. Beck calls these assumptions about events and situations automatic thoughts (Beck, 1979), whereas Ellis (1962) refers to these assumptions as self-statements. The cognitive model assumes that these cognitive processes cause the emotional and behavioral responses to events or stimuli. This causal chain is illustrated in Ellis's ABC model, in which A stands for the antecedent event, B stands for belief, and C stands for consequence. During CBT, the person is encouraged to carefully observe the sequence of events and the response to them, and then explore the validity of the underlying beliefs through behavioral experiments and reasoning, much like a detective or scientist. Acceptance and Mindfulness-Based Approaches Unlike the preceding therapies, which were developed in the 20th century, this next one was born out of age-old Buddhist and yoga practices.
Mindfulness, a process that cultivates a nonjudgmental, yet attentive, mental state, is at the core of therapies that focus on one's awareness of bodily sensations, thoughts, and the outside environment. Whereas other therapies work to modify or eliminate these sensations and thoughts, mindfulness focuses on nonjudgmentally accepting them (Kabat-Zinn, 2003; Baer, 2003). For example, whereas CBT may actively confront and work to change a maladaptive thought, mindfulness therapy works to acknowledge and accept the thought, understanding that the thought is spontaneous and not what the person truly believes. There are two important components of mindfulness: (1) self-regulation of attention, and (2) orientation toward the present moment (Bishop et al., 2004). Mindfulness is thought to improve mental health because it draws attention away from past and future stressors, encourages acceptance of troubling thoughts and feelings, and promotes physical relaxation. Techniques in Mindfulness-Based Therapy Psychologists have adapted the practice of mindfulness as a form of psychotherapy, generally called mindfulness-based therapy (MBT). Several types of MBT have become popular in recent years, including mindfulness-based stress reduction (MBSR) (e.g., Kabat-Zinn, 1982) and mindfulness-based cognitive therapy (MBCT) (e.g., Segal, Williams, & Teasdale, 2002). MBSR uses meditation, yoga, and attention to physical experiences to reduce stress. The hope is that reducing a person's overall stress will allow that person to more objectively evaluate his or her thoughts. In MBCT, rather than reducing one's general stress to address a specific problem, attention is focused on one's thoughts and their associated emotions. For example, MBCT helps prevent relapses in depression by encouraging patients to evaluate their own thoughts objectively and without value judgment (Baer, 2003). Although cognitive behavioral therapy (CBT) may seem similar to this, it focuses on "pushing out" the maladaptive thought, whereas mindfulness-based cognitive therapy focuses on "not getting caught up" in it. The treatments used in MBCT have been used to address a wide range of illnesses, including depression, anxiety, chronic pain, coronary artery disease, and fibromyalgia (Hofmann, Sawyer, Witt & Oh, 2010). Mindfulness and acceptance—in addition to being therapies in their own right—have also been used as "tools" in other cognitive-behavioral therapies, particularly in dialectical behavior therapy (DBT) (e.g., Linehan, Armstrong, Suarez, Allmon, & Heard, 1991). DBT, often used in the treatment of borderline personality disorder, focuses on skills training. That is, it often employs mindfulness and cognitive behavioral therapy practices, but it also works to teach its patients "skills" they can use to correct maladaptive tendencies. For example, one skill DBT teaches patients is called distress tolerance—or, ways to cope with maladaptive thoughts and emotions in the moment. For example, people who feel an urge to cut themselves may be taught to snap a rubber band against their arm instead. The primary difference between DBT and CBT is that DBT employs techniques that address the symptoms of the problem (e.g., cutting oneself) rather than the problem itself (e.g., understanding the psychological motivation to cut oneself). CBT does not teach such skills training because of the concern that the skills—even though they may help in the short-term—may be harmful in the long-term, by maintaining maladaptive thoughts and behaviors.
DBT is founded on the perspective of a dialectical worldview. That is, rather than thinking of the world as “black and white,” or “only good and only bad,” it focuses on accepting that some things can have characteristics of both “good” and “bad.” So, in a case involving maladaptive thoughts, instead of teaching that a thought is entirely bad, DBT tries to help patients be less judgmental of their thoughts (as with mindfulness-based therapy) and encourages change through therapeutic progress, using cognitive-behavioral techniques as well as mindfulness exercises. Another form of treatment that also uses mindfulness techniques is acceptance and commitment therapy (ACT) (Hayes, Strosahl, & Wilson, 1999). In this treatment, patients are taught to observe their thoughts from a detached perspective (Hayes et al., 1999). ACT encourages patients not to attempt to change or avoid thoughts and emotions they observe in themselves, but to recognize which are beneficial and which are harmful. However, the differences among ACT, CBT, and other mindfulness-based treatments are a topic of controversy in the current literature. Advantages and Disadvantages of Mindfulness-Based Therapy Two key advantages of mindfulness-based therapies are their acceptability and accessibility to patients. Because yoga and meditation are already widely known in popular culture, consumers of mental healthcare are often interested in trying related psychological therapies. Currently, psychologists have not come to a consensus on the efficacy of MBT, though growing evidence supports its effectiveness for treating mood and anxiety disorders. For example, one review of MBT studies for anxiety and depression found that mindfulness-based interventions generally led to moderate symptom improvement (Hofmann et al., 2010). Emerging Treatment Strategies With growth in research and technology, psychologists have been able to develop new treatment strategies in recent years. Often, these approaches focus on enhancing existing treatments, such as cognitive-behavioral therapies, through the use of technological advances. For example, internet- and mobile-delivered therapies make psychological treatments more available, through smartphones and online access. Clinician-supervised online CBT modules allow patients to access treatment from home on their own schedule—an opportunity particularly important for patients with less geographic or socioeconomic access to traditional treatments. Furthermore, smartphones help extend therapy to patients’ daily lives, allowing for symptom tracking, homework reminders, and more frequent therapist contact. Another benefit of technology is cognitive bias modification. Here, patients are given exercises, often through the use of video games, aimed at changing their problematic thought processes. For example, researchers might use a mobile app to train alcohol abusers to avoid stimuli related to alcohol. One version of this game flashes four pictures on the screen—three alcohol cues (e.g., a can of beer, the front of a bar) and one health-related image (e.g., someone drinking water). The goal is for the patient to tap the healthy picture as fast as s/he can. Games like these aim to target patients’ automatic, subconscious thoughts that may be difficult to direct through conscious effort. That is, by repeatedly tapping the healthy image, the patient learns to “ignore” the alcohol cues, so when those cues are encountered in the environment, they will be less likely to trigger the urge to drink. 
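To illustrate how one training trial of such a game might be structured in software, here is a short Python sketch. It is not the actual app used in the cited research; the image names, input method, and scoring are hypothetical stand-ins.

import random
import time

# Hypothetical stimulus pools; the file names are placeholders, not assets
# from any published cognitive-bias-modification study.
ALCOHOL_CUES = ["beer_can.png", "bar_front.png", "six_pack.png", "wine_glass.png"]
HEALTHY_CUES = ["drinking_water.png", "jogging.png", "fruit_bowl.png"]

def run_trial():
    """Present three alcohol cues and one healthy cue; the patient 'taps'
    the healthy picture (here, by typing its number) as fast as possible."""
    images = random.sample(ALCOHOL_CUES, 3) + [random.choice(HEALTHY_CUES)]
    random.shuffle(images)
    healthy_position = next(i for i, img in enumerate(images) if img in HEALTHY_CUES)

    for number, img in enumerate(images, start=1):
        print(f"{number}: {img}")
    start = time.time()
    choice = int(input("Tap the healthy picture (1-4): ")) - 1
    reaction_time = time.time() - start

    return choice == healthy_position, reaction_time

if __name__ == "__main__":
    correct, rt = run_trial()
    print("correct" if correct else "incorrect", f"({rt:.2f} s)")

Over many such trials, responses to the healthy cue are expected to become faster and more automatic, which is the behavioral pattern the real interventions aim to produce.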
Approaches like these are promising because of their accessibility; however, they require further research to establish their effectiveness. Yet another emerging treatment employs CBT-enhancing pharmaceutical agents. These are drugs used to improve the effects of therapeutic interventions. Based on research from animal experiments, researchers have found that certain drugs influence the biological processes known to be involved in learning. Thus, if people take these drugs while going through psychotherapy, they are better able to "learn" the techniques for improvement. For example, the antibiotic d-cycloserine improves treatment for anxiety disorders by facilitating the learning processes that occur during exposure therapy. Ongoing research in this exciting area may prove to be quite fruitful. Pharmacological Treatments Up until this point, all the therapies we have discussed have been talk-based or meditative practices. However, psychiatric medications are also frequently used to treat mental disorders, including schizophrenia, bipolar disorder, depression, and anxiety disorders. Psychiatric drugs are commonly used, in part, because they can be prescribed by general medical practitioners, whereas effective psychotherapy requires specialized mental health training to deliver. While drugs and CBT tend to be almost equally effective, choosing the best intervention depends on the disorder and individual being treated, as well as other factors—such as treatment availability and comorbidity (i.e., having multiple mental or physical disorders at once). Although many new drugs have been introduced in recent decades, there is still much we do not understand about their mechanisms of action in the brain. Further research is needed to refine our understanding of both pharmacological and behavioral treatments before we can make firm claims about their effectiveness. Integrative and Eclectic Psychotherapy In discussing therapeutic orientations, it is important to note that some clinicians incorporate techniques from multiple approaches, a practice known as integrative or eclectic psychotherapy. For example, a therapist may employ distress tolerance skills from DBT (to resolve short-term problems), cognitive reappraisal from CBT (to address long-standing issues), and mindfulness-based meditation from MBCT (to reduce overall stress). And, in fact, between 13% and 42% of therapists have identified their own approaches as integrative or eclectic (Norcross & Goldfried, 2005). Conclusion Throughout human history we have had to deal with mental illness in one form or another. Over time, several schools of thought have emerged for treating these problems. Although various therapies have been shown to work for specific individuals, cognitive behavioral therapy is currently the treatment most widely supported by empirical research. Still, practices like psychodynamic therapies, person-centered therapy, mindfulness-based treatments, and acceptance and commitment therapy have also shown success. And, with recent advances in research and technology, clinicians are able to enhance these and other therapies to treat more patients more effectively than ever before. However, what is important in the end is that people actually seek out mental health specialists to help them with their problems. One of the biggest deterrents to doing so is that people don't understand what psychotherapy really entails.
Through understanding how current practices work, not only can we better educate people about how to get the help they need, but we can continue to advance our treatments to be more effective in the future. Outside Resources Article: A personal account of the benefits of mindfulness-based therapy https://www.theguardian.com/lifeandstyle/2014/jan/11/julie-myerson-mindfulness-based-cognitive-therapy Article: The Effect of Mindfulness-Based Therapy on Anxiety and Depression: A Meta-Analytic Review https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2848393/ Video: An example of a person-centered therapy session. Video: Carl Rogers, the founder of the humanistic, person-centered approach to psychology, discusses the position of the therapist in PCT. Video: CBT (cognitive behavioral therapy) is one of the most common treatments for a range of mental health problems, from anxiety and depression to bipolar disorder, OCD, and schizophrenia. This animation explains the basics and how you can decide whether it's best for you or not. Web: An overview of the purpose and practice of cognitive behavioral therapy (CBT) http://psychcentral.com/lib/in-depth-cognitive-behavioral-therapy/ Web: The history and development of psychoanalysis http://www.freudfile.org/psychoanalysis/history.html Discussion Questions 1. Psychoanalytic theory is no longer the dominant therapeutic approach, because it lacks empirical support. Yet many consumers continue to seek psychoanalytic or psychodynamic treatments. Do you think psychoanalysis still has a place in mental health treatment? If so, why? 2. What might be some advantages and disadvantages of technological advances in psychological treatment? What will psychotherapy look like 100 years from now? 3. Some people have argued that all therapies are about equally effective, and that they all effect change through common factors such as the involvement of a supportive therapist. Does this claim sound reasonable to you? Why or why not? 4. When choosing a psychological treatment for a specific patient, what factors besides the treatment’s demonstrated efficacy should be taken into account? Vocabulary Acceptance and commitment therapy A therapeutic approach designed to foster nonjudgmental observation of one’s own mental processes. Automatic thoughts Thoughts that occur spontaneously; often used to describe problematic thoughts that maintain mental disorders. Cognitive bias modification Using exercises (e.g., computer games) to change problematic thinking habits. Cognitive-behavioral therapy (CBT) A family of approaches with the goal of changing the thoughts and behaviors that influence psychopathology. Comorbidity Describes a state of having more than one psychological or physical disorder at a given time. Dialectical behavior therapy (DBT) A treatment often used for borderline personality disorder that incorporates both cognitive-behavioral and mindfulness elements. Dialectical worldview A perspective in DBT that emphasizes the joint importance of change and acceptance. Exposure therapy A form of intervention in which the patient engages with a problematic (usually feared) situation without avoidance or escape. Free association In psychodynamic therapy, a process in which the patient reports all thoughts that come to mind without censorship, and these thoughts are interpreted by the therapist. Integrative or eclectic psychotherapy Also called integrative psychotherapy, this term refers to approaches combining multiple orientations (e.g., CBT with psychoanalytic elements). 
Mindfulness A process that reflects a nonjudgmental, yet attentive, mental state. Mindfulness-based therapy A form of psychotherapy grounded in mindfulness theory and practice, often involving meditation, yoga, body scan, and other features of mindfulness exercises. Person-centered therapy A therapeutic approach focused on creating a supportive environment for self-discovery. Psychoanalytic therapy Sigmund Freud’s therapeutic approach focusing on resolving unconscious conflicts. Psychodynamic therapy Treatment applying psychoanalytic principles in a briefer, more individualized format. Reappraisal, or Cognitive restructuring The process of identifying, evaluating, and changing maladaptive thoughts in psychotherapy. Schema A mental representation or set of beliefs about something. Unconditional positive regard In person-centered therapy, an attitude of warmth, empathy and acceptance adopted by the therapist in order to foster feelings of inherent worth in the patient.
By Susan Barron University of Kentucky Psychopharmacology is the study of how drugs affect behavior. If a drug changes your perception, or the way you feel or think, the drug exerts effects on your brain and nervous system. We call drugs that change the way you think or feel psychoactive or psychotropic drugs, and almost everyone has used a psychoactive drug at some point (yes, caffeine counts). Understanding some of the basics about psychopharmacology can help us better understand a wide range of things that interest psychologists and others. For example, the pharmacological treatment of certain neurodegenerative diseases such as Parkinson’s disease tells us something about the disease itself. The pharmacological treatments used to treat psychiatric conditions such as schizophrenia or depression have undergone amazing development since the 1950s, and the drugs used to treat these disorders tell us something about what is happening in the brain of individuals with these conditions. Finally, understanding something about the actions of drugs of abuse and their routes of administration can help us understand why some psychoactive drugs are so addictive. In this module, we will provide an overview of some of these topics as well as discuss some current controversial areas in the field of psychopharmacology. learning objectives • How do the majority of psychoactive drugs work in the brain? • How does the route of administration affect how rewarding a drug might be? • Why is grapefruit dangerous to consume with many psychotropic medications? • Why might individualized drug doses based on genetic screening be helpful for treating conditions like depression? • Why is there controversy regarding pharmacotherapy for children, adolescents, and the elderly? Introduction Psychopharmacology, the study of how drugs affect the brain and behavior, is a relatively new science, although people have probably been taking drugs to change how they feel from early in human history (consider the eating of fermented fruit, ancient beer recipes, and chewing on the leaves of the coca plant for stimulant properties as just some examples). The word psychopharmacology itself tells us that this is a field that bridges our understanding of behavior (and brain) and pharmacology, and the range of topics included within this field is extremely broad. Virtually any drug that changes the way you feel does this by altering how neurons communicate with each other. Neurons (more than 100 billion in your nervous system) communicate with each other by releasing a chemical (neurotransmitter) across a tiny space between two neurons (the synapse). When the neurotransmitter crosses the synapse, it binds to a postsynaptic receptor (protein) on the receiving neuron and the message may then be transmitted onward. Obviously, neurotransmission is far more complicated than this – links at the end of this module can provide some useful background if you want more detail – but the first step is understanding that virtually all psychoactive drugs interfere with or alter how neurons communicate with each other. There are many neurotransmitters. Some of the most important in terms of psychopharmacological treatment and drugs of abuse are outlined in Table 1. The neurons that release these neurotransmitters, for the most part, are localized within specific circuits of the brain that mediate these behaviors. Psychoactive drugs can either increase activity at the synapse (these are called agonists) or reduce activity at the synapse (antagonists). 
Different drugs do this by different mechanisms, and some examples of agonists and antagonists are presented in Table 2. For each example, the drug’s trade name, which is the name of the drug provided by the drug company, and generic name (in parentheses) are provided. A very useful link at the end of this module shows the various steps involved in neurotransmission and some ways drugs can alter this. Table 2 provides examples of drugs and their primary mechanism of action, but it is very important to realize that drugs also have effects on other neurotransmitters. This contributes to the kinds of side effects that are observed when someone takes a particular drug. The reality is that no currently available drug acts only where we would like in the brain or only on a specific neurotransmitter. Individuals are sometimes prescribed one psychotropic drug but then may also have to take additional drugs to reduce the side effects caused by the initial drug. Sometimes individuals stop taking medication because the side effects can be so profound. Pharmacokinetics: What Is It – Why Is It Important? While this section may sound more like pharmacology, it is worth appreciating how important pharmacokinetics can be when considering psychoactive drugs. Pharmacokinetics refers to how the body handles a drug that we take. As mentioned earlier, psychoactive drugs exert their effects on behavior by altering neuronal communication in the brain, and the majority of drugs reach the brain by traveling in the blood. The acronym ADME is often used, with A standing for absorption (how the drug gets into the blood), D for distribution (how the drug gets to the organ of interest – in this module, that is the brain), M for metabolism (how the drug is broken down so it no longer exerts its psychoactive effects), and E for excretion (how the drug leaves the body). We will talk about a couple of these to show their importance for considering psychoactive drugs. Drug Administration There are many ways to take drugs, and these routes of drug administration can have a significant impact on how quickly that drug reaches the brain. The most common route of administration is oral administration, which is relatively slow and – perhaps surprisingly – often the most variable and complex route of administration. Drugs enter the stomach and then get absorbed by the blood supply and capillaries that line the small intestine. The rate of absorption can be affected by a variety of factors including the quantity and the type of food in the stomach (e.g., fats vs. proteins). This is why the medicine label for some drugs (like antibiotics) may specifically state foods that you should or should NOT consume within an hour of taking the drug because they can affect the rate of absorption. Two of the most rapid routes of administration include inhalation (i.e., smoking or gaseous anesthesia) and intravenous (IV) in which the drug is injected directly into the vein and hence the blood supply. Both of these routes of administration can get the drug to the brain in less than 10 seconds. IV administration also has the distinction of being the most dangerous because if there is an adverse drug reaction, there is very little time to administer any antidote, as in the case of an IV heroin overdose. Why might how quickly a drug gets to the brain be important? If a drug activates the reward circuits in the brain AND it reaches the brain very quickly, the drug has a high risk for abuse and addiction. 
Psychostimulants like amphetamine or cocaine are examples of drugs that have high risk for abuse because they are agonists at dopamine (DA) neurons involved in reward AND because these drugs exist in forms that can be either smoked or injected intravenously. Some argue that cigarette smoking is one of the hardest addictions to quit, and although part of the reason for this may be that smoking gets the nicotine into the brain very quickly (and indirectly acts on DA neurons), it is a more complicated story. For drugs that reach the brain very quickly, not only is the drug very addictive, but so are the cues associated with the drug (see Rohsenow, Niaura, Childress, Abrams, & Monti, 1990). For a crack user, this could be the pipe that they use to smoke the drug. For a cigarette smoker, however, it could be something as normal as finishing dinner or waking up in the morning (if that is when the smoker usually has a cigarette). For both the crack user and the cigarette smoker, the cues associated with the drug may actually cause craving that is alleviated by (you guessed it) – lighting a cigarette or using crack (i.e., relapse). This is one of the reasons individuals who enroll in drug treatment programs, especially out-of-town programs, are at significant risk of relapse if they later find themselves in proximity to old haunts, friends, etc. But this is much more difficult for a cigarette smoker. How can someone avoid eating, or avoid waking up in the morning? These examples help you begin to understand how important the route of administration can be for psychoactive drugs. Drug Metabolism Metabolism involves the breakdown of psychoactive drugs, and this occurs primarily in the liver. The liver produces enzymes (proteins that speed up a chemical reaction), and these enzymes help catalyze a chemical reaction that breaks down psychoactive drugs. Enzymes exist in “families,” and many psychoactive drugs are broken down by the same family of enzymes, the cytochrome P450 superfamily. There is not a unique enzyme for each drug; rather, certain enzymes can break down a wide variety of drugs. Tolerance to the effects of many drugs can occur with repeated exposure; that is, the drug produces less of an effect over time, so more of the drug is needed to get the same effect. This is particularly true for sedative drugs like alcohol or opiate-based painkillers. Metabolic tolerance is one kind of tolerance and it takes place in the liver. Some drugs (like alcohol) cause enzyme induction – an increase in the enzymes produced by the liver. For example, chronic drinking results in alcohol being broken down more quickly, so the alcoholic needs to drink more to get the same effect – of course, until so much alcohol is consumed that it damages the liver (alcohol can cause fatty liver or cirrhosis). Grapefruit Juice and Metabolism Certain types of food in the stomach can alter the rate of drug absorption, and other foods can also alter the rate of drug metabolism. The best-known example is grapefruit juice. Grapefruit juice suppresses cytochrome P450 enzymes in the liver, and these liver enzymes normally break down a large variety of drugs (including some of the psychotropic drugs). If the enzymes are suppressed, drug levels can build up to potentially toxic levels. In this case, the effects can persist for extended periods of time after the consumption of grapefruit juice. As of 2013, there are at least 85 drugs shown to adversely interact with grapefruit juice (Bailey, Dresser, & Arnold, 2013). 
Some psychotropic drugs that are likely to interact with grapefruit juice include carbamazepine (Tegretol), prescribed for bipolar disorder; diazepam (Valium), used to treat anxiety, alcohol withdrawal, and muscle spasms; and fluvoxamine (Luvox), used to treat obsessive compulsive disorder and depression. A link at the end of this module gives the latest list of drugs reported to have this unusual interaction. Individualized Therapy, Metabolic Differences, and Potential Prescribing Approaches for the Future Mental illnesses contribute to more disability in Western countries than all other illnesses including cancer and heart disease. Depression alone is predicted to be the second largest contributor to disease burden by 2020 (World Health Organization, 2004). The numbers of people affected by mental health issues are astonishing, with estimates that 25% of adults experience a mental health issue in any given year, and this affects not only the individual but their friends and family. One in 17 adults experiences a serious mental illness (Kessler, Chiu, Demler, & Walters, 2005). Newer antidepressants are probably the most frequently prescribed drugs for treating mental health issues, although there is no “magic bullet” for treating depression or other conditions. Pharmacotherapy with psychological therapy may be the most beneficial treatment approach for many psychiatric conditions, but there are still many unanswered questions. For example, why does one antidepressant help one individual yet have no effect on another? Antidepressants can take 4 to 6 weeks to start improving depressive symptoms, and we don’t really understand why. Many people do not respond to the first antidepressant prescribed and may have to try different drugs before finding something that works for them. Other people just do not improve with antidepressants (Ioannidis, 2008). The better we understand why individuals differ, the more easily and rapidly we will be able to help people in distress. One area that has received interest recently has to do with an individualized treatment approach. We now know that there are genetic differences in some of the cytochrome P450 enzymes and their ability to break down drugs. The general population falls into the following 4 categories: 1) ultra-extensive metabolizers break down certain drugs (like some of the current antidepressants) very, very quickly, 2) extensive metabolizers are also able to break down drugs fairly quickly, 3) intermediate metabolizers break down drugs more slowly than either of the two above groups, and finally 4) poor metabolizers break down drugs much more slowly than all of the other groups. Now consider someone receiving a prescription for an antidepressant – what would the consequences be if they were either an ultra-extensive metabolizer or a poor metabolizer? The ultra-extensive metabolizer would be given antidepressants and told it will probably take 4 to 6 weeks to begin working (this is true), but they metabolize the medication so quickly that it will never be effective for them. In contrast, the poor metabolizer given the same daily dose of the same antidepressant may build up such high levels in their blood (because they are not breaking the drug down) that they will have a wide range of side effects and feel really bad – also not a positive outcome. What if – instead – prior to prescribing an antidepressant, the doctor could take a blood sample and determine which type of metabolizer a patient actually was? 
They could then make a much more informed decision about the best dose to prescribe. There are new genetic tests now available to better individualize treatment in just this way. A blood sample can determine (at least for some drugs) which category an individual fits into, but we need data to determine if this actually is effective for treating depression or other mental illnesses (Zhou, 2009). Currently, this genetic test is expensive and not many health insurance plans cover this screen, but this may be an important component in the future of psychopharmacology. Other Controversial Issues Juveniles and Psychopharmacology A recent Centers for Disease Control (CDC) report has suggested that as many as 1 in 5 children between the ages of 5 and 17 may have some type of mental disorder (e.g., ADHD, autism, anxiety, depression) (CDC, 2013). Diagnoses of bipolar disorder in children and adolescents have also increased 40-fold in the past decade (Moreno, Laje, Blanco, Jiang, Schmidt, & Olfson, 2007), and it is now estimated that 1 in 88 children has been diagnosed with an autism spectrum disorder (CDC, 2011). Why has there been such an increase in these numbers? There is no single answer to this important question. Some believe that greater public awareness has contributed to increased teacher and parent referrals. Others argue that the increase stems from changes in the criteria currently used for diagnosis. Still others suggest that environmental factors, either prenatal or postnatal, have contributed to this upsurge. We do not have an answer, but the question does bring up an additional controversy related to how we should treat this population of children and adolescents. Many psychotropic drugs used for treating psychiatric disorders have been tested in adults, but few have been tested for safety or efficacy with children or adolescents. The most well-established psychotropics prescribed for children and adolescents are the psychostimulant drugs used for treating attention deficit hyperactivity disorder (ADHD), and there are clinical data on how effective these drugs are. However, we know far less about the safety and efficacy in young populations of the drugs typically prescribed for treating anxiety, depression, or other psychiatric disorders. The young brain continues to mature until probably well after age 20, so some scientists are concerned that drugs that alter neuronal activity in the developing brain could have significant consequences. There is an obvious need for clinical trials in children and adolescents to test the safety and effectiveness of many of these drugs, which also brings up a variety of ethical questions about who decides which children and adolescents will participate in these clinical trials, who can give consent, who receives reimbursements, etc. The Elderly and Psychopharmacology Another population that has not typically been included in clinical trials to determine the safety or effectiveness of psychotropic drugs is the elderly. Currently, there is very little high-quality evidence to guide prescribing for older people – clinical trials often exclude people with multiple comorbidities (other diseases, conditions, etc.), which are typical for elderly populations (see Hilmer and Gnjidic, 2008; Pollock, Forsyth, & Bies, 2008). This is a serious issue because the elderly consume a disproportionate share of prescription medications. The term polypharmacy refers to the use of multiple drugs, which is very common in elderly populations in the United States. 
As our population ages, some estimate that the proportion of people 65 or older will reach 20% of the U.S. population by 2030, with this group consuming 40% of the prescribed medications. As shown in Table 3 (from Schwartz and Abernethy, 2008), it is quite clear why the typical clinical trial that looks at the safety and effectiveness of psychotropic drugs can be problematic if we try to interpret these results for an elderly population. Metabolism of drugs is often slowed considerably for elderly populations, so less drug can produce the same effect (or all too often, too much drug can result in a variety of side effects). One of the greatest risk factors for elderly populations is falling (and breaking bones), which can happen if the elderly person gets dizzy from too much of a drug. There is also evidence that psychotropic medications can reduce bone density (thus worsening the consequences if someone falls) (Brown & Mezuk, 2012). Although we are gaining an awareness about some of the issues facing pharmacotherapy in older populations, this is a very complex area with many medical and ethical questions. This module provided an introduction of some of the important areas in the field of psychopharmacology. It should be apparent that this module just touched on a number of topics included in this field. It should also be apparent that understanding more about psychopharmacology is important to anyone interested in understanding behavior and that our understanding of issues in this field has important implications for society. Outside Resources Video: Neurotransmission Web: Description of how some drugs work and the brain areas involved - 1 www.drugabuse.gov/news-events...rotransmission Web: Description of how some drugs work and the brain areas involved - 2 http://learn.genetics.utah.edu/content/addiction/mouse/ Web: Information about how neurons communicate and the reward pathways http://learn.genetics.utah.edu/content/addiction/rewardbehavior/ Web: National Institute of Alcohol Abuse and Alcoholism http://www.niaaa.nih.gov/ Web: National Institute of Drug Abuse http://www.drugabuse.gov/ Web: National Institute of Mental Health http://www.nimh.nih.gov/index.shtml Web: Neurotransmission science.education.nih.gov/su...nsmission.html Web: Report of the Working Group on Psychotropic Medications for Children and Adolescents: Psychopharmacological, Psychosocial, and Combined Interventions for Childhood Disorders: Evidence Base, Contextual Factors, and Future Directions (2008): http://www.apa.org/pi/families/resources/child-medications.pdf Web: Ways drugs can alter neurotransmission http://thebrain.mcgill.ca/flash/d/d_03/d_03_m/d_03_m_par/d_03_m_par.html Discussion Questions 1. What are some of the issues surrounding prescribing medications for children and adolescents? How might this be improved? 2. What are some of the factors that can affect relapse to an addictive drug? 3. How might prescribing medications for depression be improved in the future to increase the likelihood that a drug would work and minimize side effects? Vocabulary Agonists A drug that increases or enhances a neurotransmitter’s effect. Antagonist A drug that blocks a neurotransmitter’s effect. Enzyme A protein produced by a living organism that allows or helps a chemical reaction to occur. Enzyme induction Process through which a drug can enhance the production of an enzyme. Metabolism Breakdown of substances. Neurotransmitter A chemical substance produced by a neuron that is used for communication between neurons. 
Pharmacokinetics How the body handles a drug, including its absorption, distribution, metabolism, and excretion. Polypharmacy The use of many medications. Psychoactive drug A drug that changes mood or the way someone feels. Psychotropic drug A drug that changes mood or emotion, usually used when talking about drugs prescribed for various mental conditions (depression, anxiety, schizophrenia, etc.). Synapse The tiny space separating neurons.
• 14.1: Social Cognition and Attitudes Social cognition is the area of social psychology that examines how people perceive and think about their social world. This module provides an overview of key topics within social cognition and attitudes, including judgmental heuristics, social prediction, affective and motivational influences on judgment, and explicit and implicit attitudes. • 14.2: Conformity and Obedience We often change our attitudes and behaviors to match the attitudes and behaviors of the people around us. One reason for this conformity is a concern about what other people think of us. This process was demonstrated in a classic study in which college students deliberately gave wrong answers to a simple visual judgment task rather than go against the group. Another reason we conform to the norm is because other people often have information we do not. • 14.3: Persuasion: So Easily Fooled This module introduces several major principles in the process of persuasion. It offers an overview of the different paths to persuasion. It then describes how mindless processing makes us vulnerable to undesirable persuasion and some of the “tricks” that may be used against us. • 14.4: Prejudice, Discrimination, and Stereotyping People are often biased against others outside of their own social group, showing prejudice (emotional bias), stereotypes (cognitive bias), and discrimination (behavioral bias). In the past, people were more explicit with their biases, but during the 20th century, when it became less socially acceptable to exhibit bias, prejudice, stereotypes, and discrimination became more subtle (automatic, ambiguous, and ambivalent). 14: SOCIAL PSYCHOLOGY By Yanine D. Hess and Cynthia L. Pickett University of California, Davis Social cognition is the area of social psychology that examines how people perceive and think about their social world. This module provides an overview of key topics within social cognition and attitudes, including judgmental heuristics, social prediction, affective and motivational influences on judgment, and explicit and implicit attitudes. learning objectives • Learn how we simplify the vast array of information in the world in a way that allows us to make decisions and navigate our environments efficiently. • Understand some of the social factors that influence how we reason. • Determine if our reasoning processes are always conscious, and if not, what some of the effects of automatic/nonconscious cognition are. • Understand the difference between explicit and implicit attitudes, and the implications they have for behavior. Introduction Imagine you are walking toward your classroom and you see your teacher and a fellow student you know to be disruptive in class whispering together in the hallway. As you approach, both of them quit talking, nod to you, and then resume their urgent whispers after you pass by. What would you make of this scene? What story might you tell yourself to help explain this interesting and unusual behavior? People know intuitively that we can better understand others’ behavior if we know the thoughts contributing to the behavior. In this example, you might guess that your teacher harbors several concerns about the disruptive student, and therefore you believe their whispering is related to this. The area of social psychology that focuses on how people think about others and about the social world is called social cognition. 
Researchers of social cognition study how people make sense of themselves and others to make judgments, form attitudes, and make predictions about the future. Much of the research in social cognition has demonstrated that humans are adept at distilling large amounts of information into smaller, more usable chunks, and that we possess many cognitive tools that allow us to efficiently navigate our environments. This research has also illuminated many social factors that can influence these judgments and predictions. Not only can our past experiences, expectations, motivations, and moods impact our reasoning, but many of our decisions and behaviors are driven by unconscious processes and implicit attitudes we are unaware of having. The goal of this module is to highlight the mental tools we use to navigate and make sense of our complex social world, and describe some of the emotional, motivational, and cognitive factors that affect our reasoning. Simplifying Our Social World Consider how much information you come across on any given day; just looking around your bedroom, there are hundreds of objects, smells, and sounds. How do we simplify all this information to attend to what is important and make decisions quickly and efficiently? In part, we do it by forming schemas of the various people, objects, situations, and events we encounter. A schema is a mental model, or representation, of any of the various things we come across in our daily lives. A schema (related to the word schematic) is kind of like a mental blueprint for how we expect something to be or behave. It is an organized body of general information or beliefs we develop from direct encounters, as well as from secondhand sources. Rather than spending copious amounts of time learning about each new individual object (e.g., each new dog we see), we rely on our schemas to tell us that a newly encountered dog probably barks, likes to fetch, and enjoys treats. In this way, our schemas greatly reduce the amount of cognitive work we need to do and allow us to “go beyond the information given” (Bruner, 1957). We can hold schemas about almost anything—individual people (person schemas), ourselves (self-schemas), and recurring events (event schemas, or scripts). Each of these types of schemas is useful in its own way. For example, event schemas allow us to navigate new situations efficiently and seamlessly. A script for dining at a restaurant would indicate that one should wait to be seated by the host or hostess, that food should be ordered from a menu, and that one is expected to pay the check at the end of the meal. Because the majority of dining situations conform to this general format, most diners just need to follow their mental scripts to know what to expect and how they should behave, greatly reducing their cognitive workload. Another important way we simplify our social world is by employing heuristics, which are mental shortcuts that reduce complex problem-solving to more simple, rule-based decisions. For example, have you ever had a hard time trying to decide on a book to buy, then you see one ranked highly on a book review website? Although selecting a book to purchase can be a complicated decision, you might rely on the “rule of thumb” that a recommendation from a credible source is likely a safe bet—so you buy it. A common instance of using heuristics is when people are faced with judging whether an object belongs to a particular category. 
For example, you would easily classify a pit bull into the category of “dog.” But what about a coyote? Or a fox? A plastic toy dog? In order to make this classification (and many others), people may rely on the representativeness heuristic to arrive at a quick decision (Kahneman & Tversky, 1972, 1973). Rather than engaging in an in-depth consideration of the object’s attributes, one can simply judge the likelihood of the object belonging to a category, based on how similar it is to one’s mental representation of that category. For example, a perceiver may quickly judge a female to be an athlete based on the fact that the female is tall, muscular, and wearing sports apparel—which fits the perceiver’s representation of an athlete’s characteristics. In many situations, an object’s similarity to a category is a good indicator of its membership in that category, and an individual using the representativeness heuristic will arrive at a correct judgment. However, when base-rate information (e.g., the actual percentage of athletes in the area and therefore the probability that this person actually is an athlete) conflicts with representativeness information, use of this heuristic is less appropriate. For example, if asked to judge whether a quiet, thin man who likes to read poetry is a classics professor at a prestigious university or a truck driver, the representativeness heuristic might lead one to guess he’s a professor. However, considering the base-rates, we know there are far fewer university classics professors than truck drivers. Therefore, although the man fits the mental image of a professor, the actual probability of him being one (considering the number of professors out there) is lower than that of being a truck driver. In addition to judging whether things belong to particular categories, we also attempt to judge the likelihood that things will happen. A commonly employed heuristic for making this type of judgment is called the availability heuristic. People use the availability heuristic to evaluate the frequency or likelihood of an event based on how easily instances of it come to mind (Tversky & Kahneman, 1973). Because more commonly occurring events are more likely to be cognitively accessible (or, they come to mind more easily), use of the availability heuristic can lead to relatively good approximations of frequency. However, the heuristic can be less reliable when judging the frequency of relatively infrequent but highly accessible events. For example, do you think there are more words that begin with “k,” or more that have “k” as the third letter? To figure this out, you would probably make a list of words that start with “k” and compare it to a list of words with “k” as the third letter. Though such a quick test may lead you to believe there are more words that begin with “k,” the truth is that there are 3 times as many words that have “k” as the third letter (Schwarz et al., 1991). In this case, words beginning with “k” are more readily available to memory (i.e., more accessible), so they seem to be more numerous. Another example is the very common fear of flying: dying in a plane crash is extremely rare, but people often overestimate the probability of it occurring because plane crashes tend to be highly memorable and publicized. In summary, despite the vast amount of information we are bombarded with on a daily basis, the mind has an entire kit of “tools” that allows us to navigate that information efficiently. 
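Returning to the professor-versus-truck-driver example above, the role of base rates can be made concrete with a toy Bayes' rule calculation. The Python sketch below uses made-up numbers (the base rate and "fit" probabilities are assumptions chosen purely for illustration): even when the description fits a professor far better than a truck driver, the rarity of professors keeps the probability low.

```python
# A toy Bayes' rule calculation illustrating why base rates matter in the
# professor-versus-truck-driver example. All numbers are hypothetical
# assumptions chosen only for illustration.
p_professor = 0.001          # assumed base rate: classics professors are rare
p_driver = 0.999             # everyone else in this toy example is a truck driver
p_fits_professor = 0.90      # assumed: most classics professors fit the description
p_fits_driver = 0.05         # assumed: few truck drivers fit the description

# Posterior probability that the man is a professor, given that he fits the description
p_fits = p_fits_professor * p_professor + p_fits_driver * p_driver
posterior = (p_fits_professor * p_professor) / p_fits
print(f"P(professor | fits description) = {posterior:.3f}")   # roughly 0.018, i.e., still unlikely
```

Under these assumed numbers, the chance that the man is a professor is under 2%, which is exactly the conclusion the representativeness heuristic tends to miss.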
In addition to category and frequency judgments, another common mental calculation we perform is predicting the future. We rely on our predictions about the future to guide our actions. When deciding what entrée to select for dinner, we may ask ourselves, “How happy will I be if I choose this over that?” The answer we arrive at is an example of a future prediction. In the next section, we examine individuals’ ability to accurately predict others’ behaviors, as well as their own future thoughts, feelings, and behaviors, and how these predictions can impact their decisions. Making Predictions About the Social World Whenever we face a decision, we predict our future behaviors or feelings in order to choose the best course of action. If you have a paper due in a week and have the option of going out to a party or working on the paper, the decision of what to do rests on a few things: the amount of time you predict you will need to write the paper, your prediction of how you will feel if you do poorly on the paper, and your prediction of how harshly the professor will grade it. In general, we make predictions about others quickly, based on relatively little information. Research on “thin-slice judgments” has shown that perceivers are able to make surprisingly accurate inferences about another person’s emotional state, personality traits, and even sexual orientation based on just snippets of information—for example, a 10-second video clip (Ambady, Bernieri, & Richeson, 2000; Ambady, Hallahan, & Conner, 1999; Ambady & Rosenthal, 1993). Furthermore, these judgments are predictive of the target’s future behaviors. For example, one study found that students’ ratings of a teacher’s warmth, enthusiasm, and attentiveness from a 30-second video clip strongly predicted that teacher’s final student evaluations after an entire semester (Ambady & Rosenthal, 1993). As might be expected, the more information there is available, the more accurate many of these judgments become (Carney, Colvin, & Hall, 2007). Because we seem to be fairly adept at making predictions about others, one might expect predictions about the self to be foolproof, given the considerable amount of information one has about the self compared to others. To an extent, research has supported this conclusion. For example, our own predictions of our future academic performance are more accurate than peers’ predictions of our performance, and self-expressed interests better predict occupational choice than career inventories (Shrauger & Osberg, 1981). Yet, it is not always the case that we hold greater insight into ourselves. While our own assessment of our personality traits does predict certain behavioral tendencies better than peer assessment of our personality, for certain behaviors, peer reports are more accurate than self-reports (Kolar, Funder, & Colvin, 1996; Vazire, 2010). Similarly, although we are generally aware of our knowledge, abilities, and future prospects, our perceptions are often overly positive, and we display overconfidence in their accuracy and potential (Metcalfe, 1998). For example, we tend to underestimate how much time it will take us to complete a task, whether it is writing a paper, finishing a project at work, or building a bridge—a phenomenon known as the planning fallacy (Buehler, Griffin, & Ross, 1994). The planning fallacy helps explain why so many college students end up pulling all-nighters to finish writing assignments or study for exams. The tasks simply end up taking longer than expected. 
On the positive side, the planning fallacy can also lead individuals to pursue ambitious projects that may turn out to be worthwhile. That is, if they had accurately predicted how much time and work a project would take, they may never have started it in the first place. The other important factor that affects decision-making is our ability to predict how we will feel about certain outcomes. Not only do we predict whether we will feel positively or negatively, but we also make predictions about how strongly and for how long we will feel that way. Research demonstrates that these predictions of one’s future feelings—known as affective forecasting—are accurate in some ways but limited in others (Gilbert & Wilson, 2007). We are adept at predicting whether a future event or situation will make us feel positively or negatively (Wilson & Gilbert, 2003), but we often incorrectly predict the strength or duration of those emotions. For example, you may predict that if your favorite sports team loses an important match, you will be devastated. Although you’re probably right that you will feel negative (and not positive) emotions, will you be able to accurately estimate how negative you’ll feel? What about how long those negative feelings will last? Predictions about future feelings are influenced by the impact bias: the tendency for a person to overestimate the intensity of their future feelings. For example, by comparing people’s estimates of how they expected to feel after a specific event to their actual feelings after the event, research has shown that people generally overestimate how badly they will feel after a negative event—such as losing a job—and they also overestimate how happy they will feel after a positive event—such as winning the lottery (Brickman, Coates, & Janoff-Bulman, 1978). Another factor in these estimations is the durability bias. The durability bias refers to the tendency for people to overestimate how long (the duration) positive and negative events will affect them. This bias is much greater for predictions regarding negative events than positive events, and occurs because people are generally unaware of the many psychological mechanisms that help us adapt to and cope with negative events (Gilbert, Pinel, Wilson, Blumberg, & Wheatley, 1998; Wilson, Wheatley, Meyers, Gilbert, & Axsom, 2000). In summary, individuals form impressions of themselves and others, make predictions about the future, and use these judgments to inform their decisions. However, these judgments are shaped by our tendency to view ourselves in an overly positive light and our inability to appreciate our habituation to both positive and negative events. In the next section, we will discuss how motivations, moods, and desires also shape social judgment. Hot Cognition: The Influence of Motivations, Mood, and Desires on Social Judgment Although we may believe we are always capable of rational and objective thinking (for example, when we methodically weigh the pros and cons of two laundry detergents in an unemotional—i.e., “cold”—manner), our reasoning is often influenced by our motivations and mood. Hot cognition refers to the mental processes that are influenced by desires and feelings. For example, imagine you receive a poor grade on a class assignment. In this situation, your ability to reason objectively about the quality of your assignment may be limited by your anger toward the teacher, upset feelings over the bad grade, and your motivation to maintain your belief that you are a good student. 
In this sort of scenario, we may want the situation to turn out a particular way or our belief to be the truth. When we have these directional goals, we are motivated to reach a particular outcome or judgment and do not process information in a cold, objective manner. Directional goals can bias our thinking in many ways, such as leading to motivated skepticism, whereby we are skeptical of evidence that goes against what we want to believe despite the strength of the evidence (Ditto & Lopez, 1992). For example, individuals trust medical tests less if the results suggest they have a deficiency compared to when the results suggest they are healthy. Through this motivated skepticism, people often continue to believe what they want to believe, even in the face of nearly incontrovertible evidence to the contrary. There are also situations in which we do not have wishes for a particular outcome but our goals bias our reasoning, anyway. For example, being motivated to reach an accurate conclusion can influence our reasoning processes by making us more cautious—leading to indecision. In contrast, sometimes individuals are motivated to make a quick decision, without being particularly concerned about the quality of it. Imagine trying to choose a restaurant with a group of friends when you’re really hungry. You may choose whatever’s nearby without caring if the restaurant is the best or not. This need for closure (the desire to come to a firm conclusion) is often induced by time constraints (when a decision needs to be made quickly) as well as by individual differences in the need for closure (Webster & Kruglanski, 1997). Some individuals are simply more uncomfortable with ambiguity than others, and are thus more motivated to reach clear, decisive conclusions. Just as our goals and motivations influence our reasoning, our moods and feelings also shape our thinking process and ultimate decisions. Many of our decisions are based in part on our memories of past events, and our retrieval of memories is affected by our current mood. For example, when you are sad, it is easier to recall the sad memory of your dog’s death than the happy moment you received the dog. This tendency to recall memories similar in valence to our current mood is known as mood-congruent memory (Blaney, 1986; Bower 1981, 1991; DeSteno, Petty, Wegener, & Rucker, 2000; Forgas, Bower, & Krantz, 1984; Schwarz, Strack, Kommer, & Wagner, 1987). The mood we were in when the memory was recorded becomes a retrieval cue; our present mood primes these congruent memories, making them come to mind more easily (Fiedler, 2001). Furthermore, because the availability of events in our memory can affect their perceived frequency (the availability heuristic), the biased retrieval of congruent memories can then impact the subsequent judgments we make (Tversky & Kahneman, 1973). For example, if you are retrieving many sad memories, you might conclude that you have had a tough, depressing life. In addition to our moods influencing the specific memories we retrieve, our moods can also influence the broader judgments we make. This sometimes leads to inaccuracies when our current mood is irrelevant to the judgment at hand. In a classic study demonstrating this effect, researchers found that study participants rated themselves as less-satisfied with their lives in general if they were asked on a day when it happened to be raining vs. sunny (Schwarz & Clore, 1983). 
However, this occurred only if the participants were not aware that the weather might be influencing their mood. In essence, participants were in worse moods on rainy days than sunny days, and, if unaware of the weather’s effect on their mood, they incorrectly used their mood as evidence of their overall life satisfaction. In summary, our mood and motivations can influence both the way we think and the decisions we ultimately make. Mood can shape our thinking even when the mood is irrelevant to the judgment, and our motivations can influence our thinking even if we have no particular preference about the outcome. Just as we might be unaware of how our reasoning is influenced by our motives and moods, research has found that our behaviors can be determined by unconscious processes rather than intentional decisions, an idea we will explore in the next section. Automaticity Do we actively choose and control all our behaviors or do some of these behaviors occur automatically? A large body of evidence now suggests that many of our behaviors are, in fact, automatic. A behavior or process is considered automatic if it is unintentional, uncontrollable, occurs outside of conscious awareness, or is cognitively efficient (Bargh & Chartrand, 1999). A process may be considered automatic even if it does not have all these features; for example, driving is a fairly automatic process, but is clearly intentional. Processes can become automatic through repetition, practice, or repeated associations. Staying with the driving example: although it can be very difficult and cognitively effortful at the start, over time it becomes a relatively automatic process, and aspects of it can occur outside conscious awareness. In addition to practice leading to the learning of automatic behaviors, some automatic processes, such as fear responses, appear to be innate. For example, people quickly detect negative stimuli, such as negative words, even when those stimuli are presented subliminally (Dijksterhuis & Aarts, 2003; Pratto & John, 1991). This may represent an evolutionarily adaptive response that makes individuals more likely to detect danger in their environment. Other innate automatic processes may have evolved due to their pro-social outcomes. The chameleon effect—where individuals nonconsciously mimic the postures, mannerisms, facial expressions, and other behaviors of their interaction partners—is an example of how people may engage in certain behaviors without conscious intention or awareness (Chartrand & Bargh, 1999). For example, have you ever noticed that you’ve picked up some of the habits of your friends? Over time, but also in brief encounters, we will nonconsciously mimic those around us because of the positive social effects of doing so. That is, automatic mimicry has been shown to lead to more positive social interactions and to increase liking between the mimicked person and the mimicking person. When concepts and behaviors have been repeatedly associated with each other, one of them can be primed—i.e., made more cognitively accessible—by exposing participants to the (strongly associated) other one. For example, by presenting participants with the concept of a doctor, associated concepts such as “nurse” or “stethoscope” are primed. As a result, participants recognize a word like “nurse” more quickly (Meyer, & Schvaneveldt, 1971). Similarly, stereotypes can automatically prime associated judgments and behaviors. 
Stereotypes are our general beliefs about a group of people and, once activated, they may guide our judgments outside of conscious awareness. Similar to schemas, stereotypes involve a mental representation of how we expect a person will think and behave. For example, someone’s mental schema for women may be that they’re caring, compassionate, and maternal; however, a stereotype would be that all women are examples of this schema. As you know, assuming all people are a certain way is not only wrong but insulting, especially if negative traits are incorporated into a schema and subsequent stereotype. In a now classic study, Patricia Devine (1989) primed study participants with words typically associated with Blacks (e.g., “blues,” “basketball”) in order to activate the stereotype of Blacks. Devine found that study participants who were primed with the Black stereotype judged a target’s ambiguous behaviors as being more hostile (a trait stereotypically associated with Blacks) than nonprimed participants. Research in this area suggests that our social context—which constantly bombards us with concepts—may prime us to form particular judgments and influence our thoughts and behaviors. In summary, there are many cognitive processes and behaviors that occur outside of our awareness and despite our intentions. Because automatic thoughts and behaviors do not require the same level of cognitive processing as conscious, deliberate thinking and acting, automaticity provides an efficient way for individuals to process and respond to the social world. However, this efficiency comes at a cost, as unconsciously held stereotypes and attitudes can sometimes influence us to behave in unintended ways. We will discuss the consequences of both consciously and unconsciously held attitudes in the next section. Attitudes and Attitude Measurement When we encounter a new object or person, we often form an attitude toward it (him/her). An attitude is a “psychological tendency that is expressed by evaluating a particular entity with some degree of favor or disfavor” (Eagly & Chaiken, 1993, p. 1). In essence, our attitudes are our general evaluations of things (i.e., do you regard this thing positively or negatively?) that can bias us toward having a particular response to it. For example, a negative attitude toward mushrooms would predispose you to avoid them and think negatively of them in other ways. This bias can be long- or short-term and can be overridden by another experience with the object. Thus, if you encounter a delicious mushroom dish in the future, your negative attitude could change to a positive one. Traditionally, attitudes have been measured through explicit attitude measures, in which participants are directly asked to provide their attitudes toward various objects, people, or issues (e.g., a survey). For example, in a semantic-differential scale, respondents are asked to provide evaluations of an attitude object using a series of negative to positive response scales—which have something like “unpleasant” at one end of the scale and “pleasant” at the other (Osgood, Suci, & Tannenbaum, 1957). In a Likert scale, respondents are asked to indicate their agreement level with various evaluative statements, such as, “I believe that psychology is the most interesting major” (Likert, 1932). 
Here, participants mark their selection between something like “strongly disagree” and “strongly agree.” These explicit measures of attitudes can be used to predict people’s actual behavior, but there are limitations to them. For one thing, individuals aren’t always aware of their true attitudes, because they’re either undecided or haven’t given a particular issue much thought. Furthermore, even when individuals are aware of their attitudes, they might not want to admit to them, such as when holding a certain attitude is viewed negatively by their culture. For example, sometimes it can be difficult to measure people’s true opinions on racial issues, because participants fear that expressing their true attitudes will be viewed as socially unacceptable. Thus, explicit attitude measures may be unreliable when asking about controversial attitudes or attitudes that are not widely accepted by society. In order to avoid some of these limitations, many researchers use more subtle or covert ways of measuring attitudes that do not suffer from such self-presentation concerns (Fazio & Olson, 2003). An implicit attitude is an attitude that a person does not verbally or overtly express. For example, someone may have a positive, explicit attitude toward his job; however, nonconsciously, he may have a lot of negative associations with it (e.g., having to wake up early, the long commute, the broken office heating), which result in an implicitly negative attitude. To learn what a person’s implicit attitude is, you have to use implicit measures of attitudes. These measures infer the participant’s attitude rather than having the participant explicitly report it. Many implicit measures accomplish this by recording the time it takes a participant (i.e., the reaction time) to label or categorize an attitude object (i.e., the person, concept, or object of interest) as positive or negative. For example, the faster someone categorizes his or her job (measured in milliseconds) as negative compared to positive, the more negative the implicit attitude is (i.e., because a faster categorization implies that the two concepts—“work” and “negative”—are closely related in one’s mind). One common implicit measure is the Implicit Association Test (IAT; Greenwald & Banaji, 1995; Greenwald, McGhee, & Schwartz, 1998), which does just what the name suggests, measuring how quickly the participant pairs a concept (e.g., cats) with an attribute (e.g., good or bad). The participant’s response time in pairing the concept with the attribute indicates how strongly the participant associates the two. Another common implicit measure is the evaluative priming task (Fazio, Jackson, Dunton, & Williams, 1995), which measures how quickly the participant labels the valence (i.e., positive or negative) of an adjective that appears immediately after the attitude object. The more quickly a participant labels positive words, relative to negative words, after being primed with the attitude object, the more positively the participant evaluates the object. Individuals’ implicit attitudes are sometimes inconsistent with their explicitly held attitudes. Hence, implicit measures may reveal biases that participants do not report on explicit measures. As a result, implicit attitude measures are especially useful for examining the pervasiveness and strength of controversial attitudes and stereotypic associations, such as racial biases or associations between race and violence. 
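Because these reaction-time measures all rest on the same logic—faster pairings indicate stronger associations—the scoring can be illustrated with a short sketch. The Python snippet below uses hypothetical reaction times and a simple mean difference; it is only an illustration of that logic, not the actual scoring algorithm used for the IAT or the evaluative priming task.

```python
# A simplified sketch of how reaction times (in milliseconds) can be turned
# into an implicit attitude index. Published IAT scoring uses a more involved
# algorithm (error penalties, trimming, pooled standard deviations); the data
# and the simple difference score below are hypothetical illustrations only.
from statistics import mean

job_good_rts = [620, 580, 640, 600, 610]   # hypothetical RTs when "job" is paired with "good"
job_bad_rts = [760, 800, 740, 780, 820]    # hypothetical RTs when "job" is paired with "bad"

# Faster responses indicate a stronger association: if pairing "job" with
# "good" is faster than pairing it with "bad", the implicit attitude toward
# the job is inferred to be relatively positive.
difference_ms = mean(job_bad_rts) - mean(job_good_rts)
print(f"Mean RT, job + good: {mean(job_good_rts):.0f} ms")
print(f"Mean RT, job + bad:  {mean(job_bad_rts):.0f} ms")
print(f"Difference score:    {difference_ms:.0f} ms (positive = relatively positive implicit attitude)")
```

In this hypothetical data set the "job + good" pairings are about 180 ms faster, which would be read as a relatively positive implicit attitude toward the job.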
For example, research using the IAT has shown that about 66% of white respondents have a negative bias toward Blacks (Nosek, Banaji, & Greenwald, 2002), that bias on the IAT against Blacks is associated with more discomfort during interracial interactions (McConnell, & Leibold, 2001), and that implicit associations linking Blacks to violence are associated with a greater tendency to shoot unarmed Black targets in a video game (Payne, 2001). Thus, even though individuals are often unaware of their implicit attitudes, these attitudes can have serious implications for their behavior, especially when these individuals do not have the cognitive resources available to override the attitudes’ influence. Conclusion Decades of research on social cognition and attitudes have revealed many of the “tricks” and “tools” we use to efficiently process the limitless amounts of social information we encounter. These tools are quite useful for organizing that information to arrive at quick decisions. When you see an individual engage in a behavior, such as seeing a man push an elderly woman to the ground, you form judgments about his personality, predictions about the likelihood of him engaging in similar behaviors in the future, as well as predictions about the elderly woman’s feelings and how you would feel if you were in her position. As the research presented in this module demonstrates, we are adept and efficient at making these judgments and predictions, but they are not made in a vacuum. Ultimately, our perception of the social world is a subjective experience, and, consequently, our decisions are influenced by our experiences, expectations, emotions, motivations, and current contexts. Being aware of when our judgments are most accurate, and how our judgments are shaped by social influences, prepares us to be in a much better position to appreciate, and potentially counter, their effects. Outside Resources Video: Daniel Gilbert discussing affective forecasting. www.dailymotion.com/video/xeb...e#.UQlwDx3WLm4 Video: Focus on heuristics. http://study.com/academy/lesson/heuristics.html Web: BBC Horizon documentary How to Make Better Decisions that discusses many module topics (Part 1). Web: Implicit Attitudes Test. https://implicit.harvard.edu/implicit/ Discussion Questions 1. Describe your event-schema, or script, for an event that you encounter regularly (e.g., dining at a restaurant). Now, attempt to articulate a script for an event that you have encountered only once or a few times. How are these scripts different? How confident are you in your ability to navigate these two events? 2. Think of a time when you made a decision that you thought would make you very happy (e.g., purchasing an item). To what extent were you accurate or inaccurate? In what ways were you wrong, and why do you think you were wrong? 3. What is an issue you feel strongly about (e.g., abortion, death penalty)? How would you react if research demonstrated that your opinion was wrong? What would it take before you would believe the evidence? 4. Take an implicit association test at the Project Implicit website (https://implicit.harvard.edu/implicit). How do your results match or mismatch your explicit attitudes. Vocabulary Affective forecasting Predicting how one will feel in the future after some event or decision. Attitude A psychological tendency that is expressed by evaluating a particular entity with some degree of favor or disfavor. 
Automatic A behavior or process that has one or more of the following features: unintentional, uncontrollable, occurring outside of conscious awareness, and cognitively efficient. Availability heuristic A heuristic in which the frequency or likelihood of an event is evaluated based on how easily instances of it come to mind. Chameleon effect The tendency for individuals to nonconsciously mimic the postures, mannerisms, facial expressions, and other behaviors of their interaction partners. Directional goals The motivation to reach a particular outcome or judgment. Durability bias A bias in affective forecasting in which one overestimates how long one will feel an emotion (positive or negative) after some event. Evaluative priming task An implicit attitude task that assesses the extent to which an attitude object is associated with a positive or negative valence by measuring the time it takes a person to label an adjective as good or bad after being presented with an attitude object. Explicit attitude An attitude that is consciously held and can be reported on by the person holding the attitude. Heuristics A mental shortcut or rule of thumb that reduces complex mental problems to more simple rule-based decisions. Hot cognition The mental processes that are influenced by desires and feelings. Impact bias A bias in affective forecasting in which one overestimates the strength or intensity of emotion one will experience after some event. Implicit Association Test An implicit attitude task that assesses a person's automatic associations between concepts by measuring the response times in pairing the concepts. Implicit attitude An attitude that a person cannot verbally or overtly state. Implicit measures of attitudes Measures of attitudes in which researchers infer the participant's attitude rather than having the participant explicitly report it. Mood-congruent memory The tendency to be better able to recall memories that have a mood similar to our current mood. Motivated skepticism A form of bias that can result from having a directional goal in which one is skeptical of evidence despite its strength because it goes against what one wants to believe. Need for closure The desire to come to a decision that will resolve ambiguity and conclude an issue. Planning fallacy A cognitive bias in which one underestimates how long it will take to complete a task. Primed A process by which a concept or behavior is made more cognitively accessible or likely to occur through the presentation of an associated concept. Representativeness heuristic A heuristic in which the likelihood of an object belonging to a category is evaluated based on the extent to which the object appears similar to one's mental representation of the category. Schema A mental model or representation that organizes the important information about a thing, person, or event (also known as a script). Social cognition The study of how people think about the social world. Stereotypes Our general beliefs about the traits or behaviors shared by a group of people.
textbooks/socialsci/Psychology/Introductory_Psychology/Map%3A_Discover_Psychology_-_A_Brief_Introductory_Text_(Noba)/14%3A_SOCIAL_PSYCHOLOGY/14.01%3A_Social_Cognition_and_Attitudes.txt
By Jerry M. Burger Santa Clara University We often change our attitudes and behaviors to match the attitudes and behaviors of the people around us. One reason for this conformity is a concern about what other people think of us. This process was demonstrated in a classic study in which college students deliberately gave wrong answers to a simple visual judgment task rather than go against the group. Another reason we conform to the norm is because other people often have information we do not, and relying on norms can be a reasonable strategy when we are uncertain about how we are supposed to act. Unfortunately, we frequently misperceive how the typical person acts, which can contribute to problems such as the excessive binge drinking often seen in college students. Obeying orders from an authority figure can sometimes lead to disturbing behavior. This danger was illustrated in a famous study in which participants were instructed to administer painful electric shocks to another person in what they believed to be a learning experiment. Despite vehement protests from the person receiving the shocks, most participants continued the procedure when instructed to do so by the experimenter. The findings raise questions about the power of blind obedience in deplorable situations such as atrocities and genocide. They also raise concerns about the ethical treatment of participants in psychology experiments. learning objectives • Become aware of how widespread conformity is in our lives and some of the ways each of us changes our attitudes and behavior to match the norm. • Understand the two primary reasons why people often conform to perceived norms. • Appreciate how obedience to authority has been examined in laboratory studies and some of the implications of the findings from these investigations. • Consider some of the remaining issues and sources of controversy surrounding Milgram’s obedience studies. Introduction When he was a teenager, my son often enjoyed looking at photographs of me and my wife taken when we were in high school. He laughed at the hairstyles, the clothing, and the kind of glasses people wore “back then.” And when he was through with his ridiculing, we would point out that no one is immune to fashions and fads and that someday his children will probably be equally amused by his high school photographs and the trends he found so normal at the time. Everyday observation confirms that we often adopt the actions and attitudes of the people around us. Trends in clothing, music, foods, and entertainment are obvious. But our views on political issues, religious questions, and lifestyles also reflect to some degree the attitudes of the people we interact with. Similarly, decisions about behaviors such as smoking and drinking are influenced by whether the people we spend time with engage in these activities. Psychologists refer to this widespread tendency to act and think like the people around us as conformity. Conformity What causes all this conformity? To start, humans may possess an inherent tendency to imitate the actions of others. Although we usually are not aware of it, we often mimic the gestures, body posture, language, talking speed, and many other behaviors of the people we interact with. Researchers find that this mimicking increases the connection between people and allows our interactions to flow more smoothly (Chartrand & Bargh, 1999). Beyond this automatic tendency to imitate others, psychologists have identified two primary reasons for conformity. 
The first of these is normative influence. When normative influence is operating, people go along with the crowd because they are concerned about what others think of them. We don’t want to look out of step or become the target of criticism just because we like different kinds of music or dress differently than everyone else. Fitting in also brings rewards such as camaraderie and compliments. How powerful is normative influence? Consider a classic study conducted many years ago by Solomon Asch (1956). The participants were male college students who were asked to engage in a seemingly simple task. An experimenter standing several feet away held up a card that depicted one line on the left side and three lines on the right side. The participant’s job was to say aloud which of the three lines on the right was the same length as the line on the left. Sixteen cards were presented one at a time, and the correct answer on each was so obvious as to make the task a little boring. Except for one thing. The participant was not alone. In fact, there were six other people in the room who also gave their answers to the line-judgment task aloud. Moreover, although they pretended to be fellow participants, these other individuals were, in fact, confederates working with the experimenter. The real participant was seated so that he always gave his answer after hearing what five other “participants” said. Everything went smoothly until the third trial, when inexplicably the first “participant” gave an obviously incorrect answer. The mistake might have been amusing, except the second participant gave the same answer. As did the third, the fourth, and the fifth participant. Suddenly the real participant was in a difficult situation. His eyes told him one thing, but five out of five people apparently saw something else. It’s one thing to wear your hair a certain way or like certain foods because everyone around you does. But, would participants intentionally give a wrong answer just to conform with the other participants? The confederates uniformly gave incorrect answers on 12 of the 16 trials, and 76 percent of the participants went along with the norm at least once and also gave the wrong answer. In total, they conformed with the group on one-third of the 12 test trials. Although we might be impressed that the majority of the time participants answered honestly, most psychologists find it remarkable that so many college students caved in to the pressure of the group rather than do the job they had volunteered to do. In almost all cases, the participants knew they were giving an incorrect answer, but their concern for what these other people might be thinking about them overpowered their desire to do the right thing. Variations of Asch’s procedures have been conducted numerous times (Bond, 2005; Bond & Smith, 1996). We now know that the findings are easily replicated, that there is an increase in conformity with more confederates (up to about five), that teenagers are more prone to conforming than are adults, and that people conform significantly less often when they believe the confederates will not hear their responses (Berndt, 1979; Bond, 2005; Crutchfield, 1955; Deutsch & Gerard, 1955). This last finding is consistent with the notion that participants change their answers because they are concerned about what others think of them. 
Finally, although we see the effect in virtually every culture that has been studied, more conformity is found in collectivist countries such as Japan and China than in individualistic countries such as the United States (Bond & Smith, 1996). Compared with individualistic cultures, people who live in collectivist cultures place a higher value on the goals of the group than on individual preferences. They also are more motivated to maintain harmony in their interpersonal relations. The other reason we sometimes go along with the crowd is that people are often a source of information. Psychologists refer to this process as informational influence. Most of us, most of the time, are motivated to do the right thing. If society deems that we put litter in a proper container, speak softly in libraries, and tip our waiter, then that’s what most of us will do. But sometimes it’s not clear what society expects of us. In these situations, we often rely on descriptive norms (Cialdini, Reno, & Kallgren, 1990). That is, we act the way most people—or most people like us—act. This is not an unreasonable strategy. Other people often have information that we do not, especially when we find ourselves in new situations. If you have ever been part of a conversation that went something like this, “Do you think we should?” “Sure. Everyone else is doing it.”, you have experienced the power of informational influence. However, it’s not always easy to obtain good descriptive norm information, which means we sometimes rely on a flawed notion of the norm when deciding how we should behave. A good example of how misperceived norms can lead to problems is found in research on binge drinking among college students. Excessive drinking is a serious problem on many campuses (Mita, 2009). There are many reasons why students binge drink, but one of the most important is their perception of the descriptive norm. How much students drink is highly correlated with how much they believe the average student drinks (Neighbors, Lee, Lewis, Fossos, & Larimer, 2007). Unfortunately, students aren’t very good at making this assessment. They notice the boisterous heavy drinker at the party but fail to consider all the students not attending the party. As a result, students typically overestimate the descriptive norm for college student drinking (Borsari & Carey, 2003; Perkins, Haines, & Rice, 2005). Most students believe they consume significantly less alcohol than the norm, a miscalculation that creates a dangerous push toward more and more excessive alcohol consumption. On the positive side, providing students with accurate information about drinking norms has been found to reduce overindulgent drinking (Burger, LaSalvia, Hendricks, Mehdipour, & Neudeck, 2011; Neighbors, Lee, Lewis, Fossos, & Walter, 2009). Researchers have demonstrated the power of descriptive norms in a number of areas. Homeowners reduced the amount of energy they used when they learned that they were consuming more energy than their neighbors (Schultz, Nolan, Cialdini, Goldstein, & Griskevicius, 2007). Undergraduates selected the healthy food option when led to believe that other students had made this choice (Burger et al., 2010). Hotel guests were more likely to reuse their towels when a hanger in the bathroom told them that this is what most guests did (Goldstein, Cialdini, & Griskevicius, 2008). 
And more people began using the stairs instead of the elevator when informed that the vast majority of people took the stairs to go up one or two floors (Burger & Shelton, 2011). Obedience Although we may be influenced by the people around us more than we recognize, whether we conform to the norm is up to us. But sometimes decisions about how to act are not so easy. Sometimes we are directed by a more powerful person to do things we may not want to do. Researchers who study obedience are interested in how people react when given an order or command from someone in a position of authority. In many situations, obedience is a good thing. We are taught at an early age to obey parents, teachers, and police officers. It’s also important to follow instructions from judges, firefighters, and lifeguards. And a military would fail to function if soldiers stopped obeying orders from superiors. But, there is also a dark side to obedience. In the name of “following orders” or “just doing my job,” people can violate ethical principles and break laws. More disturbingly, obedience often is at the heart of some of the worst of human behavior—massacres, atrocities, and even genocide. It was this unsettling side of obedience that led to some of the most famous and most controversial research in the history of psychology. Milgram (1963, 1965, 1974) wanted to know why so many otherwise decent German citizens went along with the brutality of the Nazi leaders during the Holocaust. “These inhumane policies may have originated in the mind of a single person,” Milgram (1963, p. 371) wrote, “but they could only be carried out on a massive scale if a very large number of persons obeyed orders.” To understand this obedience, Milgram conducted a series of laboratory investigations. In all but one variation of the basic procedure, participants were men recruited from the community surrounding Yale University, where the research was carried out. These citizens signed up for what they believed to be an experiment on learning and memory. In particular, they were told the research concerned the effects of punishment on learning. Three people were involved in each session. One was the participant. Another was the experimenter. The third was a confederate who pretended to be another participant. The experimenter explained that the study consisted of a memory test and that one of the men would be the teacher and the other the learner. Through a rigged drawing, the real participant was always assigned the teacher’s role and the confederate was always the learner. The teacher watched as the learner was strapped into a chair and had electrodes attached to his wrist. The teacher then moved to the room next door where he was seated in front of a large metal box the experimenter identified as a “shock generator.” The front of the box displayed gauges and lights and, most noteworthy, a series of 30 levers across the bottom. Each lever was labeled with a voltage figure, starting with 15 volts and moving up in 15-volt increments to 450 volts. Labels also indicated the strength of the shocks, starting with “Slight Shock” and moving up to “Danger: Severe Shock” toward the end. The last two levers were simply labeled “XXX” in red. Through a microphone, the teacher administered a memory test to the learner in the next room. The learner responded to the multiple-choice items by pressing one of four buttons that were barely within reach of his strapped-down hand. 
If the teacher saw the correct answer light up on his side of the wall, he simply moved on to the next item. But if the learner got the item wrong, the teacher pressed one of the shock levers and, thereby, delivered the learner’s punishment. The teacher was instructed to start with the 15-volt lever and move up to the next highest shock for each successive wrong answer. In reality, the learner received no shocks. But he did make a lot of mistakes on the test, which forced the teacher to administer what he believed to be increasingly strong shocks. The purpose of the study was to see how far the teacher would go before refusing to continue. The teacher’s first hint that something was amiss came after pressing the 75-volt lever and hearing through the wall the learner say “Ugh!” The learner’s reactions became stronger and louder with each lever press. At 150 volts, the learner yelled out, “Experimenter! That’s all. Get me out of here. I told you I had heart trouble. My heart’s starting to bother me now. Get me out of here, please. My heart’s starting to bother me. I refuse to go on. Let me out.” The experimenter’s role was to encourage the participant to continue. If at any time the teacher asked to end the session, the experimenter responded with phrases such as, “The experiment requires that you continue,” and “You have no other choice, you must go on.” The experimenter ended the session only after the teacher stated four successive times that he did not want to continue. All the while, the learner’s protests became more intense with each shock. After 300 volts, the learner refused to answer any more questions, which led the experimenter to say that no answer should be considered a wrong answer. After 330 volts, despite vehement protests from the learner following previous shocks, the teacher heard only silence, suggesting that the learner was now physically unable to respond. If the teacher reached 450 volts—the end of the generator—the experimenter told him to continue pressing the 450 volt lever for each wrong answer. It was only after the teacher pressed the 450-volt lever three times that the experimenter announced that the study was over. If you had been a participant in this research, what would you have done? Virtually everyone says he or she would have stopped early in the process. And most people predict that very few if any participants would keep pressing all the way to 450 volts. Yet in the basic procedure described here, 65 percent of the participants continued to administer shocks to the very end of the session. These were not brutal, sadistic men. They were ordinary citizens who nonetheless followed the experimenter’s instructions to administer what they believed to be excruciating if not dangerous electric shocks to an innocent person. The disturbing implication from the findings is that, under the right circumstances, each of us may be capable of acting in some very uncharacteristic and perhaps some very unsettling ways. Milgram conducted many variations of this basic procedure to explore some of the factors that affect obedience. He found that obedience rates decreased when the learner was in the same room as the experimenter and declined even further when the teacher had to physically touch the learner to administer the punishment. 
Participants also were less willing to continue the procedure after seeing other teachers refuse to press the shock levers, and they were significantly less obedient when the instructions to continue came from a person they believed to be another participant rather than from the experimenter. Finally, Milgram found that women participants followed the experimenter’s instructions at exactly the same rate the men had. Milgram’s obedience research has been the subject of much controversy and discussion. Psychologists continue to debate the extent to which Milgram’s studies tell us something about atrocities in general and about the behavior of German citizens during the Holocaust in particular (Miller, 2004). Certainly, there are important features of that time and place that cannot be recreated in a laboratory, such as a pervasive climate of prejudice and dehumanization. Another issue concerns the relevance of the findings. Some people have argued that today we are more aware of the dangers of blind obedience than we were when the research was conducted back in the 1960s. However, findings from partial and modified replications of Milgram’s procedures conducted in recent years suggest that people respond to the situation today much like they did a half a century ago (Burger, 2009). Another point of controversy concerns the ethical treatment of research participants. Researchers have an obligation to look out for the welfare of their participants. Yet, there is little doubt that many of Milgram’s participants experienced intense levels of stress as they went through the procedure. In his defense, Milgram was not unconcerned about the effects of the experience on his participants. And in follow-up questionnaires, the vast majority of his participants said they were pleased they had been part of the research and thought similar experiments should be conducted in the future. Nonetheless, in part because of Milgram’s studies, guidelines and procedures were developed to protect research participants from these kinds of experiences. Although Milgram’s intriguing findings left us with many unanswered questions, conducting a full replication of his experiment remains out of bounds by today’s standards. Social psychologists are fond of saying that we are all influenced by the people around us more than we recognize. Of course, each person is unique, and ultimately each of us makes choices about how we will and will not act. But decades of research on conformity and obedience make it clear that we live in a social world and that—for better or worse—much of what we do is a reflection of the people we encounter. Outside Resources Student Video: Christine N. Winston and Hemali Maher's 'The Milgram Experiment' gives an excellent 3-minute overview of one of the most famous experiments in the history of psychology. It was one of the winning entries in the 2015 Noba Student Video Award. Video: An example of information influence in a field setting Video: Scenes from a recent partial replication of Milgram’s obedience studies Video: Scenes from a recent replication of Asch’s conformity experiment Web: Website devoted to scholarship and research related to Milgram’s obedience studies http://www.stanleymilgram.com Discussion Questions 1. In what ways do you see normative influence operating among you and your peers? How difficult would it be to go against the norm? What would it take for you to not do something just because all your friends were doing it? 2. 
What are some examples of how informational influence helps us do the right thing? How can we use descriptive norm information to change problem behaviors? 3. Is conformity more likely or less likely to occur when interacting with other people through social media as compared to face-to-face encounters? 4. When is obedience to authority a good thing and when is it bad? What can be done to prevent people from obeying commands to engage in truly deplorable behavior such as atrocities and massacres? 5. In what ways do Milgram's experimental procedures fall outside the guidelines for research with human participants? Are there ways to conduct relevant research on obedience to authority without violating these guidelines? Vocabulary Conformity Changing one's attitude or behavior to match a perceived social norm. Descriptive norm The perception of what most people do in a given situation. Informational influence Conformity that results from relying on how others act as a source of information about the appropriate way to behave, especially when the correct course of action is unclear. Normative influence Conformity that results from a concern for what other people think of us. Obedience Responding to an order or command from a person in a position of authority.
textbooks/socialsci/Psychology/Introductory_Psychology/Map%3A_Discover_Psychology_-_A_Brief_Introductory_Text_(Noba)/14%3A_SOCIAL_PSYCHOLOGY/14.02%3A_Conformity_and_Obedience.txt
By Robert V. Levine California State University, Fresno This module introduces several major principles in the process of persuasion. It offers an overview of the different paths to persuasion. It then describes how mindless processing makes us vulnerable to undesirable persuasion and some of the "tricks" that may be used against us. learning objectives • Recognize the difference between the central and peripheral routes to persuasion. • Understand the concepts of trigger features, fixed action patterns, heuristics, and mindless thinking, and how these processes are essential to our survival but, at the same time, leave us vulnerable to exploitation. • Understand some common "tricks" persuasion artists may use to take advantage of us. • Use this knowledge to make yourself less susceptible to unwanted persuasion. Introduction Have you ever tried to swap seats with a stranger on an airline? Ever negotiated the price of a car? Ever tried to convince someone to recycle, quit smoking, or make a similar change in health behaviors? If so, you are well versed in how persuasion can show up in everyday life. Persuasion has been defined as "the process by which a message induces change in beliefs, attitudes, or behaviors" (Myers, 2011). Persuasion can take many forms. It may, for example, differ in whether it targets public compliance or private acceptance, whether its effects are short-term or long-term, whether it involves slowly escalating commitments or sudden interventions, and, most of all, in the benevolence of its intentions. When persuasion is well-meaning, we might call it education. When it is manipulative, it might be called mind control (Levine, 2003). Whatever the content, however, there is a similarity to the form of the persuasion process itself. As the advertising commentator Sid Bernstein once observed, "Of course, you sell candidates for political office the same way you sell soap or sealing wax or whatever; because, when you get right down to it, that's the only way anything is sold" (Levine, 2003). Persuasion is one of the most studied of all social psychology phenomena. This module provides an introduction to several of its most important components. Two Paths to Persuasion Persuasion theorists distinguish between the central and peripheral routes to persuasion (Petty & Cacioppo, 1986). The central route employs direct, relevant, logical messages. This method rests on the assumption that the audience is motivated, will think carefully about what is presented, and will react on the basis of your arguments. The central route is intended to produce enduring agreement. For example, you might decide to vote for a particular political candidate after hearing her speak and finding her logic and proposed policies to be convincing. The peripheral route, on the other hand, relies on superficial cues that have little to do with logic. The peripheral approach is the salesman's way of thinking. It requires a target who isn't thinking carefully about what you are saying. It requires low effort from the target and often exploits rule-of-thumb heuristics that trigger mindless reactions (see below). It may be intended to persuade you to do something you do not want to do and may later regret. Advertisements, for example, may show celebrities, cute animals, beautiful scenery, or provocative sexual images that have nothing to do with the product. The peripheral approach is also common in the darkest of persuasion programs, such as those of dictators and cult leaders.
Returning to the example of voting, you can experience the peripheral route in action when you see a provocative, emotionally charged political advertisement that tugs at you to vote a particular way. Triggers and Fixed Action Patterns The central route emphasizes objective communication of information. The peripheral route relies on psychological techniques. These techniques may take advantage of a target's not thinking carefully about the message. The process mirrors a phenomenon in animal behavior known as fixed action patterns (FAPs). These are sequences of behavior that occur in exactly the same fashion, in exactly the same order, every time they're elicited. Cialdini (2008) compares them to a prerecorded tape that is turned on and, once it is, always plays to its finish: it is as if the animal were turning on a tape recorder (Cialdini, 2008). There is the feeding tape, the territorial tape, the migration tape, the nesting tape, the aggressive tape—each sequence ready to be played when a situation calls for it. In humans, fixed action patterns include many of the activities we engage in while mentally on "auto-pilot." These behaviors are so automatic that it is very difficult to control them. Nearly everyone who feeds a baby, for instance, mimics each bite the baby takes by opening and closing their own mouth! If two people near you look up and point, you will automatically look up yourself. We also operate in a reflexive, non-thinking way when we make many decisions. We are, for example, less critical of medical advice dispensed by a doctor than by a friend who read an interesting article on the topic in a popular magazine. A notable characteristic of fixed action patterns is how they are activated. At first glance, it appears the animal is responding to the overall situation. For example, the maternal tape appears to be set off when a mother sees her hungry baby, or the aggressive tape seems to be activated when an enemy invades the animal's territory. It turns out, however, that the on/off switch may actually be controlled by a specific, minute detail of the situation—maybe a sound or shape or patch of color. These are the hot buttons of the biological world—what Cialdini refers to as "trigger features" and biologists call "releasers." Humans are not so different. Take the example of a study conducted on various ways to promote a campus bake sale for charity (Levine, 2003). Simply displaying the cookies and other treats to passersby did not generate many sales (only 2 out of 30 potential customers made a purchase). In an alternate condition, however, when potential customers were asked to "buy a cookie for a good cause," the number rose to 12 out of 30. It seems that the phrase "a good cause" triggered a willingness to act. In fact, when the phrase "a good cause" was paired with a locally recognized charity (known for its food-for-the-homeless program), the number stayed in the same range, at 14 out of 30. When a fictional good cause was used instead (the make-believe "Levine House"), 11 out of 30 potential customers still made purchases, and not one asked about the purpose or nature of the cause. The phrase "for a good cause" was an influential enough hot button that the exact cause didn't seem to matter. The effectiveness of peripheral persuasion depends on our frequent reliance on these sorts of fixed action patterns and trigger features.
These mindless, rules-of-thumb are generally effective shortcuts for coping with the overload of information we all must confront. They serve as heuristics—mental shortcuts-- that enable us to make decisions and solve problems quickly and efficiently. They also, however, make us vulnerable to uninvited exploitation through the peripheral route of persuasion. The Source of Persuasion: The Triad of Trustworthiness Effective persuasion requires trusting the source of the communication. Studies have identified three characteristics that lead to trust: perceived authority, honesty, and likability. When the source appears to have any or all of these characteristics, people not only are more willing to agree to their request but are willing to do so without carefully considering the facts. We assume we are on safe ground and are happy to shortcut the tedious process of informed decision making. As a result, we are more susceptible to messages and requests, no matter their particular content or how peripheral they may be. Authority From earliest childhood, we learn to rely on authority figures for sound decision making because their authority signifies status and power, as well as expertise. These two facets often work together. Authorities such as parents and teachers are not only our primary sources of wisdom while we grow up, but they control us and our access to the things we want. In addition, we have been taught to believe that respect for authority is a moral virtue. As adults, it is natural to transfer this respect to society’s designated authorities, such as judges, doctors, bosses, and religious leaders. We assume their positions give them special access to information and power. Usually we are correct, so that our willingness to defer to authorities becomes a convenient shortcut to sound decision making. Uncritical trust in authority may, however, lead to bad decisions. Perhaps the most famous study ever conducted in social psychology demonstrated that, when conditions were set up just so, two-thirds of a sample of psychologically normal men were willing to administer potentially lethal shocks to a stranger when an apparent authority in a laboratory coat ordered them to do so (Milgram, 1974; Burger, 2009). Uncritical trust in authority can be problematic for several reasons. First, even if the source of the message is a legitimate, well-intentioned authority, they may not always be correct. Second, when respect for authority becomes mindless, expertise in one domain may be confused with expertise in general. To assume there is credibility when a successful actor promotes a cold remedy, or when a psychology professor offers his views about politics, can lead to problems. Third, the authority may not be legitimate. It is not difficult to fake a college degree or professional credential or to buy an official-looking badge or uniform. Honesty Honesty is the moral dimension of trustworthiness. Persuasion professionals have long understood how critical it is to their efforts. Marketers, for example, dedicate exorbitant resources to developing and maintaining an image of honesty. A trusted brand or company name becomes a mental shortcut for consumers. It is estimated that some 50,000 new products come out each year. Forrester Research, a marketing research company, calculates that children have seen almost six million ads by the age of 16. An established brand name helps us cut through this volume of information. It signals we are in safe territory. 
“The real suggestion to convey,” advertising leader Theodore MacManus observed in 1910, “is that the man manufacturing the product is an honest man, and the product is an honest product, to be preferred above all others” (Fox, 1997). Likability If we know that celebrities aren’t really experts, and that they are being paid to say what they’re saying, why do their endorsements sell so many products? Ultimately, it is because we like them. More than any single quality, we trust people we like. Roger Ailes, a public relations adviser to Presidents Reagan and George H.W. Bush, observed: “If you could master one element of personal communication that is more powerful than anything . . . it is the quality of being likable. I call it the magic bullet, because if your audience likes you, they’ll forgive just about everything else you do wrong. If they don’t like you, you can hit every rule right on target and it doesn’t matter.” The mix of qualities that make a person likable are complex and often do not generalize from one situation to another. One clear finding, however, is that physically attractive people tend to be liked more. In fact, we prefer them to a disturbing extent: Various studies have shown we perceive attractive people as smarter, kinder, stronger, more successful, more socially skilled, better poised, better adjusted, more exciting, more nurturing, and, most important, of higher moral character. All of this is based on no other information than their physical appearance (e.g., Dion, Berscheid, & Walster, 1972). Manipulating the Perception of Trustworthiness The perception of trustworthiness is highly susceptible to manipulation. Levine (2003) lists some of the most common psychological strategies that are used to achieve this effect: Testimonials and Endorsement This technique employs someone who people already trust to testify about the product or message being sold. The technique goes back to the earliest days of advertising when satisfied customers might be shown describing how a patent medicine cured their life-long battle with “nerves” or how Dr. Scott’s Electric Hair Brush healed their baldness (“My hair (was) falling out, and I was rapidly becoming bald, but since using the brush a thick growth of hair has made its appearance, quite equal to that I had before previous to its falling out,” reported a satisfied customer in an 1884 ad for the product). Similarly, Kodak had Prince Henri D’Orleans and others endorse the superior quality of their camera (“The results are marvellous[sic]. The enlargements which you sent me are superb,“ stated Prince Henri D’Orleans in a 1888 ad). Celebrity endorsements are a frequent feature in commercials aimed at children. The practice has aroused considerable ethical concern, and research shows the concern is warranted. In a study funded by the Federal Trade Commission, more than 400 children ages 8 to 14 were shown one of various commercials for a model racing set. Some of the commercials featured an endorsement from a famous race car driver, some included real racing footage, and others included neither. Children who watched the celebrity endorser not only preferred the toy cars more but were convinced the endorser was an expert about the toys. This held true for children of all ages. In addition, they believed the toy race cars were bigger, faster, and more complex than real race cars they saw on film. They were also less likely to believe the commercial was staged (Ross et al., 1984). 
Presenting the Message as Education The message may be framed as objective information. Salespeople, for example, may try to convey the impression they are less interested in selling a product than helping you make the best decision. The implicit message is that being informed is in everyone’s best interest, because they are confident that when you understand what their product has to offer that you will conclude it is the best choice. Levine (2003) describes how, during training for a job as a used car salesman, he was instructed: “If the customer tells you they do not want to be bothered by a salesperson, your response is ‘I’m not a salesperson, I’m a product consultant. I don’t give prices or negotiate with you. I’m simply here to show you our inventory and help you find a vehicle that will fit your needs.’” Word of Mouth Imagine you read an ad that claims a new restaurant has the best food in your city. Now, imagine a friend tells you this new restaurant has the best food in the city. Who are you more likely to believe? Surveys show we turn to people around us for many decisions. A 1995 poll found that 70% of Americans rely on personal advice when selecting a new doctor. The same poll found that 53% of moviegoers are influenced by the recommendation of a person they know. In another survey, 91% said they’re likely to use another person’s recommendation when making a major purchase. Persuasion professionals may exploit these tendencies. Often, in fact, they pay for the surveys. Using this data, they may try to disguise their message as word of mouth from your peers. For example, Cornerstone Promotion, a leading marketing firm that advertises itself as under-the-radar marketing specialists, sometimes hires children to log into chat rooms and pretend to be fans of one of their clients or pays students to throw parties where they subtly circulate marketing material among their classmates. The Maven More persuasive yet, however, is to involve peers face-to-face. Rather than over-investing in formal advertising, businesses and organizations may plant seeds at the grassroots level hoping that consumers themselves will then spread the word to each other. The seeding process begins by identifying so-called information hubs—individuals the marketers believe can and will reach the most other people. The seeds may be planted with established opinion leaders. Software companies, for example, give advance copies of new computer programs to professors they hope will recommend it to students and colleagues. Pharmaceutical companies regularly provide travel expenses and speaking fees to researchers willing to lecture to health professionals about the virtues of their drugs. Hotels give travel agents free weekends at their resorts in the hope they’ll later recommend them to clients seeking advice. There is a Yiddish word, maven, which refers to a person who’s an expert or a connoisseur, as in a friend who knows where to get the best price on a sofa or the co-worker you can turn to for advice about where to buy a computer. They (a) know a lot of people, (b) communicate a great deal with people, (c) are more likely than others to be asked for their opinions, and (d) enjoy spreading the word about what they know and think. Most important of all, they are trusted. As a result, mavens are often targeted by persuasion professionals to help spread their message. 
Other Tricks of Persuasion There are many other mindless, mental shortcuts—heuristics and fixed action patterns—that leave us susceptible to persuasion. A few examples: • "Free Gifts" & Reciprocity • Social Proof • Getting a Foot-in-the-Door • A Door-in-the-Face • "And That's Not All" • The Sunk Cost Trap • Scarcity & Psychological Reactance Reciprocity “There is no duty more indispensable than that of returning a kindness,” wrote Cicero. Humans are motivated by a sense of equity and fairness. When someone does something for us or gives us something, we feel obligated to return the favor in kind. It triggers one of the most powerful of social norms, the reciprocity rule, whereby we feel compelled to repay, in equitable value, what another person has given to us. Gouldner (1960), in his seminal study of the reciprocity rule, found it appears in every culture. It lays the basis for virtually every type of social relationship, from the legalities of business arrangements to the subtle exchanges within a romance. A salesperson may offer free gifts, concessions, or their valuable time in order to get us to do something for them in return. For example, if a colleague helps you when you’re busy with a project, you might feel obliged to support her ideas for improving team processes. You might decide to buy more from a supplier if they have offered you an aggressive discount. Or, you might give money to a charity fundraiser who has given you a flower in the street (Cialdini, 2008; Levine, 2003). Social Proof If everyone is doing it, it must be right. People are more likely to work late if others on their team are doing the same, to put a tip in a jar that already contains money, or eat in a restaurant that is busy. This principle derives from two extremely powerful social forces—social comparison and conformity. We compare our behavior to what others are doing and, if there is a discrepancy between the other person and ourselves, we feel pressure to change (Cialdini, 2008). The principle of social proof is so common that it easily passes unnoticed. Advertisements, for example, often consist of little more than attractive social models appealing to our desire to be one of the group. For example, the German candy company Haribo suggests that when you purchase their products you are joining a larger society of satisfied customers: “Kids and grown-ups love it so-- the happy world of Haribo”. Sometimes social cues are presented with such specificity that it is as if the target is being manipulated by a puppeteer—for example, the laugh tracks on situation comedies that instruct one not only when to laugh but how to laugh. Studies find these techniques work. Fuller and Skeehy-Skeffington (1974), for example, found that audiences laughed longer and more when a laugh track accompanied the show than when it did not, even though respondents knew the laughs they heard were connived by a technician from old tapes that had nothing to do with the show they were watching. People are particularly susceptible to social proof (a) when they are feeling uncertain, and (b) if the people in the comparison group seem to be similar to ourselves. As P.T. Barnum once said, “Nothing draws a crowd like a crowd.” Commitment and Consistency Westerners have a desire to both feel and be perceived to act consistently. Once we have made an initial commitment, it is more likely that we will agree to subsequent commitments that follow from the first. 
Knowing this, a clever persuasion artist might induce someone to agree to a difficult-to-refuse small request and follow this with progressively larger requests that were his target from the beginning. The process is known as getting a foot in the door and then slowly escalating the commitments. Paradoxically, we are less likely to say “No” to a large request than we are to a small request when it follows this pattern. This can have costly consequences. Levine (2003), for example, found ex-cult members tend to agree with the statement: “Nobody ever joins a cult. They just postpone the decision to leave.” A Door in the Face Some techniques bring a paradoxical approach to the escalation sequence by pushing a request to or beyond its acceptable limit and then backing off. In the door-in-the-face (sometimes called the reject-then-compromise) procedure, the persuader begins with a large request they expect will be rejected. They want the door to be slammed in their face. Looking forlorn, they now follow this with a smaller request, which, unknown to the customer, was their target all along. In one study, for example, Mowen and Cialdini (1980), posing as representatives of the fictitious “California Mutual Insurance Co.,” asked university students walking on campus if they’d be willing to fill out a survey about safety in the home or dorm. The survey, students were told, would take about 15 minutes. Not surprisingly, most of the students declined—only one out of four complied with the request. In another condition, however, the researchers door-in-the-faced them by beginning with a much larger request. “The survey takes about two hours,” students were told. Then, after the subject declined to participate, the experimenters retreated to the target request: “. . . look, one part of the survey is particularly important and is fairly short. It will take only 15 minutes to administer.” Almost twice as many now complied. And That’s Not All! The that’s-not-all technique also begins with the salesperson asking a high price. This is followed by several seconds’ pause during which the customer is kept from responding. The salesperson then offers a better deal by either lowering the price or adding a bonus product. That’s-not-all is a variation on door-in-the-face. Whereas the latter begins with a request that will be rejected, however, that’s-not-all gains its influence by putting the customer on the fence, allowing them to waver and then offering them a comfortable way off. Burger (1986) demonstrated the technique in a series of field experiments. In one study, for example, an experimenter-salesman told customers at a student bake sale that cupcakes cost 75 cents. As this price was announced, another salesman held up his hand and said, “Wait a second,” briefly consulted with the first salesman, and then announced (“that’s-not-all”) that the price today included two cookies. In a control condition, customers were offered the cupcake and two cookies as a package for 75 cents right at the onset. The bonus worked magic: Almost twice as many people bought cupcakes in the that’s-not-all condition (73%) than in the control group (40%). The Sunk Cost Trap Sunk cost is a term used in economics referring to nonrecoverable investments of time or money. The trap occurs when a person’s aversion to loss impels them to throw good money after bad, because they don’t want to waste their earlier investment. This is vulnerable to manipulation. 
The more time and energy a cult recruit can be persuaded to spend with the group, the more “invested” they will feel, and, consequently, the more of a loss it will feel to leave that group. Consider the advice of billionaire investor Warren Buffet: “When you find yourself in a hole, the best thing you can do is stop digging” (Levine, 2003). Scarcity and Psychological Reactance People tend to perceive things as more attractive when their availability is limited, or when they stand to lose the opportunity to acquire them on favorable terms (Cialdini, 2008). Anyone who has encountered a willful child is familiar with this principle. In a classic study, Brehm & Weinraub (1977), for example, placed 2-year-old boys in a room with a pair of equally attractive toys. One of the toys was placed next to a plexiglass wall; the other was set behind the plexiglass. For some boys, the wall was 1 foot high, which allowed the boys to easily reach over and touch the distant toy. Given this easy access, they showed no particular preference for one toy or the other. For other boys, however, the wall was a formidable 2 feet high, which required them to walk around the barrier to touch the toy. When confronted with this wall of inaccessibility, the boys headed directly for the forbidden fruit, touching it three times as quickly as the accessible toy. Research shows that much of that 2-year-old remains in adults, too. People resent being controlled. When a person seems too pushy, we get suspicious, annoyed, often angry, and yearn to retain our freedom of choice more than before. Brehm (1966) labeled this the principle of psychological reactance. The most effective way to circumvent psychological reactance is to first get a foot in the door and then escalate the demands so gradually that there is seemingly nothing to react against. Hassan (1988), who spent many years as a higher-up in the “Moonies” cult, describes how they would shape behaviors subtly at first, then more forcefully. The material that would make up the new identity of a recruit was doled out gradually, piece by piece, only as fast as the person was deemed ready to assimilate it. The rule of thumb was to “tell him only what he can accept.” He continues: “Don’t sell them [the converts] more than they can handle . . . . If a recruit started getting angry because he was learning too much about us, the person working on him would back off and let another member move in .....” Defending Against Unwelcome Persuasion The most commonly used approach to help people defend against unwanted persuasion is known as the “inoculation” method. Research has shown that people who are subjected to weak versions of a persuasive message are less vulnerable to stronger versions later on, in much the same way that being exposed to small doses of a virus immunizes you against full-blown attacks. In a classic study by McGuire (1964), subjects were asked to state their opinion on an issue. They were then mildly attacked for their position and then given an opportunity to refute the attack. When later confronted by a powerful argument against their initial opinion, these subjects were more resistant than were a control group. In effect, they developed defenses that rendered them immune. Sagarin and his colleagues have developed a more aggressive version of this technique that they refer to as “stinging” (Sagarin, Cialdini, Rice, & Serna, 2002). 
Their studies focused on the popular advertising tactic whereby well-known authority figures are employed to sell products they know nothing about, for example, ads showing a famous astronaut pontificating on Rolex watches. In a first experiment, they found that simply forewarning people about the deviousness of these ads had little effect on people's inclination to buy the product later. Next, they stung the subjects. This time, the subjects were immediately confronted with their gullibility. "Take a look at your answer to the first question. Did you find the ad to be even somewhat convincing? If so, then you got fooled. ... Take a look at your answer to the second question. Did you notice that this 'stockbroker' was a fake?" They were then asked to evaluate a new set of ads. The sting worked. These subjects were not only more likely to recognize the manipulativeness of deceptive ads; they were also less likely to be persuaded by them. Anti-vulnerability training exercises such as these can be helpful. Ultimately, however, the most effective defense against unwanted persuasion is to accept just how vulnerable we are. One must, first, accept that it is normal to be vulnerable and, second, learn to recognize the danger signs when we are falling prey. To be forewarned is to be forearmed. Conclusion This module has provided a brief introduction to the psychological processes and subsequent "tricks" involved in persuasion. It has emphasized the peripheral route of persuasion because this is when we are most vulnerable to psychological manipulation. These vulnerabilities are side effects of "normal" and usually adaptive psychological processes. Mindless heuristics offer shortcuts for coping with a hopelessly complicated world. They are necessities for human survival. All, however, underscore the dangers that accompany any mindless thinking. Outside Resources Book: Ariely, D. (2008). Predictably irrational. New York, NY: Harper. Book: Cialdini, R. B. (2008). Influence: Science and practice (5th ed.). Boston, MA: Allyn and Bacon. Book: Gass, R., & Seiter, J. (2010). Persuasion, social influence, and compliance gaining (4th ed.). Boston, MA: Pearson. Book: Kahneman, D. (2012). Thinking fast and slow. New York, NY: Farrar, Straus & Giroux. Book: Levine, R. (2006). The power of persuasion: How we're bought and sold. Hoboken, NJ: Wiley. www.amazon.com/The-Power-Pers.../dp/0471763179 Book: Tavris, C., & Aronson, E. (2011). Mistakes were made (but not by me). New York, NY: Farrar, Straus & Giroux. Student Video 1: Kyle Ball and Brandon Do's 'Principles of Persuasion'. This is a student-made video highlighting 6 key principles of persuasion that we encounter in our everyday lives. It was one of the winning entries in the 2015 Noba Student Video Award. Student Video 2: 'Persuasion', created by Jake Teeny and Ben Oliveto, compares the central and peripheral routes to persuasion and also looks at how techniques of persuasion such as Scarcity and Social Proof influence our consumer choices. It was one of the winning entries in the 2015 Noba Student Video Award. Student Video 3: 'Persuasion in Advertising' is a humorous look at the techniques used by companies to try to convince us to buy their products. The video was created by the team of Edward Puckering, Chris Cameron, and Kevin Smith. It was one of the winning entries in the 2015 Noba Student Video Award. Video: A brief, entertaining interview with the celebrity pickpocket shows how easily we can be fooled.
See A Pickpocket’s Tale at http://www.newyorker.com/online/blogs/culture/2013/01/video-the-art-of-pickpocketing.html
Video: Cults employ extreme versions of many of the principles in this module. An excellent documentary tracing the history of the Jonestown cult is the PBS “American Experience” production, Jonestown: The Life and Death of Peoples Temple at www.pbs.org/wgbh/americanexpe...-introduction/
Video: Philip Zimbardo’s now-classic video, Quiet Rage, offers a powerful, insightful description of his famous Stanford prison study. www.prisonexp.org/documentary.htm
Video: The documentary Outfoxed provides an excellent example of how persuasion can be masked as news and education. http://www.outfoxed.org/
Video: The video, The Science of Countering Terrorism: Psychological Perspectives, a talk by psychologist Fathali Moghaddam, is an excellent introduction to the process of terrorist recruitment and thinking. sciencestage.com/v/32330/fath...spectives.html

Discussion Questions

1. Imagine you are commissioned to create an ad to sell a new beer. Can you give an example of an ad that would rely on the central route? Can you give an example of an ad that would rely on the peripheral route?
2. The reciprocity principle can be exploited in obvious ways, such as giving a customer a free sample of a product. Can you give an example of a less obvious way it might be exploited? What is a less obvious way that a cult leader might use it to get someone under his or her grip?
3. Which “trick” in this module are you, personally, most prone to? Give a personal example of this. How might you have avoided it?

Vocabulary

Central route to persuasion: Persuasion that employs direct, relevant, logical messages.
Fixed action patterns (FAPs): Sequences of behavior that occur in exactly the same fashion, in exactly the same order, every time they are elicited.
Foot in the door: Obtaining a small, initial commitment.
Gradually escalating commitments: A pattern of small, progressively escalating demands is less likely to be rejected than a single large demand made all at once.
Heuristics: Mental shortcuts that enable people to make decisions and solve problems quickly and efficiently.
Peripheral route to persuasion: Persuasion that relies on superficial cues that have little to do with logic.
Psychological reactance: A reaction to people, rules, requirements, or offerings that are perceived to limit freedoms.
Social proof: The mental shortcut based on the assumption that, if everyone is doing it, it must be right.
The norm of reciprocity: The normative pressure to repay, in equitable value, what another person has given to us.
The rule of scarcity: People tend to perceive things as more attractive when their availability is limited, or when they stand to lose the opportunity to acquire them on favorable terms.
The triad of trust: We are most vulnerable to persuasion when the source is perceived as an authority, as honest and likable.
Trigger features: Specific, sometimes minute, aspects of a situation that activate fixed action patterns.
textbooks/socialsci/Psychology/Introductory_Psychology/Map%3A_Discover_Psychology_-_A_Brief_Introductory_Text_(Noba)/14%3A_SOCIAL_PSYCHOLOGY/14.03%3A_Persuasion-_So_Easily_Fooled.txt
By Susan T. Fiske
Princeton University

People are often biased against others outside of their own social group, showing prejudice (emotional bias), stereotypes (cognitive bias), and discrimination (behavioral bias). In the past, people were more explicit about their biases, but during the 20th century, when it became less socially acceptable to exhibit bias, prejudice, stereotypes, and discrimination became more subtle (automatic, ambiguous, and ambivalent). In the 21st century, however, with social group categories even more complex, biases may be transforming once again.

learning objectives

• Distinguish prejudice, stereotypes, and discrimination.
• Distinguish old-fashioned, blatant biases from contemporary, subtle biases.
• Understand old-fashioned biases such as social dominance orientation and right-wing authoritarianism.
• Understand subtle, unexamined biases that are automatic, ambiguous, and ambivalent.
• Understand 21st century biases that may break down as identities get more complicated.

Introduction

Even in one’s own family, everyone wants to be seen for who they are, not as “just another typical X.” But still, people put other people into groups, using that label to inform their evaluation of the person as a whole—a process that can result in serious consequences. This module focuses on biases against social groups, which social psychologists sort into emotional prejudices, mental stereotypes, and behavioral discrimination. These three aspects of bias are related, but they each can occur separately from the others (Dovidio & Gaertner, 2010; Fiske, 1998). For example, sometimes people have a negative, emotional reaction to a social group (prejudice) without knowing even the most superficial reasons to dislike them (stereotypes). This module shows that today’s biases are not yesterday’s biases in many ways, but at the same time, they are troublingly similar. First, we’ll discuss old-fashioned biases that might have belonged to our grandparents and great-grandparents—or even to the people nowadays who have yet to leave those wrongful times behind. Next, we will discuss late 20th century biases that affected our parents and still linger today. Finally, we will talk about today’s 21st century biases that challenge fairness and respect for all.

Old-fashioned Biases: Almost Gone

You would be hard pressed to find someone today who openly admits they don’t believe in equality. Regardless of one’s demographics, most people believe everyone is entitled to the same, natural rights. However, as much as we now collectively believe this, not too far back in our history, this ideal of equality was an unpracticed sentiment. Of all the countries in the world, only a few have equality in their constitution, and those that do originally defined it for a select group of people. At the time, old-fashioned biases were simple: people openly put down those not from their own group. For example, just 80 years ago, American college students unabashedly thought Turkish people were “cruel, very religious, and treacherous” (Katz & Braly, 1933). So where did they get those ideas, assuming that most of them had never met anyone from Turkey? Old-fashioned stereotypes were overt, unapologetic, and expected to be shared by others—what we now call “blatant biases.” Blatant biases are conscious beliefs, feelings, and behavior that people are perfectly willing to admit, which mostly express hostility toward other groups (outgroups) while unduly favoring one’s own group (in-group).
For example, organizations that preach contempt for other races (and praise for their own) exhibit blatant bias. And scarily, these blatant biases tend to run in packs: People who openly hate one outgroup also hate many others. To illustrate this pattern, we turn to two personality scales next.

Social Dominance Orientation

Social dominance orientation (SDO) describes a belief that group hierarchies are inevitable in all societies and are even a good idea to maintain order and stability (Sidanius & Pratto, 1999). Those who score high on SDO believe that some groups are inherently better than others, and because of this, there is no such thing as group “equality.” At the same time, though, SDO is not just about being personally dominant and controlling of others; SDO describes a preferred arrangement of groups with some on top (preferably one’s own group) and some on the bottom. For example, someone high in SDO would likely be upset if someone from an outgroup moved into his or her neighborhood. It’s not that the person high in SDO wants to “control” what this outgroup member does; it’s that moving into this “nice neighborhood” disrupts the social hierarchy the person high in SDO believes in (i.e., living in a nice neighborhood denotes one’s place in the social hierarchy—a place reserved for one’s in-group members). Although research has shown that people higher in SDO are more likely to be politically conservative, there are other traits that more strongly predict one’s SDO. For example, researchers have found that those who score higher on SDO are usually lower than average on tolerance, empathy, altruism, and community orientation. In general, those high in SDO have a strong belief in work ethic—that hard work always pays off and leisure is a waste of time. People higher on SDO tend to choose and thrive in occupations that maintain existing group hierarchies (police, prosecutors, business), compared to those lower in SDO, who tend to pick more equalizing occupations (social work, public defense, psychology). The point is that SDO—a preference for inequality as normal and natural—also predicts endorsing the superiority of certain groups: men, native-born residents, heterosexuals, and believers in the dominant religion. This means seeing women, minorities, homosexuals, and non-believers as inferior. Understandably, the groups in the first list tend to score higher on SDO, while those in the second tend to score lower. For example, the SDO gender difference (men higher, women lower) appears all over the world. At its heart, SDO rests on a fundamental belief that the world is tough and competitive with only a limited number of resources. Thus, those high in SDO see groups as battling each other for these resources, with winners at the top of the social hierarchy and losers at the bottom (see Table 1).

Right-wing Authoritarianism

Right-wing authoritarianism (RWA) focuses on value conflicts, whereas SDO focuses on economic ones. That is, RWA endorses respect for obedience and authority in the service of group conformity (Altemeyer, 1988). Returning to an example from earlier, the homeowner who’s high in SDO may dislike the outgroup member moving into his or her neighborhood because it “threatens” his or her economic resources (e.g., lowering the value of one’s house; fewer openings in the school; etc.). Those high in RWA may equally dislike the outgroup member moving into the neighborhood but for different reasons.
Here, it’s because this outgroup member brings in values or beliefs that the person high in RWA disagrees with, thus “threatening” the collective values of his or her group. RWA respects group unity over individual preferences, wanting to maintain group values in the face of differing opinions. Despite its name, though, RWA is not necessarily limited to people on the right (conservatives). As with SDO, there does appear to be an association between this personality scale (i.e., the preference for order, clarity, and conventional values) and conservative beliefs. However, regardless of political ideology, RWA focuses on groups’ competing frameworks of values. Extreme scores on RWA predict biases against outgroups while demanding in-group loyalty and conformity. Notably, the combination of high RWA and high SDO predicts joining hate groups that openly endorse aggression against minority groups, immigrants, homosexuals, and believers in non-dominant religions (Altemeyer, 2004).

20th Century Biases: Subtle but Significant

Fortunately, old-fashioned biases have diminished over the 20th century and into the 21st century. Openly expressing prejudice is like blowing second-hand cigarette smoke in someone’s face: It’s just not done anymore in most circles, and if it is, people are readily criticized for their behavior. Still, these biases exist in people; they’re just less in view than before. These subtle biases are unexamined and sometimes unconscious but real in their consequences. They are automatic, ambiguous, and ambivalent, but nonetheless biased, unfair, and disrespectful to the belief in equality.

Automatic Biases

Most people like themselves well enough, and most people identify themselves as members of certain groups but not others. Logic suggests, then, that because we like ourselves, we therefore like the groups we associate with more, whether those groups are our hometown, school, religion, gender, or ethnicity. Liking yourself and your groups is human nature. The larger issue, however, is that own-group preference often results in liking other groups less. And whether or not you recognize this “favoritism” as wrong, this trade-off is relatively automatic, that is, unintended, immediate, and irresistible. Social psychologists have developed several ways to measure this relatively automatic own-group preference, the most famous being the Implicit Association Test (IAT; Greenwald, Banaji, Rudman, Farnham, Nosek, & Mellott, 2002; Greenwald, McGhee, & Schwartz, 1998). The test itself is rather simple and you can experience it yourself if you Google “implicit” or go to understandingprejudice.org. Essentially, the IAT is done on the computer and measures how quickly you can sort words or pictures into different categories. For example, if you were asked to categorize “ice cream” as good or bad, you would quickly categorize it as good. However, imagine if every time you ate ice cream, you got a brain freeze. When it comes time to categorize ice cream as good or bad, you may still categorize it as “good,” but you will likely be a little slower in doing so compared to someone who has nothing but positive thoughts about ice cream. Related to group biases, people may explicitly claim they don’t discriminate against outgroups—and this is very likely true. However, when they’re given this computer task to categorize people from these outgroups, that automatic or unconscious hesitation (a result of having mixed evaluations about the outgroup) will show up in the test.
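To make the arithmetic behind this reaction-time measure concrete, here is a minimal sketch in Python. It is an illustration only, not the IAT’s published scoring procedure (the actual test uses a more elaborate, standardized algorithm), and the function name and response times are hypothetical: the point is simply that slower sorting when an outgroup shares a key with “good” words hints at an automatic own-group preference.

# Simplified, hypothetical illustration of summarizing IAT-style data.
# This is NOT the official IAT scoring algorithm; it only shows the core idea
# that slower responses on "incongruent" pairings suggest automatic bias.

from statistics import mean

def simple_bias_score(congruent_rts, incongruent_rts):
    """Return the difference in average response time (milliseconds).

    congruent_rts   -- times when own-group and "good" share a response key
    incongruent_rts -- times when outgroup and "good" share a response key
    A positive value means the incongruent pairing was slower (harder).
    """
    return mean(incongruent_rts) - mean(congruent_rts)

# Hypothetical response times (in ms) for one participant
congruent = [620, 580, 610, 595, 630]
incongruent = [720, 690, 740, 705, 710]

print(f"Bias score: {simple_bias_score(congruent, incongruent):.0f} ms "
      "slower on incongruent pairings")

In practice, researchers also trim extreme response times and standardize the difference, but the intuition is the same: the size of the slowdown, not the participant’s stated attitude, is what the test records.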
And as countless studies have revealed, people are mostly faster at pairing their own group with good categories, compared to pairing others’ groups. In fact, this finding generally holds regardless of whether one’s group is defined by race, age, religion, nationality, or even temporary, insignificant memberships. This all-too-human tendency would remain a mere interesting discovery except that people’s reaction time on the IAT predicts actual feelings about individuals from other groups, decisions about them, and behavior toward them, especially nonverbal behavior (Greenwald, Poehlman, Uhlmann, & Banaji, 2009). For example, although a job interviewer may not be “blatantly biased,” his or her “automatic or implicit biases” may result in unconsciously acting distant and indifferent, which can have devastating effects on the hopeful interviewee’s ability to perform well (Word, Zanna, & Cooper, 1973). Although this is unfair, sometimes the automatic associations—often driven by society’s stereotypes—trump our own, explicit values (Devine, 1989). And sadly, this can result in consequential discrimination, such as allocating fewer resources to disliked outgroups (Rudman & Ashmore, 2009). See Table 2 for a summary of this section and the next two sections on subtle biases.

Ambiguous Biases

As the IAT indicates, people’s biases often stem from the spontaneous tendency to favor their own, at the expense of the other. Social identity theory (Tajfel, Billig, Bundy, & Flament, 1971) describes this tendency to favor one’s own in-group over an outgroup. And as a result, outgroup disliking stems from this in-group liking (Brewer & Brown, 1998). For example, if two classes of children want to play on the same soccer field, the classes will come to dislike each other not because of any real, objectionable traits about the other group. The dislike originates from each class’s favoritism toward itself and the fact that only one group can play on the soccer field at a time. With this preferential perspective for one’s own group, people are not punishing the other one so much as neglecting it in favor of their own. However, to justify this preferential treatment, people will often exaggerate the differences between their in-group and the outgroup. In turn, people see members of the outgroup as more similar to one another in personality than members of their own group are. This results in the perception that “they” really differ from us, and “they” are all alike. Spontaneously, people categorize other people into groups just as we categorize furniture or food into one type or another. The difference is that we, as people, inhabit the categories ourselves, as self-categorization theory points out (Turner, 1975). Because the attributes of group categories can be either good or bad, we tend to favor the groups with people like us and incidentally disfavor the others. In-group favoritism is an ambiguous form of bias because it disfavors the outgroup by exclusion. For example, if a politician has to decide between funding one program or another, he or she may be more likely to give resources to the group that more closely represents his or her in-group. And this life-changing decision stems from the simple, natural human tendency to be more comfortable with people like yourself. A specific case of comfort with the in-group is called aversive racism, so-called because people do not like to admit their own racial biases to themselves or others (Dovidio & Gaertner, 2010).
Tensions between, say, a White person’s own good intentions and discomfort with the perhaps novel situation of interacting closely with a Black person may cause the White person to feel uneasy, behave stiffly, or be distracted. As a result, the White person may give a good excuse to avoid the situation altogether and prevent any awkwardness that could have come from it. However, such a reaction will be ambiguous to both parties and hard to interpret. That is, was the White person right to avoid the situation so that neither person would feel uncomfortable? Indicators of aversive racism correlate with discriminatory behavior, despite being the ambiguous result of good intentions gone bad.

Bias Can Be Complicated - Ambivalent Biases

Not all stereotypes of outgroups are all bad. For example, ethnic Asians living in the United States are commonly referred to as the “model minority” because of their perceived success in areas such as education, income, and social stability. Another example includes people who feel benevolent toward traditional women but hostile toward nontraditional women. Or even ageist people who feel respect toward older adults but, at the same time, worry about the burden they place on public welfare programs. A simple way to understand these mixed feelings, across a variety of groups, comes from the Stereotype Content Model (Fiske, Cuddy, & Glick, 2007). When people learn about a new group, they first want to know whether the intentions of the people in this group are good or ill. Like the guard at night: “Who goes there, friend or foe?” If the other group has good, cooperative intentions, we view them as warm and trustworthy and often consider them part of “our side.” However, if the other group is cold and competitive or full of exploiters, we often view them as a threat and treat them accordingly. After learning the group’s intentions, though, we also want to know whether they are competent enough to act on them (if they are incompetent, or unable, their intentions matter less). These two simple dimensions—warmth and competence—together map how groups relate to each other in society. There are common stereotypes of people from all sorts of categories and occupations that lead them to be classified along these two dimensions. For example, a stereotypical “housewife” would be seen as high in warmth but lower in competence. This is not to suggest that actual housewives are not competent, of course, but that they are not widely admired for their competence in the same way as scientific pioneers, trendsetters, or captains of industry. At the other end of the spectrum are homeless people and drug addicts, stereotyped as not having good intentions (perhaps exploitative for not trying to play by the rules), and likewise as being incompetent (unable) to do anything useful. These groups reportedly make society more disgusted than any other groups do. Some group stereotypes are mixed, high on one dimension and low on the other. Groups stereotyped as competent but not warm, for example, include rich people and outsiders good at business. These groups that are seen as “competent but cold” make people feel some envy, admitting that these others may have some talent but resenting them for not being “people like us.” The “model minority” stereotype mentioned earlier includes people with this excessive competence but deficient sociability. The other mixed combination is high warmth but low competence. Groups who fit this combination include older people and disabled people.
Others report pitying them, but only so long as they stay in their place. In an effort to combat this negative stereotype, disability- and elderly-rights activists try to eliminate that pity, hopefully gaining respect in the process. Altogether, these four kinds of stereotypes and their associated emotional prejudices (pride, disgust, envy, pity) occur all over the world for each of society’s own groups. These maps of the group terrain predict specific types of discrimination for specific kinds of groups, underlining how bias is not exactly equal opportunity.

Conclusion: 21st Century Prejudices

As the world becomes more interconnected—more collaborations between countries, more intermarrying between different groups—more and more people are encountering greater diversity of others in everyday life. Just ask yourself if you’ve ever been asked, “What are you?” Such a question would be preposterous if you were only surrounded by members of your own group. Categories, then, are becoming more and more uncertain, unclear, volatile, and complex (Bodenhausen & Peery, 2009). People’s identities are multifaceted, intersecting across gender, race, class, age, region, and more. Identities are not so simple, but maybe as the 21st century unfurls, we will recognize each other by the content of our character instead of the cover on our outside.

Outside Resources

Web: Website exploring the causes and consequences of prejudice. http://www.understandingprejudice.org/

Discussion Questions

1. Do you know more people from different kinds of social groups than your parents did?
2. How often do you hear people criticizing groups without knowing anything about them?
3. Take the IAT. Could you feel that some associations are easier than others?
4. What groups illustrate ambivalent biases, seemingly competent but cold, or warm but incompetent?
5. Do you or someone you know believe that group hierarchies are inevitable? Desirable?
6. How can people learn to get along with people who seem different from them?

Vocabulary

Automatic bias: Automatic biases are unintended, immediate, and irresistible.
Aversive racism: Aversive racism is unexamined racial bias that the person does not intend and would reject, but that avoids inter-racial contact.
Blatant biases: Blatant biases are conscious beliefs, feelings, and behavior that people are perfectly willing to admit, are mostly hostile, and openly favor their own group.
Discrimination: Discrimination is behavior that advantages or disadvantages people merely based on their group membership.
Implicit Association Test: The Implicit Association Test (IAT) measures relatively automatic biases that favor one’s own group relative to other groups.
Prejudice: Prejudice is an evaluation or emotion toward people merely based on their group membership.
Right-wing authoritarianism: Right-wing authoritarianism (RWA) focuses on value conflicts but endorses respect for obedience and authority in the service of group conformity.
Self-categorization theory: Self-categorization theory develops social identity theory’s point that people categorize themselves, along with each other, into groups, favoring their own group.
Social dominance orientation: Social dominance orientation (SDO) describes a belief that group hierarchies are inevitable in all societies and even good, to maintain order and stability.
Social identity theory: Social identity theory notes that people categorize each other into groups, favoring their own group.
Stereotype Content Model: The Stereotype Content Model shows that social groups are viewed according to their perceived warmth and competence.
Stereotypes: A stereotype is a belief that characterizes people based merely on their group membership.
Subtle biases: Subtle biases are automatic, ambiguous, and ambivalent, but real in their consequences.
textbooks/socialsci/Psychology/Introductory_Psychology/Map%3A_Discover_Psychology_-_A_Brief_Introductory_Text_(Noba)/14%3A_SOCIAL_PSYCHOLOGY/14.04%3A_Prejudice_Discrimination_and_Stereotyping.txt
• 15.1: Happiness- The Science of Subjective Well-Being Subjective well-being (SWB) is the scientific term for happiness and life satisfaction—thinking and feeling that your life is going well, not badly. Scientists rely primarily on self-report surveys to assess the happiness of individuals, but they have validated these scales with other types of measures. People’s levels of subjective well-being are influenced by both internal factors, such as personality and outlook, and external factors, such as the society in which they live. • 15.2: The Healthy Life Our emotions, thoughts, and behaviors play an important role in our health. Not only do they influence our day-to-day health practices, but they can also influence how our body functions. This module provides an overview of health psychology, which is a field devoted to understanding the connections between psychology and health. 15: PSYCHOLOGICAL HEALTH By Edward Diener University of Utah, University of Virginia Subjective well-being (SWB) is the scientific term for happiness and life satisfaction—thinking and feeling that your life is going well, not badly. Scientists rely primarily on self-report surveys to assess the happiness of individuals, but they have validated these scales with other types of measures. People’s levels of subjective well-being are influenced by both internal factors, such as personality and outlook, and external factors, such as the society in which they live. Some of the major determinants of subjective well-being are a person’s inborn temperament, the quality of their social relationships, the societies they live in, and their ability to meet their basic needs. To some degree people adapt to conditions so that over time our circumstances may not influence our happiness as much as one might predict they would. Importantly, researchers have also studied the outcomes of subjective well-being and have found that “happy” people are more likely to be healthier and live longer, to have better social relationships, and to be more productive at work. In other words, people high in subjective well-being seem to be healthier and function more effectively compared to people who are chronically stressed, depressed, or angry. Thus, happiness does not just feel good, but it is good for people and for those around them. learning objectives • Describe three major forms of happiness and a cause of each of them. • Be able to list two internal causes of subjective well-being and two external causes of subjective well-being. • Describe the types of societies that experience the most and least happiness, and why they do. • Describe the typical course of adaptation to events in terms of the time course of SWB. • Describe several of the beneficial outcomes of being a happy person. • Describe how happiness is typically measured. Introduction When people describe what they most want out of life, happiness is almost always on the list, and very frequently it is at the top of the list. When people describe what they want in life for their children, they frequently mention health and wealth, occasionally they mention fame or success—but they almost always mention happiness. People will claim that whether their kids are wealthy and work in some prestigious occupation or not, “I just want my kids to be happy.” Happiness appears to be one of the most important goals for people, if not the most important. But what is it, and how do people get it? 
In this module I describe “happiness” or subjective well-being (SWB) as a process—it results from certain internal and external causes, and in turn it influences the way people behave, as well as their physiological states. Thus, high SWB is not just a pleasant outcome but is an important factor in our future success. Because scientists have developed valid ways of measuring “happiness,” they have come in the past decades to know much about its causes and consequences. Types of Happiness Philosophers debated the nature of happiness for thousands of years, but scientists have recently discovered that happiness means different things. Three major types of happiness are high life satisfaction, frequent positive feelings, and infrequent negative feelings (Diener, 1984). “Subjective well-being” is the label given by scientists to the various forms of happiness taken together. Although there are additional forms of SWB, the three in the table below have been studied extensively. The table also shows that the causes of the different types of happiness can be somewhat different. You can see in the table that there are different causes of happiness, and that these causes are not identical for the various types of SWB. Therefore, there is no single key, no magic wand—high SWB is achieved by combining several different important elements (Diener & Biswas-Diener, 2008). Thus, people who promise to know the key to happiness are oversimplifying. Some people experience all three elements of happiness—they are very satisfied, enjoy life, and have only a few worries or other unpleasant emotions. Other unfortunate people are missing all three. Most of us also know individuals who have one type of happiness but not another. For example, imagine an elderly person who is completely satisfied with her life—she has done most everything she ever wanted—but is not currently enjoying life that much because of the infirmities of age. There are others who show a different pattern, for example, who really enjoy life but also experience a lot of stress, anger, and worry. And there are those who are having fun, but who are dissatisfied and believe they are wasting their lives. Because there are several components to happiness, each with somewhat different causes, there is no magic single cure-all that creates all forms of SWB. This means that to be happy, individuals must acquire each of the different elements that cause it. Causes of Subjective Well-Being There are external influences on people’s happiness—the circumstances in which they live. It is possible for some to be happy living in poverty with ill health, or with a child who has a serious disease, but this is difficult. In contrast, it is easier to be happy if one has supportive family and friends, ample resources to meet one’s needs, and good health. But even here there are exceptions—people who are depressed and unhappy while living in excellent circumstances. Thus, people can be happy or unhappy because of their personalities and the way they think about the world or because of the external circumstances in which they live. People vary in their propensity to happiness—in their personalities and outlook—and this means that knowing their living conditions is not enough to predict happiness. In the table below are shown internal and external circumstances that influence happiness. 
There are individual differences in what makes people happy, but the causes in the table are important for most people (Diener, Suh, Lucas, & Smith, 1999; Lyubomirsky, 2013; Myers, 1992).

Societal Influences on Happiness

When people consider their own happiness, they tend to think of their relationships, successes and failures, and other personal factors. But a very important influence on how happy people are is the society in which they live. It is easy to forget how important societies and neighborhoods are to people’s happiness or unhappiness. In Figure 10.2.1, I present life satisfaction around the world. You can see that some nations, those with the darkest shading on the map, are high in life satisfaction. Others, the lightest shaded areas, are very low. The grey areas in the map are places we could not collect happiness data—they were just too dangerous or inaccessible. Can you guess what might make some societies happier than others? Much of North America and Europe have relatively high life satisfaction, and much of Africa is low in life satisfaction. For life satisfaction, living in an economically developed nation is helpful because when people must struggle to obtain food, shelter, and other basic necessities, they tend to be dissatisfied with their lives. However, other factors, such as trusting and being able to count on others, are also crucial to the happiness within nations. Indeed, for enjoying life, our relationships with others seem more important than living in a wealthy society. One factor that predicts unhappiness is conflict—individuals in nations with high internal conflict or conflict with neighboring nations tend to experience low SWB.

Money and Happiness

Will money make you happy? A certain level of income is needed to meet our needs, and very poor people are frequently dissatisfied with life (Diener & Seligman, 2004). However, having more and more money has diminishing returns—higher and higher incomes make less and less difference to happiness. Wealthy nations tend to have higher average life satisfaction than poor nations, but the United States has not experienced a rise in life satisfaction over the past decades, even as income has doubled. The goal is to find a level of income that you can live with and earn. Don’t let your aspirations continue to rise so that you always feel poor, no matter how much money you have. Research shows that materialistic people often tend to be less happy, and putting your emphasis on relationships and other areas of life besides just money is a wise strategy. Money can help life satisfaction, but when too many other valuable things are sacrificed to earn a lot of money—such as relationships or taking a less enjoyable job—the pursuit of money can harm happiness. There are stories of wealthy people who are unhappy and of janitors who are very happy. For instance, a number of extremely wealthy people in South Korea have committed suicide recently, apparently brought down by stress and other negative feelings. On the other hand, there is the hospital janitor who loved her life because she felt that her work in keeping the hospital clean was so important for the patients and nurses. Some millionaires are dissatisfied because they want to be billionaires. Conversely, some people with ordinary incomes are quite happy because they have learned to live within their means and enjoy the less expensive things in life.
It is important to always keep in mind that high materialism seems to lower life satisfaction—valuing money over other things such as relationships can make us dissatisfied. When people think money is more important than everything else, they seem to have a harder time being happy. And unless they make a great deal of money, they are not on average as happy as others. Perhaps in seeking money they sacrifice other important things too much, such as relationships, spirituality, or following their interests. Or it may be that materialists just can never get enough money to fulfill their dreams—they always want more. To sum up what makes for a happy life, let’s take the example of Monoj, a rickshaw driver in Calcutta. He enjoys life, despite the hardships, and is reasonably satisfied with life. How could he be relatively happy despite his very low income, sometimes even insufficient to buy enough food for his family? The things that make Monoj happy are his family and friends, his religion, and his work, which he finds meaningful. His low income does lower his life satisfaction to some degree, but he finds his children to be very rewarding, and he gets along well with his neighbors. I also suspect that Monoj’s positive temperament and his enjoyment of social relationships help to some degree to overcome his poverty and earn him a place among the happy. However, Monoj would also likely be even more satisfied with life if he had a higher income that allowed more food, better housing, and better medical care for his family. Besides the internal and external factors that influence happiness, there are psychological influences as well—such as our aspirations, social comparisons, and adaptation. People’s aspirations are what they want in life, including income, occupation, marriage, and so forth. If people’s aspirations are high, they will often strive harder, but there is also a risk of them falling short of their aspirations and being dissatisfied. The goal is to have challenging aspirations but also to be able to adapt to what actually happens in life. One’s outlook and resilience are also always very important to happiness. Every person will have disappointments in life, fail at times, and have problems. Thus, happiness comes not to people who never have problems—there are no such individuals—but to people who are able to bounce back from failures and adapt to disappointments. This is why happiness is never caused just by what happens to us but always includes our outlook on life. Adaptation to Circumstances The process of adaptation is important in understanding happiness. When good and bad events occur, people often react strongly at first, but then their reactions adapt over time and they return to their former levels of happiness. For instance, many people are euphoric when they first marry, but over time they grow accustomed to the marriage and are no longer ecstatic. The marriage becomes commonplace and they return to their former level of happiness. Few of us think this will happen to us, but the truth is that it usually does. Some people will be a bit happier even years after marriage, but nobody carries that initial “high” through the years. People also adapt over time to bad events. However, people take a long time to adapt to certain negative events such as unemployment. People become unhappy when they lose their work, but over time they recover to some extent. 
But even after a number of years, unemployed individuals sometimes have lower life satisfaction, indicating that they have not completely habituated to the experience. However, there are strong individual differences in adaptation, too. Some people are resilient and bounce back quickly after a bad event, and others are fragile and do not ever fully adapt to the bad event. Do you adapt quickly to bad events and bounce back, or do you continue to dwell on a bad event and let it keep you down? An example of adaptation to circumstances is shown in Figure 10.2.2, which shows the daily moods of “Harry,” a college student who had Hodgkin’s lymphoma (a form of cancer). As can be seen, over the 6-week period when I studied Harry’s moods, they went up and down. A few times his moods dropped into the negative zone below the horizontal blue line. Most of the time Harry’s moods were in the positive zone above the line. But about halfway through the study Harry was told that his cancer was in remission—effectively cured—and his moods on that day spiked way up. But notice that he quickly adapted—the effects of the good news wore off, and Harry adapted back toward where he was before. So even the very best news one can imagine—recovering from cancer—was not enough to give Harry a permanent “high.” Notice too, however, that Harry’s moods averaged a bit higher after cancer remission. Thus, the typical pattern is a strong response to the event, and then a dampening of this joy over time. However, even in the long run, the person might be a bit happier or unhappier than before. Outcomes of High Subjective Well-Being Is the state of happiness truly a good thing? Is happiness simply a feel-good state that leaves us unmotivated and ignorant of the world’s problems? Should people strive to be happy, or are they better off to be grumpy but “realistic”? Some have argued that happiness is actually a bad thing, leaving us superficial and uncaring. Most of the evidence so far suggests that happy people are healthier, more sociable, more productive, and better citizens (Diener & Tay, 2012; Lyubomirsky, King, & Diener, 2005). Research shows that the happiest individuals are usually very sociable. The table below summarizes some of the major findings. Although it is beneficial generally to be happy, this does not mean that people should be constantly euphoric. In fact, it is appropriate and helpful sometimes to be sad or to worry. At times a bit of worry mixed with positive feelings makes people more creative. Most successful people in the workplace seem to be those who are mostly positive but sometimes a bit negative. Thus, people need not be a superstar in happiness to be a superstar in life. What is not helpful is to be chronically unhappy. The important question is whether people are satisfied with how happy they are. If you feel mostly positive and satisfied, and yet occasionally worry and feel stressed, this is probably fine as long as you feel comfortable with this level of happiness. If you are a person who is chronically unhappy much of the time, changes are needed, and perhaps professional intervention would help as well. Measuring Happiness SWB researchers have relied primarily on self-report scales to assess happiness—how people rate their own happiness levels on self-report surveys. People respond to numbered scales to indicate their levels of satisfaction, positive feelings, and lack of negative feelings. 
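As an illustration of how simple the arithmetic behind such numbered scales usually is, the short Python sketch below totals one hypothetical respondent’s ratings on a Likert-type well-being questionnaire. The items, the 1–7 response format, and the data shown are assumptions for illustration only; published instruments, such as the Flourishing Scale mentioned below, come with their own specific scoring instructions.

# Minimal sketch of scoring a numbered self-report well-being scale.
# The response format and data are illustrative assumptions, not the
# published scoring rules of any particular instrument.

def score_likert_scale(responses, min_rating=1, max_rating=7):
    """Sum a list of Likert ratings after a basic validity check."""
    if not all(min_rating <= r <= max_rating for r in responses):
        raise ValueError("Each response must fall on the rating scale.")
    return sum(responses)

# One hypothetical respondent rating 8 statements (e.g., about meaning in
# life or close relationships) from 1 (strongly disagree) to 7 (strongly agree)
ratings = [6, 5, 7, 6, 6, 5, 6, 7]
total = score_likert_scale(ratings)

print(f"Total well-being score: {total} "
      f"(possible range {len(ratings) * 1}-{len(ratings) * 7})")

A higher total simply indicates that the respondent endorsed the positively worded statements more strongly; interpreting what counts as “high” depends on the norms published for the specific scale being used.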
You can see where you stand on these scales by going to internal.psychology.illinois....er/scales.html or by filling out the Flourishing Scale below. These measures will give you an idea of what popular scales of happiness are like. The self-report scales have proved to be relatively valid (Diener, Inglehart, & Tay, 2012), although people can lie, or fool themselves, or be influenced by their current moods or situational factors. Because the scales are imperfect, well-being scientists also sometimes use biological measures of happiness (e.g., the strength of a person’s immune system, or measuring various brain areas that are associated with greater happiness). Scientists also use reports by family, coworkers, and friends—these people reporting how happy they believe the target person is. Other measures are used as well to help overcome some of the shortcomings of the self-report scales, but most of the field is based on people telling us how happy they are using numbered scales. There are scales to measure life satisfaction (Pavot & Diener, 2008), positive and negative feelings, and whether a person is psychologically flourishing (Diener et al., 2009). Flourishing has to do with whether a person feels meaning in life, has close relationships, and feels a sense of mastery over important life activities. You can take the well-being scales created in the Diener laboratory, and let others take them too, because they are free and open for use.

Some Ways to Be Happier

Most people are fairly happy, but many of them also wish they could be a bit more satisfied and enjoy life more. Prescriptions about how to achieve more happiness are often oversimplified because happiness has different components, and prescriptions need to be aimed at where each individual needs improvement—one size does not fit all. A person might be strong in one area and deficient in other areas. People with prolonged serious unhappiness might need help from a professional. Thus, recommendations for how to achieve happiness are often appropriate for one person but not for others. With this in mind, I list in Table 4 below some general recommendations for you to be happier (see also Lyubomirsky, 2013):

Outside Resources

Web: Barbara Fredrickson’s website on positive emotions www.unc.edu/peplab/news.html
Web: Ed Diener’s website internal.psychology.illinois.edu/~ediener/
Web: International Positive Psychology Association http://www.ippanetwork.org/
Web: Positive Acorn Positive Psychology website http://positiveacorn.com/
Web: Sonja Lyubomirsky’s website on happiness http://sonjalyubomirsky.com/
Web: University of Pennsylvania Positive Psychology Center website http://www.ppc.sas.upenn.edu/
Web: World Database on Happiness www1.eur.nl/fsw/happiness/

Discussion Questions

1. Which do you think is more important, the “top-down” personality influences on happiness or the “bottom-up” situational circumstances that influence it? In other words, discuss whether internal sources such as personality and outlook or external factors such as situations, circumstances, and events are more important to happiness. Can you make an argument that both are very important?
2. Do you know people who are happy in one way but not in others? People who are high in life satisfaction, for example, but low in enjoying life or high in negative feelings? What should they do to increase their happiness across all three types of subjective well-being?
3. Certain sources of happiness have been emphasized in this book, but there are others.
Can you think of other important sources of happiness and unhappiness? Do you think religion, for example, is a positive source of happiness for most people? What about age or ethnicity? What about health and physical handicaps? If you were a researcher, what question might you tackle on the influences on happiness?
4. Are you satisfied with your level of happiness? If not, are there things you might do to change it? Would you function better if you were happier?
5. How much happiness is helpful to make a society thrive? Do people need some worry and sadness in life to help us avoid bad things? When is satisfaction a good thing, and when is some dissatisfaction a good thing?
6. How do you think money can help happiness? Interfere with happiness? What level of income will you need to be satisfied?

Vocabulary

Adaptation: The fact that after people first react to good or bad events, sometimes in a strong way, their feelings and reactions tend to dampen down over time and they return toward their original level of subjective well-being.
“Bottom-up” or external causes of happiness: Situational factors outside the person that influence his or her subjective well-being, such as good and bad events and circumstances such as health and wealth.
Happiness: The popular word for subjective well-being. Scientists sometimes avoid using this term because it can refer to different things, such as feeling good, being satisfied, or even the causes of high subjective well-being.
Life satisfaction: A person reflects on their life and judges to what degree it is going well, by whatever standards that person thinks are most important for a good life.
Negative feelings: Undesirable and unpleasant feelings that people tend to avoid if they can. Moods and emotions such as depression, anger, and worry are examples.
Positive feelings: Desirable and pleasant feelings. Moods and emotions such as enjoyment and love are examples.
Subjective well-being: The name that scientists give to happiness—thinking and feeling that our lives are going very well.
Subjective well-being scales: Self-report surveys or questionnaires in which participants indicate their levels of subjective well-being, by responding to items with a number that indicates how well off they feel.
“Top-down” or internal causes of happiness: The person’s outlook and habitual response tendencies that influence their happiness—for example, their temperament or optimistic outlook on life.
textbooks/socialsci/Psychology/Introductory_Psychology/Map%3A_Discover_Psychology_-_A_Brief_Introductory_Text_(Noba)/15%3A_PSYCHOLOGICAL_HEALTH/15.01%3A_Happiness-_The_Science_of_Subjective_Well-Being.txt
By Emily Hooker and Sarah Pressman
University of California, Irvine

Our emotions, thoughts, and behaviors play an important role in our health. Not only do they influence our day-to-day health practices, but they can also influence how our body functions. This module provides an overview of health psychology, which is a field devoted to understanding the connections between psychology and health. Discussed here are examples of topics a health psychologist might study, including stress, psychosocial factors related to health and disease, how to use psychology to improve health, and the role of psychology in medicine.

learning objectives

• Describe basic terminology used in the field of health psychology.
• Explain theoretical models of health, as well as the role of psychological stress in the development of disease.
• Describe psychological factors that contribute to resilience and improved health.
• Defend the relevance and importance of psychology to the field of medicine.

What Is Health Psychology?

Today, we face more chronic disease than ever before because we are living longer lives while also frequently behaving in unhealthy ways. One example of a chronic disease is coronary heart disease (CHD): It is the number one cause of death worldwide (World Health Organization, 2013). CHD develops slowly over time and typically appears midlife, but related heart problems can persist for years after the original diagnosis or cardiovascular event. In managing illnesses that persist over time (other examples might include cancer, diabetes, and long-term disability) many psychological factors will determine the progression of the ailment. For example, do patients seek help when appropriate? Do they follow doctor recommendations? Do they develop negative psychological symptoms due to lasting illness (e.g., depression)? Also important is that psychological factors can play a significant role in who develops these diseases, the prognosis, and the nature of the symptoms related to the illness. Health psychology is a relatively new, interdisciplinary field of study that focuses on these very issues, or more specifically, the role of psychology in maintaining health, as well as preventing and treating illness. Consideration of how psychological and social factors influence health is especially important today because many of the leading causes of illness in developed countries are often attributed to psychological and behavioral factors. In the case of CHD, discussed above, psychosocial factors, such as excessive stress, smoking, unhealthy eating habits, and some personality traits can also lead to increased risk of disease and worse health outcomes. That being said, many of these factors can be adjusted using psychological techniques. For example, clinical health psychologists can improve health practices like poor dietary choices and smoking, they can teach important stress reduction techniques, and they can help treat psychological disorders tied to poor health. Health psychology considers how the choices we make, the behaviors we engage in, and even the emotions that we feel, can play an important role in our overall health (Cohen & Herbert, 1996; Taylor, 2012). Health psychology relies on the Biopsychosocial Model of Health. This model posits that biological, psychological, and social factors all matter in the development of disease—that psychological and social influences are just as important as purely biological causes (e.g., germs, viruses)—which is consistent with the World Health Organization (1946) definition of health.
This model replaces the older Biomedical Model of Health, which primarily considers the physical, or pathogenic, factors contributing to illness. Thanks to advances in medical technology, there is a growing understanding of the physiology underlying the mind–body connection, and in particular, the role that different feelings can have on our body’s function. Health psychology researchers working in the fields of psychosomatic medicine and psychoneuroimmunology, for example, are interested in understanding how psychological factors can “get under the skin” and influence our physiology in order to better understand how factors like stress can make us sick.

Stress And Health

You probably know exactly what it’s like to feel stress, but what you may not know is that it can objectively influence your health. Answers to questions like, “How stressed do you feel?” or “How overwhelmed do you feel?” can predict your likelihood of developing both minor illnesses as well as serious problems like future heart attack (Cohen, Janicki-Deverts, & Miller, 2007). (Want to measure your own stress level? Check out the links at the end of the module.) To understand how health psychologists study these types of associations, we will describe one famous example of a stress and health study. Imagine that you are a research subject for a moment. After you check into a hotel room as part of the study, the researchers ask you to report your general levels of stress. Not too surprising; however, what happens next is that you receive droplets of cold virus into your nose! The researchers intentionally try to make you sick by exposing you to an infectious illness. After they expose you to the virus, the researchers will then evaluate you for several days by asking you questions about your symptoms, monitoring how much mucus you are producing by weighing your used tissues, and taking body fluid samples—all to see if you are objectively ill with a cold. Now, the interesting thing is that not everyone who has drops of cold virus put in their nose develops the illness. Studies like this one find that people who are less stressed and those who are more positive at the beginning of the study are at a decreased risk of developing a cold (Cohen, Tyrrell, & Smith, 1991; Cohen, Alper, Doyle, Treanor, & Turner, 2006) (see Figure 10.4.1 for an example). Importantly, it is not just major life stressors (e.g., a family death, a natural disaster) that increase the likelihood of getting sick. Even small daily hassles like getting stuck in traffic or fighting with your girlfriend can raise your blood pressure, alter your stress hormones, and even suppress your immune system function (DeLongis, Folkman, & Lazarus, 1988; Twisk, Snel, Kemper, & van Mechelen, 1999). It is clear that stress plays a major role in our mental and physical health, but what exactly is it? The term stress was originally derived from the field of mechanics, where it is used to describe materials under pressure. The word was first used in a psychological manner by researcher Hans Selye. He was examining the effect of an ovarian hormone that he thought caused sickness in a sample of rats. Surprisingly, he noticed that almost any injected hormone produced this same sickness. He smartly realized that it was not the hormone under investigation that was causing these problems, but instead, the aversive experience of being handled and injected by researchers that led to high physiological arousal and, eventually, to health problems like ulcers.
Selye (1946) coined the term stressor to label a stimulus that had this effect on the body and developed a model of the stress response called the General Adaptation Syndrome. Since then, psychologists have studied stress in a myriad of ways, including stress as negative events (e.g., natural disasters or major life changes like dropping out of school), as chronically difficult situations (e.g., taking care of a loved one with Alzheimer’s), as short-term hassles, as a biological fight-or-flight response, and even as clinical illness like post-traumatic stress disorder (PTSD). It continues to be one of the most important and well-studied psychological correlates of illness, because excessive stress causes potentially damaging wear and tear on the body and can influence almost any imaginable disease process.

Protecting Our Health

An important question that health psychologists ask is: What keeps us protected from disease and alive longer? When considering this issue of resilience (Rutter, 1985), five factors are often studied in terms of their ability to protect (or sometimes harm) health. They are:
1. Coping
2. Control and Self-Efficacy
3. Social Relationships
4. Dispositions and Emotions
5. Stress Management

Coping Strategies

How individuals cope with the stressors they face can have a significant impact on health. Coping is often classified into two categories: problem-focused coping or emotion-focused coping (Carver, Scheier, & Weintraub, 1989). Problem-focused coping is thought of as actively addressing the event that is causing stress in an effort to solve the issue at hand. For example, say you have an important exam coming up next week. A problem-focused strategy might be to spend additional time over the weekend studying to make sure you understand all of the material. Emotion-focused coping, on the other hand, regulates the emotions that come with stress. In the above examination example, this might mean watching a funny movie to take your mind off the anxiety you are feeling. In the short term, emotion-focused coping might reduce feelings of stress, but problem-focused coping seems to have the greatest impact on mental wellness (Billings & Moos, 1981; Herman-Stabl, Stemmler, & Petersen, 1995). That being said, when events are uncontrollable (e.g., the death of a loved one), emotion-focused coping directed at managing your feelings, at first, might be the better strategy. Therefore, it is always important to consider the match of the stressor to the coping strategy when evaluating its plausible benefits.

Control and Self-Efficacy

Another factor tied to better health outcomes and an improved ability to cope with stress is having the belief that you have control over a situation. For example, in one study where participants were forced to listen to unpleasant (stressful) noise, those who were led to believe that they had control over the noise performed much better on proofreading tasks afterwards (Glass & Singer, 1972). In other words, even though participants did not have actual control over the noise, the control belief aided them in completing the task. In similar studies, perceived control benefited immune system functioning (Sieber et al., 1992).
Outside of the laboratory, studies have shown that older residents in assisted living facilities, which are notorious for low control, lived longer and showed better health outcomes when given control over something as simple as watering a plant or choosing when student volunteers came to visit (Rodin & Langer, 1977; Schulz & Hanusa, 1978). In addition, feeling in control of a threatening situation can actually change stress hormone levels (Dickerson & Kemeny, 2004). Believing that you have control over your own behaviors can also have a positive influence on important outcomes like smoking cessation, contraception use, and weight management (Wallston & Wallston, 1978). When individuals do not believe they have control, they do not try to change. Self-efficacy is closely related to control, in that people with high levels of this trait believe they can complete tasks and reach their goals. Just as feeling in control can reduce stress and improve health, higher self-efficacy can reduce stress and negative health behaviors, and is associated with better health (O’Leary, 1985). Social Relationships Research has shown that the impact of social isolation on our risk for disease and death is similar in magnitude to the risk associated with smoking regularly (Holt-Lunstad, Smith, & Layton, 2010; House, Landis, & Umberson, 1988). In fact, the importance of social relationships for our health is so significant that some scientists believe our body has developed a physiological system that encourages us to seek out our relationships, especially in times of stress (Taylor et al., 2000). Social integration is the concept used to describe the number of social roles that you have (Cohen & Wills, 1985), as well as the lack of isolation. For example, you might be a daughter, a basketball team member, a Humane Society volunteer, a coworker, and a student. Maintaining these different roles can improve your health via encouragement from those around you to maintain a healthy lifestyle. Those in your social network might also provide you with social support (e.g., when you are under stress). This support might include emotional help (e.g., a hug when you need it), tangible help (e.g., lending you money), or advice. By helping to improve health behaviors and reduce stress, social relationships can have a powerful, protective impact on health, and in some cases, might even help people with serious illnesses stay alive longer (Spiegel, Kraemer, Bloom, & Gottheil, 1989). Dispositions and Emotions: What’s Risky and What’s Protective? Negative dispositions and personality traits have been strongly tied to an array of health risks. One of the earliest negative trait-to-health connections was discovered in the 1950s by two cardiologists. They made the interesting discovery that there were common behavioral and psychological patterns among their heart patients that were not present in other patient samples. This pattern included being competitive, impatient, hostile, and time urgent. They labeled it Type A Behavior. Importantly, it was found to be associated with double the risk of heart disease as compared with Type B Behavior (Friedman & Rosenman, 1959). Since the 1950s, researchers have discovered that it is the hostility and competitiveness components of Type A that are especially harmful to heart health (Iribarren et al., 2000; Matthews, Glass, Rosenman, & Bortner, 1977; Miller, Smith, Turner, Guijarro, & Hallet, 1996). 
Hostile individuals are quick to get upset, and this angry arousal can damage the arteries of the heart. In addition, given their negative personality style, hostile people often lack a health-protective supportive social network. Positive traits and states, on the other hand, are often health protective. For example, characteristics like positive emotions (e.g., feeling happy or excited) have been tied to a wide range of benefits such as increased longevity, a reduced likelihood of developing some illnesses, and better outcomes once you are diagnosed with certain diseases (e.g., heart disease, HIV) (Pressman & Cohen, 2005). Across the world, even in the poorest and least developed nations, positive emotions are consistently tied to better health (Pressman, Gallagher, & Lopez, 2013). Positive emotions can also serve as the "antidote" to stress, protecting us against some of its damaging effects (Fredrickson, 2001; Pressman & Cohen, 2005; see Figure 10.4.2). Similarly, looking on the bright side can also improve health. Optimism has been shown to improve coping, reduce stress, and predict better disease outcomes like recovering from a heart attack more rapidly (Kubzansky, Sparrow, Vokonas, & Kawachi, 2001; Nes & Segerstrom, 2006; Scheier & Carver, 1985; Segerstrom, Taylor, Kemeny, & Fahey, 1998). Stress Management About 20 percent of Americans report having stress, with 18–33 year-olds reporting the highest levels (American Psychological Association, 2012). Given that the sources of our stress are often difficult to change (e.g., personal finances, current job), a number of interventions have been designed to help reduce these aversive responses to stress. For example, relaxation activities and forms of meditation are techniques that allow individuals to reduce their stress via breathing exercises, muscle relaxation, and mental imagery. Physiological arousal from stress can also be reduced via biofeedback, a technique where the individual is shown bodily information that is not normally available to them (e.g., heart rate), and then taught strategies to alter this signal. This type of intervention has even shown promise in reducing heart disease and hypertension risk, as well as other serious conditions (e.g., Moravec, 2008; Patel, Marmot, & Terry, 1981). But reducing stress does not have to be complicated! For example, exercise is a great stress reduction activity (Salmon, 2001) that has a myriad of health benefits. The Importance Of Good Health Practices As a student, you probably strive to maintain good grades, to have an active social life, and to stay healthy (e.g., by getting enough sleep), but there is a popular joke about what it's like to be in college: you can only pick two of these things (see Figure 10.4.3 for an example). The busy life of a college student doesn't always allow you to maintain all three areas of your life, especially during test-taking periods. In one study, researchers found that students taking exams were more stressed and, thus, smoked more, drank more caffeine, had less physical activity, and had worse sleep habits (Oaten & Cheng, 2005), all of which could have detrimental effects on their health. Positive health practices are especially important in times of stress, when your immune system is already compromised and you are exposed more often to the illnesses of your fellow students in lecture halls, cafeterias, and dorms. Psychologists study both health behaviors and health habits. The former are behaviors that can improve or harm your health.
Some examples include regular exercise, flossing, and wearing sunscreen, versus negative behaviors like drunk driving, pulling all-nighters, or smoking. These behaviors become habits when they are firmly established and performed automatically. For example, do you have to think about putting your seatbelt on or do you do it automatically? Habits are often developed early in life thanks to parental encouragement or the influence of our peer group. While these behaviors sound minor, studies have shown that those who engaged in more of these protective habits (e.g., getting 7–8 hours of sleep regularly, not smoking or drinking excessively, exercising) had fewer illnesses, felt better, and were less likely to die over a 9–12-year follow-up period (Belloc & Breslow, 1972; Breslow & Enstrom, 1980). For college students, health behaviors can even influence academic performance. For example, poor sleep quality and quantity are related to weaker learning capacity and academic performance (Curcio, Ferrara, & De Gennaro, 2006). Because health behaviors can have such far-reaching effects, psychologists devote much effort to understanding how to change unhealthy behaviors and why individuals fail to act in healthy ways. Health promotion involves enabling individuals to improve health by focusing on behaviors that pose a risk for future illness, as well as spreading knowledge on existing risk factors. These might be genetic risks you are born with, or something you developed over time like obesity, which puts you at risk for Type 2 diabetes and heart disease, among other illnesses. Psychology And Medicine There are many psychological factors that influence medical treatment outcomes. For example, older individuals (Meara, White, & Cutler, 2004), women (Briscoe, 1987), and those from higher socioeconomic backgrounds (Adamson, Ben-Shlomo, Chaturvedi, & Donovan, 2008) are all more likely to seek medical care. On the other hand, some individuals who need care might avoid it due to financial obstacles or preconceived notions about medical practitioners or the illness. Thanks to the growing amount of medical information online, many people now use the Internet for health information, and 38% report that this influences their decision to see a doctor (Fox & Jones, 2009). Unfortunately, this is not always a good thing because individuals tend to do a poor job assessing the credibility of health information. For example, college-student participants reading online articles about HIV and syphilis rated a physician's article and a college student's article as equally credible if the participants said they were familiar with the health topic (Eastin, 2001). The credibility of health information refers to how accurate or trustworthy it is, and judgments of credibility can be influenced by irrelevant factors, such as the website's design, logos, or the organization's contact information (Freeman & Spyridakis, 2004). Similarly, many people post health questions in unmoderated online forums where anyone can respond, which means unqualified individuals may supply inaccurate information about serious medical conditions. After individuals decide to seek care, there is also variability in the information they give their medical provider. Poor communication (e.g., due to embarrassment or feeling rushed) can influence the accuracy of the diagnosis and the effectiveness of the prescribed treatment. Similarly, there is variation following a visit to the doctor.
While most individuals leave the doctor's office with a health recommendation (e.g., buying and using a medication appropriately, losing weight, seeing another expert), not everyone adheres to those recommendations (Dunbar-Jacob & Mortimer-Stephens, 2010). For example, many individuals take medications inappropriately (e.g., stopping early, not filling prescriptions) or fail to make recommended behavior changes (e.g., quitting smoking). Unfortunately, getting patients to follow medical orders is not as easy as one would think. For example, in one study, over one third of diabetic patients failed to get proper medical care that would prevent or slow down diabetes-related blindness (Schoenfeld, Greene, Wu, & Leske, 2001)! Fortunately, as mobile technology improves, physicians now have the ability to monitor adherence and work to improve it (e.g., with pill bottles that monitor if they are opened at the right time). Even text messages are useful for improving treatment adherence and outcomes in depression, smoking cessation, and weight loss (Cole-Lewis & Kershaw, 2010). Being A Health Psychologist Training as a clinical health psychologist provides a variety of possible career options. Clinical health psychologists often work on teams of physicians, social workers, allied health professionals, and religious leaders. These teams may be formed in locations like rehabilitation centers, hospitals, primary care offices, emergency care centers, or chronic illness clinics. Work in each of these settings will pose unique challenges in patient care, but the primary responsibility will be the same. Clinical health psychologists will evaluate physical, personal, and environmental factors contributing to illness and preventing improved health. They will then help create a treatment strategy that takes into account all dimensions of a person's life and health, maximizing its potential for success. Those who specialize in health psychology can also conduct research to discover new health predictors and risk factors, or develop interventions to prevent and treat illness. Researchers studying health psychology work in numerous locations, such as universities, public health departments, hospitals, and private organizations. In the related field of behavioral medicine, careers focus on the application of this type of research. Occupations in this area might include jobs in occupational therapy, rehabilitation, or preventative medicine. Training as a health psychologist provides a wide skill set applicable in a number of different professional settings and career paths. The Future Of Health Psychology Much of the past medical research literature provides an incomplete picture of human health. "Health care" is often "illness care." That is, it focuses on the management of symptoms and illnesses as they arise. As a result, in many developed countries, we are faced with several health epidemics that are difficult and costly to treat. These include obesity, diabetes, and cardiovascular disease, to name a few. The National Institutes of Health has called for researchers to use the knowledge we have about risk factors to design effective interventions to reduce the prevalence of preventable illness. Additionally, there are a growing number of individuals across developed countries with multiple chronic illnesses and/or lasting disabilities, especially with older age. Addressing their needs and maintaining their quality of life will require skilled individuals who understand how to properly treat these populations.
Health psychologists will be at the forefront of work in these areas. With this focus on prevention, it is important that health psychologists move beyond studying risk factors (e.g., depression, stress, hostility, low socioeconomic status) in isolation, and move toward studying factors that confer resilience and protection from disease. There is, fortunately, a growing interest in studying the positive factors that protect our health (e.g., Diener & Chan, 2011; Pressman & Cohen, 2005; Richman, Kubzansky, Maselko, Kawachi, Choo, & Bauer, 2005), with evidence strongly indicating that people with higher positivity live longer, suffer fewer illnesses, and generally feel better. Seligman (2008) has even proposed a field of "Positive Health" to specifically study those who exhibit "above average" health—something we do not think about enough. By shifting some of the research focus to identifying and understanding these health-promoting factors, we may capitalize on this information to improve public health. Innovative interventions to improve health are already in use and continue to be studied. With recent advances in technology, we are starting to see great strides made to improve health with the aid of computational tools. For example, there are hundreds of simple applications (apps) that use email and text messages to send reminders to take medication, as well as mobile apps that allow us to monitor our exercise levels and food intake (in the growing mobile-health, or m-health, field). These m-health applications can be used to raise health awareness, support treatment and compliance, and remotely collect data on a variety of outcomes. Also exciting are devices that allow us to monitor physiology in real time; for example, to better understand the stressful situations that raise blood pressure or heart rate. With advances like these, health psychologists will be able to serve the population better, learn more about health and health behavior, and develop effective health-improving strategies that could be specifically targeted to certain populations or individuals. These leaps in equipment development, partnered with growing health psychology knowledge and advances in neuroscience and genetic research, will lead health researchers and practitioners into a promising new era where, hopefully, we will understand more and more about how to keep people healthy. Outside Resources App: 30 iPhone apps to monitor your health http://www.hongkiat.com/blog/iphone-health-app/ Quiz: Hostility http://www.mhhe.com/socscience/hhp/f...sheet_090.html Self-assessment: Perceived Stress Scale www.ncsu.edu/assessment/resou...ress_scale.pdf Self-assessment: What's your real age (based on your health practices and risk factors)? http://www.realage.com Video: Try out a guided meditation exercise to reduce your stress Web: American Psychosomatic Society http://www.psychosomatic.org/home/index.cfm Web: APA Division 38, Health Psychology http://www.health-psych.org Web: Society of Behavioral Medicine http://www.sbm.org Discussion Questions 1. What psychological factors contribute to health? 2. Which psychosocial constructs and behaviors might help protect us from the damaging effects of stress? 3. What kinds of interventions might help to improve resilience? Who will these interventions help the most? 4. How should doctors use research in health psychology when meeting with patients? 5. Why do clinical health psychologists play a critical role in improving public health?
Vocabulary Adherence In health, it is the ability of a patient to maintain a health behavior prescribed by a physician. This might include taking medication as prescribed, exercising more, or eating less high-fat food. Behavioral medicine A field similar to health psychology that integrates psychological factors (e.g., emotion, behavior, cognition, and social factors) in the treatment of disease. This applied field includes clinical areas of study, such as occupational therapy, hypnosis, rehabilitation medicine, and preventative medicine. Biofeedback The process by which physiological signals, not normally available to human perception, are transformed into easy-to-understand graphs or numbers. Individuals can then use this information to try to change bodily functioning (e.g., lower blood pressure, reduce muscle tension). Biomedical Model of Health A reductionist model that posits that ill health is a result of a deviation from normal function, which is explained by the presence of pathogens, injury, or genetic abnormality. Biopsychosocial Model of Health An approach to studying health and human function that posits the importance of biological, psychological, and social (or environmental) processes. Chronic disease A health condition that persists over time, typically for periods longer than three months (e.g., HIV, asthma, diabetes). Control Feeling like you have the power to change your environment or behavior if you need or want to. Daily hassles Irritations in daily life that are not necessarily traumatic, but that cause difficulties and repeated stress. Emotion-focused coping Coping strategy aimed at reducing the negative emotions associated with a stressful event. General Adaptation Syndrome A three-phase model of stress, which includes a mobilization of physiological resources phase, a coping phase, and an exhaustion phase (i.e., when an organism fails to cope with the stress adequately and depletes its resources). Health According to the World Health Organization, it is a complete state of physical, mental, and social well-being and not merely the absence of disease or infirmity. Health behavior Any behavior that is related to health—either good or bad. Hostility An experience or trait with cognitive, behavioral, and emotional components. It often includes cynical thoughts, feelings of anger, and aggressive behavior. Mind–body connection The idea that our emotions and thoughts can affect how our body functions. Problem-focused coping A set of coping strategies aimed at improving or changing stressful situations. Psychoneuroimmunology A field of study examining the relationship among psychology, brain function, and immune function. Psychosomatic medicine An interdisciplinary field of study that focuses on how biological, psychological, and social processes contribute to physiological changes in the body and health over time. Resilience The ability to "bounce back" from negative situations (e.g., illness, stress) to normal functioning or to simply not show poor outcomes in the face of adversity. In some cases, resilience may lead to better functioning following the negative experience (e.g., post-traumatic growth). Self-efficacy The belief that one can perform adequately in a specific situation. Social integration The size of your social network, or number of social roles (e.g., son, sister, student, employee, team member).
Social support The perception or actuality that we have a social network that can help us in times of need and provide us with a variety of useful resources (e.g., advice, love, money). Stress A pattern of physical and psychological responses in an organism after it perceives a threatening event that disturbs its homeostasis and taxes its abilities to cope with the event. Stressor An event or stimulus that induces feelings of stress. Type A Behavior Type A behavior is characterized by impatience, competitiveness, neuroticism, hostility, and anger. Type B Behavior Type B behavior reflects the absence of Type A characteristics and is represented by less competitive, aggressive, and hostile behavior patterns.
textbooks/socialsci/Psychology/Introductory_Psychology/Map%3A_Discover_Psychology_-_A_Brief_Introductory_Text_(Noba)/15%3A_PSYCHOLOGICAL_HEALTH/15.02%3A_The_Healthy_Life.txt
• 1.10: The Psychology of Human Sexuality Sexuality is one of the fundamental drives behind everyone’s feelings, thoughts, and behaviors. It defines the means of biological reproduction, describes psychological and sociological representations of self, and orients a person’s attraction to others. Further, it shapes the brain and body to be pleasure-seeking. Yet, as important as sexuality is to being human, it is often viewed as a taboo topic for personal or scientific inquiry. • 1.11: Biochemistry of Love Love is deeply biological. The evolutionary principles and ancient hormonal and neural systems that support the beneficial and healing effects of loving relationships are described here. • 1.1: The Brain The human brain is responsible for all behaviors, thoughts, and experiences described in this textbook. This module provides an introductory overview of the brain, including some basic neuroanatomy, and brief descriptions of the neuroscience methods used to study it. • 1.2: The Nervous System The mammalian nervous system is a complex biological organ, which enables many animals including humans to function in a coordinated fashion. The original design of this system is preserved across many animals through evolution; thus, adaptive physiological and behavioral functions are similar across many animal species. • 1.3: Neurons This module on the biological basis of behavior provides an overview of the basic structure of neurons and their means of communication. Having a basic knowledge of the fundamental structure and function of neurons is a necessary foundation as you move forward in the field of psychology. • 1.4: The Brain and Nervous System The brain is the most complex part of the human body. It is the center of consciousness and also controls all voluntary and involuntary movement and bodily functions. It communicates with each part of the body through the nervous system, a network of channels that carry electrochemical signals. • 1.5: Hormones and Behavior The goal of this module is to introduce you to the topic of hormones and behavior. This field of study is also called behavioral endocrinology, which is the scientific study of the interaction between hormones and behavior. • 1.6: Evolutionary Theories in Psychology Evolution or change over time occurs through the processes of natural and sexual selection. In response to problems in our environment, we adapt both physically and psychologically to ensure our survival and reproduction. • 1.7: The Nature-Nurture Question People have a deep intuition about what has been called the “nature–nurture question.” Some aspects of our behavior feel as though they originate in our genetic makeup, while others feel like the result of our upbringing or our own hard work. Genes and environments always combine to produce behavior, and the real science is in the discovery of how they combine for a given behavior. • 1.8: Epigenetics in Psychology Early life experiences exert a profound and long-lasting influence on physical and mental health throughout life. In this module, we survey recent developments revealing epigenetic aspects of mental health and review some of the challenges of epigenetic approaches in psychology to help explain how nurture shapes nature. • 1.9: Human Sexual Anatomy and Physiology It’s natural to be curious about anatomy and physiology. Being knowledgeable about anatomy and physiology increases our potential for pleasure, physical and psychological health, and life satisfaction. 
An appreciation of both the biological and psychological motivating forces behind sexual curiosity, desire, and the capacities of our brains can enhance the health of relationships. Chapter 1: Biological Basis of Behavior By Diane Beck and Evelina Tapia University of Illinois at Urbana-Champaign, University of Illinois The human brain is responsible for all behaviors, thoughts, and experiences described in this textbook. This module provides an introductory overview of the brain, including some basic neuroanatomy, and brief descriptions of the neuroscience methods used to study it. Learning Objectives • Name and describe the basic function of the brain stem, cerebellum, and cerebral hemispheres. • Name and describe the basic function of the four cerebral lobes: occipital, temporal, parietal, and frontal cortex. • Describe a split-brain patient and at least two important aspects of brain function that these patients reveal. • Distinguish between gray and white matter of the cerebral hemispheres. • Name and describe the most common approaches to studying the human brain. • Distinguish among four neuroimaging methods: PET, fMRI, EEG, and DOI. • Describe the difference between spatial and temporal resolution with regard to brain function. Introduction Any textbook on psychology would be incomplete without reference to the brain. Every behavior, thought, or experience described in the other modules must be implemented in the brain. A detailed understanding of the human brain can help us make sense of human experience and behavior. For example, one well-established fact about human cognition is that it is limited. We cannot do two complex tasks at once: We cannot read and carry on a conversation at the same time, text and drive, or surf the Internet while listening to a lecture, at least not successfully or safely. We cannot even pat our head and rub our stomach at the same time (with exceptions, see "A Brain Divided"). Why is this? Many people have suggested that such limitations reflect the fact that the behaviors draw on the same resource; if one behavior uses up most of the resource, there is not enough left for the other. But what might this limited resource be in the brain? The brain uses oxygen and glucose, delivered via the blood. The brain is a large consumer of these metabolites, using 20% of the oxygen and calories we consume despite being only 2% of our total weight. However, as long as we are not oxygen-deprived or malnourished, we have more than enough oxygen and glucose to fuel the brain. Thus, insufficient "brain fuel" cannot explain our limited capacity. Nor is it likely that our limitations reflect too few neurons. The average human brain contains 100 billion neurons. It is also not the case that we use only 10% of our brain, a myth that was likely started to imply we had untapped potential. Modern neuroimaging (see "Studying the Human Brain") has shown that we use all parts of the brain, just at different times, and certainly more than 10% at any one time. If we have an abundance of brain fuel and neurons, how can we explain our limited cognitive abilities? Why can't we do more at once? The most likely explanation is the way these neurons are wired up. We know, for instance, that many neurons in the visual cortex (the part of the brain responsible for processing visual information) are hooked up in such a way as to inhibit each other (Beck & Kastner, 2009). When one neuron fires, it suppresses the firing of other nearby neurons.
If two neurons that are hooked up in an inhibitory way both fire, then neither neuron can fire as vigorously as it would otherwise. This competitive behavior among neurons limits how much visual information the brain can respond to at the same time. Similar kinds of competitive wiring among neurons may underlie many of our limitations. Thus, although talking about limited resources provides an intuitive description of our limited capacity behavior, a detailed understanding of the brain suggests that our limitations more likely reflect the complex way in which neurons talk to each other rather than the depletion of any specific resource. The Anatomy of the Brain There are many ways to subdivide the mammalian brain, resulting in some inconsistent and ambiguous nomenclature over the history of neuroanatomy (Swanson, 2000). For simplicity, we will divide the brain into three basic parts: the brain stem, cerebellum, and cerebral hemispheres (see Figure 1.1.1). In Figure 1.1.2, however, we depict other prominent groupings (Swanson, 2000) of the six major subdivisions of the brain (Kandel, Schwartz, & Jessell, 2000). Brain Stem The brain stem is sometimes referred to as the "trunk" of the brain. It is responsible for many of the neural functions that keep us alive, including regulating our respiration (breathing), heart rate, and digestion. In keeping with its function, if a patient sustains severe damage to the brain stem, he or she will require "life support" (i.e., machines are used to keep him or her alive). Because of its vital role in survival, in many countries, a person who has lost brain stem function is said to be "brain dead," although other countries require significant tissue loss in the cortex (of the cerebral hemispheres), which is responsible for our conscious experience, for the same diagnosis. The brain stem includes the medulla, pons, midbrain, and diencephalon (which consists of the thalamus and hypothalamus). Collectively, these regions are also involved in our sleep–wake cycle, some sensory and motor function, as well as growth and other hormonal behaviors. Cerebellum The cerebellum is the distinctive structure at the back of the brain. The Greek philosopher and scientist Aristotle aptly referred to it as the "small brain" ("parencephalon" in Greek, "cerebellum" in Latin) in order to distinguish it from the "large brain" ("encephalon" in Greek, "cerebrum" in Latin). The cerebellum is critical for coordinated movement and posture. More recently, neuroimaging studies (see "Studying the Human Brain") have implicated it in a range of cognitive abilities, including language. It is perhaps not surprising that the cerebellum's influence extends beyond that of movement and posture, given that it contains the greatest number of neurons of any structure in the brain. However, the exact role it plays in these higher functions is still a matter of further study. Cerebral Hemispheres The cerebral hemispheres are responsible for our cognitive abilities and conscious experience. They consist of the cerebral cortex and accompanying white matter ("cerebrum" in Latin) as well as the subcortical structures of the basal ganglia, amygdala, and hippocampal formation. The cerebral cortex is the largest and most visible part of the brain, retaining the Latin name (cerebrum) for "large brain" that Aristotle coined.
It consists of two hemispheres (literally two half spheres) and gives the brain its characteristic gray and convoluted appearance; the folds and grooves of the cortex are called gyri and sulci (gyrus and sulcus if referring to just one), respectively. The two cerebral hemispheres can be further subdivided into four lobes: the occipital, temporal, parietal, and frontal lobes. The occipital lobe is responsible for vision, as is much of the temporal lobe. The temporal lobe is also involved in auditory processing, memory, and multisensory integration (e.g., the convergence of vision and audition). The parietal lobe houses the somatosensory (body sensations) cortex and structures involved in visual attention, as well as multisensory convergence zones. The frontal lobe houses the motor cortex and structures involved in motor planning, language, judgment, and decision-making. Not surprisingly then, the frontal lobe is proportionally larger in humans than in any other animal. The subcortical structures are so named because they reside beneath the cortex. The basal ganglia are critical to voluntary movement and as such make contact with the cortex, the thalamus, and the brain stem. The amygdala and hippocampal formation are part of the limbic system, which also includes some cortical structures. The limbic system plays an important role in emotion and, in particular, in aversion and gratification. A Brain Divided The two cerebral hemispheres are connected by a dense bundle of white matter tracts called the corpus callosum. Some functions are replicated in the two hemispheres. For example, both hemispheres are responsible for sensory and motor function, although the sensory and motor cortices have a contralateral (or opposite-side) representation; that is, the left cerebral hemisphere is responsible for movements and sensations on the right side of the body and the right cerebral hemisphere is responsible for movements and sensations on the left side of the body. Other functions are lateralized; that is, they reside primarily in one hemisphere or the other. For example, for right-handed and the majority of left-handed individuals, the left hemisphere is most responsible for language. There are some people whose two hemispheres are not connected, either because the corpus callosum was surgically severed (callosotomy) or due to a genetic abnormality. These split-brain patients have helped us understand the functioning of the two hemispheres. First, because of the contralateral representation of sensory information, if an object is placed in only the left or only the right visual hemifield, then only the right or left hemisphere, respectively, of the split-brain patient will see it. In essence, it is as though the person has two brains in his or her head, each seeing half the world. Interestingly, because language is very often localized in the left hemisphere, if we show the right hemisphere a picture and ask the patient what she saw, she will say she didn’t see anything (because only the left hemisphere can speak and it didn’t see anything). However, we know that the right hemisphere sees the picture because if the patient is asked to press a button whenever she sees the image, the left hand (which is controlled by the right hemisphere) will respond despite the left hemisphere’s denial that anything was there. There are also some advantages to having disconnected hemispheres. 
Unlike those with a fully functional corpus callosum, a split-brain patient can simultaneously search for something in his right and left visual fields (Luck, Hillyard, Mangun, & Gazzaniga, 1989) and can do the equivalent of rubbing his stomach and patting his head at the same time (Franz, Eliason, Ivry, & Gazzaniga, 1996). In other words, split-brain patients exhibit less competition between the hemispheres. Gray Versus White Matter The cerebral hemispheres contain both gray and white matter, so called because they appear grayish and whitish in dissections or in an MRI (magnetic resonance imaging; see "Studying the Human Brain"). The gray matter is composed of the neuronal cell bodies (see module, "Neurons"). The cell bodies (or soma) contain the genes of the cell and are responsible for metabolism (keeping the cell alive) and synthesizing proteins. In this way, the cell body is the workhorse of the cell. The white matter is composed of the axons of the neurons, and, in particular, axons that are covered with a sheath of myelin (a whitish, fatty insulating substance produced by glial support cells). Axons conduct the electrical signals from the cell and are, therefore, critical to cell communication. People use the expression "use your gray matter" when they want a person to think harder. The "gray matter" in this expression is probably a reference to the cerebral hemispheres more generally; the gray cortical sheet (the convoluted surface of the cortex) being the most visible. However, both the gray matter and white matter are critical to proper functioning of the mind. Losses of either result in deficits in language, memory, reasoning, and other mental functions. See Figure 1.1.3 for MRI slices showing the inner white matter that connects the cell bodies in the gray cortical sheet. Studying the Human Brain How do we know what the brain does? We have gathered knowledge about the functions of the brain from many different methods. Each method is useful for answering distinct types of questions, but the strongest evidence for a specific role or function of a particular brain area is converging evidence; that is, similar findings reported from multiple studies using different methods. One of the first organized attempts to study the functions of the brain was phrenology, a popular field of study in the first half of the 19th century. Phrenologists assumed that various features of the brain, such as its uneven surface, are reflected on the skull; therefore, they attempted to correlate bumps and indentations of the skull with specific functions of the brain. For example, they would claim that a very artistic person has ridges on the head that vary in size and location from those of someone who is very good at spatial reasoning. Although the assumption that the skull reflects the underlying brain structure has been proven wrong, phrenology nonetheless significantly impacted current-day neuroscience and its thinking about the functions of the brain: namely, the idea that different parts of the brain are devoted to very specific functions that can be identified through scientific inquiry. Neuroanatomy Dissection of the brain, in either animals or cadavers, has been a critical tool of neuroscientists since 340 BC, when Aristotle first published his dissections. Since then, this method has advanced considerably with the discovery of various staining techniques that can highlight particular cells.
Because the brain can be sliced very thinly, examined under the microscope, and particular cells highlighted, this method is especially useful for studying specific groups of neurons or small brain structures; that is, it has a very high spatial resolution. Dissections allow scientists to study changes in the brain that occur due to various diseases or experiences (e.g., exposure to drugs or brain injuries). Virtual dissection studies with living humans are also conducted. Here, the brain is imaged using computerized axial tomography (CAT) or MRI scanners; they reveal with very high precision the various structures in the brain and can help detect changes in gray or white matter. These changes in the brain can then be correlated with behavior, such as performance on memory tests, and, therefore, implicate specific brain areas in certain cognitive functions. Changing the Brain Some researchers induce lesions or ablate (i.e., remove) parts of the brain in animals. If the animal’s behavior changes after the lesion, we can infer that the removed structure is important for that behavior. Lesions of human brains are studied in patient populations only; that is, patients who have lost a brain region due to a stroke or other injury, or who have had surgical removal of a structure to treat a particular disease (e.g., a callosotomy to control epilepsy, as in split-brain patients). From such case studies, we can infer brain function by measuring changes in the behavior of the patients before and after the lesion. Because the brain works by generating electrical signals, it is also possible to change brain function with electrical stimulation. Transcranial magnetic stimulation (TMS) refers to a technique whereby a brief magnetic pulse is applied to the head that temporarily induces a weak electrical current in the brain. Although effects of TMS are sometimes referred to as temporary virtual lesions, it is more appropriate to describe the induced electricity as interference with neurons’ normal communication with each other. TMS allows very precise study of when events in the brain happen so it has a good temporal resolution, but its application is limited only to the surface of the cortex and cannot extend to deep areas of the brain. Transcranial direct current stimulation (tDCS) is similar to TMS except that it uses electrical current directly, rather than inducing it with magnetic pulses, by placing small electrodes on the skull. A brain area is stimulated by a low current (equivalent to an AA battery) for a more extended period of time than TMS. When used in combination with cognitive training, tDCS has been shown to improve performance of many cognitive functions such as mathematical ability, memory, attention, and coordination (e.g., Brasil-Neto, 2012; Feng, Bowden, & Kautz, 2013; Kuo & Nitsche, 2012). Neuroimaging Neuroimaging tools are used to study the brain in action; that is, when it is engaged in a specific task. Positron emission tomography (PET) records blood flow in the brain. The PET scanner detects the radioactive substance that is injected into the bloodstream of the participant just before or while he or she is performing some task (e.g., adding numbers). Because active neuron populations require metabolites, more blood and hence more radioactive substance flows into those regions. PET scanners detect the injected radioactive substance in specific brain regions, allowing researchers to infer that those areas were active during the task. 
Functional magnetic resonance imaging (fMRI) also relies on blood flow in the brain. This method, however, measures the changes in oxygen levels in the blood and does not require any substance to be injected into the participant. Both of these tools have good spatial resolution (although not as precise as dissection studies), but because it takes at least several seconds for the blood to arrive at the active areas of the brain, PET and fMRI have poor temporal resolution; that is, they do not tell us very precisely when the activity occurred. Electroencephalography (EEG), on the other hand, measures the electrical activity of the brain, and therefore, it has a much greater temporal resolution (millisecond precision rather than seconds) than PET or fMRI. As in tDCS, electrodes are placed on the participant's head when he or she is performing a task. In this case, however, many more electrodes are used, and they measure rather than produce activity. Because the electrical activity picked up at any particular electrode can be coming from anywhere in the brain, EEG has poor spatial resolution; that is, we have only a rough idea of which part of the brain generates the measured activity. Diffuse optical imaging (DOI) can give researchers the best of both worlds: high spatial and temporal resolution, depending on how it is used. Here, one shines infrared light into the brain and measures the light that comes back out. DOI relies on the fact that the properties of the light change when it passes through oxygenated blood, or when it encounters active neurons. Researchers can then infer from the properties of the collected light what regions in the brain were engaged by the task. When DOI is set up to detect changes in blood oxygen levels, the temporal resolution is low and comparable to PET or fMRI. However, when DOI is set up to directly detect active neurons, it has both high spatial and temporal resolution. Because the spatial and temporal resolution of each tool varies, the strongest evidence for what role a certain brain area serves comes from converging evidence. For example, we are more likely to believe that the hippocampal formation is involved in memory if multiple studies using a variety of tasks and different neuroimaging tools provide evidence for this hypothesis. The brain is a complex system, and only advances in brain research will show whether the brain can ever really understand itself. Outside Resources Video: Brain Bank at Harvard (National Geographic video) http://video.nationalgeographic.com/video/science/health-human-body-sci/human-body/brain-bank-sci/ Video: Frontal Lobes and Behavior (video #25) www.learner.org/resources/series142.html Video: Organization and Evaluation of Human Brain Function video (video #1) www.learner.org/resources/series142.html Video: Videos of a split-brain patient https://youtu.be/ZMLzP1VCANo Video: Videos of a split-brain patient (video #5) www.learner.org/resources/series142.html Web: Atlas of the Human Brain: interactive demos and brain sections http://www.thehumanbrain.info/ Web: Harvard University Human Brain Atlas: normal and diseased brain scans http://www.med.harvard.edu/aanlib/home.html Discussion Questions 1. In what ways does the segmentation of the brain into the brain stem, cerebellum, and cerebral hemispheres provide a natural division? 2. How has the study of split-brain patients been informative? 3. What is behind the expression "use your gray matter," and why is it not entirely accurate? 4.
Why is converging evidence the best kind of evidence in the study of brain function? 5. If you were interested in whether a particular brain area was involved in a specific behavior, what neuroscience methods could you use? 6. If you were interested in the precise time at which a particular brain process occurred, which neuroscience methods could you use? Vocabulary Ablation Surgical removal of brain tissue. Axial plane See "horizontal plane." Basal ganglia Subcortical structures of the cerebral hemispheres involved in voluntary movement. Brain stem The "trunk" of the brain comprised of the medulla, pons, midbrain, and diencephalon. Callosotomy Surgical procedure in which the corpus callosum is severed (used to control severe epilepsy). Case study A thorough study of a patient (or a few patients) with naturally occurring lesions. Cerebellum The distinctive structure at the back of the brain, Latin for "small brain." Cerebral cortex The outermost gray matter of the cerebrum; the distinctive convolutions characteristic of the mammalian brain. Cerebral hemispheres The cerebral cortex, underlying white matter, and subcortical structures. Cerebrum Usually refers to the cerebral cortex and associated white matter, but in some texts includes the subcortical structures. Contralateral Literally "opposite side"; used to refer to the fact that the two hemispheres of the brain process sensory information and motor commands for the opposite side of the body (e.g., the left hemisphere controls the right side of the body). Converging evidence Similar findings reported from multiple studies using different methods. Coronal plane A slice that runs from head to foot; brain slices in this plane are similar to slices of a loaf of bread, with the eyes being the front of the loaf. Diffuse optical imaging (DOI) A neuroimaging technique that infers brain activity by measuring changes in light as it is passed through the skull and surface of the brain. Electroencephalography (EEG) A neuroimaging technique that measures electrical brain activity via multiple electrodes on the scalp. Frontal lobe The front most (anterior) part of the cerebrum; anterior to the central sulcus and responsible for motor output and planning, language, judgment, and decision-making. Functional magnetic resonance imaging (fMRI) A neuroimaging technique that infers brain activity by measuring changes in oxygen levels in the blood. Gray matter The outer grayish regions of the brain comprised of the neurons' cell bodies. Gyri (plural) Folds between sulci in the cortex. Gyrus A fold between sulci in the cortex. Horizontal plane A slice that runs horizontally through a standing person (i.e., parallel to the floor); slices of brain in this plane divide the top and bottom parts of the brain; this plane is similar to slicing a hamburger bun. Lateralized To the side; used to refer to the fact that specific functions may reside primarily in one hemisphere or the other (e.g., for the majority of individuals, the left hemisphere is most responsible for language). Lesion A region in the brain that suffered damage through injury, disease, or medical intervention. Limbic system Includes the subcortical structures of the amygdala and hippocampal formation as well as some cortical structures; responsible for aversion and gratification. Metabolite A substance necessary for a living organism to maintain life.
Motor cortex Region of the frontal lobe responsible for voluntary movement; the motor cortex has a contralateral representation of the human body. Myelin Fatty tissue, produced by glial cells (see module, “Neurons”) that insulates the axons of the neurons; myelin is necessary for normal conduction of electrical impulses among neurons. Nomenclature Naming conventions. Occipital lobe The back most (posterior) part of the cerebrum; involved in vision. Parietal lobe The part of the cerebrum between the frontal and occipital lobes; involved in bodily sensations, visual attention, and integrating the senses. Phrenology A now-discredited field of brain study, popular in the first half of the 19th century that correlated bumps and indentations of the skull with specific functions of the brain. Positron emission tomography (PET) A neuroimaging technique that measures brain activity by detecting the presence of a radioactive substance in the brain that is initially injected into the bloodstream and then pulled in by active brain tissue. Sagittal plane A slice that runs vertically from front to back; slices of brain in this plane divide the left and right side of the brain; this plane is similar to slicing a baked potato lengthwise. Somatosensory (body sensations) cortex The region of the parietal lobe responsible for bodily sensations; the somatosensory cortex has a contralateral representation of the human body. Spatial resolution A term that refers to how small the elements of an image are; high spatial resolution means the device or technique can resolve very small elements; in neuroscience it describes how small of a structure in the brain can be imaged. Split-brain patient A patient who has had most or all of his or her corpus callosum severed. Subcortical Structures that lie beneath the cerebral cortex, but above the brain stem. Sulci (plural) Grooves separating folds of the cortex. Sulcus A groove separating folds of the cortex. Temporal lobe The part of the cerebrum in front of (anterior to) the occipital lobe and below the lateral fissure; involved in vision, auditory processing, memory, and integrating vision and audition. Temporal resolution A term that refers to how small a unit of time can be measured; high temporal resolution means capable of resolving very small units of time; in neuroscience it describes how precisely in time a process can be measured in the brain. Transcranial direct current stimulation (tDCS) A neuroscience technique that passes mild electrical current directly through a brain area by placing small electrodes on the skull. Transcranial magnetic stimulation (TMS) A neuroscience technique whereby a brief magnetic pulse is applied to the head that temporarily induces a weak electrical current that interferes with ongoing activity. Transverse plane See “horizontal plane.” Visual hemifield The half of visual space (what we see) on one side of fixation (where we are looking); the left hemisphere is responsible for the right visual hemifield, and the right hemisphere is responsible for the left visual hemifield. White matter The inner whitish regions of the cerebrum comprised of the myelinated axons of neurons in the cerebral cortex.
textbooks/socialsci/Psychology/Introductory_Psychology/Psychology_(Noba)/Chapter_1%3A_Biological_Basis_of_Behavior/1.1%3A_The_Brain.txt
By Don Lucas and Jennifer Fox Northwest Vista College Sexuality is one of the fundamental drives behind everyone’s feelings, thoughts, and behaviors. It defines the means of biological reproduction, describes psychological and sociological representations of self, and orients a person’s attraction to others. Further, it shapes the brain and body to be pleasure-seeking. Yet, as important as sexuality is to being human, it is often viewed as a taboo topic for personal or scientific inquiry. Learning Objectives • Explain how scientists study human sexuality. • Share a definition of human sexuality. • Distinguish between sex, gender, and sexual orientation. • Review common and alternative sexual behaviors. • Appraise how pleasure, sexual behaviors, and consent are intertwined. Introduction Sex makes the world go around: It makes babies bond, children giggle, adolescents flirt, and adults have babies. It is addressed in the holy books of the world’s great religions, and it infiltrates every part of society. It influences the way we dress, joke, and talk. In many ways, sex defines who we are. It is so important, the eminent neuropsychologist Karl Pribram (1958) described sex as one of four basic human drive states. Drive states motivate us to accomplish goals. They are linked to our survival. According to Pribram, feeding, fighting, fleeing, and sex are the four drives behind every thought, feeling, and behavior. Since these drives are so closely associated with our psychological and physical health, you might assume people would study, understand, and discuss them openly. Your assumption would be generally correct for three of the four drives (Malacane & Beckmeyer, 2016). Can you guess which drive is the least understood and openly discussed? This module presents an opportunity for you to think openly and objectively about sex. Without shame or taboo, using science as a lens, we examine fundamental aspects of human sexuality—including gender, sexual orientation, fantasies, behaviors, paraphilias, and sexual consent. The History of Scientific Investigations of Sex The history of human sexuality is as long as human history itself—200,000+ years and counting (Antón & Swisher, 2004). For almost as long as we have been having sex, we have been creating art, writing, and talking about it. Some of the earliest recovered artifacts from ancient cultures are thought to be fertility totems. The Hindu Kama Sutra (400 BCE to 200 CE)—an ancient text discussing love, desire, and pleasure—includes a how-to manual for having sexual intercourse. Rules, advice, and stories about sex are also contained in the Muslim Qur’an, Jewish Torah, and Christian Bible. By contrast, people have been scientifically investigating sex for only about 125 years. The first scientific investigations of sex employed the case study method of research. Using this method, the English physician Henry Havelock Ellis (1859-1939) examined diverse topics within sexuality, including arousal and masturbation. From 1897 to 1923, his findings were published in a seven-volume set of books titled Studies in the Psychology of Sex. Among his most noteworthy findings is that transgender people are distinct from homosexual people. Ellis’s studies led him to be an advocate of equal rights for women and comprehensive human sexuality education in public schools. 
Using case studies, the Austrian neurologist Sigmund Freud (1856-1939) is credited with being the first scientist to link sex to healthy development and to recognize humans as being sexual throughout their lifespans, including childhood (Freud, 1905). Freud (1923) argued that people progress through five stages of psychosexual development: oral, anal, phallic, latent, and genital. According to Freud, each of these stages could be passed through in a healthy or unhealthy manner. If a stage is passed through in an unhealthy manner, people might develop psychological problems, such as frigidity, impotence, or anal-retentiveness. The American biologist Alfred Kinsey (1894-1956) is commonly referred to as the father of human sexuality research. Kinsey was a world-renowned expert on wasps but later changed his focus to the study of humans. This shift happened because he wanted to teach a course on marriage but found data on human sexual behavior lacking. He believed that existing sexual knowledge was largely the product of guesswork and that human sexual behavior had never really been studied systematically or in an unbiased way. He decided to collect information himself using the survey method, and set a goal of interviewing 100,000 people about their sexual histories. Although he fell short of his goal, he still managed to collect 18,000 interviews! Much of the research on "behind closed doors" behaviors conducted by contemporary scientists builds on Kinsey's seminal work. Today, a broad range of scientific research on sexuality continues. It's a topic that spans various disciplines, including anthropology, biology, neurology, psychology, and sociology. Sex, Gender, and Sexual Orientation: Three Different Parts of You Applying for a credit card or filling out a job application requires your name, address, and birth date. Additionally, applications usually ask for your sex or gender. It's common for us to use the terms "sex" and "gender" interchangeably. However, in modern usage, these terms are distinct from one another. Sex describes the means of biological reproduction. Sex includes sexual organs, such as ovaries—defining what it is to be a female—or testes—defining what it is to be a male. Interestingly, biological sex is not as easily defined or determined as you might expect (see the section on variations in sex, below). By contrast, the term gender describes psychological (gender identity) and sociological (gender role) representations of biological sex. At an early age, we begin learning cultural norms for what is considered masculine and feminine. For example, children may associate long hair or dresses with femininity. Later in life, as adults, we often conform to these norms by behaving in gender-specific ways: as men, we build houses; as women, we bake cookies (Marshall, 1989; Money et al., 1955; Weinraub et al., 1984). Because cultures change over time, so too do ideas about gender. For example, European and American cultures today associate pink with femininity and blue with masculinity. However, less than a century ago, these same cultures were swaddling baby boys in pink, because of its masculine associations with "blood and war," and dressing little girls in blue, because of its feminine associations with the Virgin Mary (Kimmel, 1996). Sex and gender are important aspects of a person's identity. However, they do not tell us about a person's sexual orientation (Rule & Ambady, 2008). Sexual orientation refers to a person's sexual attraction to others.
Within the context of sexual orientation, sexual attraction refers to a person’s capacity to arouse the sexual interest of another, or, conversely, the sexual interest one person feels toward another. While some argue that sexual attraction is primarily driven by reproduction (e.g., Geary, 1998), empirical studies point to pleasure as the primary force behind our sex drive. For example, in a survey of college students who were asked, “Why do people have sex?” respondents gave more than 230 unique responses, most of which were related to pleasure rather than reproduction (Meston & Buss, 2007). Here’s a thought experiment to further demonstrate how reproduction has relatively little to do with driving sexual attraction: Add the number of times you’ve had and hope to have sex during your lifetime. With this number in mind, consider how many times the goal was (or will be) reproduction versus how many times it was (or will be) pleasure. Which number is greater? Although a person’s intimate behavior may have sexual fluidity—changing due to circumstances (Diamond, 2009)—sexual orientations are relatively stable over one’s lifespan, and are genetically rooted (Frankowski, 2004). One method of measuring these genetic roots is the sexual orientation concordance rate (SOCR). An SOCR is the probability that a pair of individuals has the same sexual orientation. SOCRs are calculated and compared between people who share the same genetics (monozygotic twins, 99%); some of the same genetics (dizygotic twins, 50%); siblings (50%); and non-related people, randomly selected from the population. Researchers find that SOCRs are highest for monozygotic twins, whereas SOCRs for dizygotic twins, siblings, and randomly-selected pairs do not significantly differ from one another (Bailey et al., 2016; Kendler et al., 2000); a brief numerical sketch of this comparison appears at the end of this passage. Because sexual orientation is a hotly debated issue, an appreciation of the genetic aspects of attraction can be an important piece of this dialogue. On Being Normal: Variations in Sex, Gender, and Sexual Orientation “Only the human mind invents categories and tries to force facts into separated pigeon-holes. The living world is a continuum in each and every one of its aspects. The sooner we learn this concerning human sexual behavior, the sooner we shall reach a sound understanding of the realities of sex.” (Kinsey, Pomeroy, & Martin, 1948, pp. 638–639) We live in an era when sex, gender, and sexual orientation are controversial religious and political issues. Some nations have laws against homosexuality, while others have laws protecting same-sex marriages. At a time when there seems to be little agreement among religious and political groups, it makes sense to wonder, “What is normal?” and, “Who decides?” The international scientific and medical communities (e.g., World Health Organization, World Medical Association, World Psychiatric Association, Association for Psychological Science) view variations of sex, gender, and sexual orientation as normal. Furthermore, variations of sex, gender, and sexual orientation occur naturally throughout the animal kingdom. More than 500 animal species have homosexual or bisexual orientations (Lehrer, 2006). More than 65,000 animal species are intersex—born with either an absence or some combination of male and female reproductive organs, sex hormones, or sex chromosomes (Jarne & Auld, 2006). In humans, intersex individuals make up about two percent—more than 150 million people—of the world’s population (Blackless et al., 2000).
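Because the SOCR discussed above is simply the proportion of pairs whose members report the same orientation, it can be made concrete in a few lines of code. The following Python sketch is purely illustrative: the pair data are invented and the function name is our own, not something drawn from Bailey et al. (2016) or Kendler et al. (2000); the point is only to show what a concordance rate means computationally.

```python
# Illustrative only: invented pair data, not real twin-registry results.
def concordance_rate(pairs):
    """Proportion of pairs in which both members report the same orientation."""
    concordant = sum(1 for a, b in pairs if a == b)
    return concordant / len(pairs)

# Each tuple is one hypothetical pair of self-reported orientations.
monozygotic_pairs = [("heterosexual", "heterosexual"), ("homosexual", "homosexual"),
                     ("heterosexual", "heterosexual"), ("homosexual", "heterosexual")]
dizygotic_pairs = [("heterosexual", "heterosexual"), ("homosexual", "heterosexual"),
                   ("heterosexual", "heterosexual"), ("heterosexual", "homosexual")]

print("Monozygotic SOCR:", concordance_rate(monozygotic_pairs))  # 0.75
print("Dizygotic SOCR:", concordance_rate(dizygotic_pairs))      # 0.5
```

In actual studies, the informative comparison is whether pairs who share all of their genes show a reliably higher concordance rate than pairs who share fewer of them, which is the pattern the research cited above reports.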
There are dozens of intersex conditions, such as Androgen Insensitivity Syndrome and Turner’s Syndrome (Lee et al., 2006). The term “syndrome” can be misleading; although intersex individuals may have physical limitations (e.g., about a third of Turner’s individuals have heart defects; Matura et al., 2007), they otherwise lead relatively normal intellectual, personal, and social lives. In any case, intersex individuals demonstrate the diverse variations of biological sex. Just as biological sex varies more widely than is commonly thought, so too does gender. Cisgender individuals’ gender identities correspond with their birth sexes, whereas transgender individuals’ gender identities do not correspond with their birth sexes. Because gender is so deeply ingrained culturally, rates of transgender individuals vary widely around the world (see Table 1). Although incidence rates of transgender individuals differ significantly between cultures, transgender females (TGFs)—whose birth sex was male—are by far the most frequent type of transgender individuals in any culture. Of the 18 countries studied by Meier and Labuski (2013), 16 of them had higher rates of TGFs than transgender males (TGMs)—whose birth sex was female— and the 18 country TGF to TGM ratio was 3 to 1. TGFs have diverse levels of androgyny—having both feminine and masculine characteristics. For example, five percent of the Samoan population are TGFs referred to as fa'afafine, who range in androgyny from mostly masculine to mostly feminine (Tan, 2016); in Pakistan, India, Nepal, and Bangladesh, TGFs are referred to as hijras, recognized by their governments as a third gender, and range in androgyny from only having a few masculine characteristics to being entirely feminine (Pasquesoone, 2014); and as many as six percent of biological males living in Oaxaca, Mexico are TGFs referred to as muxes, who range in androgyny from mostly masculine to mostly feminine (Stephen, 2002). Sexual orientation is as diverse as gender identity. Instead of thinking of sexual orientation as being two categories—homosexual and heterosexual—Kinsey argued that it’s a continuum (Kinsey, Pomeroy, & Martin, 1948). He measured orientation on a continuum, using a 7-point Likert scale called the Heterosexual-Homosexual Rating Scale, in which 0 is exclusively heterosexual, 3 is bisexual, and 6 is exclusively homosexual. Later researchers using this method have found 18% to 39% of Europeans and Americans identifying as somewhere between heterosexual and homosexual (Lucas et al., 2017; YouGov.com, 2015). These percentages drop dramatically (0.5% to 1.9%) when researchers force individuals to respond using only two categories (Copen, Chandra, & Febo-Vazquez, 2016; Gates, 2011). What Are You Doing? A Brief Guide to Sexual Behavior Just as we may wonder what characterizes particular gender or sexual orientations as “normal,” we might have similar questions about sexual behaviors. What is considered sexually normal depends on culture. Some cultures are sexually-restrictive—such as one extreme example off the coast of Ireland, studied in the mid-20th century, known as the island of Inis Beag. The inhabitants of Inis Beag detested nudity and viewed sex as a necessary evil for the sole purpose of reproduction. They wore clothes when they bathed and even while having sex. Further, sex education was nonexistent, as was breast feeding (Messenger, 1989). By contrast, Mangaians, of the South Pacific island of A’ua’u, are an example of a highly sexually-permissive culture. 
Young Mangaian boys are encouraged to masturbate. By age 13, they’re instructed by older males on how to sexually perform and maximize orgasms for themselves and their partners. When the boys are a bit older, this formal instruction is replaced with hands-on coaching by older females. Young girls are also expected to explore their sexuality and develop a breadth of sexual knowledge before marriage (Marshall & Suggs, 1971). These cultures make clear that which sexual behaviors are considered normal depends on time and place. Sexual behaviors are linked to, but distinct from, fantasies. Leitenberg and Henning (1995) define sexual fantasies as “any mental imagery that is sexually arousing.” One of the more common fantasies is the replacement fantasy—fantasizing about someone other than one’s current partner (Hicks & Leitenberg, 2001). In addition, more than 50% of people have forced-sex fantasies (Critelli & Bivona, 2008). However, this does not mean most of us want to cheat on our partners or be involved in sexual assault. Sexual fantasies are not equal to sexual behaviors. Sexual fantasies are often a context for the sexual behavior of masturbation—tactile (physical) stimulation of the body for sexual pleasure. Historically, masturbation has earned a bad reputation; it’s been described as “self-abuse,” and falsely associated with causing adverse side effects, such as hairy palms, acne, blindness, insanity, and even death (Kellogg, 1888). However, empirical evidence links masturbation to increased levels of sexual and marital satisfaction, and physical and psychological health (Hurlburt & Whitaker, 1991; Levin, 2007). There is even evidence that masturbation significantly decreases the risk of developing prostate cancer among males over the age of 50 (Dimitropoulou et al., 2009). Masturbation is common among males and females in the U.S. Robbins et al. (2011) found that 74% of males and 48% of females reported masturbating. However, frequency of masturbation is affected by culture. An Australian study found that only 58% of males and 42% of females reported masturbating (Smith, Rosenthal, & Reichler, 1996). Further, rates of reported masturbation by males and females in India are even lower, at 46% and 13%, respectively (Ramadugu et al., 2011). Coital sex is the term for vaginal-penile intercourse, which lasts about 3 to 13 minutes on average—though its duration and frequency decrease with age (Corty & Guardiani, 2008; Smith et al., 2012). Traditionally, people are known as “virgins” before they engage in coital sex, and are said to have “lost” their virginity afterwards. Durex (2005) found the average age of first coital experiences across 41 different countries to be 17 years, with a low of 16 (Iceland) and a high of 20 (India). There is tremendous variation in the frequency of coital sex. For example, the average number of times per year a person in Greece (138) or France (120) engages in coital sex is roughly 1.6 to 3 times the number in India (75) or Japan (45; Durex, 2005); a quick check of these ratios appears in the sketch at the end of this passage. Oral sex includes cunnilingus—oral stimulation of the female’s external sex organs—and fellatio—oral stimulation of the male’s external sex organs. The prevalence of oral sex differs widely between cultures—with Western cultures, such as the U.S., Canada, and Austria, reporting higher rates (greater than 75%), and Eastern and African cultures, such as Japan and Nigeria, reporting lower rates (less than 10%; Copen, Chandra, & Febo-Vazquez, 2016; Malacad & Hess, 2010; Wylie, 2009).
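The frequency comparison above is a simple ratio calculation, and the figures quoted from Durex (2005) can be checked in a few lines. The short Python sketch below is only an arithmetic illustration of the numbers already cited in this passage; it is not a reanalysis of the underlying survey.

```python
# Yearly coital-sex frequencies as quoted above (Durex, 2005).
acts_per_year = {"Greece": 138, "France": 120, "India": 75, "Japan": 45}

# Ratio of each higher-frequency country to each lower-frequency country.
for high in ("Greece", "France"):
    for low in ("India", "Japan"):
        ratio = acts_per_year[high] / acts_per_year[low]
        print(f"{high} vs. {low}: {ratio:.1f}x")

# Output runs from about 1.6 (France vs. India) to about 3.1 (Greece vs. Japan),
# consistent with the "roughly 1.6 to 3 times" comparison in the text.
```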
Not only are there differences between cultures regarding how many people engage in oral sex, there are also differences in its very definition. For example, most college students in the U.S. do not consider cunnilingus or fellatio to be sexual behaviors—and more than a third of college students believe oral sex is a form of abstinence (Barnett et al., 2017; Horan, Phillips, & Hagan, 1998; Sanders & Reinisch, 1999). Anal sex refers to penetration of the anus by an object. Anal sex is not exclusively a “homosexual behavior.” The anus has extensive sensory-nerve innervation and is often experienced as an erogenous zone, no matter where a person is on the Heterosexual-Homosexual Rating Scale (Cordeau et al., 2014). When heterosexual people are asked about their sexual behaviors, more than a third (about 40%) of both males and females report having had anal sex at some time during their life (Chandra, Mosher, & Copen, 2011; Copen, Chandra, & Febo-Vazquez, 2016). Comparatively, when homosexual men are asked about their most recent sexual behaviors, more than a third (37%) report having had anal sex (Rosenberger et al., 2011). Like heterosexual people, homosexual people engage in a variety of sexual behaviors, the most frequent being masturbation, romantic kissing, and oral sex (Rosenberger et al., 2011). The prevalence of anal sex also differs widely between cultures. For example, people in Greece and Italy report high rates of anal sex (greater than 50%), whereas people in China and India report low rates of anal sex (less than 15%; Durex, 2005). In contrast to these “more common” sexual behaviors, there is a vast array of alternative sexual behaviors. Some of these behaviors, such as voyeurism, exhibitionism, and pedophilia, are classified in the DSM as paraphilic disorders—behaviors that victimize and cause harm to others or one’s self (American Psychiatric Association, 2013). Sadism—inflicting pain upon another person to experience pleasure for one’s self—and masochism—receiving pain from another person to experience pleasure for one’s self—are also classified in the DSM as paraphilic disorders. However, if an individual consensually engages in these behaviors, the term “disorder” is replaced with the term “interest.” Janus and Janus (1993) found that 14% of males and 11% of females have engaged in some form of sadism and/or masochism. Sexual Consent Clearly, people engage in a multitude of behaviors whose variety is limited only by our own imaginations. Further, our standards for what’s normal differ substantially from culture to culture. However, there is one aspect of sexual behavior that is universally acceptable—indeed, fundamental and necessary. At the heart of what qualifies as sexually “normal” is the concept of consent. Sexual consent refers to the voluntary, conscious, and empathic participation in a sexual act, which can be withdrawn at any time (Jozkowski & Peterson, 2013). Sexual consent is the baseline for what are considered normal—acceptable and healthy—behaviors; nonconsensual sex—i.e., forced, pressured, or unconscious participation—is unacceptable and unhealthy. When engaging in sexual behaviors with a partner, a clear and explicit understanding of your boundaries, as well as your partner’s boundaries, is essential. We recommend safer-sex practices, such as using condoms, honesty, and communication, whenever you engage in a sexual act. Discussing likes, dislikes, and limits prior to sexual exploration reduces the likelihood of miscommunication and misjudging nonverbal cues.
In the heat of the moment, things are not always what they seem. For example, Kristen Jozkowski and her colleagues (2014) found that females tend to use verbal strategies of consent, whereas males tend to rely on nonverbal indications of consent. Awareness of this basic mismatch between heterosexual couples’ exchanges of consent may proactively reduce miscommunication and unwanted sexual advances. The universal principles of pleasure, sexual behaviors, and consent are intertwined. Consent is the foundation on which sexual activity needs to be built. Understanding and practicing empathic consent requires sexual literacy and an ability to effectively communicate desires and limits, as well as to respect others’ parameters. Conclusion Considering the amount of attention people give to the topic of sex, it’s surprising how little most actually know about it. Historically, people’s beliefs about sexuality have emerged as having absolute moral, physical, and psychological boundaries. The truth is, sex is less concrete than most people assume. Gender and sexual orientation, for example, are not either/or categories. Instead, they are continuums. Similarly, sexual fantasies and behaviors vary greatly by individual and culture. Ultimately, open discussions about sexual identity and sexual practices will help people better understand themselves, others, and the world around them. Acknowledgements The authors are indebted to Robert Biswas-Diener, Trina Cowan, Kara Paige, and Liz Wright for editing drafts of this module. Outside Resources Documentary: I am Elizabeth Smart. In 2002, Elizabeth Smart became a household name when news of her kidnapping from her home—at age 14—made national news. She was the victim of sexual assault and was held hostage for nearly a year, until she escaped. Today, she is an outspoken advocate for issues related to sex education and human trafficking. She is also author of an autobiography. Note: some content may be behind a paywall. http://www.aetv.com/shows/elizabeth-smart-autobiography/season-1/episode-1 Journal: The Journal of Sex Research www.sexscience.org/journal_of_sex_research/ Journal: The Journal of Sexual Medicine http://www.jsm.jsexmed.org/ Non-fiction book: Missoula. In 2015, journalist Jon Krakauer wrote a book discussing rape on college campuses by focusing on a single town: Missoula, Montana (USA). www.amazon.com/Missoula-Rape...=UTF8&qid=&sr= Organization: SIECUS - the Sexuality Information and Education Council of the United States- was founded in 1964 to provide education and information about sexuality and sexual and reproductive health. http://www.siecus.org/ Organization: The Guttmacher Institute is a leading research and policy organization committed to advancing sexual and reproductive health and rights in the United States and globally. https://www.guttmacher.org/ Organization: The Intersex Society of North America http://www.isna.org/ Podcast : This American Life - Sissies, This episode focuses on perceptions of masculinity and of being seen as a “sissy.” The transcript can be found here. https://www.thisamericanlife.org/rad...190/transcript Podcast: This American Life - Testosterone, Stories of people getting more testosterone and regretting it and some of people losing it and coming to appreciate their new circumstances. https://www.thisamericanlife.org/rad.../testosterone/ Video: 5MIweekly—YouTube channel with weekly videos that playfully and scientifically examine human sexuality. 
https://www.youtube.com/channel/UCQFQ0vPPNPS-LYhlbKOzpFw Video: Muxes, a documentary about Mexican children identified as male at birth, but who choose at a young age to be raised as female. Video: Sexplanations—YouTube channel with shame-free educational videos on everything sex. https://www.youtube.com/user/sexplanations Video: YouTube: AsapSCIENCE https://www.youtube.com/user/AsapSCIENCE Web: Kinsey Confidential—Podcast with empirically-based answers about sexual questions. kinseyconfidential.org/ Web: Sex & Psychology—Blog about the science of sex, love, and relationships. http://www.lehmiller.com/ Discussion Questions 1. Of the four basic human drive states Karl Pribram describes as being linked to our survival, why do you think the sex drive is the least likely to be openly and objectively addressed? 2. How might you go about scientifically investigating attitudes and behaviors regarding masturbation across various cultures? 3. Discuss the three different parts of you as described by this module. 4. How would you define “natural” human sexual behavior with respect to sex, gender, and sexual orientation? How does nature (i.e., the animal kingdom) help us define what is considered natural? 5. Why do humans feel compelled to categorize themselves and others based on their sex, gender, and sexual orientation? What would the world be like if these categories were removed? 6. How has culture influenced your sexual attitudes and behaviors? 7. The concept of sexual consent is seemingly simple; however, as this module presents, it is oftentimes skewed or ignored. Identify at least three factors that contribute to the complexities of consent, and how these factors might best be addressed to reduce unwanted sexual advances. Vocabulary Anal sex Penetration of the anus by an animate or inanimate object. Androgyny Having both feminine and masculine characteristics. Bisexual Attraction to two sexes. Case study An in-depth and objective examination of the details of a single person or entity. Cisgender When a person’s birth sex corresponds with his/her gender identity and gender role. Coital sex Vaginal-penile intercourse. Cunnilingus Oral stimulation of the female’s external sex organs. Dizygotic twins Twins conceived from two ova and two sperm. Fellatio Oral stimulation of the male’s external sex organs. Five stages of psychosexual development Oral, anal, phallic, latency, and genital. Gender The psychological and sociological representations of one’s biological sex. Gender identity Personal depictions of masculinity and femininity. Gender roles Societal expectations of masculinity and femininity. Heterosexual Opposite-sex attraction. Homosexual Same-sex attraction. Intersex Born with either an absence or some combination of male and female reproductive organs, sex hormones, or sex chromosomes. Masochism Receiving pain from another person to experience pleasure for one’s self. Masturbation Tactile stimulation of the body for sexual pleasure. Monozygotic twins Twins conceived from a single ovum and a single sperm, therefore genetically identical. Oral sex Cunnilingus or fellatio. Paraphilic disorders Sexual behaviors that cause harm to others or one’s self. Replacement fantasy Fantasizing about someone other than one’s current partner. Sadism Inflicting pain upon another person to experience pleasure for one’s self. 
Safer-sex practices Doing anything that may decrease the probability of sexual assault, sexually transmitted infections, or unwanted pregnancy; this may include using condoms, honesty, and communication. Sex An organism’s means of biological reproduction. Sexual attraction The capacity a person has to elicit or feel sexual interest. Sexual consent Permission that is voluntary, conscious, and able to be withdrawn at any time. Sexual fluidity Personal sexual attributes changing due to psychosocial circumstances. Sexual literacy The lifelong pursuit of accurate human sexuality knowledge, and recognition of its various multicultural, historical, and societal contexts; the ability to critically evaluate sources and discern empirical evidence from unreliable and inaccurate information; the acknowledgment of humans as sexual beings; and an appreciation of sexuality’s contribution to enhancing one’s well-being and pleasure in life. Sexual orientation A person’s sexual attraction to other people. Survey method One method of research that uses a predetermined and methodical list of questions, systematically given to samples of individuals, to predict behaviors within the population. Transgender A person whose gender identity or gender role does not correspond with his/her birth sex. Transgender female (TGF) A transgender person whose birth sex was male. Transgender male (TGM) A transgender person whose birth sex was female.
textbooks/socialsci/Psychology/Introductory_Psychology/Psychology_(Noba)/Chapter_1%3A_Biological_Basis_of_Behavior/1.10%3A_The_Psychology_of_Human_Sexuality.txt
By Sue Carter and Stephen Porges University of North Carolina, Northeastern University - Boston Love is deeply biological. It pervades every aspect of our lives and has inspired countless works of art. Love also has a profound effect on our mental and physical state. A “broken heart” or a failed relationship can have disastrous effects; bereavement disrupts human physiology and may even precipitate death. Without loving relationships, humans fail to flourish, even if all of their other basic needs are met. As such, love is clearly not “just” an emotion; it is a biological process that is both dynamic and bidirectional in several dimensions. Social interactions between individuals, for example, trigger cognitive and physiological processes that influence emotional and mental states. In turn, these changes influence future social interactions. Similarly, the maintenance of loving relationships requires constant feedback through sensory and cognitive systems; the body seeks love and responds constantly to interactions with loved ones or to the absence of such interactions. The evolutionary principles and ancient hormonal and neural systems that support the beneficial and healing effects of loving relationships are described here. learning objectives • Understand the role of Oxytocin in social behaviors. • Articulate the functional differences between Vasopressin and Oxytocin. • List sex differences in reaction to stress. Introduction Although evidence exists for the healing power of love, only recently has science turned its attention to providing a physiological explanation for love. The study of love in this context offers insight into many important topics, including the biological basis of interpersonal relationships and why and how disruptions in social bonds have such pervasive consequences for behavior and physiology. Some of the answers will be found in our growing knowledge of the neurobiological and endocrinological mechanisms of social behavior and interpersonal engagement. The evolution of social behavior Nothing in biology makes sense except in the light of evolution. Theodosius Dobzhansky’s famous dictum also holds true for explaining the evolution of love. Life on earth is fundamentally social: The ability to dynamically interact with other living organisms to support mutual homeostasis, growth, and reproduction evolved very early. Social interactions are present in primitive invertebrates and even among prokaryotes: Bacteria recognize and approach members of their own species. Bacteria also reproduce more successfully in the presence of their own kind and are able to form communities with physical and chemical characteristics that go far beyond the capabilities of the individual cell (Ingham & Ben-Jacob, 2008). As another example, various insect species have evolved particularly complex social systems, known as eusociality. Characterized by a division of labor, eusociality appears to have evolved independently at least 11 times in insects. Research on honeybees indicates that a complex set of genes and their interactions regulate eusociality, and that these resulted from an “accelerated form of evolution” (Woodard et al., 2011). In other words, molecular mechanisms favoring high levels of sociality seem to be on an evolutionary fast track. The evolutionary pathways that led from reptiles to mammals allowed the emergence of the unique anatomical systems and biochemical mechanisms that enable social engagement and selectively reciprocal sociality. 
Reptiles show minimal parental investment in offspring and form nonselective relationships between individuals. Pet owners may become emotionally attached to their turtle or snake, but this relationship is not reciprocal. In contrast, most mammals show intense parental investment in offspring and form lasting bonds with their children. Many mammalian species—including humans, wolves, and prairie voles—also develop long-lasting, reciprocal, and selective relationships between adults, with several features of what humans experience as “love.” In turn, these reciprocal interactions trigger dynamic feedback mechanisms that foster growth and health. What is love? An evolutionary and physiological perspective Human love is more complex than simple feedback mechanisms. Love may create its own reality. The biology of love originates in the primitive parts of the brain—the emotional core of the human nervous system—which evolved long before the cerebral cortex. The brain “in love” is flooded with vague sensations, often transmitted by the vagus nerve, and creating much of what we experience as emotion. The modern cortex struggles to interpret love’s primal messages, and weaves a narrative around incoming visceral experiences, potentially reacting to that narrative rather than to reality. It also is helpful to realize that mammalian social behavior is supported by biological components that were repurposed or co-opted over the course of mammalian evolution, eventually permitting lasting relationships between adults. Is there a hormone of love and other relationships? One element that repeatedly appears in the biochemistry of love is the neuropeptide oxytocin. In large mammals, oxytocin adopts a central role in reproduction by helping to expel the big-brained baby from the uterus, ejecting milk and sealing a selective and lasting bond between mother and offspring (Keverne, 2006). Mammalian offspring crucially depend on their mother’s milk for some time after birth. Human mothers also form a strong and lasting bond with their newborns immediately after birth, in a time period that is essential for the nourishment and survival of the baby. However, women who give birth by cesarean section without going through labor, or who opt not to breastfeed, are still able to form a strong emotional bond with their children. Furthermore, fathers, grandparents, and adoptive parents also form lifelong attachments to children. Preliminary evidence suggests that the simple presence of an infant can release oxytocin in adults as well (Feldman, 2012; Kenkel et al., 2012). The baby virtually forces us to love it. The case for a major role for oxytocin in love is strong, but until recently was based largely on extrapolation from research on parental behavior (Feldman, 2012) or social behaviors in animals (Carter, 1998; Kenkel et al., 2012). However, recent human experiments have shown that intranasal delivery of oxytocin can facilitate social behaviors, including eye contact and social cognition (Meyer-Lindenberg, Domes, Kirsch, & Heinrichs, 2011)—behaviors that are at the heart of love. Of course, oxytocin is not the molecular equivalent of love. Rather, it is just one important component of a complex neurochemical system that allows the body to adapt to highly emotional situations. The systems necessary for reciprocal social interactions involve extensive neural networks through the brain and autonomic nervous system that are dynamic and constantly changing across the life span of an individual. 
We also now know that the properties of oxytocin are not predetermined or fixed. Oxytocin’s cellular receptors are regulated by other hormones and epigenetic factors. These receptors change and adapt based on life experiences. Both oxytocin and the experience of love can change over time. In spite of limitations, new knowledge of the properties of oxytocin has proven useful in explaining several enigmatic features of love. Stress and love Emotional bonds can form during periods of extreme duress, especially when the survival of one individual depends on the presence and support of another. There also is evidence that oxytocin is released in response to acutely stressful experiences, perhaps serving as hormonal “insurance” against overwhelming stress. Oxytocin may help to ensure that parents and others will engage with and care for infants; develop stable, loving relationships; and seek out and receive support from others in times of need. Animal models and the biology of social bonds To dissect the anatomy and chemistry of love, scientists needed a biological equivalent of the Rosetta Stone. Just as the actual stone helped linguists decipher an archaic language by comparison to a known one, animal models are helping biologists draw parallels between ancient physiology and contemporary behaviors. Studies of socially monogamous mammals that form long-lasting social bonds, such as prairie voles, have been especially helpful in understanding the biology of human social behavior. There is more to love than oxytocin Research in prairie voles showed that, as in humans, oxytocin plays a major role in social interactions and parental behavior (Carter, 1998; Carter, Boone, Pournajafi-Nazarloo, & Bales, 2009; Kenkel et al., 2012). Of course, oxytocin does not act alone. Its release and actions depend on many other neurochemicals, including endogenous opioids and dopamine (Aragona & Wang, 2009). Particularly important to social bonding are the interactions of oxytocin with a related neuropeptide known as vasopressin. The systems regulated by oxytocin and vasopressin are sometimes redundant. Both peptides are implicated in behaviors that require social engagement by either males or females, such as huddling over an infant (Kenkel et al., 2012). For example, it was necessary in voles to block both oxytocin and vasopressin receptors to induce a significant reduction in social engagement, either among adults or between adults and infants. Blocking only one of these two receptors did not eliminate social approach or contact. However, antagonists for either the oxytocin or the vasopressin receptor inhibited the selective sociality, which is essential for the expression of a social bond (Bales, Kim, Lewis-Reese, & Carter, 2004; Cho, DeVries, Williams, & Carter, 1999); this blockade pattern is summarized schematically in the sketch at the end of this passage. If we accept selective social bonds, parenting, and mate protection as proxies for love in humans, research in animals supports the hypothesis that oxytocin and vasopressin interact to allow the dynamic behavioral states and behaviors necessary for love. Oxytocin and vasopressin have shared functions, but they are not identical in their actions. The specific behavioral roles of oxytocin and vasopressin are especially difficult to untangle because they are components of an integrated neural network with many points of intersection. Moreover, the genes that regulate the production of oxytocin and vasopressin are located on the same chromosome, possibly allowing coordinated synthesis or release of these peptides.
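The and/or logic of the blockade experiments described above is easy to miss on a first read, so here is a toy schematic in Python. This is not a model of the vole data; it simply restates, as two boolean rules, the simplified pattern reported above: general social engagement drops only when both receptor types are blocked, whereas the selective (partner-specific) component of sociality is lost if either one is blocked. The function names and truth-table framing are ours, purely for illustration.

```python
# Schematic restatement of the blockade pattern described above; not real data.

def general_social_engagement(oxt_blocked, avp_blocked):
    # Reported to drop significantly only when BOTH receptor types are blocked.
    return not (oxt_blocked and avp_blocked)

def selective_sociality(oxt_blocked, avp_blocked):
    # Reported to be inhibited when EITHER receptor type is blocked.
    return not (oxt_blocked or avp_blocked)

for oxt in (False, True):
    for avp in (False, True):
        print(f"OXT blocked={oxt}, AVP blocked={avp} -> "
              f"general engagement={general_social_engagement(oxt, avp)}, "
              f"selective bond={selective_sociality(oxt, avp)}")
```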
Both peptides can bind to and have antagonist or agonist effects on each other’s receptors. Furthermore, the pathways necessary for reciprocal social behavior are constantly adapting: These peptides and the systems that they regulate are always in flux. In spite of these difficulties, some of the different functions of oxytocin and vasopressin have been identified. Functional differences between vasopressin and oxytocin Vasopressin is associated with physical and emotional mobilization, and can help support vigilance and behaviors needed for guarding a partner or territory (Carter, 1998), as well as other forms of adaptive self-defense (Ferris, 2008). Vasopressin also may protect against physiologically “shutting down” in the face of danger. In many mammalian species, mothers exhibit agonistic behaviors in defense of their young, possibly through the interactive actions of vasopressin and oxytocin (Bosch & Neumann, 2012). Prior to mating, prairie voles are generally social, even toward strangers. However, within a day or so of mating, they begin to show high levels of aggression toward intruders (Carter, DeVries, & Getz, 1995), possibly serving to protect or guard a mate, family, or territory. This mating-induced aggression is especially obvious in males. Oxytocin, in contrast, is associated with immobility without fear. This includes relaxed physiological states and postures that permit birth, lactation, and consensual sexual behavior. Although not essential for parenting, the increase of oxytocin associated with birth and lactation may make it easier for a woman to be less anxious around her newborn and to experience and express loving feelings for her child (Carter & Altemus, 1997). In highly social species such as prairie voles (Kenkel et al., 2013), and presumably in humans, the intricate molecular dances of oxytocin and vasopressin fine-tune the coexistence of caretaking and protective aggression. Fatherhood also has a biological basis The biology of fatherhood is less well studied than that of motherhood. However, male care of offspring also appears to rely on both oxytocin and vasopressin (Kenkel et al., 2012), probably acting in part through effects on the autonomic nervous system (Kenkel et al., 2013). Even sexually naïve male prairie voles show spontaneous parental behavior in the presence of an infant (Carter et al., 1995). However, the stimuli from infants or the nature of the social interactions that release oxytocin and vasopressin may differ between the sexes (Feldman, 2012). At the heart of the benefits of love is a sense of safety Parental care and support in a safe environment are particularly important for mental health in social mammals, including humans and prairie voles. Studies of rodents and of lactating women suggest that oxytocin has the important capacity to modulate the behavioral and autonomic distress that typically follows separation from a mother, child, or partner, reducing defensive behaviors and thereby supporting growth and health (Carter, 1998). The absence of love in early life can be detrimental to mental and physical health During early life in particular, trauma or neglect may produce behaviors and emotional states in humans that are socially pathological. Because the processes involved in creating social behaviors and social emotions are delicately balanced, these may be triggered in inappropriate contexts, leading to aggression toward friends or family.
Alternatively, bonds may be formed with prospective partners who fail to provide social support or protection. Sex differences exist in the consequences of early life experiences Males seem to be especially vulnerable to the negative effects of early experiences, possibly helping to explain the increased sensitivity of males to various developmental disorders. The implications of sex differences in the nervous system and in the response to stressful experiences for social behavior are only slowly becoming apparent (Carter et al., 2009). Both males and females produce vasopressin and oxytocin and are capable of responding to both hormones. However, in brain regions that are involved in defensive aggression, such as the extended amygdala and lateral septum, the production of vasopressin is androgen-dependent. Thus, in the face of a threat, males may be experiencing higher central levels of vasopressin. Oxytocin and vasopressin pathways, including the peptides and their receptors, are regulated by coordinated genetic, hormonal, and epigenetic factors that influence the adaptive and behavioral functions of these peptides across the animal’s life span. As a result, the endocrine and behavioral consequences of a stress or challenge may be different for males and females (DeVries, DeVries, Taymans, & Carter, 1996). For example, when unpaired prairie voles were exposed to an intense but brief stressor, such as a few minutes of swimming, or injection of the adrenal hormone corticosterone, the males (but not females) quickly formed new pair bonds. These and other experiments suggest that males and females have different coping strategies, and possibly may experience both stressful experiences, and even love, in ways that are gender-specific. In the context of nature and evolution, sex differences in the nervous system are important. However, sex differences in brain and behavior also may help to explain gender differences in the vulnerability to mental and physical disorders (Taylor, et al., 2000). Better understanding these differences will provide clues to the physiology of human mental health in both sexes. Loving relationships in early life can have epigenetic consequences Love is “epigenetic.” That is, positive experiences in early life can act upon and alter the expression of specific genes. These changes in gene expression may have behavioral consequences through simple biochemical changes, such as adding a methyl group to a particular site within the genome (Zhang & Meaney, 2010). It is possible that these changes in the genome may even be passed to the next generation. Social behaviors, emotional attachment to others, and long-lasting reciprocal relationships also are both plastic and adaptive, and so is the biology upon which they are based. For example, infants of traumatized or highly stressed parents might be chronically exposed to vasopressin, either through their own increased production of the peptide, or through higher levels of vasopressin in maternal milk. Such increased exposure could sensitize the infant to defensive behaviors or create a lifelong tendency to overreact to threat. Based on research in rats, it seems that in response to adverse early experiences of chronic isolation, the genes for vasopressin receptors can become upregulated (Zhang et al., 2012), leading to an increased sensitivity to acute stressors or anxiety that may persist throughout life. 
Epigenetic programming triggered by early life experiences is adaptive in allowing neuroendocrine systems to project and plan for future behavioral demands. But epigenetic changes that are long-lasting also can create atypical social or emotional behaviors (Zhang & Meaney, 2010) that may be especially likely to surface in later life, and in the face of social or emotional challenges. Exposure to exogenous hormones in early life also may be epigenetic. For example, prairie voles treated postnatally with vasopressin (especially males) were later more aggressive, whereas those exposed to a vasopressin antagonist showed less aggression in adulthood. Conversely, in voles the exposure of infants to slightly increased levels of oxytocin during development increased the tendency to show a pair bond. However, these studies also showed that a single exposure to a higher level of oxytocin in early life could disrupt the later capacity to pair bond (Carter et al., 2009). There is little doubt that either early social experiences or the effects of developmental exposure to these neuropeptides holds the potential to have long-lasting effects on behavior. Both parental care and exposure to oxytocin in early life can permanently modify hormonal systems, altering the capacity to form relationships and influence the expression of love across the life span. Our preliminary findings in voles further suggest that early life experiences affect the methylation of the oxytocin receptor gene and its expression (Connelly, Kenkel, Erickson, & Carter, 2011). Thus, we can plausibly argue that love is epigenetic. The absence of social behavior or isolation also has consequences for the oxytocin system Given the power of positive social experiences, it is not surprising that a lack of social relationships also may lead to alterations in behavior as well as changes in oxytocin and vasopressin pathways. We have found that social isolation reduced the expression of the gene for the oxytocin receptor, and at the same time increased the expression of genes for the vasopressin peptide. In female prairie voles, isolation also was accompanied by an increase in blood levels of oxytocin, possibly as a coping mechanism. However, over time, isolated prairie voles of both sexes showed increases in measures of depression, anxiety, and physiological arousal, and these changes were observed even when endogenous oxytocin was elevated. Thus, even the hormonal insurance provided by endogenous oxytocin in face of the chronic stress of isolation was not sufficient to dampen the consequences of living alone. Predictably, when isolated voles were given additional exogenous oxytocin, this treatment did restore many of these functions to normal (Grippo, Trahanas, Zimmerman, Porges, & Carter, 2009). In modern societies, humans can survive, at least after childhood, with little or no human contact. Communication technology, social media, electronic parenting, and many other recent technological advances may reduce social behaviors, placing both children and adults at risk for social isolation and disorders of the autonomic nervous system, including deficits in their capacity for social engagement and love (Porges, 2011). Social engagement actually helps us to cope with stress. The same hormones and areas of the brain that increase the capacity of the body to survive stress also enable us to better adapt to an ever-changing social and physical environment. 
Individuals with strong emotional support and relationships are more resilient in the face of stressors than those who feel isolated or lonely. Lesions in various bodily tissues, including the brain, heal more quickly in animals that are living socially versus in isolation (Karelina & DeVries, 2011). The protective effects of positive sociality seem to rely on the same cocktail of hormones that carries a biological message of “love” throughout the body. Can love—or perhaps oxytocin—be a medicine? Although research has only begun to examine the physiological effects of these peptides beyond social behavior, there is a wealth of new evidence showing that oxytocin can influence physiological responses to stress and injury. As only one example, the molecules associated with love have restorative properties, including the ability to literally heal a “broken heart.” Oxytocin receptors are expressed in the heart, and precursors for oxytocin appear to be critical for the development of the fetal heart (Danalache, Gutkowska, Slusarz, Berezowska, & Jankowski, 2010). Oxytocin exerts protective and restorative effects in part through its capacity to convert undifferentiated stem cells into cardiomyocytes. Oxytocin can facilitate adult neurogenesis and tissue repair, especially after a stressful experience. We now know that oxytocin has direct anti-inflammatory and antioxidant properties in in vitro models of atherosclerosis (Szeto et al., 2008). The heart seems to rely on oxytocin as part of a normal process of protection and self-healing. Thus, oxytocin exposure early in life not only regulates our ability to love and form social bonds, it also affects our health and well-being. Oxytocin modulates the hypothalamic–pituitary adrenal (HPA) axis, especially in response to disruptions in homeostasis (Carter, 1998), and coordinates demands on the immune system and energy balance. Long-term, secure relationships provide emotional support and down-regulate reactivity of the HPA axis, whereas intense stressors, including birth, trigger activation of the HPA axis and sympathetic nervous system. The ability of oxytocin to regulate these systems probably explains the exceptional capacity of most women to cope with the challenges of childbirth and childrearing. Dozens of ongoing clinical trials are currently attempting to examine the therapeutic potential of oxytocin in disorders ranging from autism to heart disease. Of course, as in hormonal studies in voles, the effects are likely to depend on the history of the individual and the context, and to be dose-dependent. As this research is emerging, a variety of individual differences and apparent discrepancies in the effects of exogenous oxytocin are being reported. Most of these studies do not include any information on the endogenous hormones, or on the oxytocin or vasopressin receptors, which are likely to affect the outcome of such treatments. Conclusion Research in this field is new and there is much left to understand. However, it is already clear that both love and oxytocin are powerful. Of course, with power comes responsibility. Although research into mechanisms through which love—or hormones such as oxytocin—may protect us against stress and disease is in its infancy, this knowledge will ultimately increase our understanding of the way that our emotions impact upon health and disease. The same molecules that allow us to give and receive love also link our need for others with health and well-being. Acknowledgments C. Sue Carter and Stephen W. 
Porges are both Professors of Psychiatry at the University of North Carolina, Chapel Hill, and also are Research Professors of Psychology at Northeastern University, Boston. Discussions of “love and forgiveness” with members of the Fetzer Institute’s Advisory Committee on Natural Sciences led to this essay and are gratefully acknowledged here. We are especially appreciative of thoughtful editorial input from Dr. James Harris. Studies from the authors’ laboratories were sponsored by the National Institutes of Health. We also express our gratitude for this support and to our colleagues, whose input and hard work informed the ideas expressed in this article. A version of this paper was previously published in EMBO Reports in the series on “Sex and Society”; this paper is reproduced with the permission of the publishers of that journal. Outside Resources Book: C. S. Carter, L. Ahnert et al. (Eds.), (2006). Attachment and bonding: A new synthesis. Cambridge, MA: MIT Press. Book: Porges, S.W. (2011). The polyvagal theory: Neurophysiological foundations of emotions, attachment, communication and self-regulation. New York, NY: Norton. Web: Database of publicly and privately supported clinical studies of human participants conducted around the world. http://www.clinicaltrials.gov Web: PubMed comprises over 22 million citations for biomedical literature from MEDLINE, life science journals, and online books. PubMed citations and abstracts include the fields of biomedicine and health, covering portions of the life sciences, behavioral sciences, chemical sciences, and bioengineering. PubMed also provides access to additional relevant web sites and links to the other NCBI molecular biology resources. http://www.ncbi.nlm.nih.gov/pubmed Web: Website of author Stephen Porges http://www.stephenporges.com/ Discussion Questions 1. If love is so important in human behavior, why is it so hard to describe and understand? 2. Discuss the role of evolution in understanding what humans call “love” or other forms of prosociality. 3. What are the common biological and neuroendocrine elements that appear in maternal love and adult-adult relationships? 4. Oxytocin and vasopressin are biochemically similar. What are some of the differences between the actions of oxytocin and vasopressin? 5. How may the properties of oxytocin and vasopressin help us understand the biological bases of love? 6. What are common features of the biochemistry of “love” and “safety,” and why are these important to human health? Vocabulary Epigenetics Heritable changes in gene activity that are not caused by changes in the DNA sequence. en.Wikipedia.org/wiki/Epigenetics Oxytocin A nine amino acid mammalian neuropeptide. Oxytocin is synthesized primarily in the brain, but also in other tissues such as uterus, heart and thymus, with local effects. Oxytocin is best known as a hormone of female reproduction due to its capacity to cause uterine contractions and eject milk. Oxytocin has effects on brain tissue, but also acts throughout the body in some cases as an antioxidant or anti-inflammatory. Vagus nerve The 10th cranial nerve. The mammalian vagus has an older unmyelinated branch which originates in the dorsal motor complex and a more recently evolved, myelinated branch, with origins in the ventral vagal complex including the nucleus ambiguous. The vagus is the primary source of autonomic-parasympathetic regulation for various internal organs, including the heart, lungs and other parts of the viscera. 
The vagus nerve is primarily sensory (afferent), transmitting abundant visceral input to the central nervous system. Vasopressin A nine amino acid mammalian neuropeptide. Vasopressin is synthesized primarily in the brain, but also may be made in other tissues. Vasopressin is best known for its effects on the cardiovascular system (increasing blood pressure) and also the kidneys (causing water retention). Vasopressin has effects on brain tissue, but also acts throughout the body.
textbooks/socialsci/Psychology/Introductory_Psychology/Psychology_(Noba)/Chapter_1%3A_Biological_Basis_of_Behavior/1.11%3A_Biochemistry_of_Love.txt
By Aneeq Ahmad Henderson State University The mammalian nervous system is a complex biological organ, which enables many animals, including humans, to function in a coordinated fashion. The original design of this system is preserved across many animals through evolution; thus, adaptive physiological and behavioral functions are similar across many animal species. Comparative study of physiological functioning in the nervous systems of different animals lends insights into their behavior and their mental processing and makes it easier for us to understand the human brain and behavior. In addition, studying the development of the nervous system in a growing human provides a wealth of information about changes in its form and the behaviors that result from these changes. The nervous system is divided into central and peripheral nervous systems, and the two heavily interact with one another. The peripheral nervous system controls volitional (somatic nervous system) and nonvolitional (autonomic nervous system) behaviors using cranial and spinal nerves. The central nervous system is divided into forebrain, midbrain, and hindbrain, and each division performs a variety of tasks; for example, the cerebral cortex in the forebrain houses sensory, motor, and associative areas that gather sensory information, process information for perception and memory, and produce responses based on incoming and inherent information. To study the nervous system, researchers have developed a number of methods over time; these methods include examining brain lesions, microscopy, electrophysiology, electroencephalography, and many scanning technologies. Learning Objectives • Describe and understand the development of the nervous system. • Learn and understand the two important parts of the nervous system. • Explain the two systems in the peripheral nervous system and what you know about the different regions and areas of the central nervous system. • Learn and describe different techniques of studying the nervous system. Understand which of these techniques are important for cognitive neuroscientists. • Describe the reasons for studying different nervous systems in animals other than human beings. Explain what lessons we learn from the evolutionary history of this organ. Evolution of the Nervous System Many scientists and thinkers (Cajal, 1937; Crick & Koch, 1990; Edelman, 2004) believe that the human nervous system is the most complex machine known to man. Its complexity points to one undeniable fact—that it has evolved slowly over time from simpler forms. Evolution of the nervous system is intriguing not because we can marvel at this complicated biological structure, but because it inherits a lineage from a long history of many less complex nervous systems (Figure 1.2.1), and it documents a record of adaptive behaviors observed in life forms other than humans. Thus, evolutionary study of the nervous system is important, and it is the first step in understanding its design, its workings, and its functional interface with the environment. The brains of some animals, like apes, monkeys, and rodents, are structurally similar to those of humans (Figure 1.2.1), while others are not (e.g., invertebrates, single-celled organisms). Does anatomical similarity of these brains suggest that behaviors that emerge in these species are also similar?
Indeed, many animals display behaviors that are similar to those of humans; e.g., apes use nonverbal communication signals with their hands and arms that resemble nonverbal forms of communication in humans (Gardner & Gardner, 1969; Goodall, 1986; Knapp & Hall, 2009). If we study very simple behaviors, like physiological responses made by individual neurons, then brain-based behaviors of invertebrates (Kandel & Schwartz, 1982) look very similar to those of humans, suggesting that from time immemorial such basic behaviors have been conserved in the brains of many simple animal forms and in fact are the foundation of more complex behaviors in animals that evolved later (Bullock, 1984). Even at the micro-anatomical level, we note that individual neurons differ in complexity across animal species. Human neurons exhibit more intricate complexity than those of other animals; for example, neuronal processes (dendrites) in humans have many more branch points, branches, and spines. Complexity in the structure of the nervous system, both at the macro- and micro-levels, gives rise to complex behaviors. We can observe similar movements of the limbs, as in nonverbal communication, in apes and humans, but the variety and intricacy of nonverbal behaviors using hands in humans surpasses that of apes. Deaf individuals who use American Sign Language (ASL) express themselves in English nonverbally; they use this language with such fine gradation that many accents of ASL exist (Walker, 1987). Complexity of behavior with increasing complexity of the nervous system, especially the cerebral cortex, can be observed in the genus Homo (Figure 1.2.2). If we compare the sophistication of material culture in Homo habilis (2 million years ago; brain volume ~650 cm3) and Homo sapiens (300,000 years to now; brain volume ~1400 cm3), the evidence shows that Homo habilis used crude stone tools, compared with the modern tools used by Homo sapiens to erect cities, develop written languages, embark on space travel, and study her own self. All of this is due to the increasing complexity of the nervous system. What has led to the complexity of the brain and nervous system through evolution, to its behavioral and cognitive refinement? Darwin (1859, 1871) proposed the two forces of natural and sexual selection as the engines behind this change. He prophesied, “psychology will be based on a new foundation, that of the necessary acquirement of each mental power and capacity by gradation”; that is, psychology will be based on evolution (Rosenzweig, Breedlove, & Leiman, 2002). Development of the Nervous System Where the study of change in the nervous system over eons is immensely captivating, studying the change in a single brain during individual development is no less engaging. In many ways the ontogeny (development) of the nervous system in an individual mimics the evolutionary advancement of this structure observed across many animal species. During development, the nervous tissue emerges from the ectoderm (one of the three layers of the mammalian embryo) through the process of neural induction. This process causes the formation of the neural tube, which extends in a rostrocaudal (head-to-tail) plane. The tube, which is hollow, seams itself shut in the rostrocaudal direction. In some disease conditions, the neural tube does not close caudally, resulting in an abnormality called spina bifida. In this pathological condition, the lumbar and sacral segments of the spinal cord are disrupted.
As gestation progresses, the neural tube balloons up (cephalization) at the rostral end, and the forebrain, midbrain, hindbrain, and spinal cord can be visually delineated (around day 40). About 50 days into gestation, six cephalic areas can be anatomically discerned (also see below for a more detailed description of these areas). The progenitor cells (neuroblasts) that form the lining (neuroepithelium) of the neural tube generate all the neurons and glial cells of the central nervous system. During early stages of this development, neuroblasts rapidly divide and specialize into many varieties of neurons and glial cells, but this proliferation of cells is not uniform along the neural tube—that is why we see the forebrain and hindbrain expand into larger cephalic tissues than the midbrain. The neuroepithelium also generates a group of specialized cells that migrate outside the neural tube to form the neural crest. This structure gives rise to sensory and autonomic neurons in the peripheral nervous system. The Structure of the Nervous System The mammalian nervous system is divided into central and peripheral nervous systems. The Peripheral Nervous System The peripheral nervous system is divided into somatic and autonomic nervous systems (Figure 3). Whereas the somatic nervous system consists of cranial nerves (12 pairs) and spinal nerves (31 pairs) and is under the volitional control of the individual in maneuvering bodily muscles, the autonomic nervous system, which also runs through these nerves, gives the individual little voluntary control over the muscles and glands it serves. The main divisions of the autonomic nervous system that control visceral structures are the sympathetic and parasympathetic nervous systems. At an appropriate cue (say a fear-inducing object like a snake), the sympathetic division generally energizes many muscles (e.g., heart) and glands (e.g., adrenals), causing activity and release of hormones that lead the individual to negotiate the fear-causing snake with fight-or-flight responses. Whether the individual decides to fight the snake or run away from it, either action requires energy; in short, the sympathetic nervous system says “go, go, go.” The parasympathetic nervous system, on the other hand, curtails undue energy mobilization into muscles and glands and modulates the response by saying “stop, stop, stop.” This push–pull tandem system regulates fight-or-flight responses in all of us. The Central Nervous System The central nervous system is divided into a number of important parts (see Figure 1.2.4), including the spinal cord, each specialized to perform a set of specific functions. The telencephalon, or cerebrum, is a newer development in the evolution of the mammalian nervous system. In humans, it is about the size of a large napkin, and when crumpled into the skull it forms furrows called sulci (singular form, sulcus). The bulges between sulci are called gyri (singular form, gyrus). The cortex is divided into two hemispheres, and each hemisphere is further divided into four lobes (Figure 5a), which have specific functions. The division of these lobes is based on two delineating sulci: the central sulcus divides the hemisphere into frontal and parietal-occipital lobes, and the lateral sulcus marks the temporal lobe, which lies below. Just in front of the central sulcus lies an area called the primary motor cortex (precentral gyrus), which connects to the muscles of the body and, on volitional command, moves them.
From mastication to movements in the genitalia, the body map is represented on this strip (Figure 1.2.6). Some body parts, like fingers, thumbs, and lips, occupy a greater representation on the strip than, say, the trunk. This disproportionate representation of the body on the primary motor cortex is called the magnification factor (Rolls & Cowey, 1970) and is seen in other motor and sensory areas. At the lower end of the central sulcus, close to the lateral sulcus, lies Broca’s area (Figure 1.2.8) in the left frontal lobe, which is involved with language production. Damage to this part of the brain led Pierre Paul Broca, a French physician, to document in 1861 many different forms of aphasia, in which his patients would lose the ability to speak or would retain only partial speech impoverished in syntax and grammar (AAAS, 1880). It is no wonder that others have found subvocal rehearsal and central executive processes of working memory in this frontal lobe (Smith & Jonides, 1997, 1999). Just behind the central sulcus, in the parietal lobe, lies the primary somatosensory cortex (Figure 1.2.7) on the postcentral gyrus, which represents the whole body, receiving inputs from the skin and muscles. The primary somatosensory cortex parallels, abuts, and connects heavily to the primary motor cortex and resembles it in terms of areas devoted to bodily representation. All spinal and some cranial nerves (e.g., the facial nerve) send sensory signals from skin (e.g., touch) and muscles to the primary somatosensory cortex. Close to the lower (ventral) end of this strip, curved inside the parietal lobe, is the taste area (secondary somatosensory cortex), which is involved with taste experiences that originate from the tongue, pharynx, epiglottis, and so forth. Just below the parietal lobe, and under the caudal end of the lateral fissure, in the temporal lobe, lies Wernicke’s area (Demonet et al., 1992). This area is involved with language comprehension and is connected to Broca’s area through the arcuate fasciculus, a bundle of nerve fibers that connects these two regions. Damage to Wernicke’s area (Figure 1.2.8) results in many kinds of agnosias; agnosia is defined as an inability to know or understand language and speech-related behaviors. So an individual may show word deafness, which is an inability to recognize spoken language, or word blindness, which is an inability to recognize written or printed language. Close to Wernicke’s area is the primary auditory cortex, which is involved with audition, and finally the brain region devoted to smell (olfaction) is tucked away in the primary olfactory cortex (prepyriform cortex). At the very back of the cerebral cortex lies the occipital lobe, housing the primary visual cortex. The optic nerves travel all the way to the thalamus (lateral geniculate nucleus, LGN) and then to the visual cortex, where images that are received on the retina are projected (Hubel, 1995). In the past 50 to 60 years, the visual sense and visual pathways have been studied extensively, and our understanding of them has increased manifold. We now understand that images formed on the retina are transformed (transduction) into neural signals that are handed down to the visual cortex for further processing.
In the visual cortex, all attributes (features) of the image, such as the color, texture, and orientation, are decomposed and processed by different visual cortical modules (Van Essen, Anderson & Felleman, 1992) and then recombined to give rise to a singular perception of the image in question. If we cut the cerebral hemispheres in the middle, a new set of structures comes into view. Many of these perform different functions vital to our being. For example, the limbic system contains a number of nuclei that process memory (hippocampus and fornix) and attention and emotions (cingulate gyrus); the globus pallidus is involved with motor movements and their coordination; the hypothalamus and thalamus are involved with drives, motivations, and trafficking of sensory and motor throughputs. The hypothalamus plays a key role in regulating endocrine hormones in conjunction with the pituitary gland, which extends from the hypothalamus through a stalk (the infundibulum). As we descend below the thalamus, the midbrain comes into view, with the superior and inferior colliculi, which process visual and auditory information respectively; the substantia nigra, which is involved in the notorious Parkinson’s disease; and the reticular formation, which regulates arousal, sleep, and temperature. A little lower lies the hindbrain, where the pons processes sensory and motor information carried by the cranial nerves, works as a bridge that connects the cerebral cortex with the medulla, and reciprocally transfers information back and forth between the brain and the spinal cord. The medulla oblongata processes breathing, digestion, heart and blood vessel function, swallowing, and sneezing. The cerebellum controls motor movement coordination, balance, equilibrium, and muscle tone. The midbrain and the hindbrain, which make up the brain stem, culminate in the spinal cord. In the cerebral cortex, the gray matter (neuronal cell bodies) lies on the outside and the white matter (myelinated axons) on the inside; in the spinal cord this arrangement is reversed, as the gray matter resides inside and the white matter outside. Paired nerve roots exit the spinal cord, some toward the back (dorsal) and others toward the front (ventral). The dorsal roots (afferent) receive sensory information from skin and muscles, and the ventral roots (efferent) send signals to muscles and organs to respond. Studying the Nervous System The study of the nervous system involves anatomical and physiological techniques that have improved over the years in efficiency and caliber. Clearly, the gross morphology of the nervous system can be examined with a naked-eye view of the brain and the spinal cord. However, to resolve minute components, optical and electron microscopic techniques are needed. Light microscopes and, later, electron microscopes have changed our understanding of the intricate connections that exist among nerve cells. For example, modern staining procedures (immunocytochemistry) make it possible to see selected neurons that are of one type or another or are affected by growth. With the better resolution of electron microscopes, fine structures like the synaptic cleft between the pre- and post-synaptic neurons can be studied in detail. Along with these neuroanatomical techniques, a number of other methodologies aid neuroscientists in studying the function and physiology of the nervous system.
Early on, lesion studies in animals (and the study of neurological damage in humans) provided information about the function of the nervous system, by ablating (removing) parts of the nervous system or using neurotoxins to destroy them and documenting the effects on behavior or mental processes. Later, more sophisticated microelectrode techniques were introduced, which led to recording from single neurons in animal brains and investigating their physiological functions. Such studies led to formulating theories about how sensory and motor information are processed in the brain. To study many neurons (millions of them at a time), electroencephalographic (EEG) techniques were introduced. These methods are used to study how large ensembles of neurons, representing different parts of the nervous system, function together, either with stimulation (event-related potentials) or without it. In addition, many scanning techniques that visualize the brain are used in conjunction with the methods mentioned above to understand the details of the structure and function of the brain. These include computerized axial tomography (CAT), which uses X-rays to capture many pictures of the brain and sandwiches them into 3-D models to study it. The resolution of this method is inferior to that of magnetic resonance imaging (MRI), yet another way to capture brain images, which uses large magnets that make hydrogen nuclei in the brain wobble (precession). Although the resolution of MRI scans is much better than that of CAT scans, they do not provide any functional information about the brain. Positron Emission Tomography (PET) involves the acquisition of physiologic (functional) images of the brain based on the detection of positrons. Radio-labeled isotopes of certain chemicals, such as an analog of glucose (fluorodeoxyglucose), enter active nerve cells and emit positrons, which are captured and mapped into scans. Such scans show how the brain and its many modules become active (or not) as they take up the glucose analog. Disadvantages of PET scans include their invasiveness and relatively poor spatial resolution. The latter is why modern PET machines are coupled with CAT scanners to gain better resolution of the functioning brain. Finally, to avoid the invasiveness of PET, functional MRI (fMRI) techniques were developed. fMRI visualizes brain function through changes in blood flow in brain areas over time. These scans provide a wealth of functional information about the brain while the individual engages in a task, which is why the last two methods of brain scanning are very popular among cognitive neuroscientists. Understanding the nervous system has been a long journey of inquiry, spanning several hundreds of years of meticulous studies carried out by some of the most creative and versatile investigators in the fields of philosophy, evolution, biology, physiology, anatomy, neurology, neuroscience, cognitive sciences, and psychology. Despite our profound understanding of this organ, its mysteries continue to surprise us, and its intricacies make us marvel at this complex structure unmatched in the universe. Outside Resources Video: Pt. 1 video on the anatomy of the nervous system Video: Pt. 2 video on the anatomy of the nervous system Video: To look at functions of the brain and neurons Web: To look at different kinds of brains, visit http://brainmuseum.org/ Discussion Questions 1. Why is it important to study the nervous system in an evolutionary context? 2.
How can we compare changes in the nervous system made through evolution to changes made during development? 3. What are the similarities and differences between the somatic and autonomic nervous systems? 4. Describe functions of the midbrain and hindbrain. 5. Describe the anatomy and functions of the forebrain. 6. Compare and contrast electroencephalograms to electrophysiological techniques. 7. Which brain scan methodologies are important for cognitive scientists? Why? Vocabulary Afferent nerves Nerves that carry messages to the brain or spinal cord. Agnosias Due to damage of Wernicke’s area. An inability to recognize objects, words, or faces. Aphasia Due to damage of the Broca’s area. An inability to produce or understand words. Arcuate fasciculus A fiber tract that connects Wernicke’s and Broca’s speech areas. Autonomic nervous system A part of the peripheral nervous system that connects to glands and smooth muscles. Consists of sympathetic and parasympathetic divisions. Broca’s area An area in the frontal lobe of the left hemisphere. Implicated in language production. Central sulcus The major fissure that divides the frontal and the parietal lobes. Cerebellum A nervous system structure behind and below the cerebrum. Controls motor movement coordination, balance, equilibrium, and muscle tone. Cerebrum Consists of left and right hemispheres that sit at the top of the nervous system and engages in a variety of higher-order functions. Cingulate gyrus A medial cortical portion of the nervous tissue that is a part of the limbic system. Computerized axial tomography A noninvasive brain-scanning procedure that uses X-ray absorption around the head. Ectoderm The outermost layer of a developing fetus. Efferent nerves Nerves that carry messages from the brain to glands and organs in the periphery. Electroencephalography A technique that is used to measure gross electrical activity of the brain by placing electrodes on the scalp. Event-related potentials A physiological measure of large electrical change in the brain produced by sensory stimulation or motor responses. Forebrain A part of the nervous system that contains the cerebral hemispheres, thalamus, and hypothalamus. Fornix (plural form, fornices) A nerve fiber tract that connects the hippocampus to mammillary bodies. Frontal lobe The most forward region (close to forehead) of the cerebral hemispheres. Functional magnetic resonance imaging (or fMRI) A noninvasive brain-imaging technique that registers changes in blood flow in the brain during a given task (also see magnetic resonance imaging). Globus pallidus A nucleus of the basal ganglia. Gray matter Composes the bark or the cortex of the cerebrum and consists of the cell bodies of the neurons (see also white matter). Gyrus (plural form, gyri) A bulge that is raised between or among fissures of the convoluted brain. Hippocampus (plural form, hippocampi) A nucleus inside (medial) the temporal lobe implicated in learning and memory. Homo habilis A human ancestor, handy man, that lived two million years ago. Homo sapiens Modern man, the only surviving form of the genus Homo. Hypothalamus Part of the diencephalon. Regulates biological drives with pituitary gland. Immunocytochemistry A method of staining tissue including the brain, using antibodies. Lateral geniculate nucleus (or LGN) A nucleus in the thalamus that is innervated by the optic nerves and sends signals to the visual cortex in the occipital lobe. 
Lateral sulcus The major fissure that delineates the temporal lobe below the frontal and the parietal lobes. Lesion studies A surgical method in which a part of the animal brain is removed to study its effects on behavior or function. Limbic system A loosely defined network of nuclei in the brain involved with learning and emotion. Magnetic resonance imaging Or MRI is a brain imaging noninvasive technique that uses magnetic energy to generate brain images (also see fMRI). Magnification factor Cortical space projected by an area of sensory input (e.g., mm of cortex per degree of visual field). Medulla oblongata An area just above the spinal cord that processes breathing, digestion, heart and blood vessel function, swallowing, and sneezing. Neural crest A set of primordial neurons that migrate outside the neural tube and give rise to sensory and autonomic neurons in the peripheral nervous system. Neural induction A process that causes the formation of the neural tube. Neuroblasts Brain progenitor cells that asymmetrically divide into other neuroblasts or nerve cells. Neuroepithelium The lining of the neural tube. Occipital lobe The back part of the cerebrum, which houses the visual areas. Parasympathetic nervous system A division of the autonomic nervous system that is slower than its counterpart—that is, the sympathetic nervous system—and works in opposition to it. Generally engaged in “rest and digest” functions. Parietal lobe An area of the cerebrum just behind the central sulcus that is engaged with somatosensory and gustatory sensation. Pons A bridge that connects the cerebral cortex with the medulla, and reciprocally transfers information back and forth between the brain and the spinal cord. Positron Emission Tomography (or PET) An invasive procedure that captures brain images with positron emissions from the brain after the individual has been injected with radio-labeled isotopes. Primary Motor Cortex A strip of cortex just in front of the central sulcus that is involved with motor control. Primary Somatosensory Cortex A strip of cerebral tissue just behind the central sulcus engaged in sensory reception of bodily sensations. Rostrocaudal A front-back plane used to identify anatomical structures in the body and the brain. Somatic nervous system A part of the peripheral nervous system that uses cranial and spinal nerves in volitional actions. Spina bifida A developmental disease of the spinal cord, where the neural tube does not close caudally. Sulcus (plural form, sulci) The crevices or fissures formed by convolutions in the brain. Sympathetic nervous system A division of the autonomic nervous system, that is faster than its counterpart that is the parasympathetic nervous system and works in opposition to it. Generally engaged in “fight or flight” functions. Temporal lobe An area of the cerebrum that lies below the lateral sulcus; it contains auditory and olfactory (smell) projection regions. Thalamus A part of the diencephalon that works as a gateway for incoming and outgoing information. Transduction A process in which physical energy converts into neural energy. Wernicke’s area A language area in the temporal lobe where linguistic information is comprehended (Also see Broca’s area). White matter Regions of the nervous system that represent the axons of the nerve cells; whitish in color because of myelination of the nerve cells. Working memory Short transitory memory processed in the hippocampus.
textbooks/socialsci/Psychology/Introductory_Psychology/Psychology_(Noba)/Chapter_1%3A_Biological_Basis_of_Behavior/1.2%3A_The_Nervous_System.txt
By Sharon Furtak California State University, Sacramento This module on the biological basis of behavior provides an overview of the basic structure of neurons and their means of communication. Neurons, cells in the central nervous system, receive information from our sensory systems (vision, audition, olfaction, gustation, and somatosensation) about the world around us; in turn, they plan and execute appropriate behavioral responses, including attending to a stimulus, learning new information, speaking, eating, mating, and evaluating potential threats. The goal of this module is to become familiar with the anatomical structure of neurons and to understand how neurons communicate by electrochemical signals to process sensory information and produce complex behaviors through networks of neurons. Having a basic knowledge of the fundamental structure and function of neurons is a necessary foundation as you move forward in the field of psychology. Learning objectives • Differentiate the functional roles of the two main cell classes in the brain, neurons and glia. • Describe how the forces of diffusion and electrostatic pressure work collectively to facilitate electrochemical communication. • Define resting membrane potential, excitatory postsynaptic potentials, inhibitory postsynaptic potentials, and action potentials. • Explain features of axonal and synaptic communication in neurons. Introduction Imagine trying to string words together into a meaningful sentence without knowing the meaning of each word or its function (i.e., Is it a verb, a noun, or an adjective?). In a similar fashion, to appreciate how groups of cells work together in a meaningful way in the brain as a whole, we must first understand how individual cells in the brain function. Much like words, brain cells, called neurons, have an underlying structure that provides the foundation for their functional purpose. Have you ever seen a neuron? Did you know that the basic structure of a neuron is similar whether it is from the brain of a rat or a human? How do the billions of neurons in our brain allow us to do all the fun things we enjoy, such as texting a friend, cheering on our favorite sports team, or laughing? Our journey in answering these questions begins more than 100 years ago with a scientist named Santiago Ramón y Cajal. Ramón y Cajal (1911) boldly concluded that discrete individual neurons are the structural and functional units of the nervous system. He based his conclusion on the numerous drawings he made of Golgi-stained tissue, a stain named after the scientist who discovered it, Camillo Golgi. Scientists use several types of stains to visualize cells. Each stain works in a unique way, which causes stained cells to look different when viewed under a microscope. For example, a very common Nissl stain labels only the main part of the cell (i.e., the cell body; see left and middle panels of Figure 1.3.1). In contrast, a Golgi stain fills the cell body and all the processes that extend outward from it (see right panel of Figure 1.3.1). Another notable characteristic of a Golgi stain is that it stains only approximately 1–2% of neurons (Pasternak & Woolsey, 1975; Smit & Colon, 1969), permitting the observer to distinguish one cell from another. These qualities allowed Cajal to examine the full anatomical structure of individual neurons for the first time. This significantly enhanced our appreciation of the intricate networks their processes form.
Based on his observation of Golgi-stained tissue, Cajal suggested neurons were distinguishable processing units rather than continuous structures. This was in opposition to the dominant theory at the time proposed by Joseph von Gerlach, which stated that the nervous system was composed of a continuous network of nerves (for review see, Lopez-Munoz, Boya, & Alamo, 2006). Camillo Golgi himself had been an avid supporter of Gerlach’s theory. Despite their scientific disagreement, Cajal and Camillo Golgi shared the Nobel Prize for Medicine in 1906 for their combined contribution to the advancement of science and our understanding of the structure of the nervous system. This seminal work paved the pathway to our current understanding of the basic structure of the nervous system described in this module (for review see: De Carlos & Borrell, 2007; Grant, 2007). Before moving forward, there will be an introduction to some basic terminology regarding the anatomy of neurons in the section called “The Structure of the Neuron,” below. Once we have reviewed this fundamental framework, the remainder of the module will focus on the electrochemical signals through which neurons communicate. While the electrochemical process might sound intimidating, it will be broken down into digestible sections. The first subsection, “Resting Membrane Potential,” describes what occurs in a neuron at rest, when it is theoretically not receiving or sending signals. Building upon this knowledge, we will examine the electrical conductance that occurs within a single neuron when it receives signals. Finally, the module will conclude with a description of the electrical conductance, which results in communication between neurons through a release of chemicals. At the end of the module, you should have a broad concept of how each cell and large groups of cells send and receive information by electrical and chemical signals. A note of encouragement: This module introduces a vast amount of technical terminology that at times may feel overwhelming. Do not get discouraged or bogged down in the details. Utilize the glossary at the end of the module as a quick reference guide; tab the glossary page so that you can easily refer to it while reading the module. The glossary contains all terms in bold typing. Terms in italics are additional significant terms that may appear in other modules but are not contained within the glossary. On your first read of this module, I suggest focusing on the broader concepts and functional aspects of the terms instead of trying to commit all the terminology to memory. That is right, I said read first! I highly suggest reading this module at least twice, once prior to and again following the course lecture on this material. Repetition is the best way to gain clarity and commit to memory the challenging concepts and detailed vocabulary presented here. The Structure of the Neuron Basic Nomenclature There are approximately 100 billion neurons in the human brain (Williams & Herrup, 1988). Each neuron has three main components: dendrites, the soma, and the axon (see Figure 1.3.2). Dendrites are processes that extend outward from the soma, or cell body, of a neuron and typically branch several times. Dendrites receive information from thousands of other neurons and are the main source of input of the neuron. The nucleus, which is located within the soma, contains genetic information, directs protein synthesis, and supplies the energy and the resources the neuron needs to function. 
The main source of output of the neuron is the axon. The axon is a process that extends far away from the soma and carries an important signal called an action potential to another neuron. The place at which the axon of one neuron comes into close contact with the dendrite of another neuron is a synapse (see Figures 1.3.2–1.3.3). Typically, the axon of a neuron is covered with an insulating substance called a myelin sheath that allows the signal and communication of one neuron to travel rapidly to another neuron. The axon splits many times, so that it can communicate, or synapse, with several other neurons (see Figure 1.3.2). At the end of the axon is a terminal button, which forms synapses with spines, or protrusions, on the dendrites of neurons. Synapses form between the presynaptic terminal button (the neuron sending the signal) and the postsynaptic membrane (the neuron receiving the signal; see Figure 1.3.3). Here we will focus specifically on synapses between the terminal button of an axon and a dendritic spine; however, synapses can also form between the terminal button of an axon and the soma or the axon of another neuron. A very small space called a synaptic gap or a synaptic cleft, approximately 5 nm (nanometers), exists between the presynaptic terminal button and the postsynaptic dendritic spine. To give you a better idea of the size, a dime is 1.35 mm (millimeter) thick. There are 1,350,000 nm in the thickness of a dime. In the presynaptic terminal button, there are synaptic vesicles that package together groups of chemicals called neurotransmitters (see Figure 1.3.3). Neurotransmitters are released from the presynaptic terminal button, travel across the synaptic gap, and activate ion channels on the postsynaptic spine by binding to receptor sites. We will discuss the role of receptors in more detail later in the module. Types of Cells in the Brain Not all neurons are created equal! There are neurons that help us receive information about the world around us: sensory neurons. There are motor neurons that allow us to initiate movement and behavior, ultimately allowing us to interact with the world around us. Finally, there are interneurons, which process the sensory input from our environment into meaningful representations, plan the appropriate behavioral response, and connect to the motor neurons to execute these behavioral plans. There are three main categories of neurons, each defined by its specific structure. The structures of these three different types of neurons support their unique functions. Unipolar neurons are structured in a way that is ideal for relaying information forward: they have one neurite (axon) and no dendrites. They are involved in the transmission of physiological information from the body’s periphery, such as communicating body temperature through the spinal cord up to the brain. Bipolar neurons are involved in sensory perception, such as the perception of light in the retina of the eye. They have one axon and one dendrite, which help acquire and pass sensory information to various centers in the brain. Finally, multipolar neurons are the most common, and they communicate sensory and motor information in the brain. For example, their firing causes muscles in the body to contract. Multipolar neurons have one axon and many dendrites, which allows them to communicate with other neurons. One of the most prominent neurons is the pyramidal neuron, which falls under the multipolar category.
It gets its name from the triangular or pyramidal shape of its soma (for examples, see Furtak, Moyer, & Brown, 2007). In addition to neurons, there is a second type of cell in the brain called glial cells. Glial cells have several functions, just a few of which we will discuss here. One type of glial cell, called oligodendroglia, forms the myelin sheaths mentioned above (Simons & Trotter, 2007; see Fig. 1.3.2). Oligodendroglia wrap their processes around the axons of neurons many times to form the myelin sheath. One cell will form the myelin sheath on several axons. Other types of glial cells, such as microglia and astrocytes, digest the debris of dead neurons, carry nutritional support from blood vessels to the neurons, and help to regulate the ionic composition of the extracellular fluid. While glial cells play a vital role in neuronal support, they do not participate in the communication between cells in the same fashion as neurons do. Communication Within and Between Neurons Thus far, we have described the main characteristics of neurons, including how their processes come in close contact with one another to form synapses. In this section, we consider the conduction of communication within a neuron and how this signal is transmitted to the next neuron. There are two stages of this electrochemical action in neurons. The first stage is the electrical conduction of dendritic input to the initiation of an action potential within a neuron. The second stage is a chemical transmission across the synaptic gap between the presynaptic neuron and the postsynaptic neuron of the synapse. To understand these processes, we first need to consider what occurs within a neuron when it is at a steady state, called resting membrane potential. Resting Membrane Potential The intracellular (inside the cell) fluid and extracellular (outside the cell) fluid of neurons are composed of a combination of ions (electrically charged molecules; see Figure 1.3.4). Cations are positively charged ions, and anions are negatively charged ions. The composition of intracellular and extracellular fluid is similar to salt water, containing sodium (Na+), potassium (K+), chloride (Cl-), and anions (A-). The cell membrane, which is composed of a lipid bilayer of fat molecules, separates the cell from the surrounding extracellular fluid. There are proteins that span the membrane, forming ion channels that allow particular ions to pass between the intracellular and extracellular fluid (see Figure 1.3.4). These ions are in different concentrations inside the cell relative to outside the cell, and the ions have different electrical charges. Due to this difference in concentration and charge, two forces act to maintain a steady state when the cell is at rest: diffusion and electrostatic pressure. Diffusion is the force on molecules to move from areas of high concentration to areas of low concentration. Electrostatic pressure is the force on two ions with similar charge to repel each other and the force of two ions with opposite charge to attract to one another. Remember the saying, opposites attract? Regardless of the ion, there exists a membrane potential at which the force of diffusion is equal and opposite to the force of electrostatic pressure. This voltage, called the equilibrium potential, is the voltage at which there is no net flow of that ion.
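The equilibrium potential for a single ion can be estimated with the Nernst equation, and the resting potential described in the next paragraph reflects several ions combined, each weighted by how permeable the membrane is to it, which the Goldman equation captures. The short Python sketch below works through both calculations; the concentrations and relative permeabilities are typical textbook values used purely for illustration (they are not taken from this module), so treat the numbers as a rough sanity check rather than exact physiology.

```python
import math

# Physical constants
R = 8.314    # gas constant, J/(mol*K)
F = 96485    # Faraday constant, C/mol
T = 310.0    # body temperature (~37 degrees C) in kelvin

# Illustrative ion concentrations in millimolar (assumed typical values, not from this module)
ions = {
    #       (inside, outside, charge z)
    "K+":  (140.0,   5.0, +1),
    "Na+": ( 15.0, 145.0, +1),
    "Cl-": ( 10.0, 110.0, -1),
}

def nernst(inside, outside, z):
    """Equilibrium potential (volts) at which diffusion and electrostatic
    pressure balance for a single ion species."""
    return (R * T) / (z * F) * math.log(outside / inside)

for name, (inside, outside, z) in ions.items():
    print(f"E_{name}: {nernst(inside, outside, z) * 1000:6.1f} mV")
# With these numbers, K+ comes out near -90 mV, Na+ near +60 mV, Cl- near -64 mV.

def goldman(p_k, p_na, p_cl):
    """Resting potential (volts) from the Goldman equation, which weights each
    ion by its relative membrane permeability at rest."""
    k_in, k_out, _ = ions["K+"]
    na_in, na_out, _ = ions["Na+"]
    cl_in, cl_out, _ = ions["Cl-"]
    num = p_k * k_out + p_na * na_out + p_cl * cl_in   # note: Cl- terms are flipped
    den = p_k * k_in + p_na * na_in + p_cl * cl_out
    return (R * T) / F * math.log(num / den)

# At rest the membrane is far more permeable to K+ (and Cl-) than to Na+.
print(f"Resting potential: {goldman(1.0, 0.04, 0.45) * 1000:.0f} mV")  # roughly -67 mV
```

With these illustrative numbers, the weighted combination lands in the neighborhood of the -70 mV resting potential quoted below, which is one way to see why the resting potential sits closest to the equilibrium potential of the ion the membrane is most permeable to at rest, K+.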
Since there are several ions that can permeate the cell’s membrane, the baseline electrical charge inside the cell compared with outside the cell, referred to as the resting membrane potential, is based on the collective force acting on several ions. Relative to the extracellular fluid, the membrane potential of a neuron at rest is negatively charged, at approximately -70 mV (see Figure 1.3.5). These are very small voltages compared with the voltages of batteries and electrical outlets, which we encounter daily and which range from 1.5 to 240 V. Let us see how these two forces, diffusion and electrostatic pressure, act on the four groups of ions mentioned above. 1. Anions (A-): Anions are highly concentrated inside the cell and contribute to the negative charge of the resting membrane potential. Diffusion and electrostatic pressure are not forces that determine A- concentration because the cell membrane is impermeable to A-. There are no ion channels that allow A- to move between the intracellular and extracellular fluid. 2. Potassium (K+): The cell membrane is very permeable to potassium at rest, but potassium remains in high concentrations inside the cell. Diffusion pushes K+ outside the cell because it is in high concentration inside the cell. However, electrostatic pressure pushes K+ inside the cell because the positive charge of K+ is attracted to the negative charge inside the cell. In combination, these forces oppose one another with respect to K+. 3. Chloride (Cl-): The cell membrane is also very permeable to chloride at rest, but chloride remains in high concentration outside the cell. Diffusion pushes Cl- inside the cell because it is in high concentration outside the cell. However, electrostatic pressure pushes Cl- outside the cell because the negative charge of Cl- is attracted to the positive charge outside the cell. Similar to K+, these forces oppose one another with respect to Cl-. 4. Sodium (Na+): The cell membrane is not very permeable to sodium at rest. Diffusion pushes Na+ inside the cell because it is in high concentration outside the cell. Electrostatic pressure also pushes Na+ inside the cell because the positive charge of Na+ is attracted to the negative charge inside the cell. Both of these forces push Na+ inside the cell; however, Na+ cannot permeate the cell membrane and remains in high concentration outside the cell. The small amounts of Na+ inside the cell are removed by a sodium-potassium pump, which uses the neuron’s energy (adenosine triphosphate, ATP) to pump 3 Na+ ions out of the cell in exchange for bringing 2 K+ ions inside the cell. Action Potential Now that we have considered what occurs in a neuron at rest, let us consider what changes occur to the resting membrane potential when a neuron receives input, or information, from the presynaptic terminal button of another neuron. Our understanding of the electrical signals, or potentials, that occur within a neuron results from the seminal work of Hodgkin and Huxley that began in the 1930s at a well-known marine biology lab in Woods Hole, MA. Their work, for which they won the Nobel Prize in Medicine in 1963, has resulted in the general model of electrochemical transduction that is described here (Hodgkin & Huxley, 1952). Hodgkin and Huxley studied a very large axon in the squid, a common species for that region of the United States. The giant axon of the squid is roughly 100 times larger than the axons in the mammalian brain, making it much easier to see.
Activation of the giant axon is responsible for a withdrawal response the squid uses when trying to escape from a predator, such as a large fish, bird, shark, or even a human. When was the last time you had calamari? The large axon size is no mistake in nature’s design; it allows for very rapid transmission of an electrical signal, enabling the squid to make a swift escape from its predators. While studying this species, Hodgkin and Huxley noticed that if they applied an electrical stimulus to the axon, a large, transient electrical current conducted down the axon. This transient electrical current is known as an action potential (see Figure 1.3.5). An action potential is an all-or-nothing response that occurs when there is a change in the charge or potential of the cell from its resting membrane potential (-70 mV) in a more positive direction, which is a depolarization (see Figure 1.3.5). What is meant by an all-or-nothing response? I find that this concept is best compared to the binary code used in computers, where there are only two possibilities, 0 or 1. There is no halfway or in-between these possible values; for example, 0.5 does not exist in binary code. There are only two possibilities, either the value of 0 or the value of 1. The action potential is the same in this respect. There is no halfway; it occurs, or it does not occur. There is a specific membrane potential that the neuron must reach to initiate an action potential. This membrane potential, called the threshold of excitation, is typically around -50 mV. If the threshold of excitation is reached, then an action potential is triggered. How is an action potential initiated? At any one time, each neuron is receiving hundreds of inputs from the cells that synapse with it. These inputs can cause several types of fluctuations in the neuron’s membrane potential (see Figure 1.3.5): 1. excitatory postsynaptic potentials (EPSPs): a depolarizing current that causes the membrane potential to become more positive and closer to the threshold of excitation; or 2. inhibitory postsynaptic potentials (IPSPs): a hyperpolarizing current that causes the membrane potential to become more negative and further away from the threshold of excitation. These postsynaptic potentials, EPSPs and IPSPs, summate or add together in time and space. The IPSPs make the membrane potential more negative, but how much so depends on the strength of the IPSPs. The EPSPs make the membrane potential more positive; again, how much more positive depends on the strength of the EPSPs. If you have two small EPSPs at the same time and at the same synapse, then the result will be a larger EPSP. If you have a small EPSP and a small IPSP at the same time and at the same synapse, then they will cancel each other out. Unlike the action potential, which is an all-or-nothing response, IPSPs and EPSPs are smaller, graded potentials, varying in strength. The change in voltage during an action potential is approximately 100 mV. In comparison, EPSPs and IPSPs are changes in voltage of between 0.1 and 40 mV. They can be of different strengths, or gradients, and they are measured by how far the membrane potential diverges from the resting membrane potential. I know the concept of summation can be confusing. As a child, I used to play a game in elementary school with a very large parachute where you would try to knock balls out of the center of the parachute. This game illustrates the properties of summation rather well.
In this game, a group of children next to one another would work in unison to produce waves in the parachute in order to cause a wave large enough to knock the ball out of the parachute. The children would initiate the waves at the same time and in the same direction. The additive result was a larger wave in the parachute, and the balls would bounce out of the parachute. However, if the waves they initiated occurred in the opposite direction or with the wrong timing, the waves would cancel each other out, and the balls would remain in the center of the parachute. EPSPs and IPSPs in a neuron work in the same fashion as the waves in the parachute; they either add together or cancel each other out. If you have two EPSPs, then they sum together and become a larger depolarization. Similarly, if two IPSPs come into the cell at the same time, they will sum and become a larger hyperpolarization in membrane potential. However, if two inputs oppose one another, moving the potential in opposite directions, such as an EPSP and an IPSP, they cancel each other out. At any moment in time, each cell is receiving mixed messages, both EPSPs and IPSPs. If the summation of EPSPs is strong enough to depolarize the membrane potential to reach the threshold of excitation, then it initiates an action potential. The action potential then travels down the axon, away from the soma, until it reaches the end of the axon (the terminal button). In the terminal button, the action potential triggers the release of neurotransmitters from the presynaptic terminal button into the synaptic gap. These neurotransmitters, in turn, cause EPSPs and IPSPs in the postsynaptic dendritic spines of the next cell (see Figures 1.3.4 & 1.3.6). The neurotransmitter released from the presynaptic terminal button binds in a lock-and-key fashion with ionotropic receptors on the postsynaptic dendritic spine. Ionotropic receptors are receptors on ion channels that open, allowing some ions to enter or exit the cell, depending upon the presence of a particular neurotransmitter. The type of neurotransmitter and the permeability of the ion channel it activates will determine whether an EPSP or an IPSP occurs in the dendrite of the postsynaptic cell. These EPSPs and IPSPs summate in the same fashion described above, and the entire process occurs again in another cell. The Change in Membrane Potential During an Action Potential We discussed previously which ions are involved in maintaining the resting membrane potential. Not surprisingly, some of these same ions are involved in the action potential. When the cell becomes depolarized (more positively charged) and reaches the threshold of excitation, voltage-dependent Na+ channels open. A voltage-dependent ion channel is a channel that opens, allowing some ions to enter or exit the cell, when the cell reaches a particular membrane potential. When the cell is at the resting membrane potential, these voltage-dependent Na+ channels are closed. As we learned earlier, both diffusion and electrostatic pressure push Na+ inside the cell. However, Na+ cannot permeate the membrane when the cell is at rest. Now that these channels are open, Na+ rushes inside the cell, causing the cell to become very positively charged relative to the outside of the cell. This is responsible for the rising or depolarizing phase of the action potential (see Figure 1.3.5). The inside of the cell becomes very positively charged, reaching about +40 mV.
At this point, the Na+ channels close and become refractory. This means the Na+ channels cannot reopen until after the cell returns to the resting membrane potential. Thus, a new action potential cannot occur during the refractory period. The refractory period also ensures the action potential can only move in one direction down the axon, away from the soma. As the cell becomes more depolarized, a second type of voltage-dependent channel opens; this channel is permeable to K+. With the inside of the cell very positive relative to the outside (depolarized) and the concentration of K+ high within the cell, both the force of diffusion and the force of electrostatic pressure drive K+ outside of the cell. The movement of K+ out of the cell causes the cell potential to return to the resting membrane potential, the falling or hyperpolarizing phase of the action potential (see Figure 1.3.5). A short hyperpolarization occurs partially due to the gradual closing of the K+ channels. With the Na+ channels closed, electrostatic pressure continues to push K+ out of the cell. In addition, the sodium-potassium pump is pushing Na+ out of the cell. The cell returns to the resting membrane potential, and the excess extracellular K+ diffuses away. This exchange of Na+ and K+ ions happens very rapidly, in less than 1 msec. The action potential occurs in a wave-like motion down the axon until it reaches the terminal button. Only the ion channels in very close proximity to the action potential are affected. Earlier you learned that axons are covered in myelin. Let us consider how myelin speeds up the process of the action potential. There are gaps in the myelin sheaths called nodes of Ranvier. The myelin insulates the axon and does not allow any fluid to exist between the myelin and the cell membrane. Under the myelin, when the Na+ and K+ channels open, no ions flow between the intracellular and extracellular fluid. This saves the cell from having to expend the energy necessary to rectify or regain the resting membrane potential. (Remember, the pumps need ATP to run.) Under the myelin, the action potential degrades somewhat, but is still large enough in potential to trigger a new action potential at the next node of Ranvier. Thus, the action potential actively jumps from node to node; this process is known as saltatory conduction. In the presynaptic terminal button, the action potential triggers the release of neurotransmitters (see Figure 1.3.3). Neurotransmitters cross the synaptic gap and open subtypes of receptors in a lock-and-key fashion (see Figure 1.3.3). Depending on the type of neurotransmitter, an EPSP or IPSP occurs in the dendrite of the postsynaptic cell. Neurotransmitters that open Na+ or calcium (Ca2+) channels cause an EPSP; an example is the NMDA receptor, which is activated by glutamate (the main excitatory neurotransmitter in the brain). In contrast, neurotransmitters that open Cl- or K+ channels cause an IPSP; an example is the gamma-aminobutyric acid (GABA) receptor, which is activated by GABA, the main inhibitory neurotransmitter in the brain. Once the EPSPs and IPSPs occur at the postsynaptic site, the process of communication within and between neurons cycles on (see Figure 1.3.6). A neurotransmitter that does not bind to receptors is broken down and inactivated by enzymes or glial cells, or it is taken back into the presynaptic terminal button in a process called reuptake, which will be discussed further in the module on psychopharmacology.
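To tie together the pieces described in this section, graded EPSPs and IPSPs that summate, a threshold of excitation near -50 mV, the all-or-nothing spike, and the refractory period, here is a deliberately simplified "leaky integrate-and-fire" sketch in Python. This is a toy model rather than the Hodgkin and Huxley equations: the input sizes, leak factor, and number of refractory steps are made-up illustrative values chosen only to mimic the qualitative behavior described above.

```python
import random

REST = -70.0          # resting membrane potential, mV
THRESHOLD = -50.0     # threshold of excitation, mV
SPIKE_PEAK = 40.0     # approximate peak of the action potential, mV
REFRACTORY_STEPS = 3  # made-up number of steps during which no new spike can start

def simulate(n_steps=50, seed=1):
    v = REST
    refractory = 0
    random.seed(seed)
    for t in range(n_steps):
        if refractory > 0:
            # During the refractory period the cell cannot fire again;
            # here we simply hold it at rest and count the period down.
            refractory -= 1
            v = REST
            print(f"t={t:2d}  refractory, V={v:.1f} mV")
            continue

        # Mixed messages arriving at the synapses: EPSPs depolarize (positive),
        # IPSPs hyperpolarize (negative). Graded sizes, summed together.
        epsp = random.uniform(0, 12)   # mV, illustrative
        ipsp = -random.uniform(0, 4)   # mV, illustrative
        v += epsp + ipsp

        # "Leak": the potential drifts part of the way back toward rest each step.
        v += 0.2 * (REST - v)

        if v >= THRESHOLD:
            # All-or-nothing: once threshold is crossed, the spike always looks the same.
            print(f"t={t:2d}  ACTION POTENTIAL! peak ~{SPIKE_PEAK:.0f} mV, then reset")
            v = REST
            refractory = REFRACTORY_STEPS
        else:
            print(f"t={t:2d}  summed inputs, V={v:.1f} mV (below threshold)")

simulate()
```

Run as written, the summed inputs usually push the potential over threshold every so often, printing an identical "spike" line each time (all-or-nothing) followed by a few silent refractory steps; shrinking the EPSPs or enlarging the IPSPs quickly silences the cell, which is a compact way to see how the balance of excitation and inhibition determines whether a neuron fires.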
Outside Resources Video Series: Neurobiology/Biopsychology - Tutorial animations of action potentials, resting membrane potentials, and synaptic transmission. http://www.sumanasinc.com/webcontent/animations/neurobiology.html Video: An animation of an action potential Video: An animation of neurotransmitter actions at the synapse Video: An interactive animation that allows students to observe the results of manipulations to excitatory and inhibitory post-synaptic potentials. Also includes animations and explanations of transmission and neural circuits. apps.childrenshospital.org/clinical/animation/neuron/ Video: Another animation of an action potential Video: Another animation of neurotransmitter actions at the synapse Video: Domino Action Potential: This hands-on activity helps students grasp the complex process of the action potential, as well as become familiar with the characteristics of transmission (e.g., all-or-none response, refractory period). Video: For perspective on techniques in neuroscience to look inside the brain Video: The Behaving Brain is the third program in the DISCOVERING PSYCHOLOGY series. This program looks at the structure and composition of the human brain: how neurons function, how information is collected and transmitted, and how chemical reactions relate to thought and behavior. www.learner.org/series/discoveringpsychology/03/e03expand.html Video: You can grow new brain cells. Here's how. - Can we, as adults, grow new neurons? Neuroscientist Sandrine Thuret says that we can, and she offers research and practical advice on how we can help our brains better perform neurogenesis—improving mood, increasing memory formation and preventing the decline associated with aging along the way. Web: For more information on the Nobel Prize shared by Ramón y Cajal and Golgi http://www.nobelprize.org/nobel_priz...aureates/1906/ Discussion Questions 1. What structures of a neuron are the main input and output of that neuron? 2. What does the statement mean that communication within and between cells is an electrochemical process? 3. How does myelin increase the speed and efficiency of the action potential? 4. How do diffusion and electrostatic pressure contribute to the resting membrane potential and the action potential? 5. Describe the cycle of communication within and between neurons. Vocabulary Action potential A transient all-or-nothing electrical current that is conducted down the axon when the membrane potential reaches the threshold of excitation. Axon Part of the neuron that extends off the soma, splitting several times to connect with other neurons; main output of the neuron. Cell membrane A lipid bilayer of molecules that separates the cell from the surrounding extracellular fluid. Dendrite Part of a neuron that extends away from the cell body and is the main input to the neuron. Diffusion The force on molecules to move from areas of high concentration to areas of low concentration. Electrostatic pressure The force on two ions with similar charge to repel each other; the force of two ions with opposite charge to attract to one another. Excitatory postsynaptic potentials A depolarizing postsynaptic current that causes the membrane potential to become more positive and move towards the threshold of excitation. Inhibitory postsynaptic potentials A hyperpolarizing postsynaptic current that causes the membrane potential to become more negative and move away from the threshold of excitation.
Ion channels Proteins that span the cell membrane, forming channels that specific ions can flow through between the intracellular and extracellular space. Ionotropic receptor Ion channel that opens to allow ions to permeate the cell membrane under specific conditions, such as the presence of a neurotransmitter or a specific membrane potential. Myelin sheath Substance around the axon of a neuron that serves as insulation to allow the action potential to conduct rapidly toward the terminal buttons. Neurotransmitters Chemical substance released by the presynaptic terminal button that acts on the postsynaptic cell. Nucleus The organelle within the soma that contains the neuron’s genetic material and directs protein synthesis; the term also refers to a collection of nerve cells in the brain that typically serves a specific function. Resting membrane potential The voltage inside the cell relative to the voltage outside the cell while the cell is at rest (approximately -70 mV). Sodium-potassium pump A membrane transport protein that uses the neuron’s energy (adenosine triphosphate, ATP) to pump three Na+ ions outside the cell in exchange for bringing two K+ ions inside the cell. Soma Cell body of a neuron that contains the nucleus and genetic information, and directs protein synthesis. Spines Protrusions on the dendrite of a neuron that form synapses with terminal buttons of the presynaptic axon. Synapse Junction between the presynaptic terminal button of one neuron and the dendrite, axon, or soma of another postsynaptic neuron. Synaptic gap Also known as the synaptic cleft; the small space between the presynaptic terminal button and the postsynaptic dendritic spine, axon, or soma. Synaptic vesicles Groups of neurotransmitters packaged together and located within the terminal button. Terminal button The end of the axon that forms synapses with the postsynaptic dendrite, axon, or soma. Threshold of excitation Specific membrane potential that the neuron must reach to initiate an action potential.
textbooks/socialsci/Psychology/Introductory_Psychology/Psychology_(Noba)/Chapter_1%3A_Biological_Basis_of_Behavior/1.3%3A_Neurons.txt
By Robert Biswas-Diener Portland State University The brain is the most complex part of the human body. It is the center of consciousness and also controls all voluntary and involuntary movement and bodily functions. It communicates with each part of the body through the nervous system, a network of channels that carry electrochemical signals. Learning objectives • Name the various parts of the nervous system and their respective functions • Explain how neurons communicate with each other • Identify the location and function of the limbic system • Articulate how the primary motor cortex is an example of brain region specialization • Name at least three neuroimaging techniques and describe how they work In the 1800s a German scientist by the name of Ernst Weber conducted several experiments meant to investigate how people perceive the world via their own bodies (Hernstein & Boring, 1966). It is obvious that we use our sensory organs—our eyes, and ears, and nose—to take in and understand the world around us. Weber was particularly interested in the sense of touch. Using a drafting compass he placed the two points far apart and set them on the skin of a volunteer. When the points were far apart the research participants could easily distinguish between them. As Weber repeated the process with ever closer points, however, most people lost the ability to tell the difference between them. Weber discovered that the ability to recognize these “just noticeable differences” depended on where on the body the compass was positioned. Your back, for example, is far less sensitive to touch than is the skin on your face. Similarly, the tip of your tongue is extremely sensitive! In this way, Weber began to shed light on the way that nerves, the nervous system, and the brain form the biological foundation of psychological processes. In this module we will explore the biological side of psychology by paying particular attention to the brain and to the nervous system. Understanding the nervous system is vital to understanding psychology in general. It is through the nervous system that we experience pleasure and pain, feel emotions, learn and use language, and plan goals, just to name a few examples. In the pages that follow we will begin by examining how the human nervous system develops and then we will learn about the parts of the brain and how they function. We will conclude with a section on how modern psychologists study the brain. It is worth mentioning here, at the start, that an introduction to the biological aspects of psychology can be both the most interesting and most frustrating of all topics for new students of psychology. This is, in large part, due to the fact that there is so much new information to learn and new vocabulary associated with all the various parts of the brain and nervous system. In fact, there are 30 key vocabulary words presented in this module! We encourage you not to get bogged down in difficult words. Instead, pay attention to the broader concepts, perhaps even skipping over the vocabulary on your first reading. It is helpful to pass back through with a second reading, once you are already familiar with the topic, with attention to learning the vocabulary. Nervous System development across the human lifespan As a species, humans have evolved a complex nervous system and brain over millions of years. Comparisons of our nervous systems with those of other animals, such as chimpanzees, show some similarities (Darwin, 1859). 
Researchers can also use fossils to study the relationship between brain volume and human behavior over the course of evolutionary history. Homo habilis, for instance, a human ancestor living about 2 million years ago, shows a larger brain volume than its own ancestors but a far smaller one than modern Homo sapiens. The main difference between humans and other animals, in terms of brain development, is that humans have a much more developed frontal cortex (the front part of the brain associated with planning).

Interestingly, a person’s unique nervous system develops over the course of their lifespan in a way that resembles the evolution of nervous systems in animals across vast stretches of time. For example, the human nervous system begins developing even before a person is born. It begins as a simple bundle of tissue that forms into a tube and extends along the head-to-tail plane, becoming the spinal cord and brain. Twenty-five days into its development, the embryo has a distinct spinal cord, as well as hindbrain, midbrain, and forebrain (Stiles & Jernigan, 2010).

What, exactly, is this nervous system that is developing, and what does it do? The nervous system can be thought of as the body’s communication network, consisting of all nerve cells. There are many ways in which we can divide the nervous system to understand it more clearly. One common way to do so is by parsing it into the central nervous system and the peripheral nervous system. Each of these can be sub-divided, in turn. Let’s take a closer look at each. And don’t worry: the nervous system is complicated, with many parts and many new vocabulary words. It might seem overwhelming at first, but through the figures and a little study you can get it.

The Central Nervous System (CNS): The Neurons inside the Brain

The Central Nervous System, or CNS for short, is made up of the brain and spinal cord (see Figure 1.4.2). The CNS is the portion of the nervous system that is encased in bone (the brain is protected by the skull and the spinal cord is protected by the spinal column). It is referred to as “central” because it is the brain and spinal cord that are primarily responsible for processing sensory information—touching a hot stove or seeing a rainbow, for example—and sending signals to the peripheral nervous system for action. It communicates largely by sending electrical signals through individual nerve cells that make up the fundamental building blocks of the nervous system, called neurons. There are approximately 100 billion neurons in the human brain, and each has many contacts with other neurons, called synapses (Brodal, 1992).

If we were able to magnify a view of individual neurons we would see that they are cells made from distinct parts (see Figure 1.4.3). The three main components of a neuron are the dendrites, the soma, and the axon. Neurons communicate with one another by receiving information through the dendrites, which act as antennae. When the dendrites channel this information to the soma, or cell body, it builds up as an electro-chemical signal. This electrical part of the signal, called an action potential, shoots down the axon, a long tail that leads away from the soma and toward the next neuron. When people talk about “nerves” in the nervous system, they are typically referring to bundles of axons that form long neural wires along which electrical signals can travel.
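To tie together the terms just introduced (dendrites, soma, axon, action potential), here is a deliberately simplified sketch in Python added for this text. It only illustrates the all-or-nothing "fire when enough input accumulates" idea described above; the threshold value and function names are invented, and real neurons integrate signals continuously and in far more complex ways.

    # A toy "neuron": all names and numbers are invented for illustration only.
    THRESHOLD = 1.0  # arbitrary units of accumulated input at the soma

    def soma_fires(dendrite_inputs):
        # Sum the information arriving through the dendrites; fire an all-or-nothing
        # action potential down the axon only if the accumulated signal reaches threshold.
        return sum(dendrite_inputs) >= THRESHOLD

    print(soma_fires([0.2, 0.3]))        # False: too little input, no action potential
    print(soma_fires([0.4, 0.4, 0.3]))   # True: the signal "shoots down the axon"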
Cell-to-cell communication is helped by the fact that the axon is covered by a myelin sheath—a layer of fatty cells that allow the signal to travel very rapidly from neuron to neuron (Kandel, Schwartz & Jessell, 2000).

If we were to zoom in still further, we could take a closer look at the synapse, the junction between neurons (see Figure 1.4.4). Here we would see that there is a small space between neurons, called the synaptic gap. To give you a sense of scale, we can compare the synaptic gap to the thickness of a dime, the thinnest of all American coins (about 1.35 mm). You could stack approximately 70,000 synaptic gaps in the thickness of a single coin! As the action potential (the electrical signal) reaches the end of the axon, tiny packets of chemicals, called neurotransmitters, are released. This is the chemical part of the electro-chemical signal. These neurotransmitters are the chemical signals that travel from one neuron to another, enabling them to communicate with one another. There are many different types of neurotransmitters, and each has a specialized function. For example, serotonin affects sleep, hunger, and mood, while dopamine is associated with attention, learning, and pleasure (Kandel & Schwartz, 1982). It is amazing to realize that when you think—when you reach out to grab a glass of water, when you realize that your best friend is happy, when you try to remember the name of the parts of a neuron—what you are experiencing is actually electro-chemical impulses shooting between neurons!

The Central Nervous System: Looking at the Brain as a Whole

If we were to zoom back out and look at the central nervous system again, we would see that the brain is the largest single part of the central nervous system. The brain is the headquarters of the entire nervous system, and it is here that most of your sensing, perception, thinking, awareness, emotions, and planning take place. For many people the brain is so important that there is a sense that it is there—inside the brain—that a person’s sense of self is located (rather than, say, in your toes). The brain is so important, in fact, that it uses 20% of the total oxygen and calories we consume even though it accounts, on average, for only about 2% of our overall weight.

It is helpful to examine the various parts of the brain and to understand their unique functions to get a better sense of the role the brain plays. We will start by looking at very general areas of the brain and then we will zoom in and look at more specific parts. Anatomists and neuroscientists often divide the brain into portions based on the location and function of various brain parts. Among the simplest ways to organize the brain is to describe it as having three basic portions: the hindbrain, midbrain, and forebrain. Another way to look at the brain is to consider the brain stem, the cerebellum, and the cerebrum. There is another part, called the limbic system, that is less well defined. It is made up of a number of structures that are “sub-cortical” (lying beneath the cerebral cortex) as well as cortical regions of the brain (see Figure 1.4.5).

The brain stem is the most basic structure of the brain and is located at the top of the spine and the bottom of the brain. It is sometimes considered the “oldest” part of the brain because we can see similar structures in other animals, such as crocodiles. It is in charge of a wide range of very basic “life support” functions for the human body, including breathing, digestion, and the beating of the heart.
Amazingly, the brain stem sends the signals to keep these processes running smoothly without any conscious effort on our behalf.

The limbic system is a collection of highly specialized neural structures that sit at the top of the brain stem and are involved in regulating our emotions. The limbic system does not have clearly defined boundaries, as it includes forebrain regions as well as hindbrain regions. These include the amygdala, the thalamus, the hippocampus, the insula cortex, the anterior cingulate cortex, and the prefrontal cortex. These structures influence hunger, the sleep-wake cycle, sexual desire, fear and aggression, and even memory.

The cerebellum is a structure at the very back of the brain. Aristotle referred to it as the “small brain” based on its appearance. It is principally involved with movement and posture, although it is also associated with a variety of other thinking processes. The cerebellum, like the brain stem, coordinates actions without the need for any conscious awareness.

The cerebrum (also called the “cerebral cortex”) is the “newest,” most advanced portion of the brain. The cerebral hemispheres (the left and right halves that make up the top of the brain) are in charge of the types of processes associated with greater awareness and voluntary control, such as speaking and planning, and they also contain our primary sensory and motor areas (for seeing, hearing, feeling, and moving). The two hemispheres are connected to one another by a thick bundle of axons called the corpus callosum. There are instances in which people—either because of a genetic abnormality or as the result of surgery—have had their corpus callosum severed so that the two halves of the brain cannot easily communicate with one another. These rare split-brain patients offer helpful insights into how the brain works. For example, we now understand that the brain is contralateral, or opposite-sided: the left side of the brain is responsible for controlling a number of sensory and motor functions of the right side of the body, and vice versa.

Consider this striking example: a split-brain patient is seated at a table, and an object such as a car key is placed where the patient can see it only in the right visual field. Images in the right visual field are processed on the left side of the brain, and images in the left visual field are processed on the right side of the brain. Because language is largely associated with the left side of the brain, a patient who sees the car key in the right visual field, when asked “What do you see?”, would answer, “I see a car key.” In contrast, a split-brain patient who saw the car key only in the left visual field (so that the information went to the non-language right side of the brain) might have a difficult time saying the words “car key.” In fact, in this case the patient is likely to respond, “I didn’t see anything at all.” However, if asked to draw the item with their left hand—a process associated with the right side of the brain—the patient will be able to do so! See the outside resources below for a video demonstration of this striking phenomenon.

Besides looking at the brain as an organ that is made up of two halves, we can also examine it by looking at the four lobes of the cerebral cortex, the outer part of the brain (see Figure 1.4.6). Each of these is associated with a specific function.
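Before turning to the four lobes, the logic of the split-brain example can be summarized in a short sketch. This is a hypothetical simplification written for this text, not part of the original module; it assumes a fully severed corpus callosum and language processing confined largely to the left hemisphere, as described above.

    # Minimal sketch of contralateral routing in a split-brain patient (simplified).
    def split_brain_report(visual_field):
        hemisphere = "left" if visual_field == "right" else "right"
        can_name_object = (hemisphere == "left")          # only the language-dominant side answers verbally
        can_draw_with_left_hand = (hemisphere == "right") # the right hemisphere controls the left hand
        return hemisphere, can_name_object, can_draw_with_left_hand

    print(split_brain_report("right"))  # ('left', True, False)  -> "I see a car key."
    print(split_brain_report("left"))   # ('right', False, True) -> cannot name it, but can draw it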
The occipital lobe, located at the back of the cerebral cortex, houses the visual area of the brain. You can see the road in front of you when you are driving and track the motion of a ball in the air thanks to the occipital lobe. The temporal lobe, located on the underside of the cerebral cortex, is where sounds and smells are processed. The parietal lobe, at the upper back of the cerebral cortex, is where touch and taste are processed. Finally, the frontal lobe, located at the forward part of the cerebral cortex, is where behavioral motor plans are processed and where a number of highly complicated processes occur, including speech and language use, creative problem solving, and planning and organization.

One particularly fascinating area in the frontal lobe is called the “primary motor cortex.” This strip running along the side of the brain is in charge of voluntary movements like waving goodbye, wiggling your eyebrows, and kissing. It is an excellent example of the way that the various regions of the brain are highly specialized. Interestingly, each of our various body parts has a unique portion of the primary motor cortex devoted to it (see Figure 1.4.7). Each individual finger has about as much dedicated brain space as your entire leg. Your lips, in turn, require about as much dedicated brain processing as all of your fingers and your hand combined!

Because the cerebral cortex in general, and the frontal lobe in particular, are associated with such sophisticated functions as planning and being self-aware, they are often thought of as a higher, less primal portion of the brain. Indeed, although other animals such as rats and kangaroos do have frontal regions of their brains, they do not have the same level of development in their cerebral cortices. The closer an animal is to humans on the evolutionary tree (think chimpanzees and gorillas), the more developed this portion of its brain is.

The Peripheral Nervous System

In addition to the central nervous system (the brain and spinal cord), there is also a complex network of nerves that travel to every part of the body. This is called the peripheral nervous system (PNS), and it carries the signals necessary for the body to survive (see Figure 1.4.8). Some of the signals carried by the PNS are related to voluntary actions. If you want to type a message to a friend, for instance, you make conscious choices about which letters go in what order, and your brain sends the appropriate signals to your fingers to do the work. Other processes, by contrast, are not voluntary. Without your awareness, your brain is also sending signals to your organs, your digestive system, and the muscles that are holding you up right now, with instructions about what they should be doing. All of this occurs through the pathways of your peripheral nervous system.

How We Study the Brain

The brain is difficult to study because it is housed inside the thick bone of the skull. What’s more, it is difficult to access the brain without hurting or killing its owner. As a result, many of the earliest studies of the brain (and indeed this is still true today) focused on unfortunate people who happened to have damage to some particular area of their brain. For instance, in the 1860s a surgeon named Paul Broca conducted an autopsy on a former patient who had lost his powers of speech. Examining his patient’s brain, Broca identified a damaged area—now called “Broca’s Area”—on the left side of the brain (see Figure 1.4.9) (AAAS, 1880).
Over the years a number of researchers have been able to gain insights into the function of specific regions of the brain from these types of patients. An alternative to examining the brains or behaviors of humans with brain damage or surgical lesions is the study of animals. Some researchers examine the brains of other animals such as rats, dogs, and monkeys. Although animal brains differ from human brains in both size and structure, there are many similarities as well. The use of animals for study can yield important insights into human brain function.

In modern times, however, we do not have to rely exclusively on the study of people with brain lesions. Advances in technology have led to ever more sophisticated imaging techniques. Just as X-ray technology allows us to peer inside the body, neuroimaging techniques allow us glimpses of the working brain (Raichle, 1994). Each type of imaging uses a different technique, and each has its own advantages and disadvantages.

Positron Emission Tomography (PET) records metabolic activity in the brain by detecting how much of a radioactive substance, injected into a person’s bloodstream, the brain is consuming. This technique allows us to see how much an individual uses a particular part of the brain while at rest, or not performing a task. Another technique, known as Functional Magnetic Resonance Imaging (fMRI), relies on blood flow. This method measures changes in the levels of naturally occurring oxygen in the blood. As a brain region becomes active, it requires more oxygen, and fMRI measures brain activity based on this increase in oxygen levels. This means fMRI does not require a foreign substance to be injected into the body. Both PET and fMRI scans have poor temporal resolution, meaning that they cannot tell us exactly when brain activity occurred. This is because it takes several seconds for blood to arrive at a portion of the brain working on a task.

One imaging technique that has better temporal resolution is Electroencephalography (EEG), which measures electrical brain activity instead of blood flow. Electrodes are placed on the scalps of participants, and they pick up electrical activity nearly instantaneously. Because this activity could be coming from any portion of the brain, however, EEG is known to have poor spatial resolution, meaning that it is not precise with regard to specific location. Another technique, known as Diffuse Optical Imaging (DOI), can offer high temporal and spatial resolution. DOI works by shining infrared light into the brain. It might seem strange that light can pass through the head and brain, but the properties of the light change as it passes through oxygenated blood and through active neurons. As a result, researchers can make inferences regarding where and when brain activity is happening.

Conclusion

It has often been said that the brain studies itself. This means that humans are uniquely capable of using our most sophisticated organ to understand our most sophisticated organ. Breakthroughs in the study of the brain and nervous system are among the most exciting discoveries in all of psychology. In the future, research linking neural activity to complex, real-world attitudes and behavior will help us to understand human psychology and better intervene in it to help people.
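Before the resources and vocabulary, the trade-offs among the imaging techniques described above can be gathered in one place. The following snippet is a reading aid added for this text, a rough summary of what this module says; the labels are coarse and only the properties mentioned in the text are listed, so it is not an authoritative comparison.

    # Rough recap of the imaging techniques discussed above (simplified labels).
    imaging_techniques = {
        "PET":  {"signal": "metabolic activity via an injected radioactive tracer",
                 "temporal_resolution": "poor"},
        "fMRI": {"signal": "blood oxygenation changes (no injection needed)",
                 "temporal_resolution": "poor"},
        "EEG":  {"signal": "electrical activity recorded at the scalp",
                 "temporal_resolution": "good", "spatial_resolution": "poor"},
        "DOI":  {"signal": "changes in infrared light passing through tissue",
                 "temporal_resolution": "good", "spatial_resolution": "good"},
    }

    for name, properties in imaging_techniques.items():
        print(name, properties)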
Outside Resources
Video: Animation of Neurons
Video: Split Brain Patient
Web: Animation of Magnetic Resonance Imaging (MRI) http://sites.sinauer.com/neuroscience5e/animations01.01.html
Web: Animation of Positron Emission Tomography (PET) http://sites.sinauer.com/neuroscience5e/animations01.02.html
Web: Teaching resources and videos for teaching about the brain, from Colorado State University: www.learner.org/resources/series142.html
Web: The Brain Museum http://brainmuseum.org/

Discussion Questions
1. In your opinion, is it ethical to learn about the functions of various parts of the brain by studying the abilities of brain-damaged patients? What are the potential benefits and considerations?
2. Are research results on the brain more compelling to you than are research results from survey studies on attitudes? Why or why not? How does biological research such as studies of the brain influence public opinion regarding the science of psychology?
3. If humans continue to evolve, what changes might you predict in our brains and cognitive abilities?
4. Which brain scanning techniques, or combination of techniques, do you find to be the best? Why? Why do you think scientists may or may not employ exactly your recommended techniques?

Vocabulary
Action Potential: A transient all-or-nothing electrical current that is conducted down the axon when the membrane potential reaches the threshold of excitation.
Axon: Part of the neuron that extends off the soma, splitting several times to connect with other neurons; main output of the neuron.
Brain Stem: The “trunk” of the brain, composed of the medulla, pons, midbrain, and diencephalon.
Broca’s Area: An area in the frontal lobe of the left hemisphere; implicated in language production.
Central Nervous System: The portion of the nervous system that includes the brain and spinal cord.
Cerebellum: The distinctive structure at the back of the brain; Latin for “small brain.”
Cerebrum: Usually refers to the cerebral cortex and associated white matter, but in some texts includes the subcortical structures.
Contralateral: Literally “opposite side”; used to refer to the fact that the two hemispheres of the brain process sensory information and motor commands for the opposite side of the body (e.g., the left hemisphere controls the right side of the body).
Corpus Callosum: The thick bundle of nerve fibers that connects the two hemispheres of the brain and allows them to communicate.
Dendrites: Part of a neuron that extends away from the cell body and is the main input to the neuron.
Diffuse Optical Imaging (DOI): A neuroimaging technique that infers brain activity by measuring changes in light as it is passed through the skull and surface of the brain.
Electroencephalography (EEG): A neuroimaging technique that measures electrical brain activity via multiple electrodes on the scalp.
Frontal Lobe: The frontmost (anterior) part of the cerebrum; anterior to the central sulcus and responsible for motor output and planning, language, judgment, and decision-making.
Functional Magnetic Resonance Imaging (fMRI): A neuroimaging technique that infers brain activity by measuring changes in oxygen levels in the blood.
Limbic System: Includes the subcortical structures of the amygdala and hippocampal formation as well as some cortical structures; responsible for aversion and gratification.
Myelin Sheath: Fatty tissue that insulates the axons of neurons; myelin is necessary for normal conduction of electrical impulses among neurons.
Nervous System: The body’s network for electrochemical communication. This system includes all the nerve cells in the body.
Neurons: Individual nerve cells; the fundamental building blocks of the nervous system.
Neurotransmitters: Chemical substances released by the presynaptic terminal button that act on the postsynaptic cell.
Occipital Lobe: The rearmost (posterior) part of the cerebrum; involved in vision.
Parietal Lobe: The part of the cerebrum between the frontal and occipital lobes; involved in bodily sensations, visual attention, and integrating the senses.
Peripheral Nervous System: All of the nerve cells that connect the central nervous system to all the other parts of the body.
Positron Emission Tomography (PET): A neuroimaging technique that measures brain activity by detecting the presence of a radioactive substance in the brain that is initially injected into the bloodstream and then pulled in by active brain tissue.
Soma: Cell body of a neuron that contains the nucleus and genetic information, and directs protein synthesis.
Spatial Resolution: A term that refers to how small the elements of an image are; high spatial resolution means the device or technique can resolve very small elements; in neuroscience it describes how small a structure in the brain can be imaged.
Split-brain Patient: A patient who has had most or all of his or her corpus callosum severed.
Synapses: Junctions between the presynaptic terminal button of one neuron and the dendrite, axon, or soma of another postsynaptic neuron.
Synaptic Gap: Also known as the synaptic cleft; the small space between the presynaptic terminal button and the postsynaptic dendritic spine, axon, or soma.
Temporal Lobe: The part of the cerebrum in front of (anterior to) the occipital lobe and below the lateral fissure; involved in vision, auditory processing, memory, and integrating vision and audition.
Temporal Resolution: A term that refers to how small a unit of time can be measured; high temporal resolution means capable of resolving very small units of time; in neuroscience it describes how precisely in time a process can be measured in the brain.
textbooks/socialsci/Psychology/Introductory_Psychology/Psychology_(Noba)/Chapter_1%3A_Biological_Basis_of_Behavior/1.4%3A_The_Brain_and_Nervous_System.txt
By Randy J. Nelson The Ohio State University The goal of this module is to introduce you to the topic of hormones and behavior. This field of study is also called behavioral endocrinology, which is the scientific study of the interaction between hormones and behavior. This interaction is bidirectional: hormones can influence behavior, and behavior can sometimes influence hormone concentrations. Hormones are chemical messengers released from endocrine glands that travel through the blood system to influence the nervous system to regulate behaviors such as aggression, mating, and parenting of individuals. learning objectives • Define the basic terminology and basic principles of hormone–behavior interactions. • Explain the role of hormones in behavioral sex differentiation. • Explain the role of hormones in aggressive behavior. • Explain the role of hormones in parental behavior. • Provide examples of some common hormone–behavior interactions. Introduction This module describes the relationship between hormones and behavior. Many readers are likely already familiar with the general idea that hormones can affect behavior. Students are generally familiar with the idea that sex-hormone concentrations increase in the blood during puberty and decrease as we age, especially after about 50 years of age. Sexual behavior shows a similar pattern. Most people also know about the relationship between aggression and anabolic steroid hormones, and they know that administration of artificial steroid hormones sometimes results in uncontrollable, violent behavior called “roid rage.” Many different hormones can influence several types of behavior, but for the purpose of this module, we will restrict our discussion to just a few examples of hormones and behaviors. For example, are behavioral sex differences the result of hormones, the environment, or some combination of factors? Why are men much more likely than women to commit aggressive acts? Are hormones involved in mediating the so-called maternal “instinct”? Behavioral endocrinologists are interested in how the general physiological effects of hormones alter the development and expression of behavior and how behavior may influence the effects of hormones. This module describes, both phenomenologically and functionally, how hormones affect behavior. To understand the hormone-behavior relationship, it is important briefly to describe hormones. Hormones are organic chemical messengers produced and released by specialized glands called endocrine glands. Hormones are released from these glands into the blood, where they may travel to act on target structures at some distance from their origin. Hormones are similar in function to neurotransmitters, the chemicals used by the nervous system in coordinating animals’ activities. However, hormones can operate over a greater distance and over a much greater temporal range than neurotransmitters (Focus Topic 1). Examples of hormones that influence behavior include steroid hormones such as testosterone (a common type of androgen), estradiol (a common type of estrogen), progesterone (a common type of progestin), and cortisol (a common type of glucocorticoid) (Table 1, A-B). Several types of protein or peptide (small protein) hormones also influence behavior, including oxytocin, vasopressin, prolactin, and leptin. Focus Topic 1: Neural Transmission versus Hormonal Communication Although neural and hormonal communication both rely on chemical signals, several prominent differences exist. 
Communication in the nervous system is analogous to traveling on a train. You can use the train in your travel plans as long as tracks exist between your proposed origin and destination. Likewise, neural messages can travel only to destinations along existing nerve tracts. Hormonal communication, on the other hand, is like traveling in a car. You can drive to many more destinations than train travel allows because there are many more roads than railroad tracks. Similarly, hormonal messages can travel anywhere in the body via the circulatory system; any cell receiving blood is potentially able to receive a hormonal message.

Neural and hormonal communication differ in other ways as well. To illustrate them, consider the differences between digital and analog technologies. Neural messages are digital, all-or-none events that have rapid onset and offset: neural signals can take place in milliseconds. Accordingly, the nervous system mediates changes in the body that are relatively rapid. For example, the nervous system regulates immediate food intake and directs body movement. In contrast, hormonal messages are analog, graded events that may take seconds, minutes, or even hours to occur. Hormones can mediate long-term processes, such as growth, development, reproduction, and metabolism.

Hormonal and neural messages are both chemical in nature, and they are released and received by cells in a similar manner; however, there are important differences as well. Neurotransmitters, the chemical messengers used by neurons, travel a distance of only 20–30 nanometers (20–30 × 10⁻⁹ m) to the membrane of the postsynaptic neuron, where they bind with receptors. Hormones enter the circulatory system and may travel from 1 millimeter to more than 2 meters before arriving at a target cell, where they bind with specific receptors.

Another distinction between neural and hormonal communication is the degree of voluntary control that can be exerted over their functioning. In general, there is more voluntary control of neural than of hormonal signals. It is virtually impossible to will a change in your thyroid hormone levels, for example, whereas moving your limbs on command is easy. Although these are significant differences, the division between the nervous system and the endocrine system is becoming more blurred as we learn more about how the nervous system regulates hormonal communication. A better understanding of the interface between the endocrine system and the nervous system, called neuroendocrinology, is likely to yield important advances in the future study of the interaction between hormones and behavior.

Hormones coordinate the physiology and behavior of individuals by regulating, integrating, and controlling bodily functions. Over evolutionary time, hormones have often been co-opted by the nervous system to influence behavior to ensure reproductive success. For example, the same hormones, testosterone and estradiol, that cause gamete (egg or sperm) maturation also promote mating behavior. This dual hormonal function ensures that mating behavior occurs when animals have mature gametes available for fertilization. Another example of endocrine regulation of physiological and behavioral function is provided by pregnancy. Estrogen and progesterone concentrations are elevated during pregnancy, and these hormones are often involved in mediating maternal behavior in mothers.

Not all cells are influenced by each and every hormone.
Rather, any given hormone can directly influence only cells that have specific hormone receptors for that particular hormone. Cells that have these specific receptors are called target cells for the hormone. The interaction of a hormone with its receptor begins a series of cellular events that eventually lead to the activation of enzymatic pathways or, alternatively, turn gene activation on or off, thereby regulating protein synthesis. The newly synthesized proteins may activate or deactivate other genes, causing yet another cascade of cellular events. Importantly, sufficient numbers of appropriate hormone receptors must be available for a specific hormone to produce any effects. For example, testosterone is important for male sexual behavior. If men have too little testosterone, then sexual motivation may be low, and it can be restored by testosterone treatment. However, if men have normal or even elevated levels of testosterone yet display low sexual drive, then a lack of receptors might be the cause, and treatment with additional hormones will not be effective.

How might hormones affect behavior?

In terms of their behavior, one can think of humans and other animals conceptually as composed of three interacting components: (1) input systems (sensory systems), (2) integrators (the central nervous system), and (3) output systems, or effectors (e.g., muscles). Hormones do not cause behavioral changes. Rather, hormones influence these three systems so that specific stimuli are more likely to elicit certain responses in the appropriate behavioral or social context. In other words, hormones change the probability that a particular behavior will be emitted in the appropriate situation (Nelson, 2011). This is a critical distinction that can affect how we think of hormone-behavior relationships.

We can apply this three-component behavioral scheme to a simple behavior, singing in zebra finches. Only male zebra finches sing. If the testes of adult male finches are removed, then the birds reduce singing, but castrated finches resume singing if the testes are reimplanted, or if the birds are treated with either testosterone or estradiol. Although we commonly consider androgens to be “male” hormones and estrogens to be “female” hormones, it is common for testosterone to be converted to estradiol in nerve cells (Figure 1.5.1). Thus, many male-like behaviors are associated with the actions of estrogens! Indeed, all estrogens must first be converted from androgens because of the typical biochemical synthesis process. If the converting enzyme is low or missing, then it is possible for females to produce excessive androgens and subsequently develop associated male traits. It is also possible for estrogens in the environment to affect the nervous system of animals, including people (e.g., Kidd et al., 2007). Again, singing behavior is most frequent when blood testosterone or estrogen concentrations are high. Males sing to attract mates or ward off potential competitors from their territories.

Although it is apparent from these observations that estrogens are somehow involved in singing, how might the three-component framework just introduced help us to formulate hypotheses to explore estrogen’s role in this behavior? By examining input systems, we could determine whether estrogens alter the birds’ sensory capabilities, making the environmental cues that normally elicit singing more salient. If this were the case, then females or competitors might be more easily seen or heard.
Estrogens also could influence the central nervous system. Neuronal architecture or the speed of neural processing could change in the presence of estrogens. Higher neural processes (e.g., motivation, attention, or perception) also might be influenced. Finally, the effector organs, muscles in this case, could be affected by the presence of estrogens. Blood estrogen concentrations might somehow affect the muscles of a songbird’s syrinx (the vocal organ of birds). Estrogens, therefore, could affect birdsong by influencing the sensory capabilities, central processing system, or effector organs of an individual bird. We do not understand completely how estrogen, derived from testosterone, influences birdsong, but in most cases, hormones can be considered to affect behavior by influencing one, two, or all three of these components, and this three-part framework can aid in the design of hypotheses and experiments to explore these issues. How might behaviors affect hormones? The birdsong example demonstrates how hormones can affect behavior, but as noted, the reciprocal relation also occurs; that is, behavior can affect hormone concentrations. For example, the sight of a territorial intruder may elevate blood testosterone concentrations in resident male birds and thereby stimulate singing or fighting behavior. Similarly, male mice or rhesus monkeys that lose a fight decrease circulating testosterone concentrations for several days or even weeks afterward. Comparable results have also been reported in humans. Testosterone concentrations are affected not only in humans involved in physical combat, but also in those involved in simulated battles. For example, testosterone concentrations were elevated in winners and reduced in losers of regional chess tournaments. People do not have to be directly involved in a contest to have their hormones affected by the outcome of the contest. Male fans of both the Brazilian and Italian teams were recruited to provide saliva samples to be assayed for testosterone before and after the final game of the World Cup soccer match in 1994. Brazil and Italy were tied going into the final game, but Brazil won on a penalty kick at the last possible moment. The Brazilian fans were elated and the Italian fans were crestfallen. When the samples were assayed, 11 of 12 Brazilian fans who were sampled had increased testosterone concentrations, and 9 of 9 Italian fans had decreased testosterone concentrations, compared with pre-game baseline values (Dabbs, 2000). In some cases, hormones can be affected by anticipation of behavior. For example, testosterone concentrations also influence sexual motivation and behavior in women. In one study, the interaction between sexual intercourse and testosterone was compared with other activities (cuddling or exercise) in women (van Anders, Hamilton, Schmidt, & Watson, 2007). On three separate occasions, women provided a pre-activity, post-activity, and next-morning saliva sample. After analysis, the women’s testosterone was determined to be elevated prior to intercourse as compared to other times. Thus, an anticipatory relationship exists between sexual behavior and testosterone. Testosterone values were higher post-intercourse compared to exercise, suggesting that engaging in sexual behavior may also influence hormone concentrations in women. Sex Differences Hens and roosters are different. Cows and bulls are different. Men and women are different. Even girls and boys are different. 
Humans, like many animals, are sexually dimorphic (di, “two”; morph, “type”) in the size and shape of their bodies, their physiology, and for our purposes, their behavior. The behavior of boys and girls differs in many ways. Girls generally excel in verbal abilities relative to boys; boys are nearly twice as likely as girls to suffer from dyslexia (reading difficulties) and stuttering and nearly 4 times more likely to suffer from autism. Boys are generally better than girls at tasks that require visuospatial abilities. Girls engage in nurturing behaviors more frequently than boys. More than 90% of all anorexia nervosa cases involve young women. Young men are twice as likely as young women to suffer from schizophrenia. Boys are much more aggressive and generally engage in more rough-and-tumble play than girls (Berenbaum, Martin, Hanish, Briggs, & Fabes, 2008). Many sex differences, such as the difference in aggressiveness, persist throughout adulthood. For example, there are many more men than women serving prison sentences for violent behavior. The hormonal differences between men and women may account for adult sex differences that develop during puberty, but what accounts for behavioral sex differences among children prior to puberty and activation of their gonads? Hormonal secretions from the developing gonads determine whether the individual develops in a male or female manner. The mammalian embryonic testes produce androgens, as well as peptide hormones, that steer the development of the body, central nervous system, and subsequent behavior in a male direction. The embryonic ovaries of mammals are virtually quiescent and do not secrete high concentrations of hormones. In the presence of ovaries, or in the complete absence of any gonads, morphological, neural, and, later, behavioral development follows a female pathway. Gonadal steroid hormones have organizational (or programming) effects upon brain and behavior (Phoenix, Goy, Gerall, & Young, 1959). The organizing effects of steroid hormones are relatively constrained to the early stages of development. An asymmetry exists in the effects of testes and ovaries on the organization of behavior in mammals. Hormone exposure early in life has organizational effects on subsequent rodent behavior; early steroid hormone treatment causes relatively irreversible and permanent masculinization of rodent behavior (mating and aggressive). These early hormone effects can be contrasted with the reversible behavioral influences of steroid hormones provided in adulthood, which are called activational effects. The activational effects of hormones on adult behavior are temporary and may wane soon after the hormone is metabolized. Thus, typical male behavior requires exposure to androgens during gestation (in humans) or immediately after birth (in rodents) to somewhat masculinize the brain and also requires androgens during or after puberty to activate these neural circuits. Typical female behavior requires a lack of exposure to androgens early in life which leads to feminization of the brain and also requires estrogens to activate these neural circuits in adulthood. But this simple dichotomy, which works well with animals with very distinct sexual dimorphism in behavior, has many caveats when applied to people. If you walk through any major toy store, then you will likely observe a couple of aisles filled with pink boxes and the complete absence of pink packaging of toys in adjacent aisles. 
Remarkably, you will also see a strong self-segregation of boys and girls in these aisles. It is rare to see boys in the “pink” aisles and vice versa. The toy manufacturers are often accused of making toys that are gender biased, but it seems more likely that boys and girls enjoy playing with specific types and colors of toys. Indeed, toy manufacturers would immediately double their sales if they could sell toys to both sexes. Boys generally prefer toys such as trucks and balls and girls generally prefer toys such as dolls. Although it is doubtful that there are genes that encode preferences for toy cars and trucks on the Y chromosome, it is possible that hormones might shape the development of a child’s brain to prefer certain types of toys or styles of play behavior. It is reasonable to believe that children learn which types of toys and which styles of play are appropriate to their gender. How can we understand and separate the contribution of physiological mechanisms from learning to understand sex differences in human behaviors? To untangle these issues, animal models are often used. Unlike the situation in humans, where sex differences are usually only a matter of degree (often slight), in some animals, members of only one sex may display a particular behavior. As noted, often only male songbirds sing. Studies of such strongly sex-biased behaviors are particularly valuable for understanding the interaction among behavior, hormones, and the nervous system. A study of vervet monkeys calls into question the primacy of learning in the establishment of toy preferences (Alexander & Hines, 2002). Female vervet monkeys preferred girl-typical toys, such as dolls or cooking pots, whereas male vervet monkeys preferred boy-typical toys, such as cars or balls. There were no sex differences in preference for gender-neutral toys, such as picture books or stuffed animals. Presumably, monkeys have no prior concept of “boy” or “girl” toys. Young rhesus monkeys also show similar toy preferences. What then underlies the sex difference in toy preference? It is possible that certain attributes of toys (or objects) appeal to either boys or girls. Toys that appeal to boys or male vervet or rhesus monkeys, in this case, a ball or toy car, are objects that can be moved actively through space, toys that can be incorporated into active, rough and tumble play. The appeal of toys that girls or female vervet monkeys prefer appears to be based on color. Pink and red (the colors of the doll and pot) may provoke attention to infants. Society may reinforce such stereotypical responses to gender-typical toys. The sex differences in toy preferences emerge by 12 or 24 months of age and seem fixed by 36 months of age, but are sex differences in toy preference present during the first year of life? It is difficult to ask pre-verbal infants what they prefer, but in studies where the investigators examined the amount of time that babies looked at different toys, eye-tracking data indicate that infants as young as 3 months showed sex differences in toy preferences; girls preferred dolls, whereas boys preferred trucks. Another result that suggests, but does not prove, that hormones are involved in toy preferences is the observation that girls diagnosed with congenital adrenal hyperplasia (CAH), whose adrenal glands produce varying amounts of androgens early in life, played with masculine toys more often than girls without CAH. 
Further, a dose-response relationship between the extent of the disorder (i.e., degree of fetal androgen exposure) and degree of masculinization of play behavior was observed. Are the sex differences in toy preferences or play activity, for example, the inevitable consequences of the differential endocrine environments of boys and girls, or are these differences imposed by cultural practices and beliefs? Are these differences the result of receiving gender-specific toys from an early age, or are these differences some combination of endocrine and cultural factors? Again, these are difficult questions to unravel in people. Even when behavioral sex differences appear early in development, there seems to be some question regarding the influences of societal expectations. One example is the pattern of human play behavior during which males are more physical; this pattern is seen in a number of other species including nonhuman primates, rats, and dogs. Is the difference in the frequency of rough-and-tumble play between boys and girls due to biological factors associated with being male or female, or is it due to cultural expectations and learning? If there is a combination of biological and cultural influences mediating the frequency of rough-and-tumble play, then what proportion of the variation between the sexes is due to biological factors and what proportion is due to social influences? Importantly, is it appropriate to talk about “normal” sex differences when these traits virtually always arrange themselves along a continuum rather than in discrete categories? Sex differences are common in humans and in nonhuman animals. Because males and females differ in the ratio of androgenic and estrogenic steroid hormone concentrations, behavioral endocrinologists have been particularly interested in the extent to which behavioral sex differences are mediated by hormones. The process of becoming female or male is called sexual differentiation. The primary step in sexual differentiation occurs at fertilization. In mammals, the ovum (which always contains an X chromosome) can be fertilized by a sperm bearing either a Y or an X chromosome; this process is called sex determination. The chromosomal sex of homogametic mammals (XX) is female; the chromosomal sex of heterogametic mammals (XY) is male. Chromosomal sex determines gonadal sex. Virtually all subsequent sexual differentiation is typically the result of differential exposure to gonadal steroid hormones. Thus, gonadal sex determines hormonal sex, which regulates morphological sex. Morphological differences in the central nervous system, as well as in some effector organs, such as muscles, lead to behavioral sex differences. The process of sexual differentiation is complicated, and the potential for errors is present. Perinatal exposure to androgens is the most common cause of anomalous sexual differentiation among females. The source of androgen may be internal (e.g., secreted by the adrenal glands) or external (e.g., exposure to environmental estrogens). Turner syndrome results when the second X chromosome is missing or damaged; these individuals possess dysgenic ovaries and are not exposed to steroid hormones until puberty. Interestingly, women with Turner syndrome often have impaired spatial memory. Female mammals are considered the “neutral” sex; additional physiological steps are required for male differentiation, and more steps bring more possibilities for errors in differentiation. 
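To make the chain just described concrete (chromosomal sex determines gonadal sex, which determines hormonal sex, which regulates morphological sex), here is a highly simplified sketch added for this text. The function name and wording are hypothetical, and the sketch deliberately ignores the many additional steps and the exceptions discussed next.

    # Simplified sketch of the typical mammalian cascade described above; illustration only.
    def typical_differentiation(sperm_chromosome):
        # The ovum always contributes an X; the sperm contributes either an X or a Y.
        chromosomal_sex = "XX (female)" if sperm_chromosome == "X" else "XY (male)"
        gonadal_sex = "ovaries" if sperm_chromosome == "X" else "testes"
        hormonal_environment = ("little perinatal androgen exposure" if gonadal_sex == "ovaries"
                                else "androgens secreted by the embryonic testes")
        return chromosomal_sex, gonadal_sex, hormonal_environment

    print(typical_differentiation("X"))
    print(typical_differentiation("Y"))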
Some examples of male anomalous sexual differentiation include 5α-reductase deficiency (in which XY individuals are born with ambiguous genitalia because of a lack of dihydrotestosterone and are reared as females, but masculinization occurs during puberty) and androgen insensitivity syndrome, or TFM (in which XY individuals lack receptors for androgens and develop as females). By studying individuals who do not neatly fall into the dichotomous boxes of female or male and for whom the process of sexual differentiation is atypical, behavioral endocrinologists glean hints about the process of typical sexual differentiation.

We may ultimately want to know how hormones mediate sex differences in the human brain and behavior (to the extent that these differences occur). To understand the mechanisms underlying sex differences in the brain and behavior, we return to the birdsong example. Birds provide the best evidence that behavioral sex differences are the result of hormonally induced structural changes in the brain (Goodson, Saldanha, Hahn, & Soma, 2005). In contrast to mammals, in which structural differences in neural tissues have not been directly linked to behavior, structural differences in avian brains have been directly linked to a sexually dimorphic behavior: birdsong.

Several brain regions in songbirds display significant sex differences in size. Two major brain circuit pathways, (1) the song production motor pathway and (2) the auditory transmission pathway, have been implicated in the learning and production of birdsong. Some parts of the song production pathway of male zebra finches are 3 to 6 times larger than those of female conspecifics. The larger size of these brain areas reflects the fact that neurons in these nuclei are larger, more numerous, and farther apart. Although castration of adult male birds reduces singing, it does not reduce the size of the brain nuclei controlling song production. Similarly, androgen treatment of adult female zebra finches does not induce changes either in singing or in the size of the song control regions. Thus, activational effects of steroid hormones do not account for the sex differences in singing behavior or brain nucleus size in zebra finches. The sex differences in these structures are organized or programmed in the egg by estradiol (which masculinizes) or the lack of steroids (which feminizes). Taken together, estrogens appear to be necessary to activate the neural machinery underlying the song system in birds. The testes of birds primarily produce androgens, which enter the circulation. The androgens enter neurons containing aromatase, which converts them to estrogens. Indeed, the brain is the primary source of estrogens, which activate masculine behaviors in many bird species.

Sex differences in human brain size have been reported for years. More recently, sex differences in specific brain structures have been discovered (Figure 1.5.2). Sex differences in a number of cognitive functions have also been reported. Females are generally more sensitive to auditory information, whereas males are more sensitive to visual information. Females are also typically more sensitive than males to taste and olfactory input. Women display less lateralization of cognitive functions than men. On average, females generally excel in verbal, perceptual, and fine motor skills, whereas males outperform females on quantitative and visuospatial tasks, including map reading and direction finding. Although reliable sex differences can be documented, these differences in ability are slight.
It is important to note that there is more variation within each sex than between the sexes for most cognitive abilities (Figure 1.5.3).

Aggressive Behaviors

The possibility for aggressive behavior exists whenever the interests of two or more individuals are in conflict (Nelson, 2006). Conflicts are most likely to arise over limited resources such as territories, food, and mates. A social interaction decides which animal gains access to the contested resource. In many cases, a submissive posture or gesture on the part of one animal avoids the necessity of actual combat over a resource. Animals may also participate in threat displays or ritualized combat in which dominance is determined but no physical damage is inflicted.

There is overwhelming circumstantial evidence that androgenic steroid hormones mediate aggressive behavior across many species. First, seasonal variations in blood plasma concentrations of testosterone and seasonal variations in aggression coincide. For instance, the incidence of aggressive behavior peaks for male deer in autumn, when they are secreting high levels of testosterone. Second, aggressive behaviors increase at the time of puberty, when the testes become active and blood concentrations of androgens rise. Juvenile deer do not participate in the fighting during the mating season. Third, in any given species, males are generally more aggressive than females. This is certainly true of deer; relative to stags, female deer rarely display aggressive behavior, and their rare aggressive acts are qualitatively different from the aggressive behavior of males. Finally, castration typically reduces aggression in males, and testosterone replacement therapy restores aggression to pre-castration levels. There are some interesting exceptions to these general observations that are outside the scope of this module.

As mentioned, males are generally more aggressive than females. Certainly, human males are much more aggressive than females. Many more men than women are convicted of violent crimes in North America. The sex differences in human aggressiveness appear very early. At every age throughout the school years, many more boys than girls initiate physical assaults. Almost everyone will acknowledge the existence of this sex difference, but assigning a cause to behavioral sex differences in humans always elicits much debate. It is possible that boys are more aggressive than girls because androgens promote aggressive behavior and boys have higher blood concentrations of androgens than girls. It is possible that boys and girls differ in their aggressiveness because the brains of boys are exposed to androgens prenatally and the “wiring” of their brains is thus organized in a way that facilitates the expression of aggression. It is also possible that boys are encouraged and girls are discouraged by family, peers, or others from acting in an aggressive manner. These three hypotheses are not mutually exclusive, but it is extremely difficult to discriminate among them to account for sex differences in human aggressiveness. What kinds of studies would be necessary to assess these hypotheses?

It is usually difficult to separate out the influences of environment and physiology on the development of behavior in humans. For example, boys and girls differ in their rough-and-tumble play at a very young age, which suggests an early physiological influence on aggression.
However, parents interact with their male and female offspring differently; they usually play more roughly with male infants than with females, which suggests that the sex difference in aggressiveness is partially learned. This difference in parental interaction style is evident by the first week of life. Because of these complexities in the factors influencing human behavior, the study of hormonal effects on sex-differentiated behavior has been pursued in nonhuman animals, for which environmental influences can be held relatively constant. Animal models for which sexual differentiation occurs postnatally are often used so that this process can be easily manipulated experimentally.

Again, with the appropriate animal model, we can address the questions posed above: Is the sex difference in aggression due to higher adult blood concentrations of androgens in males than in females, or are males more aggressive than females because their brains are organized differently by perinatal hormones? Are males usually more aggressive than females because of an interaction of early and current blood androgen concentrations? If male mice are castrated prior to their sixth day of life, then treated with testosterone propionate in adulthood, they show low levels of aggression. Similarly, females ovariectomized prior to their sixth day of life but given androgens in adulthood do not express male-like levels of aggression. Treatment of perinatally gonadectomized males or females with testosterone prior to their sixth day of life and also in adulthood results in a level of aggression similar to that observed in typical male mice. Thus, in mice, the proclivity for males to act more aggressively than females is organized perinatally by androgens but also requires the presence of androgens after puberty in order to be fully expressed. In other words, aggression in male mice is both organized and activated by androgens. Testosterone exposure in adulthood without prior organization of the brain by steroid hormones does not evoke typical male levels of aggression. The hormonal control of aggressive behavior in house mice is thus similar to the hormonal mediation of heterosexual male mating behavior in other rodent species. Aggressive behavior is both organized and activated by androgens in many species, including rats, hamsters, voles, dogs, and possibly some primate species.

Parental Behaviors

Parental behavior can be considered to be any behavior that contributes directly to the survival of fertilized eggs or offspring that have left the body of the female. There are many patterns of mammalian parental care. The developmental status of the newborn is an important factor driving the type and quality of parental care in a species. Maternal care is much more common than paternal care. The vast majority of research on the hormonal correlates of mammalian parental behavior has been conducted on rats. Rats bear altricial young, and mothers perform a cluster of stereotyped maternal behaviors, including nest building, crouching over the pups to allow nursing and to provide warmth, pup retrieval, and increased aggression directed at intruders. If you expose nonpregnant female rats (or males) to pups, their most common reaction is to huddle far away from them. Rats avoid new things (neophobia). However, if you expose adult rats to pups every day, they soon begin to behave maternally. This process is called concaveation, or sensitization, and it appears to serve to reduce the adult rats’ fear of pups.
Of course a new mother needs to act maternal as soon as her offspring arrive—not in a week. The onset of maternal behavior in rats is mediated by hormones. Several methods of study, such as hormone removal and replacement therapy, have been used to determine the hormonal correlates of rat maternal behavior. A fast decline of blood concentrations of progesterone in late pregnancy after sustained high concentrations of this hormone, in combination with high concentrations of estradiol and probably prolactin and oxytocin, induces female rats to behave maternally almost immediately in the presence of pups. This pattern of hormones at parturition overrides the usual fear response of adult rats toward pups, and it permits the onset of maternal behavior. Thus, the so-called maternal “instinct” requires hormones to increase the approach tendency and lower the avoidance tendency. Laboratory strains of mice and rats are usually docile, but mothers can be quite aggressive toward animals that venture too close to their litter. Progesterone appears to be the primary hormone that induces this maternal aggression in rodents, but species differences exist. The role of maternal aggression in women’s behavior has not been adequately described or tested. A series of elegant experiments by Alison Fleming and her collaborators studied the endocrine correlates of the behavior of human mothers as well as the endocrine correlates of maternal attitudes as expressed in self-report questionnaires. Responses such as patting, cuddling, or kissing the baby were called affectionate behaviors; talking, singing, or cooing to the baby were considered vocal behaviors. Both affectionate and vocal behaviors were considered approach behaviors. Basic caregiving activities, such as changing diapers and burping the infants, were also recorded. In these studies, no relationship between hormone concentrations and maternal responsiveness, as measured by attitude questionnaires, was found. For example, most women showed an increasing positive self-image during early pregnancy that dipped during the second half of pregnancy, but recovered after parturition. A related dip in feelings of maternal engagement occurred during late pregnancy, but rebounded substantially after birth in most women. However, when behavior, rather than questionnaire responses, was compared with hormone concentrations, a different story emerged. Blood plasma concentrations of cortisol were positively associated with approach behaviors. In other words, women who had high concentrations of blood cortisol, in samples obtained immediately before or after nursing, engaged in more physically affectionate behaviors and talked more often to their babies than mothers with low cortisol concentrations. Additional analyses from this study revealed that the correlation was even greater for mothers that had reported positive maternal regard (feelings and attitudes) during gestation. Indeed, nearly half of the variation in maternal behavior among women could be accounted for by cortisol concentrations and positive maternal attitudes during pregnancy. Presumably, cortisol does not induce maternal behaviors directly, but it may act indirectly on the quality of maternal care by evoking an increase in the mother’s general level of arousal, thus increasing her responsiveness to infant-generated cues. 
New mothers with high cortisol concentrations were also more attracted to their infants’ odors, were superior in identifying their infants, and generally found cues from infants highly appealing (Fleming, Steiner, & Corter, 1997). The medial preoptic area is critical for the expression of rat maternal behavior. The amygdala appears to tonically inhibit the expression of maternal behavior. Adult rats are fearful of pups, a response that is apparently mediated by chemosensory information. Lesions of the amygdala or of afferent sensory pathways from the vomeronasal organ to the amygdala disinhibit the expression of maternal behavior. Hormones or sensitization likely act to disinhibit the amygdala, thus permitting the occurrence of maternal behavior. Although correlational findings have been reported, direct evidence of comparable brain changes underlying maternal behavior in human mothers has not yet been established (Fleming & Gonzalez, 2009).

Considered together, there are many examples of hormones influencing behavior and of behavior feeding back to influence hormone secretion. More and more examples of hormone–behavior interactions are being discovered, including roles for hormones in the mediation of food and fluid intake, social interactions, salt balance, learning and memory, and stress coping, as well as in psychopathology, including depression, anxiety disorders, eating disorders, postpartum depression, and seasonal depression. Additional research should reveal how these hormone–behavior interactions are mediated.

Outside Resources

Book: Adkins-Regan, E. (2005). Hormones and animal social behavior. Princeton, NJ: Princeton University Press.
Book: Beach, F. A. (1948). Hormones and behavior. New York: Paul Hoeber.
Article: Beach, F. A. (1975). Behavioral endocrinology: An emerging discipline. American Scientist, 63, 178–187.
Book: Nelson, R. J. (2011). An introduction to behavioral endocrinology (4th ed.). Sunderland, MA: Sinauer Associates.
Book: Pfaff, D. W. (2009). Hormones, brain, and behavior (2nd ed.). New York: Academic Press.
Book: Pfaff, D. W., Phillips, I. M., & Rubin, R. T. (2005). Principles of hormone/behavior relations. New York: Academic Press.
Video: Endocrinology Video (Playlist) - This YouTube playlist contains many helpful videos on the biology of hormones, including reproduction and behavior. This would be a helpful resource for students struggling with hormone synthesis, reproduction, regulation of biological functions, and signaling pathways. https://www.youtube.com/playlist?list=PLqTetbgey0aemiTfD8QkMsSUq8hQzv-vA
Video: Paul Zak: Trust, morality - and oxytocin - This TED talk explores the roles of oxytocin in the body. Paul Zak discusses biological functions of oxytocin, like lactation, as well as potential behavioral functions, like empathy.
Video: Sex Differentiation - This video discusses gonadal differentiation, including the role of androgens in the development of male features.
Video: The Teenage Brain Explained - This is a great video explaining the roles of hormones during puberty.
Web: Society for Behavioral Neuroendocrinology - This website contains resources on current news and research in the field of neuroendocrinology. http://sbn.org/home.aspx

Discussion Questions

1. What are some of the problems associated with attempting to determine causation in a hormone–behavior interaction? What are the best ways to address these problems?
2. Hormones cause changes in the rates of cellular processes or in cellular morphology. What are some ways that these hormonally induced cellular changes might theoretically produce profound changes in behavior?
3. List and describe some behavioral sex differences that you have noticed between boys and girls. What causes girls and boys to choose different toys? Do you think that the sex differences you have noted arise from biological causes or are learned? How would you go about establishing your opinions as fact?
4. Why is it inappropriate to refer to androgens as “male” hormones and estrogens as “female” hormones?
5. Imagine that you discovered that the brains of architects were different from those of non-architects—specifically, that the “drawstraightem nuclei” of the right temporal lobe were enlarged in architects as compared with non-architects. Would you argue that architects were destined to be architects because of their brain organization, or that experience as an architect changed their brains? How would you resolve this issue?

Vocabulary

5α-reductase: An enzyme required to convert testosterone to 5α-dihydrotestosterone.
Aggression: A form of social interaction that includes threat, attack, and fighting.
Aromatase: An enzyme that converts androgens into estrogens.
Chromosomal sex: The sex of an individual as determined by the sex chromosomes (typically XX or XY) received at the time of fertilization.
Defeminization: The removal of the potential for female traits.
Demasculinization: The removal of the potential for male traits.
Dihydrotestosterone (DHT): A primary androgen that is an androgenic steroid product of testosterone and binds strongly to androgen receptors.
Endocrine gland: A ductless gland from which hormones are released into the blood system in response to specific biological signals.
Estrogen: Any of the C18 class of steroid hormones, so named because of the estrus-generating properties in females. Biologically important estrogens include estradiol and estriol.
Feminization: The induction of female traits.
Gonadal sex: The sex of an individual as determined by the possession of either ovaries or testes. Females have ovaries, whereas males have testes.
Hormone: An organic chemical messenger released from endocrine cells that travels through the blood to interact with target cells at some distance to cause a biological response.
Masculinization: The induction of male traits.
Maternal behavior: Parental behavior performed by the mother or other female.
Neurotransmitter: A chemical messenger that travels between neurons to provide communication. Some neurotransmitters, such as norepinephrine, can leak into the blood system and act as hormones.
Oxytocin: A peptide hormone secreted by the pituitary gland to trigger lactation, as well as social bonding.
Parental behavior: Behaviors performed in relation to one’s offspring that contribute directly to the survival of those offspring.
Paternal behavior: Parental behavior performed by the father or other male.
Progesterone: A primary progestin that is involved in pregnancy and mating behaviors.
Progestin: A class of C21 steroid hormones named for their progestational (pregnancy-supporting) effects. Progesterone is a common progestin.
Prohormone: A molecule that can act as a hormone itself or be converted into another hormone with different properties. For example, testosterone can serve as a hormone or as a prohormone for either dihydrotestosterone or estradiol.
Prolactin: A protein hormone that is highly conserved throughout the animal kingdom. It has many biological functions associated with reproduction and synergistic actions with steroid hormones.
Receptor: A chemical structure on the cell surface or inside of a cell that has an affinity for a specific chemical configuration of a hormone, neurotransmitter, or other compound.
Sex determination: The point at which an individual begins to develop as either a male or a female. In animals that have sex chromosomes, this occurs at fertilization. Females are XX and males are XY. All eggs bear X chromosomes, whereas sperm can bear either an X or a Y chromosome. Thus, it is the males that determine the sex of the offspring.
Sex differentiation: The process by which individuals develop the characteristics associated with being male or female. Differential exposure to gonadal steroids during early development causes sexual differentiation of several structures, including the brain.
Target cell: A cell that has receptors for a specific chemical messenger (hormone or neurotransmitter).
Testosterone: The primary androgen secreted by the testes of most vertebrate animals, including men.
textbooks/socialsci/Psychology/Introductory_Psychology/Psychology_(Noba)/Chapter_1%3A_Biological_Basis_of_Behavior/1.5%3A_Hormones_and_Behavior.txt
By David M. Buss University of Texas at Austin Evolution or change over time occurs through the processes of natural and sexual selection. In response to problems in our environment, we adapt both physically and psychologically to ensure our survival and reproduction. Sexual selection theory describes how evolution has shaped us to provide a mating advantage rather than just a survival advantage and occurs through two distinct pathways: intrasexual competition and intersexual selection. Gene selection theory, the modern explanation behind evolutionary biology, occurs through the desire for gene replication. Evolutionary psychology connects evolutionary principles with modern psychology and focuses primarily on psychological adaptations: changes in the way we think in order to improve our survival. Two major evolutionary psychological theories are described: Sexual strategies theory describes the psychology of human mating strategies and the ways in which women and men differ in those strategies. Error management theory describes the evolution of biases in the way we think about everything. Learning objectives • Learn what “evolution” means. • Define the primary mechanisms by which evolution takes place. • Identify the two major classes of adaptations. • Define sexual selection and its two primary processes. • Define gene selection theory. • Understand psychological adaptations. • Identify the core premises of sexual strategies theory. • Identify the core premises of error management theory, and provide two empirical examples of adaptive cognitive biases. Introduction If you have ever been on a first date, you’re probably familiar with the anxiety of trying to figure out what clothes to wear or what perfume or cologne to put on. In fact, you may even consider flossing your teeth for the first time all year. When considering why you put in all this work, you probably recognize that you’re doing it to impress the other person. But how did you learn these particular behaviors? Where did you get the idea that a first date should be at a nice restaurant or someplace unique? It is possible that we have been taught these behaviors by observing others. It is also possible, however, that these behaviors—the fancy clothes, the expensive restaurant—are biologically programmed into us. That is, just as peacocks display their feathers to show how attractive they are, or some lizards do push-ups to show how strong they are, when we style our hair or bring a gift to a date, we’re trying to communicate to the other person: “Hey, I’m a good mate! Choose me! Choose me!" However, we all know that our ancestors hundreds of thousands of years ago weren’t driving sports cars or wearing designer clothes to attract mates. So how could someone ever say that such behaviors are “biologically programmed” into us? Well, even though our ancestors might not have been doing these specific actions, these behaviors are the result of the same driving force: the powerful influence of evolution. Yes, evolution—certain traits and behaviors developing over time because they are advantageous to our survival. In the case of dating, doing something like offering a gift might represent more than a nice gesture. Just as chimpanzees will give food to mates to show they can provide for them, when you offer gifts to your dates, you are communicating that you have the money or “resources” to help take care of them. 
And even though the person receiving the gift may not realize it, the same evolutionary forces are influencing his or her behavior as well. The receiver of the gift evaluates not only the gift but also the gift-giver's clothes, physical appearance, and many other qualities, to determine whether the individual is a suitable mate. But because these evolutionary processes are hardwired into us, it is easy to overlook their influence. To broaden your understanding of evolutionary processes, this module will present some of the most important elements of evolution as they impact psychology. Evolutionary theory helps us piece together the story of how we humans have prospered. It also helps to explain why we behave as we do on a daily basis in our modern world: why we bring gifts on dates, why we get jealous, why we crave our favorite foods, why we protect our children, and so on. Evolution may seem like a historical concept that applies only to our ancient ancestors but, in truth, it is still very much a part of our modern daily lives. Basics of Evolutionary Theory Evolution simply means change over time. Many think of evolution as the development of traits and behaviors that allow us to survive this “dog-eat-dog” world, like strong leg muscles to run fast, or fists to punch and defend ourselves. However, physical survival is only important if it eventually contributes to successful reproduction. That is, even if you live to be a 100-year-old, if you fail to mate and produce children, your genes will die with your body. Thus, reproductive success, not survival success, is the engine of evolution by natural selection. Every mating success by one person means the loss of a mating opportunity for another. Yet every living human being is an evolutionary success story. Each of us is descended from a long and unbroken line of ancestors who triumphed over others in the struggle to survive (at least long enough to mate) and reproduce. However, in order for our genes to endure over time—to survive harsh climates, to defeat predators—we have inherited adaptive, psychological processes designed to ensure success. At the broadest level, we can think of organisms, including humans, as having two large classes of adaptations—or traits and behaviors that evolved over time to increase our reproductive success. The first class of adaptations are called survival adaptations: mechanisms that helped our ancestors handle the “hostile forces of nature.” For example, in order to survive very hot temperatures, we developed sweat glands to cool ourselves. In order to survive very cold temperatures, we developed shivering mechanisms (the speedy contraction and expansion of muscles to produce warmth). Other examples of survival adaptations include developing a craving for fats and sugars, encouraging us to seek out particular foods rich in fats and sugars that keep us going longer during food shortages. Some threats, such as snakes, spiders, darkness, heights, and strangers, often produce fear in us, which encourages us to avoid them and thereby stay safe. These are also examples of survival adaptations. However, all of these adaptations are for physical survival, whereas the second class of adaptations are for reproduction, and help us compete for mates. These adaptations are described in an evolutionary theory proposed by Charles Darwin, called sexual selection theory. 
Sexual Selection Theory Darwin noticed that there were many traits and behaviors of organisms that could not be explained by “survival selection.” For example, the brilliant plumage of peacocks should actually lower their rates of survival. That is, the peacocks’ feathers act like a neon sign to predators, advertising “Easy, delicious dinner here!” But if these bright feathers only lower peacocks’ chances at survival, why do they have them? The same can be asked of similar characteristics of other animals, such as the large antlers of male stags or the wattles of roosters, which also seem to be unfavorable to survival. Again, if these traits only make the animals less likely to survive, why did they develop in the first place? And how have these animals continued to survive with these traits over thousands and thousands of years? Darwin’s answer to this conundrum was the theory of sexual selection: the evolution of characteristics, not because of survival advantage, but because of mating advantage. Sexual selection occurs through two processes. The first, intrasexual competition, occurs when members of one sex compete against each other, and the winner gets to mate with a member of the opposite sex. Male stags, for example, battle with their antlers, and the winner (often the stronger one with larger antlers) gains mating access to the female. That is, even though large antlers make it harder for the stags to run through the forest and evade predators (which lowers their survival success), they provide the stags with a better chance of attracting a mate (which increases their reproductive success). Similarly, human males sometimes also compete against each other in physical contests: boxing, wrestling, karate, or group-on-group sports, such as football. Even though engaging in these activities poses a "threat" to their survival success, as with the stag, the victors are often more attractive to potential mates, increasing their reproductive success. Thus, whatever qualities lead to success in intrasexual competition are then passed on with greater frequency due to their association with greater mating success. The second process of sexual selection is preferential mate choice, also called intersexual selection. In this process, if members of one sex are attracted to certain qualities in mates—such as brilliant plumage, signs of good health, or even intelligence—those desired qualities get passed on in greater numbers, simply because their possessors mate more often. For example, the colorful plumage of peacocks exists due to a long evolutionary history of peahens’ (the term for female peacocks) attraction to males with brilliantly colored feathers. In all sexually-reproducing species, adaptations in both sexes (males and females) exist due to survival selection and sexual selection. However, unlike other animals where one sex has dominant control over mate choice, humans have “mutual mate choice.” That is, both women and men typically have a say in choosing their mates. And both mates value qualities such as kindness, intelligence, and dependability that are beneficial to long-term relationships—qualities that make good partners and good parents. Gene Selection Theory In modern evolutionary theory, all evolutionary processes boil down to an organism’s genes. Genes are the basic “units of heredity,” or the information that is passed along in DNA that tells the cells and molecules how to “build” the organism and how that organism should behave. 
Genes that are better able to encourage the organism to reproduce, and thus replicate themselves in the organism’s offspring, have an advantage over competing genes that are less able. For example, take female sloths: In order to attract a mate, they will scream as loudly as they can, to let potential mates know where they are in the thick jungle. Now, consider two types of genes in female sloths: one gene that allows them to scream extremely loudly, and another that only allows them to scream moderately loudly. In this case, the sloth with the gene that allows her to shout louder will attract more mates—increasing reproductive success—which ensures that her genes are more readily passed on than those of the quieter sloth. Essentially, genes can boost their own replicative success in two basic ways. First, they can influence the odds for survival and reproduction of the organism they are in (individual reproductive success or fitness—as in the example with the sloths). Second, genes can also influence the organism to help other organisms who also likely contain those genes—known as “genetic relatives”—to survive and reproduce (which is called inclusive fitness). For example, why do human parents tend to help their own kids with the financial burdens of a college education and not the kids next door? Well, having a college education increases one’s attractiveness to other mates, which increases one’s likelihood for reproducing and passing on genes. And because parents’ genes are in their own children (and not the neighborhood children), funding their children’s educations increases the likelihood that the parents’ genes will be passed on. Understanding gene replication is the key to understanding modern evolutionary theory. It also fits well with many evolutionary psychological theories. However, for the time being, we’ll ignore genes and focus primarily on actual adaptations that evolved because they helped our ancestors survive and/or reproduce. Evolutionary Psychology Evolutionary psychology aims the lens of modern evolutionary theory on the workings of the human mind. It focuses primarily on psychological adaptations: mechanisms of the mind that have evolved to solve specific problems of survival or reproduction. These kinds of adaptations are in contrast to physiological adaptations, which are adaptations that occur in the body as a consequence of one’s environment. One example of a physiological adaptation is how our skin makes calluses. First, there is an “input,” such as repeated friction to the skin on the bottom of our feet from walking. Second, there is a “procedure,” in which the skin grows new skin cells at the afflicted area. Third, an actual callus forms as an “output” to protect the underlying tissue—the final outcome of the physiological adaptation (i.e., tougher skin to protect repeatedly scraped areas). On the other hand, a psychological adaptation is a development or change of a mechanism in the mind. For example, take sexual jealousy. First, there is an “input,” such as a romantic partner flirting with a rival. Second, there is a “procedure,” in which the person evaluates the threat the rival poses to the romantic relationship. Third, there is a behavioral output, which might range from vigilance (e.g., snooping through a partner’s email) to violence (e.g., threatening the rival). Evolutionary psychology is fundamentally an interactionist framework, or a theory that takes into account multiple factors when determining the outcome. 
For example, jealousy, like a callus, doesn’t simply pop up out of nowhere. There is an “interaction” between the environmental trigger (e.g., the flirting; the repeated rubbing of the skin) and the initial response (e.g., evaluation of the flirter’s threat; the forming of new skin cells) to produce the outcome. In evolutionary psychology, culture also has a major effect on psychological adaptations. For example, status within one’s group is important in all cultures for achieving reproductive success, because higher status makes someone more attractive to mates. In individualistic cultures, such as the United States, status is heavily determined by individual accomplishments. But in more collectivist cultures, such as Japan, status is more heavily determined by contributions to the group and by that group’s success. For example, consider a group project. If you were to put in most of the effort on a successful group project, the culture in the United States reinforces the psychological adaptation to try to claim that success for yourself (because individual achievements are rewarded with higher status). However, the culture in Japan reinforces the psychological adaptation to attribute that success to the whole group (because collective achievements are rewarded with higher status). Another example of cultural input is the importance of virginity as a desirable quality for a mate. Cultural norms that advise against premarital sex persuade people to ignore their own basic interests because they know that virginity will make them more attractive marriage partners. Evolutionary psychology, in short, does not predict rigid robotic-like “instincts.” That is, there isn’t one rule that works all the time. Rather, evolutionary psychology studies flexible, environmentally-connected and culturally-influenced adaptations that vary according to the situation. Psychological adaptations are hypothesized to be wide-ranging, and include food preferences, habitat preferences, mate preferences, and specialized fears. These psychological adaptations also include many traits that improve people's ability to live in groups, such as the desire to cooperate and make friends, or the inclination to spot and avoid frauds, punish rivals, establish status hierarchies, nurture children, and help genetic relatives. Research programs in evolutionary psychology develop and empirically test predictions about the nature of psychological adaptations. Below, we highlight a few evolutionary psychological theories and their associated research approaches. Sexual Strategies Theory Sexual strategies theory is based on sexual selection theory. It proposes that humans have evolved a list of different mating strategies, both short-term and long-term, that vary depending on culture, social context, parental influence, and personal mate value (desirability in the “mating market”). In its initial formulation, sexual strategies theory focused on the differences between men and women in mating preferences and strategies (Buss & Schmitt, 1993). It started by looking at the minimum parental investment needed to produce a child. For women, even the minimum investment is significant: after becoming pregnant, they have to carry that child for nine months inside of them. For men, on the other hand, the minimum investment to produce the same child is considerably smaller—simply the act of sex. These differences in parental investment have an enormous impact on sexual strategies. 
For a woman, the risks associated with making a poor mating choice are high. She might get pregnant by a man who will not help to support her and her children, or who might have poor-quality genes. And because the stakes are higher for a woman, wise mating decisions for her are much more valuable. For men, on the other hand, the need to focus on making wise mating decisions isn’t as important. That is, unlike women, men 1) don’t biologically have the child growing inside of them for nine months, and 2) do not have as high a cultural expectation to raise the child. This logic leads to a powerful set of predictions: In short-term mating, women will likely be choosier than men (because the costs of getting pregnant are so high), while men, on average, will likely engage in more casual sexual activities (because this cost is greatly lessened). Due to this, men will sometimes deceive women about their long-term intentions for the benefit of short-term sex, and men are more likely than women to lower their mating standards for short-term mating situations.

An extensive body of empirical evidence supports these and related predictions (Buss & Schmitt, 2011). Men express a desire for a larger number of sex partners than women do. They let less time elapse before seeking sex. They are more willing to consent to sex with strangers and are less likely to require emotional involvement with their sex partners. They have more frequent sexual fantasies and fantasize about a larger variety of sex partners. They are more likely to regret missed sexual opportunities. And they lower their standards in short-term mating, showing a willingness to mate with a larger variety of women as long as the costs and risks are low.

However, in situations where both the man and woman are interested in long-term mating, both sexes tend to invest substantially in the relationship and in their children. In these cases, the theory predicts that both sexes will be extremely choosy when pursuing a long-term mating strategy. Much empirical research supports this prediction, as well. In fact, the qualities women and men generally look for when choosing long-term mates are very similar: both want mates who are intelligent, kind, understanding, healthy, dependable, honest, loyal, loving, and adaptable.

Nonetheless, women and men do differ in their preferences for a few key qualities in long-term mating, because of somewhat distinct adaptive problems. Modern women have inherited the evolutionary trait to desire mates who possess resources, have qualities linked with acquiring resources (e.g., ambition, wealth, industriousness), and are willing to share those resources with them. On the other hand, men more strongly desire youth and health in women, as both are cues to fertility. These male and female differences are universal in humans. They were first documented in 37 different cultures, from Australia to Zambia (Buss, 1989), and have been replicated by dozens of researchers in dozens of additional cultures (for summaries, see Buss, 2012).

As we know, though, having these mating preferences (e.g., for men with resources or for fertile women) does not mean people always get what they want. There are countless other factors that influence whom people ultimately select as their mate.
For example, the sex ratio (the ratio of men to women in the mating pool), cultural practices (such as arranged marriages, which inhibit individuals’ freedom to act on their preferred mating strategies), the strategies of others (e.g., if everyone else is pursuing short-term sex, it’s more difficult to pursue a long-term mating strategy), and many other factors all influence who we select as our mates.

Sexual strategies theory—anchored in sexual selection theory—predicts specific similarities and differences in men’s and women’s mating preferences and strategies. Whether we seek short-term or long-term relationships, many personality, social, cultural, and ecological factors will all influence who our partners will be.

Error Management Theory

Error management theory (EMT) deals with the evolution of how we think, make decisions, and evaluate uncertain situations—that is, situations where there’s no clear answer about how we should behave (Haselton & Buss, 2000; Haselton, Nettle, & Andrews, 2005). Consider, for example, walking through the woods at dusk. You hear a rustle in the leaves on the path in front of you. It could be a snake. Or, it could just be the wind blowing the leaves. Because you can’t really tell why the leaves rustled, it’s an uncertain situation. The important question then is, what are the costs of errors in judgment? That is, if you conclude that it’s a dangerous snake and avoid the leaves, the costs are minimal (i.e., you simply make a short detour around them). However, if you assume the leaves are safe and simply walk over them—when in fact it is a dangerous snake—the decision could cost you your life.

Now, think about our evolutionary history and how generation after generation was confronted with similar decisions, where one option carried a small, certain cost (walking around the leaves and not getting bitten) and the other carried a rare but potentially enormous cost (walking through the leaves and getting bitten). These kinds of choices are called “cost asymmetries.” If during our evolutionary history we encountered decisions like these generation after generation, over time an adaptive bias would be created: we would make sure to err in favor of the least costly (in this case, least dangerous) option (e.g., walking around the leaves). To put it another way, EMT predicts that whenever uncertain situations present us with a safer versus more dangerous decision, we will psychologically adapt to prefer choices that minimize the cost of errors.

EMT is a general evolutionary psychological theory that can be applied to many different domains of our lives, but a specific example of it is the visual descent illusion. To illustrate: Have you ever thought it would be no problem to jump off of a ledge, but as soon as you stood up there, it suddenly looked much higher than you thought? The visual descent illusion (Jackson & Cormack, 2008) states that people will overestimate the distance when looking down from a height (compared to looking up) so that people will be especially wary of falling from great heights—which would result in injury or death. Another example of EMT is the auditory looming bias: Have you ever noticed how an ambulance seems closer when it’s coming toward you, but suddenly seems far away once it has passed? With the auditory looming bias, people overestimate how close objects are when the sound is moving toward them compared to when it is moving away from them. From our evolutionary history, humans learned, “It’s better to be safe than sorry.” Therefore, if we think that a threat is closer to us when it’s moving toward us (because it seems louder), we will be quicker to act and escape. In this regard, there may be times we ran away when we didn’t need to (a false alarm), but wasting that time is a less costly mistake than not acting in the first place when a real threat does exist.
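The cost asymmetry at the heart of this argument can be made concrete with a little arithmetic. The short Python sketch below is purely illustrative: the probability and cost values are invented for the rustling-leaves example, not taken from any study, and only the asymmetry between them matters. It shows why always detouring can beat always walking through even when a snake is very unlikely.

# Illustrative expected-cost comparison for the rustling-leaves example.
# All numbers are hypothetical; only the cost asymmetry drives the conclusion.
p_snake = 0.01          # assumed chance that the rustle really is a snake
cost_detour = 1.0       # small, certain cost of walking around the leaves
cost_bite = 10_000.0    # huge cost of stepping on a real snake

expected_cost_always_detour = cost_detour           # paid every time
expected_cost_walk_through = p_snake * cost_bite    # paid only when a snake is present

print(f"Always detour:       expected cost = {expected_cost_always_detour:.2f}")
print(f"Always walk through: expected cost = {expected_cost_walk_through:.2f}")

# Detouring has the lower expected cost whenever p_snake > cost_detour / cost_bite
# (here, 0.0001), which is why selection can favor a bias toward the cheaper error.

The same logic applies to the looming and overperception biases discussed in this section: when misses are far costlier than false alarms, a systematic lean toward false alarms minimizes the expected cost of being wrong.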
EMT has also been used to predict adaptive biases in the domain of mating. Consider something as simple as a smile. In one case, a smile from a potential mate could be a sign of sexual or romantic interest. On the other hand, it may just signal friendliness. Because of the costs to men of missing out on chances for reproduction, EMT predicts that men have a sexual overperception bias: they often misread sexual interest from a woman when really it’s just a friendly smile or touch. In the mating domain, the sexual overperception bias is one of the best-documented phenomena. It’s been shown in studies in which men and women rated the sexual interest between people in photographs and videotaped interactions. As well, it’s been shown in the laboratory with participants engaging in actual “speed dating,” where the men interpreted sexual interest from the women more often than the women actually intended it (Perilloux, Easton, & Buss, 2012). In short, EMT predicts that men, more than women, will over-infer sexual interest based on minimal cues, and empirical research confirms this adaptive mating bias.

Conclusion

Sexual strategies theory and error management theory are two evolutionary psychological theories that have received much empirical support from dozens of independent researchers. But there are many other evolutionary psychological theories, such as social exchange theory, that also make predictions about our modern-day behavior and preferences. The merits of each evolutionary psychological theory, however, must be evaluated separately and treated like any scientific theory. That is, we should only trust their predictions and claims to the extent they are supported by scientific studies. However, even if a theory is scientifically grounded, just because a psychological adaptation was advantageous in our history, it doesn’t mean it’s still useful today. For example, even though women may have preferred men with resources generations ago, our modern society has advanced such that these preferences are no longer apt or necessary. Nonetheless, it’s important to consider how our evolutionary history has shaped our automatic or “instinctual” desires and reflexes of today, so that we can better shape them for the future ahead.

Outside Resources

Web: FAQs http://www.anth.ucsb.edu/projects/human/evpsychfaq.html
Web: Articles and books on evolutionary psychology http://homepage.psy.utexas.edu/homep...Group/BussLAB/
Web: Main international scientific organization for the study of evolution and human behavior, HBES http://www.hbes.com/

Discussion Questions

1. How does change take place over time in the living world?
2. Which two potential psychological adaptations to problems of survival are not discussed in this module?
3. What are the psychological and behavioral implications of the fact that women bear heavier costs to produce a child than men do?
4. Can you formulate a hypothesis about an error management bias in the domain of social interaction?

Vocabulary

Adaptations: Evolved solutions to problems that historically contributed to reproductive success.
Error management theory (EMT): A theory of selection under conditions of uncertainty in which recurrent cost asymmetries of judgment or inference favor the evolution of adaptive cognitive biases that function to minimize the more costly errors.
Evolution: Change over time.
Gene Selection Theory: The modern theory of evolution by selection, by which differential gene replication is the defining process of evolutionary change.
Intersexual selection: A process of sexual selection by which evolution (change) occurs as a consequence of the mate preferences of one sex exerting selection pressure on members of the opposite sex.
Intrasexual competition: A process of sexual selection by which members of one sex compete with each other, and the victors gain preferential mating access to members of the opposite sex.
Natural selection: Differential reproductive success as a consequence of differences in heritable attributes.
Psychological adaptations: Mechanisms of the mind that evolved to solve specific problems of survival or reproduction; conceptualized as information-processing devices.
Sexual selection: The evolution of characteristics because of the mating advantage they give organisms.
Sexual strategies theory: A comprehensive evolutionary theory of human mating that defines the menu of mating strategies humans pursue (e.g., short-term casual sex, long-term committed mating), the adaptive problems women and men face when pursuing these strategies, and the evolved solutions to these mating problems.
textbooks/socialsci/Psychology/Introductory_Psychology/Psychology_(Noba)/Chapter_1%3A_Biological_Basis_of_Behavior/1.6%3A_Evolutionary_Theories_in_Psychology.txt
By Eric Turkheimer University of Virginia People have a deep intuition about what has been called the “nature–nurture question.” Some aspects of our behavior feel as though they originate in our genetic makeup, while others feel like the result of our upbringing or our own hard work. The scientific field of behavior genetics attempts to study these differences empirically, either by examining similarities among family members with different degrees of genetic relatedness, or, more recently, by studying differences in the DNA of people with different behavioral traits. The scientific methods that have been developed are ingenious, but often inconclusive. Many of the difficulties encountered in the empirical science of behavior genetics turn out to be conceptual, and our intuitions about nature and nurture get more complicated the harder we think about them. In the end, it is an oversimplification to ask how “genetic” some particular behavior is. Genes and environments always combine to produce behavior, and the real science is in the discovery of how they combine for a given behavior. Learning objectives • Understand what the nature–nurture debate is and why the problem fascinates us. • Understand why nature–nurture questions are difficult to study empirically. • Know the major research designs that can be used to study nature–nurture questions. • Appreciate the complexities of nature–nurture and why questions that seem simple turn out not to have simple answers. Introduction There are three related problems at the intersection of philosophy and science that are fundamental to our understanding of our relationship to the natural world: the mind–body problem, the free will problem, and the nature–nurture problem. These great questions have a lot in common. Everyone, even those without much knowledge of science or philosophy, has opinions about the answers to these questions that come simply from observing the world we live in. Our feelings about our relationship with the physical and biological world often seem incomplete. We are in control of our actions in some ways, but at the mercy of our bodies in others; it feels obvious that our consciousness is some kind of creation of our physical brains, at the same time we sense that our awareness must go beyond just the physical. This incomplete knowledge of our relationship with nature leaves us fascinated and a little obsessed, like a cat that climbs into a paper bag and then out again, over and over, mystified every time by a relationship between inner and outer that it can see but can’t quite understand. It may seem obvious that we are born with certain characteristics while others are acquired, and yet of the three great questions about humans’ relationship with the natural world, only nature–nurture gets referred to as a “debate.” In the history of psychology, no other question has caused so much controversy and offense: We are so concerned with nature–nurture because our very sense of moral character seems to depend on it. While we may admire the athletic skills of a great basketball player, we think of his height as simply a gift, a payoff in the “genetic lottery.” For the same reason, no one blames a short person for his height or someone’s congenital disability on poor decisions: To state the obvious, it’s “not their fault.” But we do praise the concert violinist (and perhaps her parents and teachers as well) for her dedication, just as we condemn cheaters, slackers, and bullies for their bad behavior. 
The problem is, most human characteristics aren’t usually as clear-cut as height or instrument-mastery, affirming our nature–nurture expectations strongly one way or the other. In fact, even the great violinist might have some inborn qualities—perfect pitch, or long, nimble fingers—that support and reward her hard work. And the basketball player might have eaten a diet while growing up that promoted his genetic tendency for being tall. When we think about our own qualities, they seem under our control in some respects, yet beyond our control in others. And often the traits that don’t seem to have an obvious cause are the ones that concern us the most and are far more personally significant. What about how much we drink or worry? What about our honesty, or religiosity, or sexual orientation? They all come from that uncertain zone, neither fixed by nature nor totally under our own control. One major problem with answering nature-nurture questions about people is, how do you set up an experiment? In nonhuman animals, there are relatively straightforward experiments for tackling nature–nurture questions. Say, for example, you are interested in aggressiveness in dogs. You want to test for the more important determinant of aggression: being born to aggressive dogs or being raised by them. You could mate two aggressive dogs—angry Chihuahuas—together, and mate two nonaggressive dogs—happy beagles—together, then switch half the puppies from each litter between the different sets of parents to raise. You would then have puppies born to aggressive parents (the Chihuahuas) but being raised by nonaggressive parents (the Beagles), and vice versa, in litters that mirror each other in puppy distribution. The big questions are: Would the Chihuahua parents raise aggressive beagle puppies? Would the beagle parents raise nonaggressive Chihuahua puppies? Would the puppies’ nature win out, regardless of who raised them? Or... would the result be a combination of nature and nurture? Much of the most significant nature–nurture research has been done in this way (Scott & Fuller, 1998), and animal breeders have been doing it successfully for thousands of years. In fact, it is fairly easy to breed animals for behavioral traits. With people, however, we can’t assign babies to parents at random, or select parents with certain behavioral characteristics to mate, merely in the interest of science (though history does include horrific examples of such practices, in misguided attempts at “eugenics,” the shaping of human characteristics through intentional breeding). In typical human families, children’s biological parents raise them, so it is very difficult to know whether children act like their parents due to genetic (nature) or environmental (nurture) reasons. Nevertheless, despite our restrictions on setting up human-based experiments, we do see real-world examples of nature-nurture at work in the human sphere—though they only provide partial answers to our many questions. The science of how genes and environments work together to influence behavior is called behavioral genetics. The easiest opportunity we have to observe this is the adoption study. When children are put up for adoption, the parents who give birth to them are no longer the parents who raise them. 
This setup isn’t quite the same as the experiments with dogs (children aren’t assigned to random adoptive parents in order to suit the particular interests of a scientist), but adoption still tells us some interesting things, or at least confirms some basic expectations. For instance, if the biological child of tall parents were adopted into a family of short people, do you suppose the child’s growth would be affected? What about the biological child of a Spanish-speaking family adopted at birth into an English-speaking family? What language would you expect the child to speak? And what might these outcomes tell you about the difference between height and language in terms of nature-nurture?

Another option for observing nature-nurture in humans involves twin studies. There are two types of twins: monozygotic (MZ) and dizygotic (DZ). Monozygotic twins, also called “identical” twins, result from a single zygote (fertilized egg) and have the same DNA. They are essentially clones. Dizygotic twins, also known as “fraternal” twins, develop from two zygotes and share 50% of their DNA. Fraternal twins are ordinary siblings who happen to have been born at the same time. To analyze nature–nurture using twins, we compare the similarity of MZ and DZ pairs. Sticking with the features of height and spoken language, let’s take a look at how nature and nurture apply: Identical twins, unsurprisingly, are almost perfectly similar for height. The heights of fraternal twins, however, are like any other sibling pairs: more similar to each other than to people from other families, but hardly identical. This contrast between twin types gives us a clue about the role genetics plays in determining height. Now consider spoken language. If one identical twin speaks Spanish at home, the co-twin with whom she is raised almost certainly does too. But the same would be true for a pair of fraternal twins raised together. In terms of spoken language, fraternal twins are just as similar as identical twins, so it appears that the genetic match of identical twins doesn’t make much difference.

Twin and adoption studies are two instances of a much broader class of methods for observing nature-nurture called quantitative genetics, the scientific discipline in which similarities among individuals are analyzed based on how biologically related they are. We can do these studies with siblings and half-siblings, cousins, twins who have been separated at birth and raised separately (Bouchard, Lykken, McGue, & Segal, 1990; such twins are very rare and play a smaller role than is commonly believed in the science of nature–nurture), or with entire extended families (see Plomin, DeFries, Knopik, & Neiderhiser, 2012, for a complete introduction to research methods relevant to nature–nurture).

For better or for worse, contentions about nature–nurture have intensified because quantitative genetics produces a number called a heritability coefficient, varying from 0 to 1, that is meant to provide a single measure of genetics’ influence on a trait. In a general way, a heritability coefficient measures how strongly differences among individuals are related to differences among their genes. But beware: Heritability coefficients, although simple to compute, are deceptively difficult to interpret.
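To make the number less abstract, one classic way to estimate a heritability coefficient from twin data is Falconer’s approximation, which the module does not spell out: compare how strongly identical and fraternal twin pairs correlate on the trait. The Python sketch below is only a rough illustration; the correlations are invented to echo the height and spoken-language examples above, and real analyses use more sophisticated models.

# Falconer's approximation: h^2 is roughly 2 * (r_MZ - r_DZ), where
# r_MZ is the trait correlation across identical twin pairs and
# r_DZ is the correlation across fraternal twin pairs.
# The correlations below are invented for illustration, not real estimates.

def falconer_heritability(r_mz: float, r_dz: float) -> float:
    """Rough heritability estimate from twin correlations, clamped to the 0-1 range."""
    return max(0.0, min(1.0, 2.0 * (r_mz - r_dz)))

print(falconer_heritability(r_mz=0.90, r_dz=0.50))  # a height-like trait -> 0.8
print(falconer_heritability(r_mz=0.99, r_dz=0.99))  # language spoken at home -> 0.0

The arithmetic is easy; as the rest of the module stresses, deciding what the resulting number means is the hard part.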
Nevertheless, numbers that provide simple answers to complicated questions tend to have a strong influence on the human imagination, and a great deal of time has been spent discussing whether the heritability of intelligence or personality or depression is equal to one number or another. One reason nature–nurture continues to fascinate us so much is that we live in an era of great scientific discovery in genetics, comparable to the times of Copernicus, Galileo, and Newton, with regard to astronomy and physics. Every day, it seems, new discoveries are made, new possibilities proposed. When Francis Galton first started thinking about nature–nurture in the late-19th century he was very influenced by his cousin, Charles Darwin, but genetics per se was unknown. Mendel’s famous work with peas, conducted at about the same time, went undiscovered for 20 years; quantitative genetics was developed in the 1920s; DNA was discovered by Watson and Crick in the 1950s; the human genome was completely sequenced at the turn of the 21st century; and we are now on the verge of being able to obtain the specific DNA sequence of anyone at a relatively low cost. No one knows what this new genetic knowledge will mean for the study of nature–nurture, but as we will see in the next section, answers to nature–nurture questions have turned out to be far more difficult and mysterious than anyone imagined. What Have We Learned About Nature–Nurture? It would be satisfying to be able to say that nature–nurture studies have given us conclusive and complete evidence about where traits come from, with some traits clearly resulting from genetics and others almost entirely from environmental factors, such as childrearing practices and personal will; but that is not the case. Instead, everything has turned out to have some footing in genetics. The more genetically-related people are, the more similar they are—for everything: height, weight, intelligence, personality, mental illness, etc. Sure, it seems like common sense that some traits have a genetic bias. For example, adopted children resemble their biological parents even if they have never met them, and identical twins are more similar to each other than are fraternal twins. And while certain psychological traits, such as personality or mental illness (e.g., schizophrenia), seem reasonably influenced by genetics, it turns out that the same is true for political attitudes, how much television people watch (Plomin, Corley, DeFries, & Fulker, 1990), and whether or not they get divorced (McGue & Lykken, 1992). It may seem surprising, but genetic influence on behavior is a relatively recent discovery. In the middle of the 20th century, psychology was dominated by the doctrine of behaviorism, which held that behavior could only be explained in terms of environmental factors. Psychiatry concentrated on psychoanalysis, which probed for roots of behavior in individuals’ early life-histories. The truth is, neither behaviorism nor psychoanalysis is incompatible with genetic influences on behavior, and neither Freud nor Skinner was naive about the importance of organic processes in behavior. 
Nevertheless, in their day it was widely thought that children’s personalities were shaped entirely by imitating their parents’ behavior, and that schizophrenia was caused by certain kinds of “pathological mothering.” Whatever the outcome of our broader discussion of nature–nurture, the basic fact that the best predictors of an adopted child’s personality or mental health are found in the biological parents he or she has never met, rather than in the adoptive parents who raised him or her, presents a significant challenge to purely environmental explanations of personality or psychopathology. The message is clear: You can’t leave genes out of the equation. But keep in mind, no behavioral traits are completely inherited, so you can’t leave the environment out altogether, either.

Trying to untangle the various ways nature-nurture influences human behavior can be messy, and often common-sense notions can get in the way of good science. One very significant contribution of behavioral genetics that has changed psychology for good is worth keeping in mind: When your subjects are biologically related, no matter how clearly a situation may seem to point to environmental influence, it is never safe to interpret a behavior as wholly the result of nurture without further evidence. For example, when presented with data showing that children whose mothers read to them often are likely to have better reading scores in third grade, it is tempting to conclude that reading to your kids out loud is important to success in school; this may well be true, but the study as described is inconclusive, because there are genetic as well as environmental pathways between the parenting practices of mothers and the abilities of their children. This is a case where “correlation does not imply causation,” as they say. To establish that reading aloud causes success, a scientist can either study the problem in adoptive families (in which the genetic pathway is absent) or find a way to randomly assign children to oral reading conditions.

The outcomes of nature–nurture studies have fallen short of our expectations (of establishing clear-cut bases for traits) in many ways. The most disappointing outcome has been the inability to organize traits from more- to less-genetic. As noted earlier, everything has turned out to be at least somewhat heritable (passed down), yet nothing has turned out to be absolutely heritable, and there hasn’t been much consistency as to which traits are more heritable and which are less heritable once other considerations (such as how accurately the trait can be measured) are taken into account (Turkheimer, 2000). The problem is conceptual: The heritability coefficient, and, in fact, the whole quantitative structure that underlies it, does not match up with our nature–nurture intuitions. We want to know how “important” the roles of genes and environment are to the development of a trait, but in focusing on “important” maybe we’re emphasizing the wrong thing. First of all, genes and environment are both crucial to every trait; without genes the environment would have nothing to work on, and too, genes cannot develop in a vacuum. Even more important, because nature–nurture questions look at the differences among people, the cause of a given trait depends not only on the trait itself, but also on the differences in that trait between members of the group being studied.

The classic example of the heritability coefficient defying intuition is the trait of having two arms. No one would argue against the development of arms being a biological, genetic process. But fraternal twins are just as similar for “two-armedness” as identical twins, resulting in a heritability coefficient of zero for the trait of having two arms. Normally, according to the heritability model, this result (coefficient of zero) would suggest all nurture, no nature, but we know that’s not the case. The reason this result is not a tip-off that arm development is less genetic than we imagine is that people do not vary in the genes related to arm development—which essentially upends the heritability formula. In fact, in this instance, the opposite is likely true: the extent to which people differ in arm number is likely the result of accidents and, therefore, environmental. For reasons like these, we always have to be very careful when asking nature–nurture questions, especially when we try to express the answer in terms of a single number. The heritability of a trait is not simply a property of that trait, but a property of the trait in a particular context of relevant genes and environmental factors.
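The two-arms example is easier to see with the definition that sits behind the heritability coefficient. In the simplest quantitative-genetic model (a deliberate simplification that ignores gene-environment interaction and correlation), the variation in a trait across a population is split into a genetic part and an environmental part, and heritability is the genetic share:

h^2 = V_G / (V_G + V_E)

where V_G is the variance due to genetic differences among people and V_E is the variance due to environmental differences. Because essentially no one varies in the genes for building two arms, V_G for arm number is close to zero, so h^2 is close to zero as well, even though arm development is thoroughly genetic. The coefficient describes variation within a particular population, not how “genetic” the trait’s development is.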
No one would argue against the development of arms being a biological, genetic process. But fraternal twins are just as similar for “two-armedness” as identical twins, resulting in a heritability coefficient of zero for the trait of having two arms. Normally, according to the heritability model, this result (coefficient of zero) would suggest all nurture, no nature, but we know that’s not the case. The reason this result is not a tip-off that arm development is less genetic than we imagine is that people do not vary in the genes related to arm development—which leaves the heritability formula with no genetic variation to work with. In fact, in this instance, the opposite is likely true: the extent to which people differ in arm number is likely the result of accidents and is, therefore, environmental. For reasons like these, we always have to be very careful when asking nature–nurture questions, especially when we try to express the answer in terms of a single number. The heritability of a trait is not simply a property of that trait, but a property of the trait in a particular context of relevant genes and environmental factors. Another issue with the heritability coefficient is that it divides traits’ determinants into two portions—genes and environment—whose contributions are assumed to add up to the total variability. This is a little like asking how much of the experience of a symphony comes from the horns and how much from the strings; the ways instruments or genes integrate are more complex than that. It turns out that, for many traits, genetic differences affect behavior under some environmental circumstances but not others—a phenomenon called gene-environment interaction, or G x E. In one well-known example, Caspi et al. (2002) showed that among maltreated children, those who carried a particular allele of the MAOA gene showed a predisposition to violence and antisocial behavior, while those with other alleles did not. In children who had not been maltreated, by contrast, the gene had no effect. Making matters even more complicated are very recent studies of what is known as epigenetics (see module, “Epigenetics” http://noba.to/37p5cb8v), a process in which the DNA itself is modified by environmental events, and those epigenetic changes can then be transmitted to children. Some common questions about nature–nurture are: How susceptible is a trait to change? How malleable is it? Do we “have a choice” about it? These questions are much more complex than they may seem at first glance. For example, phenylketonuria is an inborn error of metabolism caused by a single gene; it prevents the body from metabolizing phenylalanine. Untreated, it causes intellectual disability and death. But it can be treated effectively by a straightforward environmental intervention: avoiding foods containing phenylalanine. Height seems like a trait firmly rooted in our nature and unchangeable, but the average height of many populations in Asia and Europe has increased significantly in the past 100 years, due to changes in diet and the alleviation of poverty. Even the most modern genetics has not provided definitive answers to nature–nurture questions. When it was first becoming possible to measure the DNA sequences of individual people, it was widely thought that we would quickly progress to finding the specific genes that account for behavioral characteristics, but that hasn’t happened.
There are a few rare genes that have been found to have significant (almost always negative) effects, such as the single gene that causes Huntington’s disease, or the apolipoprotein E (APOE) gene variant associated with a greatly increased risk of Alzheimer’s dementia. Aside from these rare genes of great effect, however, the genetic impact on behavior is spread across many genes, each with very small effects. For most behavioral traits, the effects are so small and distributed across so many genes that we have not been able to catalog them in a meaningful way. In fact, the same is true of environmental effects. We know that extreme environmental hardship causes catastrophic effects for many behavioral outcomes, but fortunately extreme environmental hardship is very rare. Within the normal range of environmental events, those responsible for differences (e.g., why some children in a suburban third-grade classroom perform better than others) are much more difficult to grasp. The difficulties with finding clear-cut solutions to nature–nurture problems bring us back to the other great questions about our relationship with the natural world: the mind-body problem and free will. Investigations into what we mean when we say we are aware of something reveal that consciousness is not simply the product of a particular area of the brain, nor does choice turn out to be an orderly activity that we can apply to some behaviors but not others. So it is with nature and nurture: What at first may seem to be a straightforward matter, able to be indexed with a single number, becomes more and more complicated the closer we look. The many questions we can ask about the intersection among genes, environments, and human traits—how sensitive are traits to environmental change, and how common are those influential environments; are parents or culture more relevant; how sensitive are traits to differences in genes, and how much do the relevant genes vary in a particular population; does the trait involve a single gene or a great many genes; is the trait more easily described in genetic or more-complex behavioral terms?—may have different answers, and the answer to one tells us little about the answers to the others. It is tempting to predict that as we come to understand the wide-ranging effects of genetic differences on all human characteristics—especially behavioral ones—our cultural, ethical, legal, and personal ways of thinking about ourselves will have to undergo profound changes in response. Perhaps criminal proceedings will consider genetic background. Parents, presented with the genetic sequence of their children, will be faced with difficult decisions about reproduction. These hopes or fears are often exaggerated. In some ways, our thinking may need to change—for example, when we consider the meaning behind the fundamental American principle that all men are created equal. Human beings differ, and like all evolved organisms they differ genetically. The Declaration of Independence predates Darwin and Mendel, but it is hard to imagine that Jefferson—whose genius encompassed botany as well as moral philosophy—would have been alarmed to learn about the genetic diversity of organisms. One of the most important things modern genetics has taught us is that almost all human behavior is too complex to be nailed down, even from the most complete genetic information, unless we’re looking at identical twins.
The science of nature and nurture has demonstrated that genetic differences among people are vital to human moral equality, freedom, and self-determination, not opposed to them. As Mordecai Kaplan said about the role of the past in Jewish theology, genetics gets a vote, not a veto, in the determination of human behavior. We should indulge our fascination with nature–nurture while resisting the temptation to oversimplify it. Outside Resources Web: Institute for Behavioral Genetics http://www.colorado.edu/ibg/ Discussion Questions 1. Is your personality more like one of your parents than the other? If you have a sibling, is his or her personality like yours? In your family, how did these similarities and differences develop? What do you think caused them? 2. Can you think of a human characteristic for which genetic differences would play almost no role? Defend your choice. 3. Do you think the time will come when we will be able to predict almost everything about someone by examining their DNA on the day they are born? 4. Identical twins are more similar than fraternal twins for the trait of aggressiveness, as well as for criminal behavior. Do these facts have implications for the courtroom? If it can be shown that a violent criminal had violent parents, should it make a difference in culpability or sentencing? Vocabulary Adoption study A behavior genetic research method that involves comparison of adopted children to their adoptive and biological parents. Behavioral genetics The empirical science of how genes and environments combine to generate behavior. Heritability coefficient An easily misinterpreted statistical construct that purports to measure the role of genetics in the explanation of differences among individuals. Quantitative genetics Scientific and mathematical methods for inferring genetic and environmental processes based on the degree of genetic and environmental similarity among organisms. Twin studies A behavior genetic research method that involves comparison of the similarity of identical (monozygotic; MZ) and fraternal (dizygotic; DZ) twins. 1.8: Epigenetics in Psychology By Ian Weaver Dalhousie University Early life experiences exert a profound and long-lasting influence on physical and mental health throughout life. The efforts to identify the primary causes of this have significantly benefited from studies of the epigenome—a dynamic layer of information associated with DNA that differs between individuals and can be altered through various experiences and environments. The epigenome has been heralded as a key “missing piece” of the etiological puzzle for understanding how development of psychological disorders may be influenced by the surrounding environment, in concordance with the genome. Understanding the mechanisms involved in the initiation, maintenance, and heritability of epigenetic states is thus an important aspect of research in current biology, particularly in the study of learning and memory, emotion, and social behavior in humans. Moreover, epigenetics in psychology provides a framework for understanding how the expression of genes is influenced by experiences and the environment to produce individual differences in behavior, cognition, personality, and mental health. In this module, we survey recent developments revealing epigenetic aspects of mental health and review some of the challenges of epigenetic approaches in psychology to help explain how nurture shapes nature.
textbooks/socialsci/Psychology/Introductory_Psychology/Psychology_(Noba)/Chapter_1%3A_Biological_Basis_of_Behavior/1.7%3A_The_Nature-Nurture_Question.txt
By Don Lucas and Jennifer Fox Northwest Vista College It’s natural to be curious about sexual anatomy and physiology. Being knowledgeable about sexual anatomy and physiology increases our potential for pleasure, physical and psychological health, and life satisfaction. Beyond personal curiosity, thoughtful discussions about sexual anatomy and physiology with sexual partners reduce the potential for miscommunication, unintended pregnancies, sexually transmitted infections, and sexual dysfunctions. Lastly, and most importantly, an appreciation of the biological and psychological forces motivating sexual curiosity and desire, and of the capacities of our brains, can enhance the health of relationships. Learning objectives • Explain why people are curious about their own sexual anatomies and physiologies. • List the sexual organs of the female and male. • Describe the sexual response cycle. • Distinguish between pleasure and reproduction as motives behind sexuality. • Compare the role of the central nervous system in motivating sexual behaviors with that of the autonomic nervous system. • Discuss the relationship between pregnancy and birth control. • Analyze how sexually transmitted infections are associated with sexual behaviors. • Understand the effects of sexual dysfunctions and their treatments on sexual behaviors. Introduction Most people are curious about sex. Google processes over 3.5 billion search queries per day (Google Search Statistics)—tens of millions of which, performed under the cloak of anonymity, are about sex. What are the most frequently asked questions concerning sex on Google? Are they about extramarital affairs? Kinky fantasies? Sexual positions? Surprisingly, no. Usually they are practical and straightforward, and tend to be about sexual anatomy (Stephens-Davidowitz, 2015)—for example, “How big should my penis be?” and, “Is it healthy for my vagina to smell like vinegar?” Further, Google reveals that people are much more concerned about their own sexual anatomies than the anatomies of others; for instance, men are 170 times more likely than women to pose questions about penises (Stephens-Davidowitz, 2015). The second most frequently asked questions about sex on Google are about sexual physiology—for example, “How can I make my boyfriend climax more quickly?” “Why is sex painful?” and, “What exactly is an orgasm?” These searches are clear indicators that people have a tremendous interest in very basic questions about sexual anatomy and physiology. However, the answers we get from friends, family, and even internet “authorities” to questions about sex are often unreliable (Fuxman et al., 2015; Simon & Daneback, 2013). For example, when Buhi and colleagues (2010) examined the content of 177 sexual-health websites, they found that nearly half contained inaccurate information. How about we—the authors of this module—make you a promise? If you learn this material, then we promise you won’t need nearly as many clandestine Google excursions, because this module contains unbiased and scientifically-based answers to many of the questions you likely have about sexual anatomy and physiology. Are you ready for a new twist on “sexually-explicit language”? Even though this module is about a fascinating topic—sex—it contains vocabulary that may be new or confusing to you. Learning this vocabulary may require extra effort, but if you understand these terms, you will understand sex and yourself better.
Masters and Johnson Although people have always had sex, the scientific study of it remained taboo until relatively recently. In fact, the study of sexual anatomy, physiology, and behavior wasn’t formally undertaken until the late 19th century, and only began to be taken seriously as recently as the 1950s. Notably, William Masters (1915-2001) and Virginia Johnson (1925-2013) formed a research team in 1957 that expanded studies of sexuality from merely asking people about their sex lives to measuring people’s anatomy and physiology while they were actually having sex. Masters was a former Navy lieutenant, married father of two, and trained gynecologist with an interest in studying prostitutes. Johnson was a former country music singer, single mother of two, three-time divorcee, and two-time college dropout with an interest in studying sociology. And yes, if it piques your curiosity, Masters and Johnson were lovers (when Masters was still married); they eventually married each other, but later divorced. Despite their colorful private lives, they were dedicated researchers with an interest in understanding sex from a scientific perspective. Masters and Johnson used primarily plethysmography (the measurement of changes in blood flow or airflow to organs) to determine sexual responses in a wide range of body parts—breasts, skin, various muscle structures, bladder, rectum, external sex organs, and lungs—as well as measurements of people’s pulse and blood pressure. They measured more than 10,000 orgasms in 700 individuals (18 to 89 years of age), during sex with partners or alone. Masters and Johnson’s findings were initially published in two best-selling books: Human Sexual Response (1966) and Human Sexual Inadequacy (1970). Their initial experimental techniques and data form the basis of our contemporary understanding of sexual anatomy and physiology. The Anatomy of Pleasure and Reproduction Sexual anatomy is typically discussed only in terms of reproduction (see, e.g., King, 2015). However, reproduction is only a (small) part of what drives us sexually (Lucas & Fox, 2018). Full discussions of sexual anatomy also include the concept of pleasure. Thus, we will explore the sexual anatomies of females (see Figures 1.9.1 and 1.9.2) and males (see Figure 1.9.3) in terms of their capabilities for both reproduction and pleasure. Female Anatomy Many people find female sexual anatomy curious, confusing, and mysterious. This may be because so much of it is internal (inside the body), or because—historically—women have been expected to be modest and secretive regarding their bodies. Perhaps the most visible structure of female sexual anatomy is the vulva. The primary functions of the vulva are pleasure and protection. The vulva is composed of the female’s external sex organs (see Figure 1.9.1). It includes many parts: (a) the labia majora—the “large lips” enclosing and protecting the female’s internal sex organs; (b) the labia minora—the “small lips” surrounding and defining the openings of the vagina and urethra; (c) the minor and major vestibular glands (VGs). The minor VGs, also called Skene's glands (not pictured), are on the wall of the vagina and are associated with female ejaculation, and mythologically associated with the G-Spot (Kilchevsky et al., 2012; Wickman, 2017). The major VGs—also called Bartholin's glands—are located just to the left and right of the vagina and produce lubrication to aid in sexual intercourse.
Most females—especially postmenopausal females—at some time in their lives report inadequate lubrication, which, in turn, leads to discomfort or pain during sexual intercourse (Nappi & Lachowsky, 2009). Extending foreplay and using commercial water-, silicone-, or oil-based personal lubricants are simple solutions to this common problem. The clitoris and vagina are considered parts of the vulva as well as internal sex organs (see Figure 1.9.2). They are the most talked about organs in relation to their capacities for female pleasure (e.g., Jannini et al., 2012). Most of the clitoris, which is composed of 18 parts with an average overall excited length of about four inches, cannot be seen (Ginger & Yang, 2011; O'Connell et al., 2005). The visible parts—the glans and prepuce—are located above the urethra and join the labia minora at its pinnacle. The clitoris is highly sensitive, composed of more than 8,000 sensory-nerve endings, and is associated with initiating orgasms; 90% of females can orgasm by clitoral stimulation alone (O'Connell et al., 2005; Thompson, 2016). The vagina, also called the “birth canal,” is a muscular canal that spans from the cervix to the introitus. It has an average overall excited length of about four and a half inches (Masters & Johnson, 1966) and has two parts: First, there is the inner two-thirds (posterior wall)—formed during the first trimester of pregnancy. Second, there is the outer one-third of the vagina (anterior wall). It is formed during the second trimester of pregnancy and is generally more sensitive than the inner portion, but dramatically less sensitive than the clitoris (Hines, 2001). Only between 10% and 30% of females achieve orgasms by vaginal stimulation alone (Thompson, 2016). At each end of the vagina are the cervix (the lower portion of the uterus) and the introitus (the vaginal opening to the outside of the body). The vagina acts as a transport mechanism for sperm cells coming in, and menstrual fluid and babies going out. A healthy vagina has a pH level of about four, which is acidic. When the pH level changes, often due to normal circumstances (e.g., menstruation, using tampons, sexual intercourse), it facilitates the reproduction of microorganisms that often cause vaginal odor and pain (Anderson, Klink & Cohrssen, 2004). This potential problem can be solved with over-the-counter vaginal gels or oral probiotics to maintain normal vaginal pH levels (Tachedjiana et al., in press). The primary functions of the internal sex organs of the female are to store, transport, and keep ovum cells (eggs) healthy; and produce hormones (see Figure 1b). These organs include: (a) the uterus (or womb)—where human development occurs until birth; (b) the ovaries—the glands that house the ova (eggs; about two million; Faddy et al., 1992) and produce progesterone, estrogen, and small amounts of testosterone; (c) the fallopian tubes—where fertilization is most likely to occur. These tubes allow for ovulation (about every 28 days), which is when ova travel from the ovaries to the uterus. If fertilization does not occur, menstruation begins. Menstruation, also known as a “period,” is the discharge of ova along with the lining of the uterus through the vagina, usually taking several days to complete. Male Anatomy The most prominent external sex organ for the male is the penis. The penis’s main functions are initiating orgasm, and transporting semen and urine from the body. 
On average, a flaccid penis is about three and a half inches in length, whereas an erect penis is about five inches (Veale et al., 2015; Wessells, Lue & McAninch, 1996). If you want to know the length of a particular male’s erect penis, you’ll have to actually see it—because there are no reliable correlations between the length of an erect penis and (a) the length of a flaccid penis, (b) the lengths of other body parts—including feet, hands, forearms, and overall height—or (c) race and ethnicity (Shah & Christopher, 2002; Siminoski & Bain, 1993; Veale et al., 2015; Wessells, Lue & McAninch, 1996). The penis has three parts: the root, shaft, and glans. Foreskin covers the glans, or head of the penis, except in circumcised males. The glans penis is highly sensitive, composed of more than 4,000 sensory-nerve endings, and associated with initiating orgasms (Halata, 1997). Lastly, it has the urethral opening that allows semen and urine to exit the body. In addition to the penis, there are other male external sex organs that have two primary functions: producing hormones and sperm cells. The scrotum is the sac of skin behind and below the penis containing the testicles. The testicles (or testes) are the glands that produce testosterone, progesterone, small amounts of estrogen, and sperm cells. Many people are surprised to learn that males also have internal sex organs. The primary functions of male internal sex organs are transporting sperm cells, keeping sperm cells healthy, and producing semen—the fluid in which sperm cells are transported. The male’s internal sex organs include: (a) the epididymis, which is a twisted duct that matures, stores, and transports sperm cells into the vas deferens; (b) the vas deferens—a muscular tube that transports mature sperm to the urethra, except in males who have had a vasectomy; (c) the seminal vesicles—glands that provide energy for sperm cells to move. This energy is in the form of sugar (fructose), and this fluid makes up about 75% of the semen. Sperm cells compose only about 1% of the semen (Owen & Katz, 2005); (d) the prostate gland, which provides additional fluid to the semen that nourishes the sperm cells, and the Cowper's glands, which produce a fluid that lubricates the urethra and neutralizes any acidity due to urine; (e) the urethra—the tube that carries urine and semen outside of the body. Sex on the Brain At first glance—or touch for that matter—the clitoris and penis are the parts of our anatomies that seem to bring the most pleasure. However, these two organs pale in comparison to our central nervous system’s capacity for pleasure. Extensive regions of the brain and brainstem are activated when a person experiences pleasure, including the insula, temporal cortex, limbic system, nucleus accumbens, basal ganglia, superior parietal cortex, dorsolateral prefrontal cortex, and cerebellum (see Figure 1.9.4, Ortigue et al., 2007). Neuroimaging techniques show that these regions of the brain are active when patients have spontaneous orgasms involving no direct stimulation of the skin (e.g., Fadul et al., 2005) and when experimental participants self-stimulate erogenous zones (e.g., Komisaruk et al., 2011). Erogenous zones are sensitive areas of skin that are connected, via the nervous system, to the somatosensory cortex in the brain. The somatosensory cortex (SC) is the part of the brain primarily responsible for processing sensory information from the skin.
The more sensitive an area of your skin is (e.g., your lips), the larger the corresponding area of the SC will be; the less sensitive an area of your skin is (e.g., your trunk), the smaller the corresponding area of the SC will be (see Figure 1.9.5, Penfield & Boldrey, 1937). When a sensitive area of a person’s body is touched, it is typically interpreted by the brain in one of three ways: “That tickles!” “That hurts!” or, “That…you need to do again!” Thus, the more sensitive areas of our bodies have greater potential to evoke pleasure. A study by Nummenmaa and his colleagues (2016) used a unique method to test this hypothesis. The Nummenmaa research team showed experimental participants images of same- and opposite-sex bodies. They then asked the participants to color the regions of the body that, when touched, they or members of the opposite sex would experience as sexually arousing while masturbating or having sex with a partner. Nummenmaa found the expected “hotspot” erogenous zones around the external sex organs, breasts, and anus, but also reported areas of the skin beyond these hotspots: “[T]actile stimulation of practically all bodily regions trigger sexual arousal….” Moreover, he concluded, “[H]aving sex with a partner…”—beyond the hotspots—“…reflects the role of touching in the maintenance of…pair bonds.” Physiology and the Sexual Response Cycle The brain and other sex organs respond to sexual stimuli in a universal fashion known as the sexual response cycle (SRC; Masters & Johnson, 1966). The SRC is composed of four phases: 1. Excitement: Activation of the sympathetic branch of the autonomic nervous system defines the excitement phase; heart rate and breathing accelerate, along with increased blood flow to the penis, vaginal walls, clitoris, and nipples. Involuntary muscular movements (myotonia), such as facial grimaces, also occur during this phase. 2. Plateau: Blood flow, heart rate, and breathing intensify during the plateau phase. During this phase, often referred to as “foreplay,” females experience an orgasmic platform—the outer third of the vaginal walls tightening—and males experience a release of pre-seminal fluid containing healthy sperm cells (Killick et al., 2011). This early release of fluid makes penile withdrawal a relatively ineffective form of birth control (Aisch & Marsh, 2014). (Question: What do you call a couple who use the withdrawal method of birth control? Answer: Parents.) 3. Orgasm: The shortest but most pleasurable phase is the orgasm phase. At climax, neuromuscular tension is released and the hormone oxytocin floods the bloodstream—facilitating emotional bonding. Although the rhythmic muscular contractions of an orgasm are temporally associated with ejaculation, this association is not necessary because orgasm and ejaculation are two separate physiological processes. 4. Resolution: The body returns to a pre-aroused state in the resolution phase. Males enter a refractory period of being unresponsive to sexual stimuli. The length of this period depends on age, frequency of recent sexual relations, level of intimacy with a partner, and novelty. Because females do not have a refractory period, they have a greater potential—physiologically—for having multiple orgasms. Ironically, females are also more likely to “fake” having orgasms (Opperman et al., 2014).
It is interesting to note that the SRC occurs regardless of the type of sexual behavior—whether the behavior is masturbation; romantic kissing; or oral, vaginal, or anal sex (Masters & Johnson, 1966). Further, a partner or environmental object is sufficient, but not necessary, for the SRC to occur. Pregnancy One of the potential outcomes of the SRC is pregnancy—the time a female carries a developing human within her uterus. How does this happen? The process begins during vaginal intercourse when the male ejaculates, or releases semen. Each ejaculate contains about 300 million sperm cells. These sperm compete to make their way through the cervix and into the uterus. Conception typically occurs within a fallopian tube when a single sperm cell comes into contact with an ovum (egg). The sperm carries either an X- or Y-chromosome to fertilize the ovum—which, itself, usually carries an X-chromosome. These chromosomes, in combination with one another, are what determine a person’s sex. The combination of two X chromosomes produces a female zygote (fertilized ovum). The combination of an X and Y chromosome produces a male zygote. XX- or XY-chromosomes form your 23rd set of chromosomes (most humans have a total of 46 chromosomes), commonly referred to as your chromosomal sex or genetic sex. Interestingly, at least 1 in every 1,000 conceptions results in a variation of chromosomal sex beyond the typical XX or XY sets. Some of these variations include XXX, XXY, XYY, or even a single X (Dreger, 1998). In some cases, people may have unusual physical characteristics, such as being taller than average, having a thick neck, or being sterile (unable to reproduce); but in many cases, these individuals have no cognitive, physical, or sexual issues (Wisniewski et al., 2000). Almost 15 out of every 1,000 births are multiple births (twins, triplets, quadruplets, etc.). These can occur in a couple of ways. Dizygotic (fraternal) births are the result of a female releasing multiple ova of which more than one is fertilized by sperm. Because sperm carry either X or Y chromosomes, fraternal births can be any combination of sexes (e.g., two girls or a boy and a girl). They develop together in the uterus and are usually born within minutes of one another. Monozygotic (identical) births result from a special circumstance in which a fertilized ovum splits into multiple identical embryos and they develop simultaneously. Identical twins are, therefore, the same sex. Hours after conception, the zygote begins dividing into additional cells. It then starts traveling down the fallopian tube until it enters the uterus as a blastocyst. The blastocyst implants itself within the wall of the uterus to become an embryo (Moore, Persaud & Torchia, 2016). However, the percentage of successful implantations remains a mystery. Researchers believe the failure rate may be as high as 60% (Diedrich et al., 2007). Failed blastocysts are eliminated during menstruation, often without the female ever knowing conception occurred. Mothers are pregnant for three trimesters, a period that begins with their last menstrual period and ends about 40 weeks later; each trimester is roughly 13 weeks. During the first trimester, most of the body parts of the embryo are formed, although at this stage they are not in the same proportions as they will be at birth. The brain and head, for example, account for about half of the body at this point. During the fifth and sixth weeks of gestation, the primitive gonads are formed. They eventually develop into ovaries or testes.
Until the seventh week, the developing embryo has the potential of having either male (Wolffian ducts) or female (Mullerian ducts) internal sex organs, regardless of chromosomal sex. In fact, there is an innate tendency for all embryos to have female internal sex organs, unless there is the presence of the SRY gene, located on the Y-chromosome (Grumbach & Conte, 1998; Wizemann & Pardue, 2001). The SRY gene causes XY-embryos to develop testes (dividing cells from the medulla). The testes emit testosterone which stimulates the development of male internal sex organs—the Wolffian ducts transforming into the epididymis, seminal vesicles, and vas deferens. The testes also emit a Mullerian inhibiting substance, a hormone that causes the Mullerian ducts to atrophy. If the SRY gene is not present or active—typical for chromosomal females (XX)—then XX-embryos develop ovaries (dividing cells from the cortex) and the Mullerian ducts transform into female internal sex organs, including the fallopian tubes, uterus, cervix, and inner two-thirds of the vagina (Carlson, 1986). Without a burst of testosterone from the testes, the Wolffian ducts naturally deteriorate (Grumbach & Conte, 1998; Wizemann & Pardue, 2001). During the second trimester, expectant mothers can feel movement in their wombs. This is known as quickening. Inside the uterus, the embryo develops fine hair all over its body (called lanugo) as well as eyelashes and eyebrows. Major organs, such as the pancreas and liver, begin fully functioning. By the 20th week of gestation, the external sex organs are fully formed, which is why “sex determination” using ultrasound during this time is more accurate than in the first trimester (Igbinedion & Akhigbe, 2012; Odeh, Ophir & Bornstein, 2008). Formation of male external sex organs (e.g., the penis and scrotum) is dependent upon high levels of testosterone, whereas female external sex organs (e.g., the outer third of the vagina and the clitoris) form without hormonal influences (Carlson, 1986). Levels of sex hormones, such as estrogen, testosterone, and progesterone, begin affecting the brain during this trimester, impacting future emotions, behaviors, and thoughts related to gender identity and sexual orientation (Swaab, 2004). It’s important to understand that the interactions of chromosomal sex, gonadal sex, sex hormones, internal sex organs, external sex organs, and brain differentiations during this developmental stage are too complex to readily conform to the familiar categories of sex, gender, and sexual orientation historically used to describe people (Herdt, 1996). Toward the end of the second trimester—at about the 26th week—is the age of viability, when survival outside of the uterus has a probability of more than 90% (Rysavy et al., 2015). Interestingly, technological advances and changes in hospital care have affected the age of viability such that viability is possible earlier in pregnancy (Rysavy et al., 2015). During the third trimester, there is rapid development in the brain and rapid weight gain. Typically, by the 36th week, the fetus begins descending head-first into the uterine cavity. Getting ready for birth is not the only behavior exhibited during this last trimester. Erectile responses in male fetuses occur during this time (Haffner, 1999; Martinson, 1994; Parrot, 1994); and Giorgi and Siccardi (1996) reported ultrasonographic observations of a fetus performing self-exploration of her external sex organs. 
Most babies are born vaginally (through the vagina), though in the United States one-third are by Cesarean section (through the abdomen; Molina et al., 2015). A newborn’s health is initially assessed, in part, by his/her weight (normally ranging between 2,500 and 4,000 grams)—though birth weight significantly differs between ethnicities (Jannsen et al., 2007). Birth Control Contraception, or birth control, reduces the probability of pregnancy resulting from sexual intercourse. There are various forms of birth control, including hormonal, barrier, and natural methods. As shown in Table 1, the effectiveness of the different forms of birth control ranges widely, from 68% to 99.9% (optionsforsexualhealth.org). Hormonal forms of birth control release synthetic estrogen or progestin, which prevents ovulation and thickens cervical mucus, making it difficult for sperm to reach ova (sexandu.ca/contraception). There are a variety of ways to introduce these hormones into the body, including implantable rods, birth control pills, injections, transdermal patches, IUDs, and vaginal rings. For example, the vaginal ring is 92% effective, easily inserted into and taken out of the vagina by the user, and composed of thin plastic containing a combination of hormones that are released during the time it is in the vagina—usually about three weeks. Barrier forms of birth control prevent sperm from entering the uterus by creating a physical barrier or chemical barrier toxic to sperm. There are a variety of barrier methods, including vasectomies, tubal ligations, male and female condoms, spermicides, diaphragms, and cervical caps. The most popular barrier method is the condom, which is 79-85% effective. The male condom is placed over the penis, whereas the female condom is worn inside the vagina and fits around the cervix. Condoms prevent bodily fluids from being exchanged and reduce skin-to-skin contact. For this reason, condoms are also used to reduce the risk of some sexually transmitted infections (STIs). However, it is important to note that male and female condoms, or two male condoms, should not be worn simultaneously during penetration; the friction between multiple condoms creates microscopic tears, rendering them ineffective (Munoz, Davtyan & Brown, 2014). Natural forms of birth control rely on knowledge of the menstrual cycle and awareness of the body. They include the Fertility Awareness Method (FAM), lactational amenorrhea method, and withdrawal. For example, the FAM is about 75% effective and requires tracking the menstrual cycle and avoiding sexual intercourse or using other forms of birth control during the female’s fertile window. About 30% of females’ fertile windows—the period when a female is most likely to conceive—are between days ten and seventeen of their menstrual cycle (Wilcox, Dunson & Baird, 2000). The remaining 70% of females experience irregular and less predictable fertile windows, reducing the efficacy of the FAM. Other forms of birth control that do not fit into the above categories include emergency contraceptive pills, the copper IUD, and abstinence. Emergency contraceptive pills (e.g., Plan B) delay the release of an ovum if taken prior to ovulation. Emergency contraception is a form of birth control typically used after unprotected sex, condom mishaps, or sexual assault. The most effective form of emergency contraception is the copper IUD. A medical professional inserts the IUD through the opening of the cervix and into the uterus.
It is more than 99% effective and may be left within the uterus for over 10 years. It differs from typical IUDs because it is hormone-free and uses copper ions to create an inhospitable environment for sperm, thus significantly reducing the chances of fertilization. Additionally, the copper ions alter the lining of the uterus, which significantly reduces the probability of implantation. Lastly, abstinence—avoiding any sexual behaviors that may lead to conception—is the only form of birth control that is 100% effective. There are many factors that determine the best birth control options for any particular person. Some factors are related to personality and habits. For example, if a woman is a forgetful person, “the pill” may not be her best option, since it requires being taken daily. Other factors that influence birth control choices include cost, age, education, religious beliefs, lifestyle, and sexual health. Sexually Transmitted Infections Unfortunately, a potential outcome of sexual activity is infection. Sexually transmitted infections (STIs) are like other transmittable infections, except STIs are primarily transmitted through social sexual behaviors. Social sexual behaviors include romantic kissing and oral, vaginal, and anal sex. Additionally, STIs can be transmitted through blood, and from mother to child during pregnancy and childbirth. STIs may lead to sexually transmitted diseases (STDs). Often, infections have no symptoms and do not lead to diseases. For example, the most common STI for men and women in the US is Human Papillomavirus (HPV). In most cases, HPV goes away on its own and has no symptoms. Only a fraction of HPV STIs develop into cervical, penile, mouth, or throat cancer (Centers for Disease Control and Prevention, CDCP, December 2016). There are more than 30 different STIs. STIs differ in their primary methods of transmission, symptoms, treatments, and whether they are caused by viruses or bacteria. Worldwide, some of the most common STIs are genital herpes (500 million), HPV (290 million), trichomoniasis (143 million), chlamydia (131 million), gonorrhea (78 million), human immunodeficiency virus (HIV, 36 million), and syphilis (6 million; World Health Organization, 2016). Medical testing to determine whether someone has an STI is relatively simple and often free (gettested.cdc.gov). Further, there are vaccines or treatments for all STIs, and many STIs are curable (e.g., chlamydia, gonorrhea, and trichomoniasis). However, without seeking treatment, all STIs have potential negative health effects, including death from some. For example, if untreated, HIV often leads to the STD acquired immune deficiency syndrome (AIDS)—over one million people die every year from AIDS (aids.gov). Unfortunately, many, if not most, people with STIs never get tested or treated. For example, as many as 30% of those with HIV and 90% of those with genital herpes are unaware of having an STI (Fleming et al., 1997; Nguyen & Holodniy, 2008). It is impossible to contract an STI from a person who does not have an STI. This may seem like an obvious statement, but a recent study asked 596 freshmen- and sophomore-level college students the following True/False question, “A person can get AIDS by having anal (rectal) intercourse even if neither partner is infected with the AIDS virus,” and found that 33% of them answered “true” (Lucas et al., 2016). What is obvious is that false stereotypes about anal sex causing AIDS continue to misinform our collective sexual knowledge.
Only open, honest, and comprehensive education about human sexuality can fight these STI stereotypes. To be clear, anal sex is associated with STIs, but it cannot cause an STI. Specifically, anal sex, when compared to vaginal sex (the second most likely method of transmission), oral sex (third most likely), and romantic kissing (fourth most likely), is associated with the greatest risk of transmitting and contracting STIs, because the tissue lining of the rectum is relatively thin and apt to tear and bleed, thereby passing on the infection (CDCP, 2016). A sexually active person’s chance of getting an STI depends on a variety of factors. Two of these are age and access to sex education. Young people between the ages of 15 and 24 account for more than 50% of all new STIs, even though they account for only about 25% of the sexually active population (Satterwhite et al., 2013). Generally, young males and females are equally susceptible to getting an STI; however, females are much more likely to suffer long-term health consequences of an STI. For example, each year in the US, undiagnosed STDs cause about 24,000 females to become infertile (CDCP, October 2016; DiClemente, Salazar & Crosby, 2007). Limited access to comprehensive sex education is also a major contributing factor toward the risk of contracting an STI. Unfortunately, some sex education is limited to the promotion of abstinence, and relies heavily on “virginity pledges.” A virginity pledge is a commitment to refrain from sexual intercourse until heterosexual marriage. Although virginity pledges fit well with some cultural and religious worldviews, they are only effective if people, in fact, remain abstinent. Unfortunately, this is not always the case; research reveals many ways these types of strategies can backfire. Adolescents who take virginity pledges are significantly less likely than other adolescents to use contraception when they do become sexually active (Bearman & Brückner, 2001). Further, virginity pledgers are four to six times more likely than non-pledgers to engage in both oral and anal intercourse (Paik, Sanchagrin & Heimer, 2016), often assuming they’re preserving their virgin status by simply avoiding vaginal sex. In fact, schools with students taking virginity pledges have significantly higher rates of STIs than other schools (Bearman & Brückner, 2001). Interestingly, senior citizens are one of the fastest growing segments of the European and US populations being diagnosed with STIs. The Centers for Disease Control and Prevention report a steady increase in people over 65 being diagnosed with HIV; since 2007, incidence of syphilis among seniors is up by 52% and chlamydia is up by 32%; and from 2010 to 2014, there was a 38% increase in STI diagnoses in people between the ages of 50 and 70 (Forster, 2016; Weiss, 2014). Why is this happening? Bear in mind, seniors are not necessarily more sexually knowledgeable than adolescents; they may have no greater access to comprehensive sex education than younger people (Adams, Oye & Parker, 2003). Even so, medical advances allow seniors to continue to be sexually active at later points in their lifespan—and to make the same mistakes adolescents make about safer sex. Safer Sex STIs are 100% preventable: Simply don’t engage in social sexual behaviors. 
But in the grand scheme of things, you may be surprised to hear, avoiding sex can be detrimental to your physical and mental well-being—whereas having sex can be widely beneficial (Charnetski & Brennan, 2004; Ditzen, Hoppmann & Klumb, 2008; Hall et al., 2010). Thus, we recommend safer-sex practices, such as communication, honesty, and barrier methods. Safer-sex practices always begin with communication. Before engaging in sexual behaviors with a partner, a clear, honest, and explicit understanding of your boundaries, as well as your partner’s, should be established. Safer sex involves discussing and using barriers—male condoms, female condoms, or dental dams—relative to your specific sexual behaviors. Also, keep in mind: Although safer sex may use some of the same tools as birth control, safer sex is not birth control. Birth control focuses on reproduction; safer sex focuses on well-being. A proactive approach to behaving sexually may at first seem burdensome, but it can be easily reimagined as “foreplay,” is associated with greater sexual satisfaction, increases the probability of orgasm, and addresses fears people have during sex (see Table 2; Jalili, 2016; Nuno, 2017). Sexual Dysfunctions Roughly 43% of women and 31% of men suffer from a clinically significant impairment to their ability to experience sexual pleasure or responsiveness as outlined by the SRC (Rosen, 2000). The Diagnostic and Statistical Manual of Mental Disorders, 5th edition (DSM-5) refers to these difficulties as sexual dysfunctions. According to the DSM-5, there are four male-specific dysfunctions: • delayed ejaculation • erectile disorder (ED) • male hypoactive sexual desire disorder • premature ejaculation (PE) There are three female-specific dysfunctions: • female orgasmic disorder • female sexual interest/arousal disorder • genito-pelvic pain/penetration disorder There is also one non-gender-specific sexual dysfunction: substance-/medication-induced sexual dysfunction (American Psychiatric Association, 2013). The most commonly reported male sexual dysfunctions are premature ejaculation (PE) and erectile dysfunction (ED), whereas females most frequently report dysfunctions involving desire and arousal. Females are also more likely to experience multiple sexual dysfunctions (McCabe et al., 2016). PE is a pattern of early ejaculation that impairs sexual performance and causes personal distress. In severe cases, ejaculation may occur prior to the start of sexual activity or within 15 seconds of penetration (American Psychiatric Association, 2013). PE is a fairly common sexual dysfunction, with prevalence rates ranging from 20% to 30%. Relationship and intimacy difficulties, as well as anxiety, low self-confidence, and depression, are often associated with PE. Most males with PE do not seek treatment (Porst et al., 2007). ED is frequent difficulty in obtaining or maintaining an erection, or a significant decrease in erectile firmness. Normal aging increases the prevalence and incidence rates of erectile difficulties, especially after the age of 50 (American Psychiatric Association, 2013). However, recent studies have found significant increases in the prevalence of ED in young men (less than 30 years of age; e.g., Capogrosso et al., 2013). Female sexual interest/arousal disorder (FSIAD) is characterized by reduced or absent sexual interest or arousal.
A person diagnosed with FSIAD has had an absence of at least three of the following emotions, behaviors, and thoughts for more than six months: • interest in sexual activity • sexual or erotic thoughts and fantasies • initiation of sexual activity • sexual excitement or pleasure during sexual activity • sexual interest/arousal in response to sexual or erotic cues • genital or non-genital sensations during sexual activity FSIAD is not diagnosed if the presenting symptoms are a result of insufficient stimulation or lack of sexual knowledge—such as the erroneous expectation that penile-vaginal intercourse always results in orgasm (American Psychiatric Association, 2013). Treatments When it comes to treating sexual dysfunctions, there’s some good news and there’s some bad news. The good news is that most sexual dysfunctions have treatments—however, most people don’t seek them out (Gott & Hinchliff, 2003). So, the further good news is that—once you have the knowledge (say, from this module)—if you experience such difficulties, getting treatment is just a matter of making the choice to seek it out. Unfortunately, the bad news is that most treatments for sexual dysfunctions don’t address the psychological and sociocultural underpinnings of the problems, but instead focus exclusively on the physiological roots. For example, Montague et al. (2007, pg. 1-7) make this point perfectly clear in The American Urological Association’s treatment options for ED: “The currently available therapies…for the treatment of erectile dysfunction include the following: oral phosphodiesterase type 5 inhibitors, intra-urethral alprostadil, intracavernous vasoactive drug injection, vacuum constriction devices, and penile prosthesis implantation.” Treatments that focus solely on managing symptoms with biological fixes neglect the fundamental issue of sexual dysfunctions being grounded in psychological, relational, and social contexts. For example, a female seeking treatment for inadequate lubrication during intercourse is most likely to be prescribed a supplemental lubricant to alleviate her symptoms. The next time she is sexually intimate, the lubricant may solve her vaginal dryness, but her lack of natural arousal and lubrication due to partner abuse, is completely overlooked (Kleinplatz, 2012). There are numerous factors associated with sexual dysfunctions, including: relationship issues; adverse sexual attitudes and beliefs; medical issues; sexually-oppressive cultural attitudes, codes, or laws; and a general lack of knowledge. Thus, treatments for sexual dysfunctions should address the physiological, psychological, and sociocultural roots of the problem. Conclusion We hope the information in this module has a positive impact on your physical, psychological, and relational health. As we initially promised, your clandestine Google searches should decrease now that you’ve acquired a scientifically-based foundation in sexual anatomy and physiology. What we neglected to mention earlier is that this foundation may dramatically increase your overt Google searches about sexuality! Exploring human sexuality is a limitless enterprise. And, by embracing your innate curiosity and sexual knowledge, we predict your sexual-literacy journeys are just beginning. Acknowledgements The authors are indebted to Robert Biswas-Diener, Trina Cowan, Kara Paige, and Liz Wright for editing drafts of this module. 
Outside Resources Journal: The Journal of Sex Research www.sexscience.org/journal_of_sex_research/ Journal: The Journal of Sexual Medicine http://www.jsm.jsexmed.org/ Organization: Advocates for Youth partners with youth leaders, adult allies, and youth-serving organizations to advocate for policies and champion programs that recognize young people’s rights to honest sexual health information; accessible, confidential, and affordable sexual health services; and the resources and opportunities necessary to create sexual health equity for all youth. http://www.advocatesforyouth.org/ Organization: SIECUS - the Sexuality Information and Education Council of the United States - was founded in 1964 to provide education and information about sexuality and sexual and reproductive health. http://www.siecus.org/ Organization: The Guttmacher Institute is a leading research and policy organization committed to advancing sexual and reproductive health and rights in the United States and globally. https://www.guttmacher.org/ Video: 5MIweekly—YouTube channel with weekly videos that playfully and scientifically examine human sexuality. https://www.youtube.com/channel/UCQFQ0vPPNPS-LYhlbKOzpFw Video: Sexplanations—YouTube channel with shame-free educational videos on everything sex. https://www.youtube.com/user/sexplanations Video: YouTube - AsapSCIENCE https://www.youtube.com/user/AsapSCIENCE Web: Kinsey Confidential—Podcast with empirically-based answers about sexual questions. kinseyconfidential.org/ Web: Sex & Psychology—Blog about the science of sex, love, and relationships. http://www.lehmiller.com/ Discussion Questions 1. Consider your own source(s) of sexual anatomy and physiology information prior to this module. Discuss at least three of your own prior sexual beliefs challenged by the content of this module. 2. Pretend you are tasked with teaching a group of adolescents about sexual anatomy, but with a twist: You must teach through the lens of pleasure instead of reproduction. What would your talking points be? Be sure to incorporate the role of the brain in evoking sexual pleasure. 3. Given how universal and similar the sexual response cycle is for both males and females, why do you think males enter a refractory period during the resolution phase and females do not? Consider potential evolutionary reasons for why this occurs. 4. Imagine yourself as a developing human being from conception to birth. Using a first-person point of view, create a commentary that addresses the significant milestones achieved in each trimester. 5. Pretend your hypothetical adolescent daughter has expressed interest in birth control. During her appointment with a health care provider, what are some factors that should be considered prior to selecting the best birth control method for her? 6. Describe at least three ways you can reduce your chances of contracting a sexually transmitted infection. 7. How can practicing safer sex enhance your well-being? 8. As discussed within the module, numerous influences contribute to the development and maintenance of a sexual dysfunction, such as adverse sexual attitudes and beliefs. Which influences, if any, can you relate to? How do you plan on addressing those influences to achieve optimal sexual health? Vocabulary Abstinence Avoiding any sexual behaviors that may lead to conception. Age of viability The age at which a fetus can survive outside of the uterus.
Barrier forms of birth control Methods in which sperm is prevented from entering the uterus, either through physical or chemical barriers. Cervix The lower portion of the uterus that connects to the vagina. Chromosomal sex Also known as genetic sex; defined by the 23rd set of chromosomes. Clitoris A sensitive and erectile part of the vulva; its main function is to initiate orgasms. Conception Occurs typically within the fallopian tube, when a single sperm fertilizes an ovum cell. Cowper's glands Glands that produce a fluid that lubricates the urethra and neutralizes any acidity due to urine. Emergency contraception A form of birth control used in a variety of circumstances, such as after unprotected sex, condom mishaps, or sexual assault. Epididymis A twisted duct that matures, stores, and transports sperm cells into the vas deferens. Erogenous zones Highly sensitive areas of the body. Excitement phase The activation of the sympathetic branch of the autonomic nervous system defines this phase of the sexual response cycle; heart rate and breathing accelerate, along with increased blood flow to the penis, vaginal walls, clitoris, and nipples. Fallopian tubes The female’s internal sex organ where fertilization is most likely to occur. Foreskin The skin covering the glans or head of the penis. Glans penis The highly sensitive head of the penis, associated with initiating orgasms. Hormonal forms of birth control Methods by which synthetic estrogen or progesterone are released to prevent ovulation and thicken cervical mucus. Introitus The vaginal opening to the outside of the body. Labia majora The “large lips” enclosing and protecting the female internal sex organs. Labia minora The “small lips” surrounding and defining the openings of the vagina and urethra. Menstruation The process by which ova as well as the lining of the uterus are discharged from the vagina after fertilization does not occur. Mullerian ducts Primitive female internal sex organs. Myotonia Involuntary muscular movements, such as facial grimaces, that occur during the excitement phase of the sexual response cycle. Natural forms of birth control Methods that rely on knowledge of the menstrual cycle and awareness of the body. Neuroimaging techniques Seeing and measuring live and active brains by such techniques as electroencephalography (EEG), computerized axial tomography (CAT), and functional magnetic resonance imaging (fMRI). Orgasm phase The shortest, but most pleasurable, phase of the sexual response cycle. Orgasmic platform The tightening of the outer third of the vaginal walls during the plateau phase of the sexual response cycle. Ovaries The glands housing the ova and producing progesterone, estrogen, and small amounts of testosterone. Ovulation When ova travel from the ovaries to the uterus. Oxytocin A neurotransmitter that regulates bonding and sexual reproduction. Penis The most prominent external sex organ in males; it has three main functions: initiating orgasm, and transporting semen and urine outside of the body. Plateau phase The phase of the sexual response cycle in which blood flow, heart rate, and breathing intensify. Plethysmography The measuring of changes in blood - or airflow - to organs. Pregnancy The time in which a female carries a developing human within her uterus. Primitive gonads Reproductive structures in embryos that will eventually develop into ovaries or testes. Prostate gland A male gland that releases prostatic fluid to nourish sperm cells. Quickening The feeling of fetal movement. 
Refractory period Time following male ejaculation in which he is unresponsive to sexual stimuli. Resolution phase The phase of the sexual response cycle in which the body returns to a pre-aroused state. Safer-sex practices Doing anything that may decrease the probability of sexual assault, sexually transmitted infections, or unwanted pregnancy; these may include using condoms, honesty, and communication. Scrotum The sac of skin behind and below the penis, containing the testicles. Semen The fluid that sperm cells are transported within. Seminal vesicles Glands that provide sperm cells the energy that allows them to move. Sexual dysfunctions A range of clinically significant impairments in a person’s ability to experience pleasure or respond sexually as outlined by the sexual response cycle. Sexual response cycle Excitement, Plateau, Orgasm, and Resolution. Sexually transmitted infections (STIs) Infections primarily transmitted through social sexual behaviors. Skene’s glands Also called minor vestibular glands, these glands are on the anterior wall of the vagina and are associated with female ejaculation. Somatosensory cortex A portion of the parietal cortex that processes sensory information from the skin. Testicles Also called testes—the glands producing testosterone, progesterone, small amounts of estrogen, and sperm cells. Trimesters Phases of gestation, beginning with the last menstrual period and ending about 40 weeks later; each trimester is roughly 13 weeks in length. Urethra The tube that carries urine and semen outside of the body. Uterus Also called the womb—the female’s internal sex organ where offspring develop until birth. Vagina Also called the birth canal—a muscular canal that spans from the cervix to the introitus, it acts as a transport mechanism for sperm cells coming in, and menstrual fluid and babies going out. Vas deferens A muscular tube that transports mature sperm to the urethra. Vasectomy A surgical form of birth control in males, in which the vas deferens is intentionally damaged. Vestibular glands (VGs) Also called major vestibular glands, these glands are located just to the left and right of the vagina, and produce lubrication to aid in sexual intercourse. Vulva The female’s external sex organs. Wolffian ducts Primitive male internal sex organs. Zygote Fertilized ovum.
textbooks/socialsci/Psychology/Introductory_Psychology/Psychology_(Noba)/Chapter_1%3A_Biological_Basis_of_Behavior/1.9%3A_Human_Sexual_Anatomy_and_Physiology.txt
• 10.1: Positive Psychology A brief history of the positive psychology movement is presented, and key themes within positive psychology are identified. Three important positive psychology topics are gratitude, forgiveness, and humility. Ten key findings within the field of positive psychology are put forth, and the most important empirical findings regarding gratitude, forgiveness, and humility are discussed.
• 10.2: Happiness: The Science of Subjective Well-Being Subjective well-being (SWB) is the scientific term for happiness and life satisfaction—thinking and feeling that your life is going well, not badly. Scientists rely primarily on self-report surveys to assess the happiness of individuals, but they have validated these scales with other types of measures. People’s levels of subjective well-being are influenced by both internal factors, such as personality and outlook, and external factors, such as the society in which they live.
• 10.3: Optimal Levels of Happiness This module asks two questions: “Is happiness good?” and “Is happier better?” (i.e., is there any benefit to being happier, even if one is already moderately happy?) The answer to the first question is by and large “yes.” The answer to the second question is, “it depends.” That is, the optimal level of happiness differs, depending on specific life domains.
• 10.4: The Healthy Life Our emotions, thoughts, and behaviors play an important role in our health. Not only do they influence our day-to-day health practices, but they can also influence how our body functions. This module provides an overview of health psychology, which is a field devoted to understanding the connections between psychology and health.

Chapter 10: Well Being
By Robert A. Emmons, University of California, Davis

A brief history of the positive psychology movement is presented, and key themes within positive psychology are identified. Three important positive psychology topics are gratitude, forgiveness, and humility. Ten key findings within the field of positive psychology are put forth, and the most important empirical findings regarding gratitude, forgiveness, and humility are discussed. Assessment techniques for these three strengths are described, and interventions for increasing gratitude, developing forgiveness, and becoming more humble are briefly considered.

learning objectives
• Describe what positive psychology is, who started it, and why it came into existence.
• Identify some of the most important findings from the science of positive psychology with respect to forgiveness, gratitude, and humility.
• Explore how positive psychology might make a difference in how you think about your own life, the nature of human nature, and what is really important to you.

Introduction

Positive psychology is a popular movement that began in the late 1990s. It is the branch of psychology that has as its primary focus the strengths, virtues, and talents that contribute to successful functioning and enable individuals and communities to flourish. Core topics include happiness, resiliency, well-being, and states of flow and engagement. It was spearheaded by a former president of the American Psychological Association, Martin Seligman. Throughout most of its history, psychology was concerned with identifying and remedying human ills. It has largely focused on decreasing maladaptive emotions and behaviors, while generally ignoring positive and optimal functioning.
In contrast, the goal of positive psychology is to identify and enhance the human strengths and virtues that make life worth living. Unlike the positive thinking or new thought movements associated with people like Norman Vincent Peale or Rhonda Byrne (The Secret), positive psychology pursues scientifically informed perspectives on what makes life worth living. It is empirically based. It focuses on measuring aspects of the human condition that lead to happiness, fulfillment, and flourishing. The science of happiness is covered in other modules within this section of the book. Therefore, aside from the key findings summarized in Table 1, the emphasis in this module will be on other topics within positive psychology.

Moving from an exclusive focus on distress, disorder, and dysfunction, positive psychology shifts the scientific lens to a concentration on well-being, health, and optimal functioning. Positive psychology provides a different vantage point from which to understand human experience. Recent developments have produced a common framework that locates the study of positive states, strengths, and virtues in relation to each other and links them to important life outcomes. These developments also suggest that problems in psychological functioning may be more profitably understood as the absence, excess, or opposite of these strengths than through traditional diagnostic categories of mental illness. The principal claim of positive psychology, that health, fulfillment, and well-being are as deserving of study as illness, dysfunction, and distress, has resonated well with both the academic community and the general public.

As a relatively new field of research, positive psychology lacked a common vocabulary for discussing measurable positive traits before 2004. Traditional psychology benefited from the creation of the Diagnostic and Statistical Manual of Mental Disorders (DSM), which gave researchers and clinicians a shared language for talking about the negative side of human functioning. As a first step in remedying this disparity between traditional and positive psychology, Chris Peterson and Martin Seligman set out to identify, organize, and measure character. The Values in Action (VIA) classification of strengths was an important initial step toward specifying important positive traits (Peterson & Seligman, 2004). Peterson and Seligman examined ancient cultures (including their religions, politics, education, and philosophies) for information about how people in the past construed human virtue. The researchers looked for virtues that were present across cultures and time. Six core virtues emerged from their analysis: courage, justice, humanity, temperance, transcendence, and wisdom.

The VIA is the positive psychology counterpart to the DSM used in traditional psychology and psychiatry. Unlike the DSM, which scientifically categorizes human deficits and disorders, the VIA classifies positive human strengths. This approach departs sharply from the medical model of traditional psychology, which focuses on fixing deficits. In contrast, positive psychologists emphasize that people should focus on and build upon what they are doing well. The VIA is a tool by which people can identify their own character strengths and learn how to capitalize on them. It consists of 240 questions that ask respondents to report the degree to which statements reflecting each of the strengths apply to themselves.
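Because respondents simply rate how well each statement describes them, scoring an instrument of this kind amounts to aggregating the numbered responses by the strength they belong to and then ranking the strengths. The sketch below is only a minimal illustration of that idea, assuming a made-up item-to-strength mapping and a 1-5 response format; it is not the actual VIA survey or its published scoring procedure.

```python
# Hypothetical illustration of scoring a strengths questionnaire: average the
# numbered responses belonging to each strength, then rank the strengths.
# The item-to-strength mapping and the 1-5 scale are invented for this example;
# this is NOT the actual VIA instrument or its official scoring rules.
from collections import defaultdict
from statistics import mean

ITEM_TO_STRENGTH = {          # hypothetical mapping: item ID -> strength measured
    1: "hope", 2: "gratitude", 3: "kindness",
    4: "hope", 5: "gratitude", 6: "kindness",
}

def rank_strengths(responses):
    """responses: dict of item ID -> rating from 1 (not like me) to 5 (very like me)."""
    by_strength = defaultdict(list)
    for item, rating in responses.items():
        by_strength[ITEM_TO_STRENGTH[item]].append(rating)
    scores = {strength: mean(ratings) for strength, ratings in by_strength.items()}
    # Highest-scoring strengths come first ("signature strengths").
    return sorted(scores.items(), key=lambda pair: pair[1], reverse=True)

print(rank_strengths({1: 5, 2: 4, 3: 3, 4: 4, 5: 5, 6: 2}))
# [('hope', 4.5), ('gratitude', 4.5), ('kindness', 2.5)]
```

With the real instrument, each of the 24 strengths is keyed to multiple such statements, like the hope and gratitude items quoted next, and the highest-ranked strengths are often described as a person's signature strengths.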
For example, the character strength of hope is measured with items that include “I know that I will succeed with the goals I set for myself.” The strength of gratitude is measured with such items as “At least once a day, I stop and count my blessings.” Within the United States, the most commonly endorsed strengths are kindness, fairness, honesty, gratitude, and judgment (Park, Peterson & Seligman, 2006). Worldwide, the following strengths were most associated with positive life satisfaction: hope, zest, gratitude, and love; the researchers called these "strengths of the heart." In contrast, strengths associated with knowledge, such as love of learning and curiosity, were least correlated with life satisfaction (Park, Peterson & Seligman, 2005).

Three Key Strengths

Forgiveness, gratitude, and humility are three key strengths that have been the focus of sustained research programs within positive psychology. What have we learned about each of these, and why do they matter for human flourishing?

Forgiveness

Forgiveness is essential to harmonious long-term relationships, whether between spouses or nations, dyads or collectives. At the level of the individual, forgiveness of self can help one achieve an inner peace as well as peace with others and with God. Wrongdoing against others can result in guilt and self-loathing. Resentment can give way to hate and intolerance. Both perpetrator and victim suffer. Conversely, forgiveness can be an avenue to healing. It is the basic building block of loving relationships with others.

When one person or nation does something to hurt another, the relationship between the two can be irrevocably damaged. Because the potential for conflict is seemingly built into human nature, the prospects for long-term peace may seem faint. Forgiveness offers another way. If the victim can forgive the perpetrator, the relationship may be restored and possibly even saved from termination. The essence of forgiveness is that it creates a possibility for a relationship to recover from the damage caused by the offending party’s offense. Forgiveness is thus a powerful pro-social process. It can benefit human social life by helping relationships to heal. Indeed, on the social level, forgiveness may be the critical element needed for world peace. Culligan (2002) wrote, "Forgiveness may ultimately be the most powerful weapon for breaking the dreadful cycle of violence."

Research is answering fundamental questions about what forgiveness is and isn’t, how it develops, what its physiological correlates and physical effects are, whether it is always beneficial, and how people—if they are so motivated—might be helped to forgive. Forgiveness is not excusing, condoning, tolerating, or forgetting that one has been hurt because of the actions of another. Forgiveness is letting go of negative thoughts (e.g., wishing the offender harm), negative behaviors (e.g., a desire to retaliate), and negative feelings (e.g., resentment) toward the offender (McCullough, Root, & Cohen, 2006).

Numerous studies have examined forgiveness interventions, which use counseling and exercises to help people move from anger and resentment toward forgiveness. In one study, incest survivors who completed a forgiveness intervention showed, by the end of the intervention, an increased ability to forgive others, greater hopefulness, and lower levels of anxiety and depression.
In another study, college students were randomly assigned either to a forgiveness education program or to a comparison group that studied human relations. The group that received the forgiveness education program showed higher levels of hope and an increased willingness to forgive others. This greater forgiveness was associated with increased self-esteem, lower levels of anxiety and depression, and a more positive view of their parents. Across many of these studies, people who are able to forgive show better interpersonal functioning and, as a result, more social support. The act of forgiveness can result in less anxiety and depression, better health outcomes, increased coping with stress, and increased closeness to God and others (Enright, 2001).

Gratitude

Gratitude is a feeling of appreciation or thankfulness in response to receiving a benefit. The emerging science of gratitude has produced some important findings. From childhood to old age, accumulating evidence documents the wide array of psychological, physical, and relational benefits associated with gratitude (Wood, Froh, & Geraghty, 2010). Gratitude is important not only because it helps us feel good, but also because it inspires us to do good. Gratitude heals, energizes, and transforms lives in a myriad of ways, consistent with the notion that virtue is both its own reward and produces other rewards (Emmons, 2007).

To give a flavor of these research findings, dispositional gratitude has been found to be positively associated with qualities such as empathy, forgiveness, and the willingness to help others. For example, people who rated themselves as having a grateful disposition perceived themselves as having more socially helpful characteristics, expressed through their empathic behavior and the emotional support they provided for friends within the last month (McCullough, Emmons, & Tsang, 2002). In our research, when people report feeling grateful, thankful, and appreciative in their daily lives, they also feel more loving, forgiving, joyful, and enthusiastic. Notably, the family, friends, partners, and others who surround them consistently report that people who practice gratitude come across as more helpful, more outgoing, more optimistic, and more trustworthy (Emmons & McCullough, 2003).

Expressing gratitude for life’s blessings, that is, a sense of wonder, thankfulness, and appreciation, is likely to elevate happiness for a number of reasons. Grateful thinking fosters the savoring of positive life experiences and situations, so that people can extract the maximum possible satisfaction and enjoyment from their circumstances. Counting one’s blessings may directly counteract the effects of hedonic adaptation, the process by which our happiness level returns, again and again, to its set range, because it prevents people from taking the good things in their lives for granted. If we consciously remind ourselves of our blessings, it should become harder to take them for granted and adapt to them. And the very act of viewing good things as gifts is itself likely to be beneficial for mood.

How much does it matter? Consider these eye-popping statistics: people who keep gratitude journals are 25% happier, sleep half an hour more per night, and exercise 33% more each week than people who do not keep journals. They achieve up to a 10% reduction in systolic blood pressure and decrease their dietary fat intake by up to 20%.
Lives marked by frequent positive emotions of joy, love, and gratitude are up to 7 years longer than lives bereft of these pleasant feelings. The science of gratitude has also revealed some surprising findings. For example, students who practice gratitude increase their grade point average. Occasional gratitude journaling boosts well-being more than the regular practice of counting blessings. Remembering one’s sorrows, failures, and other painful experiences is more beneficial to happiness than recalling only successes. Becoming aware that a very pleasant experience is about to end enhances feelings of gratitude for it. Thinking about the absence of something positive in your life produces more gratitude and happiness than imagining its presence. To assess your own level of gratefulness, take the test in Table 2.

Humility

What is humility and why does it matter? Although the etymological roots of humility are in lowliness and self-abasement (from the Latin term humilis, meaning "lowly, humble," or literally "on the ground," and from the Latin term humus, meaning "earth"), the emerging consensus among scholars is that humility is a psychological and intellectual virtue, or a character strength. There is no simple definition, but it seems to involve the following elements: a clear and accurate (not underestimated) sense of one’s abilities and achievements; the ability to acknowledge one’s mistakes, imperfections, gaps in knowledge, and limitations (often with reference to a "higher power"); an openness to new ideas, contradictory information, and advice; keeping one’s abilities and accomplishments in perspective; relatively low self-focus or an ability to "forget the self"; and appreciation of the value of all things, as well as the many different ways that people and things can contribute to our world.

In contemporary society, it is easy to overlook the merits of humility. In politics, business, and sports, the egoists command our attention. "Show me someone without an ego," said real estate mogul Donald Trump, "and I’ll show you a loser." In contrast, the primary message of this module is that the unassuming virtue of humility, rather than representing weakness or inferiority, as is commonly assumed, is a strength of character that produces positive, beneficial results for self and society. Successful people are humble people. They are more likely to flourish in life, in more domains, than are people who are less humble (Exline & Hill, 2012).

Do you think you are a humble person? For obvious reasons, you cannot simply rate your own level of humility; it is an elusive concept to get at scientifically, and "I am very humble" is self-contradictory. This has not discouraged personality psychologists from developing questionnaires to get at it, albeit indirectly. For example, to what extent do you identify with each of the following statements?

1. I generally have a good idea about the things I do well or do poorly.
2. I have difficulty accepting advice from other people.
3. I try my best in things, but I realize that I have a lot of work to do in many areas.
4. I am keenly aware of what little I know about the world.

Questions such as these tap various facets of the humble personality, including an appreciation and recognition of one’s limitations and an accurate assessment of oneself. Humble people are more likely to flourish in life, in more domains, than are people who are less humble.
Consider a handful of findings from recent research studies and surveys:

• People who say they feel humble when they are praised report that the experience made them want to be nice to people, increase their efforts, and challenge themselves.
• Humble people are more admired, and the trait of humility is viewed positively by most.
• Humble teachers are rated as more effective, and humble lawyers are rated as more likeable by jurors.
• CEOs who possessed a rare combination of extreme humility and strong professional will were catalysts for transforming a good company into a great one.
• Over 80% of adults surveyed indicated that it is important that professionals demonstrate modesty/humility in their work.
• Humility is positively associated with academic success in the form of higher grades (Exline & Hill, 2012).

The science of positive psychology has grown remarkably quickly since it first appeared on the scene in the late 1990s. Already, considerable progress has been made in understanding empirically the foundations of a good life. Knowledge from basic research in positive psychology is being applied in a number of settings, from psychotherapy to workplace settings to schools and even to the military (Biswas-Diener, 2011). A proper blend of science and practice will be required for positive psychology to fully realize its potential in dealing with the future challenges that we face as humans.

Outside Resources

Web: Authentic Happiness. http://www.authentichappiness.sas.upenn.edu
Web: The International Positive Psychology Association (IPPA). http://www.ippanetwork.org/

Discussion Questions

1. Can you think of people in your life who are very humble? What do they do or say that expresses their humility? To what extent do you think it would be good if you were more humble? To what extent do you think it would be good if you were less humble?
2. How can thinking gratefully about an unpleasant event from your past help you to deal positively with it? As the result of this event, what kinds of things do you now feel thankful or grateful for? How has this event benefited you as a person? How have you grown? Were there personal strengths that grew out of your experience?
3. Mahatma Gandhi once said, “The weak can never forgive. Forgiveness is the attribute of the strong.” What do you think he meant by this? Do you agree or disagree? What are some of the obstacles you have faced in your own life when trying to forgive others?

Vocabulary

Character strength: A positive trait or quality that is deemed morally good and is valued for itself as well as for promoting individual and collective well-being.
Flourishing: To live optimally psychologically, relationally, and spiritually.
Forgiveness: The letting go of negative thoughts, feelings, and behaviors toward an offender.
Gratitude: A feeling of appreciation or thankfulness in response to receiving a benefit.
Humility: Having an accurate view of self—not too high or low—and a realistic appraisal of one’s strengths and weaknesses, especially in relation to other people.
Positive psychology: The science of human flourishing; an applied science with an emphasis on real-world intervention.
Pro-social: Thoughts, actions, and feelings that are directed toward others and are positive in nature.
textbooks/socialsci/Psychology/Introductory_Psychology/Psychology_(Noba)/Chapter_10%3A_Well_Being/10.1%3A_Positive_Psychology.txt
By Edward Diener University of Utah, University of Virginia Subjective well-being (SWB) is the scientific term for happiness and life satisfaction—thinking and feeling that your life is going well, not badly. Scientists rely primarily on self-report surveys to assess the happiness of individuals, but they have validated these scales with other types of measures. People’s levels of subjective well-being are influenced by both internal factors, such as personality and outlook, and external factors, such as the society in which they live. Some of the major determinants of subjective well-being are a person’s inborn temperament, the quality of their social relationships, the societies they live in, and their ability to meet their basic needs. To some degree people adapt to conditions so that over time our circumstances may not influence our happiness as much as one might predict they would. Importantly, researchers have also studied the outcomes of subjective well-being and have found that “happy” people are more likely to be healthier and live longer, to have better social relationships, and to be more productive at work. In other words, people high in subjective well-being seem to be healthier and function more effectively compared to people who are chronically stressed, depressed, or angry. Thus, happiness does not just feel good, but it is good for people and for those around them. learning objectives • Describe three major forms of happiness and a cause of each of them. • Be able to list two internal causes of subjective well-being and two external causes of subjective well-being. • Describe the types of societies that experience the most and least happiness, and why they do. • Describe the typical course of adaptation to events in terms of the time course of SWB. • Describe several of the beneficial outcomes of being a happy person. • Describe how happiness is typically measured. Introduction When people describe what they most want out of life, happiness is almost always on the list, and very frequently it is at the top of the list. When people describe what they want in life for their children, they frequently mention health and wealth, occasionally they mention fame or success—but they almost always mention happiness. People will claim that whether their kids are wealthy and work in some prestigious occupation or not, “I just want my kids to be happy.” Happiness appears to be one of the most important goals for people, if not the most important. But what is it, and how do people get it? In this module I describe “happiness” or subjective well-being (SWB) as a process—it results from certain internal and external causes, and in turn it influences the way people behave, as well as their physiological states. Thus, high SWB is not just a pleasant outcome but is an important factor in our future success. Because scientists have developed valid ways of measuring “happiness,” they have come in the past decades to know much about its causes and consequences. Types of Happiness Philosophers debated the nature of happiness for thousands of years, but scientists have recently discovered that happiness means different things. Three major types of happiness are high life satisfaction, frequent positive feelings, and infrequent negative feelings (Diener, 1984). “Subjective well-being” is the label given by scientists to the various forms of happiness taken together. Although there are additional forms of SWB, the three in the table below have been studied extensively. 
The table also shows that the causes of the different types of happiness can be somewhat different; the factors that produce one form of SWB are not identical to those that produce another. Therefore, there is no single key, no magic wand—high SWB is achieved by combining several different important elements (Diener & Biswas-Diener, 2008). Thus, people who promise to know the key to happiness are oversimplifying.

Some people experience all three elements of happiness—they are very satisfied, enjoy life, and have only a few worries or other unpleasant emotions. Other unfortunate people are missing all three. Most of us also know individuals who have one type of happiness but not another. For example, imagine an elderly person who is completely satisfied with her life—she has done almost everything she ever wanted—but is not currently enjoying life that much because of the infirmities of age. There are others who show a different pattern, for example, who really enjoy life but also experience a lot of stress, anger, and worry. And there are those who are having fun, but who are dissatisfied and believe they are wasting their lives. Because there are several components to happiness, each with somewhat different causes, there is no magic single cure-all that creates all forms of SWB. This means that to be happy, individuals must acquire each of the different elements that cause it.

Causes of Subjective Well-Being

There are external influences on people’s happiness—the circumstances in which they live. It is possible for some to be happy living in poverty with ill health, or with a child who has a serious disease, but this is difficult. In contrast, it is easier to be happy if one has supportive family and friends, ample resources to meet one’s needs, and good health. But even here there are exceptions—people who are depressed and unhappy while living in excellent circumstances. Thus, people can be happy or unhappy because of their personalities and the way they think about the world or because of the external circumstances in which they live. People vary in their propensity to happiness—in their personalities and outlook—and this means that knowing their living conditions is not enough to predict happiness.

The table below shows internal and external circumstances that influence happiness. There are individual differences in what makes people happy, but the causes in the table are important for most people (Diener, Suh, Lucas, & Smith, 1999; Lyubomirsky, 2013; Myers, 1992).

Societal Influences on Happiness

When people consider their own happiness, they tend to think of their relationships, successes and failures, and other personal factors. But a very important influence on how happy people are is the society in which they live. It is easy to forget how important societies and neighborhoods are to people’s happiness or unhappiness. In Figure 10.2.1, I present life satisfaction around the world. You can see that some nations, those with the darkest shading on the map, are high in life satisfaction. Others, the lightest shaded areas, are very low. The grey areas in the map are places where we could not collect happiness data—they were just too dangerous or inaccessible. Can you guess what might make some societies happier than others? Much of North America and Europe has relatively high life satisfaction, and much of Africa is low in life satisfaction.
For life satisfaction, living in an economically developed nation is helpful because when people must struggle to obtain food, shelter, and other basic necessities, they tend to be dissatisfied with their lives. However, other factors, such as trusting and being able to count on others, are also crucial to happiness within nations. Indeed, when it comes to enjoying life, our relationships with others seem more important than living in a wealthy society. One factor that predicts unhappiness is conflict—individuals in nations with high internal conflict or conflict with neighboring nations tend to experience low SWB.

Money and Happiness

Will money make you happy? A certain level of income is needed to meet our needs, and very poor people are frequently dissatisfied with life (Diener & Seligman, 2004). However, having more and more money has diminishing returns—higher and higher incomes make less and less difference to happiness. Wealthy nations tend to have higher average life satisfaction than poor nations, but the United States has not experienced a rise in life satisfaction over the past decades, even as income has doubled. The goal is to find a level of income that you can realistically earn and be content to live with. Don’t let your aspirations continue to rise so that you always feel poor, no matter how much money you have. Research shows that materialistic people tend to be less happy, and putting your emphasis on relationships and other areas of life besides just money is a wise strategy. Money can help life satisfaction, but when too many other valuable things are sacrificed to earn a lot of money—such as neglecting relationships or taking a less enjoyable job—the pursuit of money can harm happiness.

There are stories of wealthy people who are unhappy and of janitors who are very happy. For instance, a number of extremely wealthy people in South Korea have committed suicide recently, apparently brought down by stress and other negative feelings. On the other hand, there is the hospital janitor who loved her life because she felt that her work in keeping the hospital clean was so important for the patients and nurses. Some millionaires are dissatisfied because they want to be billionaires. Conversely, some people with ordinary incomes are quite happy because they have learned to live within their means and enjoy the less expensive things in life. It is important to always keep in mind that high materialism seems to lower life satisfaction—valuing money over other things such as relationships can make us dissatisfied. When people think money is more important than everything else, they seem to have a harder time being happy. And unless they make a great deal of money, they are not on average as happy as others. Perhaps in seeking money they sacrifice other important things too much, such as relationships, spirituality, or following their interests. Or it may be that materialists just can never get enough money to fulfill their dreams—they always want more.

To sum up what makes for a happy life, let’s take the example of Monoj, a rickshaw driver in Calcutta. He enjoys life, despite the hardships, and is reasonably satisfied with life. How could he be relatively happy despite his very low income, sometimes even insufficient to buy enough food for his family? The things that make Monoj happy are his family and friends, his religion, and his work, which he finds meaningful. His low income does lower his life satisfaction to some degree, but he finds his children to be very rewarding, and he gets along well with his neighbors.
I also suspect that Monoj’s positive temperament and his enjoyment of social relationships help to some degree to overcome his poverty and earn him a place among the happy. However, Monoj would also likely be even more satisfied with life if he had a higher income that allowed more food, better housing, and better medical care for his family. Besides the internal and external factors that influence happiness, there are psychological influences as well—such as our aspirations, social comparisons, and adaptation. People’s aspirations are what they want in life, including income, occupation, marriage, and so forth. If people’s aspirations are high, they will often strive harder, but there is also a risk of them falling short of their aspirations and being dissatisfied. The goal is to have challenging aspirations but also to be able to adapt to what actually happens in life. One’s outlook and resilience are also always very important to happiness. Every person will have disappointments in life, fail at times, and have problems. Thus, happiness comes not to people who never have problems—there are no such individuals—but to people who are able to bounce back from failures and adapt to disappointments. This is why happiness is never caused just by what happens to us but always includes our outlook on life. Adaptation to Circumstances The process of adaptation is important in understanding happiness. When good and bad events occur, people often react strongly at first, but then their reactions adapt over time and they return to their former levels of happiness. For instance, many people are euphoric when they first marry, but over time they grow accustomed to the marriage and are no longer ecstatic. The marriage becomes commonplace and they return to their former level of happiness. Few of us think this will happen to us, but the truth is that it usually does. Some people will be a bit happier even years after marriage, but nobody carries that initial “high” through the years. People also adapt over time to bad events. However, people take a long time to adapt to certain negative events such as unemployment. People become unhappy when they lose their work, but over time they recover to some extent. But even after a number of years, unemployed individuals sometimes have lower life satisfaction, indicating that they have not completely habituated to the experience. However, there are strong individual differences in adaptation, too. Some people are resilient and bounce back quickly after a bad event, and others are fragile and do not ever fully adapt to the bad event. Do you adapt quickly to bad events and bounce back, or do you continue to dwell on a bad event and let it keep you down? An example of adaptation to circumstances is shown in Figure 10.2.2, which shows the daily moods of “Harry,” a college student who had Hodgkin’s lymphoma (a form of cancer). As can be seen, over the 6-week period when I studied Harry’s moods, they went up and down. A few times his moods dropped into the negative zone below the horizontal blue line. Most of the time Harry’s moods were in the positive zone above the line. But about halfway through the study Harry was told that his cancer was in remission—effectively cured—and his moods on that day spiked way up. But notice that he quickly adapted—the effects of the good news wore off, and Harry adapted back toward where he was before. 
So even the very best news one can imagine—recovering from cancer—was not enough to give Harry a permanent “high.” Notice too, however, that Harry’s moods averaged a bit higher after cancer remission. Thus, the typical pattern is a strong response to the event, and then a dampening of this joy over time. However, even in the long run, the person might be a bit happier or unhappier than before. Outcomes of High Subjective Well-Being Is the state of happiness truly a good thing? Is happiness simply a feel-good state that leaves us unmotivated and ignorant of the world’s problems? Should people strive to be happy, or are they better off to be grumpy but “realistic”? Some have argued that happiness is actually a bad thing, leaving us superficial and uncaring. Most of the evidence so far suggests that happy people are healthier, more sociable, more productive, and better citizens (Diener & Tay, 2012; Lyubomirsky, King, & Diener, 2005). Research shows that the happiest individuals are usually very sociable. The table below summarizes some of the major findings. Although it is beneficial generally to be happy, this does not mean that people should be constantly euphoric. In fact, it is appropriate and helpful sometimes to be sad or to worry. At times a bit of worry mixed with positive feelings makes people more creative. Most successful people in the workplace seem to be those who are mostly positive but sometimes a bit negative. Thus, people need not be a superstar in happiness to be a superstar in life. What is not helpful is to be chronically unhappy. The important question is whether people are satisfied with how happy they are. If you feel mostly positive and satisfied, and yet occasionally worry and feel stressed, this is probably fine as long as you feel comfortable with this level of happiness. If you are a person who is chronically unhappy much of the time, changes are needed, and perhaps professional intervention would help as well. Measuring Happiness SWB researchers have relied primarily on self-report scales to assess happiness—how people rate their own happiness levels on self-report surveys. People respond to numbered scales to indicate their levels of satisfaction, positive feelings, and lack of negative feelings. You can see where you stand on these scales by going to internal.psychology.illinois....er/scales.html or by filling out the Flourishing Scale below. These measures will give you an idea of what popular scales of happiness are like. The self-report scales have proved to be relatively valid (Diener, Inglehart, & Tay, 2012), although people can lie, or fool themselves, or be influenced by their current moods or situational factors. Because the scales are imperfect, well-being scientists also sometimes use biological measures of happiness (e.g., the strength of a person’s immune system, or measuring various brain areas that are associated with greater happiness). Scientists also use reports by family, coworkers, and friends—these people reporting how happy they believe the target person is. Other measures are used as well to help overcome some of the shortcomings of the self-report scales, but most of the field is based on people telling us how happy they are using numbered scales. There are scales to measure life satisfaction (Pavot & Diener, 2008), positive and negative feelings, and whether a person is psychologically flourishing (Diener et al., 2009). 
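Scoring such numbered scales is typically straightforward: responses are summed or averaged, with higher totals indicating greater reported well-being. The sketch below is only an illustration; the example items are written in the spirit of life-satisfaction and flourishing measures, and the 1-7 agreement format and simple averaging are assumptions for the example rather than the published scoring instructions for the Diener scales.

```python
# Illustrative sketch of scoring a numbered self-report well-being survey.
# The items, the 1-7 agreement format, and the simple averaging are assumptions
# for this example, not the official scoring rules of any published scale.
from statistics import mean

ITEMS = [
    "In most ways my life is close to my ideal.",
    "I am satisfied with my life.",
    "My social relationships are supportive and rewarding.",
    "I lead a purposeful and meaningful life.",
]

def score_well_being(ratings):
    """ratings: one number per item, 1 (strongly disagree) to 7 (strongly agree)."""
    if len(ratings) != len(ITEMS) or not all(1 <= r <= 7 for r in ratings):
        raise ValueError("expected one rating between 1 and 7 for each item")
    return mean(ratings)  # higher average = greater reported well-being

print(score_well_being([6, 5, 6, 7]))  # 6 -- toward the satisfied end of the 1-7 range
```

Published instruments also come with guidance this sketch omits, such as how to interpret a given score against normative benchmarks.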
Flourishing has to do with whether a person feels meaning in life, has close relationships, and feels a sense of mastery over important life activities. You can take the well-being scales created in the Diener laboratory, and let others take them too, because they are free and open for use.

Some Ways to Be Happier

Most people are fairly happy, but many of them also wish they could be a bit more satisfied and enjoy life more. Prescriptions about how to achieve more happiness are often oversimplified because happiness has different components, and prescriptions need to be aimed at where each individual needs improvement—one size does not fit all. A person might be strong in one area and deficient in other areas. People with prolonged serious unhappiness might need help from a professional. Thus, recommendations for how to achieve happiness are often appropriate for one person but not for others. With this in mind, I list in Table 4 below some general recommendations for you to be happier (see also Lyubomirsky, 2013):

Outside Resources

Web: Barbara Fredrickson’s website on positive emotions www.unc.edu/peplab/news.html
Web: Ed Diener’s website internal.psychology.illinois.edu/~ediener/
Web: International Positive Psychology Association http://www.ippanetwork.org/
Web: Positive Acorn Positive Psychology website http://positiveacorn.com/
Web: Sonja Lyubomirsky’s website on happiness http://sonjalyubomirsky.com/
Web: University of Pennsylvania Positive Psychology Center website http://www.ppc.sas.upenn.edu/
Web: World Database on Happiness www1.eur.nl/fsw/happiness/

Discussion Questions

1. Which do you think is more important, the “top-down” personality influences on happiness or the “bottom-up” situational circumstances that influence it? In other words, discuss whether internal sources such as personality and outlook or external factors such as situations, circumstances, and events are more important to happiness. Can you make an argument that both are very important?
2. Do you know people who are happy in one way but not in others? People who are high in life satisfaction, for example, but low in enjoying life or high in negative feelings? What should they do to increase their happiness across all three types of subjective well-being?
3. Certain sources of happiness have been emphasized in this book, but there are others. Can you think of other important sources of happiness and unhappiness? Do you think religion, for example, is a positive source of happiness for most people? What about age or ethnicity? What about health and physical handicaps? If you were a researcher, what question might you tackle on the influences on happiness?
4. Are you satisfied with your level of happiness? If not, are there things you might do to change it? Would you function better if you were happier?
5. How much happiness is helpful to make a society thrive? Do people need some worry and sadness in life to help them avoid bad things? When is satisfaction a good thing, and when is some dissatisfaction a good thing?
6. How do you think money can help happiness? Interfere with happiness? What level of income will you need to be satisfied?

Vocabulary

Adaptation: The fact that after people first react to good or bad events, sometimes in a strong way, their feelings and reactions tend to dampen down over time and they return toward their original level of subjective well-being.
“Bottom-up” or external causes of happiness: Situational factors outside the person that influence his or her subjective well-being, such as good and bad events and circumstances like health and wealth.
Happiness: The popular word for subjective well-being. Scientists sometimes avoid using this term because it can refer to different things, such as feeling good, being satisfied, or even the causes of high subjective well-being.
Life satisfaction: A person’s reflective judgment of the degree to which his or her life is going well, by whatever standards that person thinks are most important for a good life.
Negative feelings: Undesirable and unpleasant feelings that people tend to avoid if they can. Moods and emotions such as depression, anger, and worry are examples.
Positive feelings: Desirable and pleasant feelings. Moods and emotions such as enjoyment and love are examples.
Subjective well-being: The name that scientists give to happiness—thinking and feeling that our lives are going very well.
Subjective well-being scales: Self-report surveys or questionnaires in which participants indicate their levels of subjective well-being by responding to items with a number that indicates how well off they feel.
“Top-down” or internal causes of happiness: The person’s outlook and habitual response tendencies that influence their happiness—for example, their temperament or optimistic outlook on life.
textbooks/socialsci/Psychology/Introductory_Psychology/Psychology_(Noba)/Chapter_10%3A_Well_Being/10.2%3A_Happiness%3A_The_Science_of_Subjective_Well-Being.txt