• 3.1: Vision
Vision is the sensory modality that transforms light into a psychological experience of the world around you, with minimal bodily effort. This module provides an overview of the most significant steps in this transformation and strategies that your brain uses to achieve this visual understanding of the environment.
• 3.2: Hearing
Hearing allows us to perceive the world of acoustic vibrations all around us, and provides us with our most important channels of communication. This module reviews the basic mechanisms of hearing, beginning with the anatomy and physiology of the ear and a brief review of the auditory pathways up to the auditory cortex.
• 3.3: Taste and Smell
Humans are omnivores (able to survive on many different foods). The omnivore’s dilemma is to identify foods that are healthy and avoid poisons. Taste and smell cooperate to solve this dilemma. Stimuli for both taste and smell are chemicals. Smell results from a biological system that essentially permits the brain to store rough sketches of the chemical structures of odor stimuli in the environment.
• 3.4: Touch and Pain
The sensory systems of touch and pain provide us with information about our environment and our bodies that is often crucial for survival and well-being. Moreover, touch is a source of pleasure. In this module, we review how information about our environment and our bodies is coded in the periphery and interpreted by the brain as touch and pain sensations. We discuss how these experiences are often dramatically shaped by top-down factors like motivation, expectation, mood, fear, stress, and context.
• 3.5: The Vestibular System
The vestibular system functions to detect head motion and position relative to gravity and is primarily involved in the fine control of visual gaze, posture, orthostasis, spatial orientation, and navigation. Vestibular signals are highly processed in many regions of the brain and are involved in many essential functions. In this module, we provide an overview of how the vestibular system works and how vestibular signals are used to guide behavior.
• 3.6: Multi-Modal Perception
Most of the time, we perceive the world as a unified bundle of sensations from multiple sensory modalities. In other words, our perception is multimodal. This module provides an overview of multimodal perception, including information about its neurobiology and its psychological effects.
• 3.7: Eyewitness Testimony and Memory Biases
Eyewitnesses can provide very compelling legal testimony, but rather than recording experiences flawlessly, their memories are susceptible to a variety of errors and biases. They (like the rest of us) can make errors in remembering specific details and can even remember whole events that did not actually happen. In this module, we discuss several of the common types of errors, and what they can tell us about human memory and its interactions with the legal system.
03: Sensation and Perception
By Simona Buetti and Alejandro Lleras
University of Illinois at Urbana-Champaign
Vision is the sensory modality that transforms light into a psychological experience of the world around you, with minimal bodily effort. This module provides an overview of the most significant steps in this transformation and strategies that your brain uses to achieve this visual understanding of the environment.
learning objectives
• Describe how the eye transforms light information into neural energy.
• Understand what sorts of information the brain is interested in extracting from the environment and why it is useful.
• Describe how the visual system has adapted to deal with different lighting conditions.
• Understand the value of having two eyes.
• Understand why we have color vision.
• Understand the interdependence between vision and other brain functions.
What Is Vision?
Think about the spectacle of a starry night. You look up at the sky, and thousands of photons from distant stars come crashing into your retina, a light-sensitive structure at the back of your eyeball. These photons are millions of years old and have survived a trip across the universe, only to run into one of your photoreceptors. Tough luck: in one thousandth of a second, this little bit of light energy becomes the fuel for a photochemical reaction known as photoactivation. The light energy becomes neural energy and triggers a cascade of neural activity that, a few hundredths of a second later, will result in your becoming aware of that distant star. You and the universe, united by photons. That is the amazing power of vision. Light brings the world to you. Without moving, you know what’s out there. You can recognize friends coming to meet you before you can hear them, and tell ripe fruits from green ones on the tree without having to taste them or even reach out to grab them. You can also tell how quickly a ball is moving in your direction (Will it hit you? Can you hit it?).
How does all of that happen? First, light enters the eyeball through a tiny hole known as the pupil and, thanks to the refractive properties of your cornea and lens, this light signal gets projected sharply onto the retina (see Outside Resources for links to a more detailed description of the eye structure). There, light is transduced into neural energy by about 200 million photoreceptor cells.
This is where the information carried by the light about distant objects and colors starts being encoded by our brain. There are two different types of photoreceptors: rods and cones. The human eye contains more rods than cones. Rods give us sensitivity under dim lighting conditions and allow us to see at night. Cones allow us to see fine details in bright light and give us the sensation of color. Cones are tightly packed around the fovea (the central region of the retina behind your pupil) and more sparsely elsewhere. Rods populate the periphery (the region surrounding the fovea) and are almost absent from the fovea.
But vision is far more complex than just catching photons. The information encoded by the photoreceptors undergoes a rapid and continuous set of ever more complex analyses so that, eventually, you can make sense of what’s out there. At the fovea, visual information is encoded separately from tiny portions of the world (each about half the width of a human hair viewed at arm’s length) so that eventually the brain can reconstruct in great detail fine visual differences from locations at which you are directly looking. This fine level of encoding requires lots of light, and it is slow going (neurally speaking). In contrast, in the periphery, there is a different encoding strategy: detail is sacrificed in exchange for sensitivity. Information is summed across larger sections of the world. This aggregation occurs quickly and allows you to detect dim signals under very low levels of light, as well as detect sudden movements in your peripheral vision.
The Importance of Contrast
What happens next? Well, you might think that the eye would do something like record the amount of light at each location in the world and then send this information to the visual-processing areas of the brain (an astounding 30% of the cortex is influenced by visual signals!). But, in fact, that is not what eyes do. As soon as photoreceptors capture light, the nervous system gets busy analyzing differences in light, and it is these differences that get transmitted to the brain. The brain, it turns out, cares little about the overall amount of light coming from a specific part of the world, or in the scene overall. Rather, it wants to know: does the light coming from this one point differ from the light coming from the point next to it? Place your hand on the table in front of you. The contour of your hand is actually determined by the difference in light—the contrast—between the light coming from the skin in your hand and the light coming from the table underneath. To find the contour of your hand, we simply need to find the regions in the image where the difference in light between two adjacent points is maximal. Two points on your skin will reflect similar levels of light back to you, as will two points on the table. On the other hand, two points that fall on either side of the boundary contour between your hand and the table will reflect very different light.
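The hand-on-the-table idea can be written down as a tiny sketch. This is purely illustrative (the retina does not literally run this loop, and the intensity values are made up): scan pairs of adjacent light measurements and report where the difference is largest.

```python
def contour_position(intensities):
    """Return the index of the largest jump between adjacent light samples."""
    diffs = [abs(b - a) for a, b in zip(intensities, intensities[1:])]
    return diffs.index(max(diffs))

# Light reflected along a line crossing from skin (~40) onto the
# table (~12), with a little measurement noise on each surface.
row = [41, 40, 39, 41, 13, 12, 11, 12]
print(contour_position(row))  # the contour lies between samples 3 and 4
```

Within each surface the differences are small (noise of a point or two), but at the boundary the difference jumps by more than an order of magnitude, so the maximum reliably picks out the contour.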
The fact that the brain is interested in coding contrast in the world reveals something deeply important about the forces that drove the evolution of our brain: encoding the absolute amount of light in the world tells us little about what is out there. But if your brain can detect the sudden appearance of a difference in light somewhere in front of you, then it must be that something new is there. That contrast signal is information. That information may represent something that you like (food, a friend) or something dangerous approaching (a tiger, a cliff). The rest of your visual system will work hard to determine what that thing is, but as quickly as 10 ms after light enters your eyes, ganglion cells in your retinae have already encoded all the differences in light from the world in front of you.
Contrast is so important that your neurons go out of their way not only to encode differences in light but to exaggerate those differences for you, lest you miss them. Neurons achieve this via a process known as lateral inhibition. When a neuron is firing in response to light, it produces two signals: an output signal to pass on to the next level in vision, and a lateral signal to inhibit all neurons that are next to it. This makes sense on the assumption that nearby neurons are likely responding to the same light coming from nearby locations, so this information is somewhat redundant. The magnitude of the lateral inhibitory signal a neuron produces is proportional to the excitatory input that neuron receives: the more a neuron fires, the stronger the inhibition it produces. Figure 8.2.1 illustrates how lateral inhibition amplifies contrast signals at the edges of surfaces.
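The description above can be put into numbers. The toy model below is a didactic simplification, not a biophysical one (the inhibition fraction `k` and the intensity values are invented for illustration): each cell's output is its own input minus a fixed fraction of its two neighbors' inputs, so the inhibition a cell receives grows with how strongly its neighbors are driven.

```python
def lateral_inhibition(stimulus, k=0.25):
    """Each cell's response: its input minus k times each neighbor's input."""
    out = []
    for i, x in enumerate(stimulus):
        # Cells at the array edges are treated as their own neighbor.
        left = stimulus[i - 1] if i > 0 else x
        right = stimulus[i + 1] if i < len(stimulus) - 1 else x
        out.append(x - k * (left + right))
    return out

# A step edge: a dim surface (intensity 10) meets a bright surface (50).
print(lateral_inhibition([10, 10, 10, 50, 50, 50]))
# [5.0, 5.0, -5.0, 35.0, 25.0, 25.0]
```

Away from the edge, responses settle at 5 and 25 (a difference of 20), but right at the boundary they swing to -5 and 35 (a difference of 40): the contrast is exaggerated exactly where the surfaces meet, which is the Mach-band effect described above.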
Sensitivity to Different Light Conditions
Let’s think for a moment about the range of conditions in which your visual system must operate day in and day out. When you take a walk outdoors on a sunny day, as many as billions of photons enter your eyeballs every second. In contrast, when you wake up in the middle of the night in a dark room, only a few hundred photons per second may be entering your eyes. To deal with these extremes, the visual system relies on the different properties of the two types of photoreceptors. Rods are mostly responsible for processing light when photons are scarce (just a single photon can make a rod fire!), but it takes time to replenish the visual pigment that rods require for photoactivation. So, under bright conditions, rods are quickly bleached (Stuart & Birge, 1996) and cannot keep up with the constant barrage of photons hitting them. That’s when the cones become useful. Cones require more photons to fire and, more critically, their photopigments replenish much faster than rods’ photopigments, allowing them to keep up when photons are abundant.
What happens when you abruptly change lighting conditions? Under bright light, your rods are bleached. When you move into a dark environment, it will take time (up to 30 minutes) before they chemically recover (Hurley, 2002). Once they do, you will begin to see things around you that initially you could not. This phenomenon is called dark adaptation. When you go from dark to bright light (as you exit a tunnel on a highway, for instance), your rods will be bleached in a blaze and you will be blinded by the sudden light for about 1 second. However, your cones are ready to fire! Their firing will take over and you will quickly begin to see at this higher level of light.
A similar, but more subtle, adjustment occurs when the change in lighting is not so drastic. Think about your experience of reading a book at night in your bed compared to reading outdoors: the room may feel to you fairly well illuminated (enough so you can read) but the light bulbs in your room are not producing the billions of photons that you encounter outside. In both cases, you feel that your experience is that of a well-lit environment. You don’t feel one experience as millions of times brighter than the other. This is because vision (like much of perception) is not proportional: seeing twice as many photons does not produce a sensation of seeing twice as bright a light. The visual system tunes into the current environment by favoring a range of contrast values that is most informative in that environment (Gardner et al., 2005). This is the concept of contrast gain: the visual system determines the mean contrast in a scene and represents values around that mean contrast best, while ignoring smaller contrast differences. (See the Outside Resources section for a demonstration.)
The Reconstruction Process
What happens once information leaves your eyes and enters the brain? Neurons project first into the thalamus, in a section known as the lateral geniculate nucleus. The information then splits and projects towards two different parts of the brain. Most of the computations regarding reflexive eye movements are carried out in subcortical regions, the evolutionarily older part of the brain. Reflexive eye movements allow you to quickly orient your eyes towards areas of interest and to track objects as they move. The more complex computations, those that eventually allow you to have a visual experience of the world, all happen in the cortex, the evolutionarily newer region of the brain. The first stop in the cortex is the primary visual cortex (also known as V1). Here, the “reconstruction” process begins in earnest: based on the contrast information arriving from the eyes, neurons start computing information about color and simple lines, detecting various orientations and thicknesses. Small-scale motion signals are also computed (Hubel & Wiesel, 1962).
As information begins to flow towards other “higher” areas of the system, more complex computations are performed. For example, edges are assigned to the object to which they belong, backgrounds are separated from foregrounds, colors are assigned to surfaces, and the global motion of objects is computed. Many of these computations occur in specialized brain areas. For instance, an area called MT processes global-motion information; the parahippocampal place area identifies locations and scenes; the fusiform face area specializes in identifying objects for which fine discriminations are required, like faces. There is even a brain region specialized in letter and word processing. These visual-recognition areas are located along the ventral pathway of the brain (also known as the What pathway). Other brain regions along the dorsal pathway (or Where-and-How pathway) will compute information about self- and object-motion, allowing you to interact with objects, navigate the environment, and avoid obstacles (Goodale & Milner, 1992).
Now that you have a basic understanding of how your visual system works, you can ask yourself the question: why do you have two eyes? Everything that we have discussed so far could be computed with information coming from a single eye. So why two? Looking at the animal kingdom gives us a clue. Animals that are typically prey have eyes located on opposite sides of their skulls, allowing them to detect predators appearing anywhere around them. Humans, like most predators, have two eyes pointing in the same direction, encoding almost exactly the same scene twice. This redundancy gives us a binocular advantage: having two eyes not only provides you with two chances at catching a signal in front of you, but the minute difference in perspective between the two eyes is used by your brain to reconstruct a sense of three-dimensional space. You can estimate how far objects are from you, their size, and their volume. This is no easy feat: the signal in each eye is a two-dimensional projection of the world, like two separate pictures drawn upon your retinae. Yet, your brain effortlessly provides you with a sense of depth by combining those two signals. This 3-D reconstruction process also relies heavily on all the knowledge you have acquired through experience about spatial information. For instance, your visual system learns to interpret how the volume, distance, and size of objects change as they move closer or farther from you. (See the Outside Resources section for demonstrations.)
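The geometry behind that depth estimate can be sketched with the standard pinhole-stereo formula from computer vision. This is a camera idealization, not a claim about neural wiring, and the 6.3 cm baseline and 1.7 cm "focal length" are nominal illustrative values: distance works out to be inversely proportional to the disparity between the two images.

```python
def depth_from_disparity(baseline_cm, focal_cm, disparity_cm):
    """Pinhole-stereo depth: objects producing smaller disparity are farther."""
    return baseline_cm * focal_cm / disparity_cm

EYE_SEPARATION = 6.3   # nominal interocular distance, cm
FOCAL = 1.7            # nominal "focal length" of the eye, cm

near = depth_from_disparity(EYE_SEPARATION, FOCAL, 0.02)
far = depth_from_disparity(EYE_SEPARATION, FOCAL, 0.01)
print(near, far)  # halving the disparity doubles the estimated distance
```

The inverse relationship also hints at why stereo depth is most precise for nearby objects: at large distances the disparity shrinks toward zero, and tiny measurement errors translate into large errors in estimated distance.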
The Experience of Color
Perhaps one of the most beautiful aspects of vision is the richness of the color experience that it provides us. One of the challenges that we have as scientists is to understand why the human color experience is what it is. Perhaps you have heard that dogs have only 2 types of color photoreceptors, whereas humans have 3, chickens have 4, and mantis shrimp have 16. Why is there such variation across species? Scientists believe that each species has evolved with different needs, using color perception to signal information about food, reproduction, and health that is unique to that species. For example, humans have a specific sensitivity that allows us to detect slight changes in skin tone. We can tell when someone is embarrassed, aroused, or ill. Detecting these subtle signals is adaptive in a social species like ours.
How is color coded in the brain? The two leading theories of color perception were proposed in the 19th century, about 100 years before physiological evidence was found to corroborate them both (Svaetichin, 1956). Trichromacy theory, advanced by Young (1802) and Helmholtz (1867), proposes that the eye has three different types of color-sensitive cells, based on the observation that any one color can be reproduced by combining lights from three lamps of different hues. If you can adjust separately the intensity of each light, at some point you will find the right combination of the three lights to match any color in the world. This principle is used today in TVs, computer screens, and any colored display. If you look closely enough at a pixel, you will find that it is composed of a blue, a red, and a green light of varying intensities. In the retina, humans indeed have three types of cones: S-cones, M-cones, and L-cones (also known as blue, green, and red cones, respectively), each sensitive to a different range of wavelengths of light.
Around the same time, Hering made a puzzling discovery: some colors are impossible to create. Whereas you can make yellowish greens, bluish reds, greenish blues, and reddish yellows by combining two colors, you can never make a reddish green or a bluish yellow. This observation led Hering (1892) to propose the Opponent Process theory of color: color is coded via three opponent channels (red-green, blue-yellow, and black-white). Within each channel, a comparison is constantly computed between the two elements in the pair. In other words, colors are encoded as differences between two hues and not as simple combinations of hues. Again, what matters to the brain is contrast. When one element is stronger than the other, the stronger color is perceived and the weaker one is suppressed. You can experience this phenomenon by following the link below.
nobaproject.com/assets/modules/module-visio...
When both colors in a pair are present to equal extents, the color perception is canceled and we perceive a level of grey. This is why you cannot see a reddish green or a bluish yellow: they cancel each other out. By the way, if you are wondering where the yellow signal comes from, it turns out that it is computed by averaging the M- and L-cone signals. Are these colors uniquely human colors? Some think that they are: the red-green contrast, for example, is finely tuned to detect changes in human skin tone so you can tell when someone blushes or becomes pale. So, the next time you go out for a walk with your dog, look at the sunset and ask yourself, what color does my dog see? Probably none of the orange hues you do!
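The channel arithmetic described above can be sketched directly. The formulas below are a textbook-style simplification of opponent coding, not the exact retinal wiring: yellow is taken as the average of the M- and L-cone signals, and each opponent channel carries a signed difference between the two members of its pair.

```python
def opponent_channels(s, m, l):
    """Map cone activations (0-1) to red-green, blue-yellow, and luminance signals."""
    yellow = (m + l) / 2          # yellow computed from the M- and L-cone signals
    red_green = l - m             # positive -> reddish, negative -> greenish
    blue_yellow = s - yellow      # positive -> bluish, negative -> yellowish
    luminance = (s + m + l) / 3   # crude black-white channel
    return red_green, blue_yellow, luminance

# Equally strong "red" (L) and "green" (M) input with no S input:
rg, by, lum = opponent_channels(s=0.0, m=0.8, l=0.8)
print(rg)   # 0.0 -> the red-green channel cancels completely
print(by)   # -0.8 -> the percept is strongly yellowish
```

With red and green inputs balanced, the red-green channel reads exactly zero: the two signals cancel rather than blend, which is the Opponent Process account of why no color ever looks reddish green.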
So now you can ask yourself the question: do all humans experience color in the same way? Color-blind people, as you can imagine, do not see all the colors that the rest of us see, because they lack one (or more) types of cones in their retinas. Incidentally, a few women actually have four different sets of cones in their eyes, and recent research suggests that their experience of color can be (but is not always) richer than that of three-coned people. A slightly different question, though, is whether all three-coned people have the same internal experiences of colors: is the red inside your head the same red inside your mom’s head? That question, debated by philosophers for millennia, is almost impossible to answer, yet recent data suggest that there might in fact be cultural differences in the way we perceive color. As it turns out, not all cultures categorize colors in the same way, and some groups “see” different shades of what we in the Western world would call the “same” color as categorically different colors. The Berinmo tribe in New Guinea, for instance, appear to experience the green shades of living leaves as belonging to an entirely different color category than the green shades of dying leaves. Russians, too, appear to experience light and dark shades of blue as different categories of color, in a way that most Westerners do not. Further, current brain-imaging research suggests that people’s brains change (increasing in white-matter volume) when they learn new color categories! These intriguing findings suggest that our cultural environment may have a small but definite impact on how people use and experience colors across the globe.
Integration with Other Modalities
Vision is not an encapsulated system. It interacts with and depends on other sensory modalities. For example, when you move your head in one direction, your eyes reflexively move in the opposite direction to compensate, allowing you to maintain your gaze on the object that you are looking at. This reflex is called the vestibulo-ocular reflex. It is achieved by integrating information from both the visual and the vestibular system (which knows about body motion and position). You can experience this compensation quite simply. First, while you keep your head still and your gaze looking straight ahead, wave your finger in front of you from side to side. Notice how the image of the finger appears blurry. Now, keep your finger steady and look at it while you move your head from side to side. Notice how your eyes reflexively move to compensate for the movement of your head and how the image of the finger stays sharp and stable. Vision also interacts with your proprioceptive system, to help you find where all your body parts are, and with your auditory system, to help you understand the sounds people make when they speak. You can learn more about this in the Noba module about multimodal perception (http://noba.to/cezw4qyn).
Finally, vision is also often implicated in a blending-of-sensations phenomenon known as synesthesia. Synesthesia occurs when one sensory signal gives rise to two or more sensations. The most common type is grapheme-color synesthesia. About 1 in 200 individuals experience a sensation of color associated with specific letters, numbers, or words: the number 1 might always be seen as red, the number 2 as orange, etc. But the more fascinating forms of synesthesia blend sensations from entirely different sensory modalities, like taste and color or music and color: the taste of chicken might elicit a sensation of green, for example, and the timbre of violin a deep purple.
Concluding Remarks
We are at an exciting moment in our scientific understanding of vision. We have just begun to get a functional understanding of the visual system. That understanding is not yet advanced enough for us to recreate artificial visual systems (i.e., we still cannot make robots that “see” and understand light signals as we do), but we are getting there. Just recently, major breakthroughs in vision science have allowed researchers to significantly improve retinal prosthetics: photosensitive circuits that can be implanted on the back of the eyeball of blind people, connect to visual areas of the brain, and partially restore a “visual experience” to these patients (Nirenberg & Pandarinath, 2012). And using functional magnetic resonance imaging, we can now “decode” from your brain activity the images that you saw in your dreams while you were asleep (Horikawa, Tamaki, Miyawaki, & Kamitani, 2013)! Yet, there is still so much more to understand. Consider this: if vision is a construction process that takes time, whatever we see now is no longer what is in front of us. Yet, humans can do amazing time-sensitive feats like hitting a 90-mph fastball in a baseball game. It appears, then, that a fundamental function of vision is not just to know what is happening around you now, but to make an accurate inference about what you are about to see next (Enns & Lleras, 2008), so that you can keep up with the world. Understanding how this future-oriented, predictive function of vision is achieved in the brain is probably the next big challenge in this fascinating realm of research.
Outside Resources
Video: Acquired knowledge and its impact on our three-dimensional interpretation of the world - 3D Street Art
Video: Acquired knowledge and its impact on our three-dimensional interpretation of the world - Anamorphic Illusions
Video: Acquired knowledge and its impact on our three-dimensional interpretation of the world - Optical Illusion
Web: Amazing library with visual phenomena and optical illusions, explained
http://michaelbach.de/ot/index.html
Web: Anatomy of the eye
http://www.eyecareamerica.org/eyecare/anatomy/
Web: Demonstration of contrast gain adaptation
http://www.michaelbach.de/ot/lum_contrast-adapt/
Web: Demonstration of illusory contours and lateral inhibition. Mach bands
http://michaelbach.de/ot/lum-MachBands/index.html
Web: Demonstration of illusory contrast and lateral inhibition. The Hermann grid
http://michaelbach.de/ot/lum_herGrid/
Web: Further information regarding what and where/how pathways
http://www.scholarpedia.org/article/...where_pathways
Discussion Questions
1. When running in the dark, it is recommended that you never look straight at the ground. Why? What would be a better strategy to avoid obstacles?
2. The majority of ganglion cells in the eye specialize in detecting drops in the amount of light coming from a given location. That is, they increase their firing rate when they detect less light coming from a specific location. Why might the absence of light be more important than the presence of light? Why would it be evolutionarily advantageous to code this type of information?
3. There is a hole in each one of your eyeballs called the optic disk. This is where veins enter the eyeball and where neurons (the axons of the ganglion cells) exit the eyeball. Why do you not see two holes in the world all the time? Close one eye now. Why do you not see a hole in the world now? To “experience” a blind spot, follow the instructions in this website: http://michaelbach.de/ot/cog_blindSpot/index.html
4. Imagine you were given the task of testing the color-perception abilities of a newly discovered species of monkeys in the South Pacific. How would you go about it?
5. An important aspect of emotions is that we sense them in ourselves much in the same way as we sense other perceptions like vision. Can you think of an example where the concept of contrast gain can be used to understand people’s responses to emotional events?
Vocabulary
Binocular advantage
Benefits from having two eyes as opposed to a single eye.
Cones
Photoreceptors that operate in lighted environments and can encode fine visual details. There are three different kinds (S or blue, M or green, and L or red), each sensitive to slightly different wavelengths of light. Combined, these three types of cones allow you to have color vision.
Contrast
Relative difference in the amount and type of light coming from two nearby locations.
Contrast gain
Process where the sensitivity of your visual system can be tuned to be most sensitive to the levels of contrast that are most prevalent in the environment.
Dark adaptation
Process that allows you to become sensitive to very small levels of light, so that you can actually see in the near-absence of light.
Lateral inhibition
A signal produced by a neuron aimed at suppressing the response of nearby neurons.
Opponent Process Theory
Theory of color vision that assumes there are four different basic colors, organized into two pairs (red/green and blue/yellow) and proposes that colors in the world are encoded in terms of the opponency (or difference) between the colors in each pair. There is an additional black/white pair responsible for coding light contrast.
Photoactivation
A photochemical reaction that occurs when light hits photoreceptors, producing a neural signal.
Primary visual cortex (V1)
Brain region located in the occipital cortex (toward the back of the head) responsible for processing basic visual information like the detection, thickness, and orientation of simple lines, color, and small-scale motion.
Rods
Photoreceptors that are very sensitive to light and are mostly responsible for night vision.
Synesthesia
The blending of two or more sensory experiences, or the automatic activation of a secondary (indirect) sensory experience due to certain aspects of the primary (direct) sensory stimulation.
Trichromacy theory
Theory that proposes that all of your color perception is fundamentally based on the combination of three (not two, not four) different color signals.
Vestibulo-ocular reflex
Coordination of motion information with visual information that allows you to maintain your gaze on an object while you move.
What pathway
Pathway of neural processing in the brain that is responsible for your ability to recognize what is around you.
Where-and-How pathway
Pathway of neural processing in the brain that is responsible for you knowing where things are in the world and how to interact with them.
By Andrew J. Oxenham
University of Minnesota
Hearing allows us to perceive the world of acoustic vibrations all around us, and provides us with our most important channels of communication. This module reviews the basic mechanisms of hearing, beginning with the anatomy and physiology of the ear and a brief review of the auditory pathways up to the auditory cortex. An outline of the basic perceptual attributes of sound, including loudness, pitch, and timbre, is followed by a review of the principles of tonotopic organization, established in the cochlea. An overview of masking and frequency selectivity is followed by a review of the perception and neural mechanisms underlying spatial hearing. Finally, an overview is provided of auditory scene analysis, which tackles the important question of how the auditory system is able to make sense of the complex mixtures of sounds that are encountered in everyday acoustic environments.
learning objectives
• Describe the basic auditory attributes of sound.
• Describe the structure and general function of the auditory pathways from the outer ear to the auditory cortex.
• Discuss ways in which we are able to locate sounds in space.
• Describe various acoustic cues that contribute to our ability to perceptually segregate simultaneously arriving sounds.
Hearing forms a crucial part of our everyday life. Most of our communication with others, via speech or music, reaches us through the ears. Indeed, a saying, often attributed to Helen Keller, is that blindness separates us from things, but deafness separates us from people. The ears respond to acoustic information, or sound—tiny and rapid variations in air pressure. Sound waves travel from the source and produce pressure variations in the listener’s ear canals, causing the eardrums (or tympanic membranes) to vibrate. This module provides an overview of the events that follow, which convert these simple mechanical vibrations into our rich experience known as hearing, or auditory perception.
Perceptual Attributes of Sound
There are many ways to describe a sound, but the perceptual attributes of a sound can typically be divided into three main categories—namely, loudness, pitch, and timbre. Although all three refer to perception, and not to the physical sounds themselves, they are strongly related to various physical variables.
Loudness
The most direct physical correlate of loudness is sound intensity (or sound pressure) measured close to the eardrum. However, many other factors also influence the loudness of a sound, including its frequency content, its duration, and the context in which it is presented. Some of the earliest psychophysical studies of auditory perception, going back more than a century, were aimed at examining the relationships between perceived loudness, the physical sound intensity, and the just-noticeable differences in loudness (Fechner, 1860; Stevens, 1957). A great deal of time and effort has been spent refining various measurement methods. These methods involve techniques such as magnitude estimation, where a series of sounds (often sinusoids, or pure tones of single frequency) are presented sequentially at different sound levels, and subjects are asked to assign numbers to each tone, corresponding to the perceived loudness. Other studies have examined how loudness changes as a function of the frequency of a tone, resulting in the international standard equal-loudness-level contours (ISO, 2003), which are used in many areas of industry to assess noise and annoyance issues. Such studies have led to the development of computational models that are designed to predict the loudness of arbitrary sounds (e.g., Moore, Glasberg, & Baer, 1997).
Pitch
Pitch plays a crucial role in acoustic communication. Pitch variations over time provide the basis of melody for most types of music; pitch contours in speech provide us with important prosodic information in non-tone languages, such as English, and help define the meaning of words in tone languages, such as Mandarin Chinese. Pitch is essentially the perceptual correlate of waveform periodicity, or repetition rate: The faster a waveform repeats over time, the higher its perceived pitch. The most common pitch-evoking sounds are known as harmonic complex tones. They are complex because they consist of more than one frequency, and they are harmonic because the frequencies are all integer multiples of a common fundamental frequency (F0). For instance, a harmonic complex tone with an F0 of 100 Hz would also contain energy at frequencies of 200, 300, 400 Hz, and so on. These higher frequencies are known as harmonics or overtones, and they also play an important role in determining the pitch of a sound. In fact, even if the energy at the F0 is absent or masked, we generally still perceive the remaining sound to have a pitch corresponding to the F0. This phenomenon is known as the “pitch of the missing fundamental,” and it has played an important role in the formation of theories and models about pitch (de Cheveigné, 2005). We hear pitch with sufficient accuracy to perceive melodies over a range of F0s from about 30 Hz (Pressnitzer, Patterson, & Krumbholz, 2001) up to about 4–5 kHz (Attneave & Olson, 1971; Oxenham, Micheyl, Keebler, Loper, & Santurette, 2011). This range also corresponds quite well to the range covered by musical instruments; for instance, the modern grand piano has notes that extend from 27.5 Hz to 4,186 Hz. We are able to discriminate changes in frequency above 5,000 Hz, but we are no longer very accurate in recognizing melodies or judging musical intervals.
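The arithmetic behind the missing fundamental can be illustrated with a short sketch: given only the higher harmonics of a complex tone, their greatest common divisor recovers the F0 that listeners report hearing. This code is our illustration, not part of the original module; the frequencies are chosen to match the 100-Hz example above.

```python
from math import gcd
from functools import reduce

def implied_f0(harmonic_freqs_hz):
    """Greatest common divisor of a set of harmonic frequencies:
    the F0 a listener perceives even when energy at F0 is absent."""
    return reduce(gcd, harmonic_freqs_hz)

# Harmonics of a 100-Hz complex tone with the fundamental removed
print(implied_f0([200, 300, 400]))  # → 100
```

Note that this is only a toy model of the computation; actual neural pitch extraction is the subject of the theories and models cited above (de Cheveigné, 2005).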
Timbre
Timbre refers to the quality of sound, and is often described using words such as bright, dull, harsh, and hollow. Technically, timbre includes anything that allows us to distinguish two sounds that have the same loudness, pitch, and duration. For instance, a violin and a piano playing the same note sound very different, based on their sound quality or timbre.
An important aspect of timbre is the spectral content of a sound. Sounds with more high-frequency energy tend to sound brighter, tinnier, or harsher than sounds with more low-frequency content, which might be described as deep, rich, or dull. Other important aspects of timbre include the temporal envelope (or outline) of the sound, especially how it begins and ends. For instance, a piano has a rapid onset, or attack, produced by the hammer striking the string, whereas the attack of a clarinet note can be much more gradual. Artificially changing the onset of a piano note by, for instance, playing a recording backwards, can dramatically alter its character so that it is no longer recognizable as a piano note. In general, the overall spectral content and the temporal envelope can provide a good first approximation to any sound, but it turns out that subtle changes in the spectrum over time (or spectro-temporal variations) are crucial in creating plausible imitations of natural musical instruments (Risset & Wessel, 1999).
An Overview of the Auditory System
Our auditory perception depends on how sound is processed through the ear. The ear can be divided into three main parts—the outer, middle, and inner ear (see Figure 8.4.1). The outer ear consists of the pinna (the visible part of the ear, with all its unique folds and bumps), the ear canal (or auditory meatus), and the tympanic membrane. Of course, most of us have two functioning ears, which turn out to be particularly useful when we are trying to figure out where a sound is coming from. As discussed below in the section on spatial hearing, our brain can compare the subtle differences in the signals at the two ears to localize sounds in space. However, this trick does not always help: for instance, a sound directly in front or directly behind you will not produce a difference between the ears. In these cases, the filtering produced by the pinnae helps us localize sounds and resolve potential front-back and up-down confusions. More generally, the folds and bumps of the pinna produce distinct peaks and dips in the frequency response that depend on the location of the sound source. The brain then learns to associate certain patterns of spectral peaks and dips with certain spatial locations. Interestingly, this learned association remains malleable, or plastic, even in adulthood. For instance, a study that altered the pinnae using molds found that people could learn to use their “new” ears accurately within a matter of a few weeks (Hofman, Van Riswick, & Van Opstal, 1998). Because of the small size of the pinna, these kinds of acoustic cues are only found at high frequencies, above about 2 kHz. At lower frequencies, the sound is basically unchanged whether it comes from above, in front, or below. The ear canal itself is a tube that helps to amplify sound in the region from about 1 to 4 kHz—a region particularly important for speech communication.
The middle ear consists of an air-filled cavity, which contains the middle-ear bones, known as the malleus, incus, and stapes, or hammer, anvil, and stirrup, because of their respective shapes. They have the distinction of being the smallest bones in the body. Their primary function is to transmit the vibrations from the tympanic membrane to the oval window of the cochlea and, via a form of lever action, to better match the impedance of the air surrounding the tympanic membrane with that of the fluid within the cochlea.
The inner ear includes the cochlea, encased in the temporal bone of the skull, in which the mechanical vibrations of sound are transduced into neural signals that are processed by the brain. The cochlea is a spiral-shaped structure that is filled with fluid. Along the length of the spiral runs the basilar membrane, which vibrates in response to the pressure differences produced by vibrations of the oval window. Sitting on the basilar membrane is the organ of Corti, which runs the entire length of the basilar membrane from the base (by the oval window) to the apex (the “tip” of the spiral). The organ of Corti includes three rows of outer hair cells and one row of inner hair cells. The hair cells sense the vibrations by way of their tiny hairs, or stereocilia. The outer hair cells seem to function to mechanically amplify the sound-induced vibrations, whereas the inner hair cells form synapses with the auditory nerve and transduce those vibrations into action potentials, or neural spikes, which are transmitted along the auditory nerve to higher centers of the auditory pathways.
One of the most important principles of hearing—frequency analysis—is established in the cochlea. In a way, the action of the cochlea can be likened to that of a prism: the many frequencies that make up a complex sound are broken down into their constituent frequencies, with low frequencies creating maximal basilar-membrane vibrations near the apex of the cochlea and high frequencies creating maximal basilar-membrane vibrations nearer the base of the cochlea. This decomposition of sound into its constituent frequencies, and the frequency-to-place mapping, or “tonotopic” representation, is a major organizational principle of the auditory system, and is maintained in the neural representation of sounds all the way from the cochlea to the primary auditory cortex. The decomposition of sound into its constituent frequency components is part of what allows us to hear more than one sound at a time. In addition to representing frequency by place of excitation within the cochlea, frequencies are also represented by the timing of spikes within the auditory nerve. This property, known as “phase locking,” is crucial in comparing time-of-arrival differences of waveforms between the two ears (see the section on spatial hearing, below).
Unlike vision, where the primary visual cortex (or V1) is considered an early stage of processing, auditory signals go through many stages of processing before they reach the primary auditory cortex, located in the temporal lobe. Although we have a fairly good understanding of the electromechanical properties of the cochlea and its various structures, our understanding of the processing accomplished by higher stages of the auditory pathways remains somewhat sketchy. With the possible exception of spatial localization and neurons tuned to certain locations in space (Harper & McAlpine, 2004; Knudsen & Konishi, 1978), there is very little consensus on the how, what, and where of auditory feature extraction and representation. There is evidence for a “pitch center” in the auditory cortex from both human neuroimaging studies (e.g., Griffiths, Buchel, Frackowiak, & Patterson, 1998; Penagos, Melcher, & Oxenham, 2004) and single-unit physiology studies (Bendor & Wang, 2005), but even here there remain some questions regarding whether a single area of cortex is responsible for coding single features, such as pitch, or whether the code is more distributed (Walker, Bizley, King, & Schnupp, 2011).
Audibility, Masking, and Frequency Selectivity
Overall, the human cochlea provides us with hearing over a very wide range of frequencies. Young people with normal hearing are able to perceive sounds with frequencies ranging from about 20 Hz all the way up to 20 kHz. The range of intensities we can perceive is also impressive: the quietest sounds we can hear in the medium-frequency range (between about 1 and 4 kHz) have a sound intensity that is about a factor of 1,000,000,000,000 less intense than the loudest sound we can listen to without incurring rapid and permanent hearing loss. In part because of this enormous dynamic range, we tend to use a logarithmic scale, known as decibels (dB), to describe sound pressure or intensity. On this scale, 0 dB sound pressure level (SPL) is defined as 20 micro-Pascals (μPa), which corresponds roughly to the quietest perceptible sound level, and 120 dB SPL is considered dangerously loud.
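The decibel scale described above is a one-line logarithmic conversion, and the factor-of-10^12 intensity range maps directly onto the 0–120 dB span. The sketch below is our illustration (the 20 µPa reference is from the text; the function name is ours):

```python
import math

REF_PRESSURE_PA = 20e-6  # 20 micro-Pascals, the 0 dB SPL reference

def db_spl(pressure_pa):
    """Convert sound pressure (Pa) to dB SPL: 20 * log10(p / p_ref)."""
    return 20 * math.log10(pressure_pa / REF_PRESSURE_PA)

print(round(db_spl(20e-6)))  # 0 dB SPL: roughly the quietest perceptible level
print(round(db_spl(20.0)))   # 120 dB SPL: dangerously loud
# Intensity scales with pressure squared, so the 120 dB span corresponds
# to the factor-of-1,000,000,000,000 intensity range mentioned above:
# 10 * log10(1e12) = 120.
```

The factor of 20 (rather than 10) in the pressure formula reflects exactly that squared relationship between pressure and intensity.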
Masking is the process by which the presence of one sound makes another sound more difficult to hear. We all encounter masking in our everyday lives, when we fail to hear the phone ring while we are taking a shower, or when we struggle to follow a conversation in a noisy restaurant. In general, a more intense sound will mask a less intense sound, provided certain conditions are met. The most important condition is that the frequency content of the sounds overlap, such that the activity in the cochlea produced by a masking sound “swamps” that produced by the target sound. Another type of masking, known as “suppression,” occurs when the response to the masker reduces the neural (and in some cases, the mechanical) response to the target sound. Because of the way that filtering in the cochlea functions, low-frequency sounds are more likely to mask high frequencies than vice versa, particularly at high sound intensities. This asymmetric aspect of masking is known as the “upward spread of masking.” The loss of sharp cochlear tuning that often accompanies cochlear damage leads to broader filtering and more masking—a physiological phenomenon that is likely to contribute to the difficulties experienced by people with hearing loss in noisy environments (Moore, 2007).
Although much masking can be explained in terms of interactions within the cochlea, there are other forms that cannot be accounted for so easily, and that can occur even when interactions within the cochlea are unlikely. These more central forms of masking come in different forms, but have often been categorized together under the term “informational masking” (Durlach et al., 2003; Watson & Kelly, 1978). Relatively little is known about the causes of informational masking, although most forms can be ascribed to a perceptual “fusion” of the masker and target sounds, or at least a failure to segregate the target from the masking sounds. Also relatively little is known about the physiological locus of informational masking, except that at least some forms seem to originate in the auditory cortex and not before (Gutschalk, Micheyl, & Oxenham, 2008).
Spatial Hearing
In contrast to vision, we have a 360° field of hearing. Our auditory acuity is, however, at least an order of magnitude poorer than vision in locating an object in space. Consequently, our auditory localization abilities are most useful in alerting us and allowing us to orient towards sources, with our visual sense generally providing the finer-grained analysis. Of course, there are differences between species, and some, such as barn owls and echolocating bats, have developed highly specialized sound localization systems.
Our ability to locate sound sources in space is an impressive feat of neural computation. The two main sources of information both come from a comparison of the sounds at the two ears. The first is based on interaural time differences (ITD) and relies on the fact that a sound source on the left will generate sound that will reach the left ear slightly before it reaches the right ear. Although sound is much slower than light, its speed still means that the difference in time of arrival between the two ears is a fraction of a millisecond. The largest ITDs we encounter in the real world (when sounds are directly to the left or right of us) are only a little over half a millisecond. With some practice, humans can learn to detect an ITD as small as 10 to 20 μs (i.e., millionths of a second) (Klump & Eady, 1956).
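The half-millisecond figure can be checked with a simple spherical-head geometry (Woodworth's classic approximation). The head radius and speed of sound below are typical values we have assumed for illustration; they are not given in the module.

```python
import math

def max_itd_seconds(head_radius_m=0.0875, speed_of_sound_m_s=343.0):
    """Woodworth's spherical-head approximation of the interaural time
    difference for a source at 90 degrees azimuth: ITD = r*(theta + sin(theta))/c."""
    theta = math.pi / 2  # source directly to one side of the head
    return head_radius_m * (theta + math.sin(theta)) / speed_of_sound_m_s

print(f"{max_itd_seconds() * 1e6:.0f} microseconds")  # ≈ 656 µs
```

With these assumed values the model gives roughly 0.66 ms, consistent with the "little over half a millisecond" stated above, and some 30–60 times larger than the 10–20 µs differences trained listeners can detect.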
The second source of information is based on interaural level differences (ILDs). At higher frequencies (higher than about 1 kHz), the head casts an acoustic “shadow,” so that when a sound is presented from the left, the sound level at the left ear is somewhat higher than the sound level at the right ear. At very high frequencies, the ILD can be as much as 20 dB, and we are sensitive to differences as small as 1 dB.
As mentioned briefly in the discussion of the outer ear, information regarding the elevation of a sound source, or whether it comes from in front or behind, is contained in high-frequency spectral details that result from the filtering effects of the pinnae.
In general, we are most sensitive to ITDs at low frequencies (below about 1.5 kHz). At higher frequencies we can still perceive changes in timing based on the slowly varying temporal envelope of the sound but not the temporal fine structure (Bernstein & Trahiotis, 2002; Smith, Delgutte, & Oxenham, 2002), perhaps because of a loss of neural phase-locking to the temporal fine structure at high frequencies. In contrast, ILDs are most useful at high frequencies, where the head shadow is greatest. This use of different acoustic cues in different frequency regions led to the classic and very early “duplex theory” of sound localization (Rayleigh, 1907). For everyday sounds with a broad frequency spectrum, it seems that our perception of spatial location is dominated by interaural time differences in the low-frequency temporal fine structure (Macpherson & Middlebrooks, 2002).
As with vision, our perception of distance depends to a large degree on context. If we hear someone shouting at a very low sound level, we infer that the shouter must be far away, based on our knowledge of the sound properties of shouting. In rooms and other enclosed locations, the reverberation can also provide information about distance: As a speaker moves further away, the direct sound level decreases but the sound level of the reverberation remains about the same; therefore, the ratio of direct-to-reverberant energy decreases (Zahorik & Wightman, 2001).
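The direct-to-reverberant distance cue can be sketched quantitatively: direct intensity falls with the square of distance (the inverse-square law), while reverberant energy in a room stays roughly constant, so the ratio drops by about 6 dB for every doubling of distance. The toy model below is our illustration; treating the reverberant level as exactly constant is a simplifying assumption.

```python
import math

def direct_to_reverberant_db(distance_m, reverb_intensity=1.0):
    """Direct intensity ~ 1/r^2 (inverse-square law); reverberant
    intensity modeled as constant throughout the room."""
    direct_intensity = 1.0 / distance_m ** 2
    return 10 * math.log10(direct_intensity / reverb_intensity)

# Each doubling of source distance lowers the ratio by about 6 dB
drop = direct_to_reverberant_db(2.0) - direct_to_reverberant_db(4.0)
print(f"{drop:.2f} dB")  # ≈ 6.02 dB
```

This monotonic decrease is what makes the ratio usable as a distance cue (Zahorik & Wightman, 2001).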
Auditory Scene Analysis
There is usually more than one sound source in the environment at any one time—imagine talking with a friend at a café, with some background music playing, the rattling of coffee mugs behind the counter, traffic outside, and a conversation going on at the table next to yours. All these sources produce sound waves that combine to form a single complex waveform at the eardrum, the shape of which may bear very little relationship to any of the waves produced by the individual sound sources. Somehow the auditory system is able to break down, or decompose, these complex waveforms and allow us to make sense of our acoustic environment by forming separate auditory “objects” or “streams,” which we can follow as the sounds unfold over time (Bregman, 1990).
A number of heuristic principles have been formulated to describe how sound elements are grouped to form a single object or segregated to form multiple objects. Many of these originate from the early ideas proposed in vision by the so-called Gestalt psychologists, such as Max Wertheimer. According to these rules of thumb, sounds that are in close proximity, in time or frequency, tend to be grouped together. Also, sounds that begin and end at the same time tend to form a single auditory object. Interestingly, spatial location is not always a strong or reliable grouping cue, perhaps because the location information from individual frequency components is often ambiguous due to the effects of reverberation. Several studies have looked into the relative importance of different cues by “trading off” one cue against another. In some cases, this has led to the discovery of interesting auditory illusions, where melodies that are not present in the sounds presented to either ear emerge in the perception (Deutsch, 1979), or where a sound element is perceptually “lost” in competing perceptual organizations (Shinn-Cunningham, Lee, & Oxenham, 2007).
More recent attempts have used computational and neurally based approaches to uncover the mechanisms of auditory scene analysis (e.g., Elhilali, Ma, Micheyl, Oxenham, & Shamma, 2009), and the field of computational auditory scene analysis (CASA) has emerged in part as an effort to move towards more principled, and less heuristic, approaches to understanding the parsing and perception of complex auditory scenes (e.g., Wang & Brown, 2006). Solving this problem will not only provide us with a better understanding of human auditory perception, but may provide new approaches to “smart” hearing aids and cochlear implants, as well as automatic speech recognition systems that are more robust to background noise.
Conclusion
Hearing provides us with our most important connection to the people around us. The intricate physiology of the auditory system transforms the tiny variations in air pressure that reach our ear into the vast array of auditory experiences that we perceive as speech, music, and sounds from the environment around us. We are only beginning to understand the basic principles of neural coding in higher stages of the auditory system, and how they relate to perception. However, even our rudimentary understanding has improved the lives of hundreds of thousands through devices such as cochlear implants, which re-create some of the ear’s functions for people with profound hearing loss.
Outside Resources
Audio: Auditory Demonstrations from Richard Warren’s lab at the University of Wisconsin, Milwaukee
www4.uwm.edu/APL/demonstrations.html
Audio: Auditory Demonstrations. CD published by the Acoustical Society of America (ASA). You can listen to the demonstrations here
www.feilding.net/sfuad/musi30...1/demos/audio/
Web: Demonstrations and illustrations of cochlear mechanics can be found here
http://lab.rockefeller.edu/hudspeth/...calSimulations
Web: More demonstrations and illustrations of cochlear mechanics
www.neurophys.wisc.edu/animations/
Discussion Questions
1. Based on the available acoustic cues, how good do you think we are at judging whether a low-frequency sound is coming from in front of us or behind us? How might we solve this problem in the real world?
2. Outer hair cells contribute not only to amplification but also to the frequency tuning in the cochlea. What are some of the difficulties that might arise for people with cochlear hearing loss, due to these two factors? Why do hearing aids not solve all these problems?
3. Why do you think the auditory system has so many stages of processing before the signals reach the auditory cortex, compared to the visual system? Is there a difference in the speed of processing required?
Vocabulary
Cochlea
Snail-shell-shaped organ that transduces mechanical vibrations into neural signals.
Interaural differences
Differences (usually in time or intensity) between the two ears.
Pinna
Visible part of the outer ear.
Tympanic membrane
Ear drum, which separates the outer ear from the middle ear. | textbooks/socialsci/Psychology/Introductory_Psychology/Psychology_as_a_Biological_Science_(Noba)/03%3A_Sensation_and_Perception/3.02%3A_Hearing.txt |
By Linda Bartoshuk and Derek Snyder
University of Florida
Humans are omnivores (able to survive on many different foods). The omnivore’s dilemma is to identify foods that are healthy and avoid poisons. Taste and smell cooperate to solve this dilemma. Stimuli for both taste and smell are chemicals. Smell results from a biological system that essentially permits the brain to store rough sketches of the chemical structures of odor stimuli in the environment. Thus, people in very different parts of the world can learn to like odors (paired with calories) or dislike odors (paired with nausea) that they encounter in their worlds. Taste information is preselected (by the nature of the receptors) to be relevant to nutrition. No learning is required; we are born loving sweet and hating bitter. Taste inhibits a variety of other systems in the brain. Taste damage releases that inhibition, thus intensifying sensations like those evoked by fats in foods. Ear infections and tonsillectomies both can damage taste. Adults who have experienced these conditions experience intensified sensations from fats and enhanced palatability of high-fat foods. This may explain why individuals who have had ear infections or tonsillectomies tend to gain weight.
learning objectives
• Explain the salient properties of taste and smell that help solve the omnivore’s dilemma.
• Distinguish between the way pleasure/displeasure is produced by smells and tastes.
• Explain how taste damage can have extensive unexpected consequences.
The Omnivore's Dilemma
Humans are omnivores. We can survive on a wide range of foods, unlike species, such as koalas, that have a highly specialized diet (for koalas, eucalyptus leaves). With our amazing dietary range comes a problem: the omnivore’s dilemma (Pollan, 2006; Rozin & Rozin, 1981). To survive, we must identify healthy food and avoid poisons. The senses of taste and smell cooperate to give us this ability. Smell also has other important functions in lower animals (e.g., avoid predators, identify sexual partners), but these functions are less important in humans. This module will focus on the way taste and smell interact in humans to solve the omnivore’s dilemma.
Taste and Smell Anatomy
Taste (gustation) and smell (olfaction) are both chemical senses; that is, the stimuli for these senses are chemicals. The more complex sense is olfaction. Olfactory receptors are complex proteins called G protein-coupled receptors (GPCRs). These structures are proteins that weave back and forth across the membranes of olfactory cells seven times, forming structures outside the cell that sense odorant molecules and structures inside the cell that activate the neural message ultimately conveyed to the brain by olfactory neurons. The structures that sense odorants can be thought of as tiny binding pockets with sites that respond to active parts of molecules (e.g., carbon chains). There are about 350 functional olfactory genes in humans; each gene expresses a particular kind of olfactory receptor. All olfactory receptors of a given kind project to structures called glomeruli (paired clusters of cells found on both sides of the brain). For a single molecule, the pattern of activation across the glomeruli paints a picture of the chemical structure of the molecule. Thus, the olfactory system can identify a vast array of chemicals present in the environment. Most of the odors we encounter are actually mixtures of chemicals (e.g., bacon odor). The olfactory system creates an image for the mixture and stores it in memory just as it does for the odor of a single molecule (Shepherd, 2005).
Taste is simpler than olfaction. Bitter and sweet utilize GPCRs, just as olfaction does, but the number of different receptors is much smaller. For bitter, 25 receptors are tuned to different chemical structures (Meyerhof et al., 2010). Such a system allows us to sense many different poisons.
Sweet is even simpler. The primary sweet receptor is composed of two different G protein-coupled receptors; each of these two proteins ends in large structures reminiscent of Venus flytraps. This complex receptor has multiple sites that can bind different structures. The Venus flytrap endings open so that even some very large molecules can fit inside and stimulate the receptor.
Bitter is inclusive (i.e., multiple receptors tuned to very different chemical structures feed into common neurons). Sweet is exclusive. There are many sugars with similar structures, but only three of these are particularly important to humans (sucrose, glucose, and fructose). Thus, our sweet receptor tunes out most sugars, leaving only the most important to stimulate the sweet receptor. However, the ability of the sweet receptor to respond to some non-sugars presents us with one of the great mysteries of taste. Several non-sugar molecules can stimulate the primary sweet receptor (e.g., saccharin, aspartame, cyclamate). These have given rise to the artificial sweetener industry, but their biological significance is unknown. What biological purpose is served by allowing these non-sugar molecules to stimulate the primary sweet receptor?
Some would have us believe that artificial sweeteners are a boon to those who want to lose weight. It seems like a no-brainer. Sugars have calories; saccharin does not. Theoretically, if we replace sugar with saccharin in our diets, we will lose weight. In fact, recent work showed that rats actually gained weight when saccharin was substituted for glucose (Swithers & Davidson, 2008). It turns out that substituting saccharin for sugar can increase appetite so more is eaten later. In addition, eating artificial sweeteners appears to alter metabolism, thus making losing weight even harder. So why did nature give us artificial sweeteners? We don’t know.
One more mystery about sweet deserves comment. The discovery of the sweet receptor was met with great excitement because many investigators had searched for it for years. The fact that this complex receptor had multiple sites to which different molecules could bind explained why many different molecules taste sweet. However, this is actually a serious problem. No matter what molecule stimulates this receptor, the neural output from that receptor is the same. This would mean that the sweetness of all sweet substances would have to be the same. Yet artificial sweeteners do not taste exactly like sugar. The answer may lie in the fact that one of the two proteins that make up the receptor can act alone, but only strong concentrations of sugar stimulate this isolated protein receptor. This permits the brain to distinguish between the sweetness of sugar and the sweetness of non-sugar molecules.
Salty and sour are the simplest tastes; these stimuli ionize (break into positively and negatively charged particles). The first event in the transduction series is the movement of the positively charged particle through channels in the taste cell membrane (Chaudhari & Roper, 2010).
Solving the omnivore’s dilemma: Taste affect is hard-wired
The pleasure associated with sweet and salty and the displeasure associated with sour and bitter are hard-wired in the brain. Newborns love sweet (taste of mother’s milk) and hate bitter (poisons) immediately. The receptors mediating salty taste are not mature at birth in humans, but when they are mature a few weeks after birth, the baby likes dilute salt (although more concentrated salt will evoke stinging sensations that will be avoided). Sour is generally disliked (protecting against tissue damage from acid?), but to the amazement of many parents, some young children appear to actually like the sour candies available today; this may be related to the breadth of their experience with fruits (Liem & Mennella, 2003). This hard-wired affect is the most salient characteristic of taste and this is why we classify only those taste qualities with hard-wired affect as “basic tastes.”
Another contribution to the omnivore’s dilemma: Olfactory affect is learned
The biological functions of olfaction depend on how odors enter our noses. Sniffing brings odorants through our nostrils. The odorants hit the turbinate bones and a puff of the odorized air rises to the top of the nasal cavity, where it goes through a narrow opening (the olfactory cleft) and arrives at the olfactory mucosa (the tissue that houses the olfactory receptors). Technically, this is called “orthonasal olfaction.” Orthonasal olfaction tells us about the world external to our bodies.
When we chew and swallow food, the odorants emitted by the food are forced up behind the palate (roof of the mouth) and enter our noses from the back; this is called “retronasal olfaction.” Ortho and retronasal olfaction involve the same odor molecules and the same olfactory receptors; however, the brain can tell the difference between the two and does not send the input to the same areas. Retronasal olfaction and taste project to some common areas where they are presumably integrated into flavor. Flavors tell us about the food we are eating.
If retronasal olfaction is paired with nausea, the food evoking the retronasal olfactory sensation becomes disliked. If retronasal olfaction is paired with situations the brain deems valuable (calories, sweet taste, pleasure from other sources, etc.), the food evoking that sensation becomes liked. These are called conditioned aversions and preferences (Rozin & Vollmecke, 1986).
Those who have experienced a conditioned aversion may have found that the dislike (even disgust) evoked when a flavor is paired with nausea can generalize to the smell of the food alone (orthonasal olfaction). Some years ago, Jeremy Wolfe and Linda Bartoshuk surveyed conditioned aversions among college students and staff that had resulted from consuming foods/beverages associated with nausea (Bartoshuk & Wolfe, 1990). In 29% of the aversions, subjects reported that even the smell of the food/beverage had become aversive. Other properties of food objects can become aversive as well. In one unusual case, an aversion to cheese crackers generalized to vanilla wafers apparently because the containers were similar. Conditioned aversions function to protect us from ingesting a food that our brains associate with illness. Conditioned preferences are harder to form, but they help us learn what is safe to eat.
Is the affect associated with olfaction ever hard-wired? Pheromones are said to be olfactory molecules that evoke specific behaviors. Googling “human pheromone” will take you to websites selling various sprays that are supposed to make one more sexually appealing. However, careful research does not support such claims in humans or any other mammals (Doty, 2010). For example, amniotic fluid was at one time believed to contain a pheromone that attracted rat pups to their mother’s nipples so they could suckle. Early interest in identifying the molecule that acted as that pheromone gave way to understanding that the behavior was learned when a novel odorant, citral (which smells like lemons), was easily substituted for amniotic fluid (Pedersen, Williams, & Blass, 1982).
Central interactions: Key to understanding taste damage
The integration of retronasal olfaction and taste into flavor is not the only central interaction among the sensations evoked by foods. These integrations in most cases serve important biological functions, but occasionally they go awry and lead to clinical pathologies.
Taste is mediated by three cranial nerves; these are bilateral nerves, each of which innervates one side of the mouth. Since they do not connect in the peripheral nervous system, interactions across the midline must occur in the brain. Incidentally, studying interactions across the midline is a classic way to draw inferences about central interactions. Insights from studies of this type were very important to understanding central processes long before we had direct imaging of brain function.
Taste on the anterior two thirds of the tongue (the part you can stick out) is mediated by the chorda tympani nerve; taste on the posterior one third (the part that stays attached) is mediated by the glossopharyngeal nerve. Taste buds are tiny clusters of cells (like the segments of an orange) that are buried in the tissue of some papillae, the structures that give the tongue its bumpy appearance. Filiform papillae are the smallest and are distributed all over the tongue; they have no taste buds. In species like the cat, the filiform papillae are shaped like small spoons and help the cat hold liquids on the tongue while lapping (try lapping from a dish and you will see how hard it is without those special filiform papillae). Fungiform papillae (given this name because they resemble small button mushrooms) are larger circular structures on the anterior tongue (innervated by the chorda tympani). They contain about six taste buds each. Fungiform papillae can be seen with the naked eye, but swabbing blue food coloring on the tongue helps. The fungiform papillae do not stain as well as the rest of the tongue so they look like pink circles against a blue background. On some tongues, the spacing of fungiform papillae is like polka dots. Other tongues can have 10 times as many fungiform papillae, spaced so closely that there is little space between them. There is a connection between the density of fungiform papillae and the perception of taste. Those who experience the most intense taste sensations (we call them supertasters) tend to have the most fungiform papillae. Incidentally, this is a rare example in sensory processes of visible anatomical variation that correlates with function. We can look at the tongues of a variety of individuals and predict which of them will experience the most intense taste sensations.
The structures that house taste buds innervated by the glossopharyngeal nerve are called circumvallate papillae. They are relatively large structures arrayed in an inverted V shape across the back of the tongue. Each of them looks like a small island surrounded by a moat.
Taste nerves project to the brain, where they send inhibitory signals to one another. One of the biological consequences of this inhibition is taste constancy. Damage to one nerve reduces taste input but also reduces inhibition on the other nerves (Bartoshuk et al., 2005). That release of inhibition intensifies the central neural signals from the undamaged nerves, thereby maintaining whole mouth function. Interestingly, this release of inhibition can be so powerful that it actually increases whole mouth taste. The surprisingly small effect of limited taste damage was, in fact, one of the earliest clinical observations. In 1825, Brillat-Savarin described in his book The Physiology of Taste an interview with an ex-prisoner who had suffered a horrible punishment: amputation of his tongue. “This man, whom I met in Amsterdam, where he made his living by running errands, had had some education, and it was easy to communicate with him by writing. After I had observed that the forepart of his tongue has been cut off clear to the ligament, I asked him if he still found any flavor in what he ate, and if his sense of taste had survived the cruelty to which he had been subjected. He replied that … he still possessed the ability to taste fairly well” (Brillat-Savarin, 1971, pg. 35). This injury damaged the chorda tympani but spared the glossopharyngeal nerve.
We now know that taste nerves not only inhibit one another but also inhibit other oral sensations. Thus, taste damage can intensify oral touch (fats) and oral burn (chilis). In fact, taste damage appears to be linked to pain in general. Consider an animal injured in the wild. If pain reduced eating, its chance of survival would be diminished. However, nature appears to have wired the brain such that taste input inhibits pain. Eating is reinforced and the animal’s chances of survival increase.
Taste damage and weight gain
The effects of taste damage depend on the extent of damage. If only one taste nerve is damaged, then release of inhibition occurs. If the damage is extensive enough, function is lost with one possible exception. Preliminary data suggest that the more extensive the damage to taste, the greater the intensification of pain; this is obviously of clinical interest.
Damage to a single taste nerve can intensify oral touch (e.g., the creamy, viscous sensations evoked by fats). Perhaps most surprising, damage to a single taste nerve can intensify retronasal olfaction; this may occur as a secondary result from the intensification of whole mouth taste.
These sensory changes can alter the palatability of foods; in particular, high-fat foods can be rendered more palatable. Thus one of the first areas we examined was the possibility that mild taste damage could lead to increases in body mass index. Middle ear infections (otitis media) can damage the chorda tympani nerve; a tonsillectomy can damage the glossopharyngeal nerve. Head trauma damages both nerves, although it tends to take its greatest toll on the chorda tympani nerve. All of these clinical conditions increase body mass index in some individuals. More work is needed, but we suspect a link between the intensification of fat sensations, enhancement of palatability of high-fat foods, and weight gain.
Outside Resources
Video: Inside the Psychologist’s Studio with Linda Bartoshuk
Video: Linda Bartoshuk at Nobel Conference 46
Video: Test your tongue: the science of taste
Discussion Questions
1. In this module, we have defined “basic tastes” in terms of whether or not a sensation produces hard-wired affect. Can you think of any other definitions of basic tastes?
2. Do you think omnivores, herbivores, or carnivores have a better chance at survival?
3. Olfaction is mediated by one cranial nerve. Taste is mediated by three cranial nerves. Why do you think evolution gave more nerves to taste than to smell? What are the consequences of this?
Vocabulary
Conditioned aversions and preferences
Likes and dislikes developed through associations with pleasurable or unpleasurable sensations.
Gustation
The action of tasting; the ability to taste.
Olfaction
The sense of smell; the action of smelling; the ability to smell.
Omnivore
A person or animal that is able to survive by eating a wide range of foods from plant or animal origin.
Orthonasal olfaction
Perceiving scents/smells introduced via the nostrils.
Retronasal olfaction
Perceiving scents/smells introduced via the mouth/palate.
By Guro E. Løseth, Dan-Mikael Ellingsen, and Siri Leknes
University of Oslo, University of Gothenburg
The sensory systems of touch and pain provide us with information about our environment and our bodies that is often crucial for survival and well-being. Moreover, touch is a source of pleasure. In this module, we review how information about our environment and our bodies is coded in the periphery and interpreted by the brain as touch and pain sensations. We discuss how these experiences are often dramatically shaped by top-down factors like motivation, expectation, mood, fear, stress, and context. When well-functioning, these circuits promote survival and prepare us to make adaptive decisions. Pathological loss of touch can result in perceived disconnection from the body, and insensitivity to pain can be very dangerous, leading to maladaptive hazardous behavior. On the other hand, chronic pain conditions, in which these systems start signaling pain in response to innocuous touch or even in the absence of any observable sensory stimuli, have tremendous negative impact on the lives of the affected. Understanding how our sensory-processing mechanisms can be modulated psychologically and physiologically promises to help researchers and clinicians find new ways to alleviate the suffering of chronic-pain patients.
Learning Objectives
• Describe the transduction of somatosensory signals: the properties of the receptor types, the differences between C-afferents and A-afferents, and the functions these are thought to serve.
• Describe the social touch hypothesis and the role of affective touch in development and bonding.
• Explain the motivation–decision model and descending modulation of pain, and give examples of how this circuitry can promote survival.
• Explain how expectations and context affect pain and touch experiences.
• Describe the concept of chronic pain and why treatment is so difficult.
Introduction
Imagine a life free of pain. How would it be—calm, fearless, serene? Would you feel invulnerable, invincible? Getting rid of pain is a popular quest—a quick search for “pain-free life” on Google returns well over 4 million hits—including links to various bestselling self-help guides promising a pain-free life in only 7 steps, 6 weeks, or 3 minutes. Pain management is a billion-dollar market, and involves much more than just pharmaceuticals. Surely a life with no pain would be a better one?
Well, consider one of the “lucky few”: 12-year-old “Thomas” has never felt deep pain. Not even when a fracture made him walk around with one leg shorter than the other, so that the bones of his healthy leg were slowly crushed to destruction underneath the knee joint (see Figure 8.5.1 ). For Thomas and other members of a large Swedish family, life without pain is a harsh reality because of a mutated gene that affects the growth of the nerves conducting deep pain. Most of those affected suffer from joint damage and frequent fractures to bones in their feet and hands; some end up in wheelchairs even before they reach puberty (Minde et al., 2004). It turns out pain—generally—serves us well.
Living without a sense of touch sounds less attractive than being free of pain—touch is a source of pleasure and essential to how we feel. Losing the sense of touch has severe implications—something patient G. L. experienced when an antibiotics treatment damaged the type of nerves that signal touch from her skin and the position of her joints and muscles. She reported feeling like she’d lost her physical self from her nose down, making her “disembodied”—like she no longer had any connection to the body attached to her head. If she didn’t look at her arms and legs they could just “wander off” without her knowing—initially she was unable to walk, and even after she relearned this skill she was so dependent on her visual attention that closing her eyes would cause her to land in a hopeless heap on the floor. Only light caresses like those from her children’s hands can make her feel she has a body, but even these sensations remain vague and elusive (Olausson et al., 2002; Sacks, 1985).
Sensation
Cutaneous Senses of the Skin Connect the Brain to the Body and the Outside World
Touch and pain are aspects of the somatosensory system, which provides our brain with information about our own body (interoception) and properties of the immediate external world (exteroception) (Craig, 2002). We have somatosensory receptors located all over the body, from the surface of our skin to the depth of our joints. The information they send to the central nervous system is generally divided into four modalities: cutaneous senses (senses of the skin), proprioception (body position), kinesthesis (body movement), and nociception (pain, discomfort). We are going to focus on the cutaneous senses, which respond to tactile, thermal, and pruritic (itchy) stimuli, and events that cause tissue damage (and hence pain). In addition, there is growing evidence for a fifth modality specifically channeling pleasant touch (McGlone & Reilly, 2010).
Different Receptor Types Are Sensitive to Specific Stimuli
The skin can convey many sensations, such as the biting cold of a wind, the comfortable pressure of a hand holding yours, or the irritating itch from a woolen scarf. The different types of information activate specific receptors that convert the stimulation of the skin to electrical nerve impulses, a process called transduction. There are three main groups of receptors in our skin: mechanoreceptors, responding to mechanical stimuli, such as stroking, stretching, or vibration of the skin; thermoreceptors, responding to cold or hot temperatures; and chemoreceptors, responding to certain types of chemicals either applied externally or released within the skin (such as histamine from an inflammation). For an overview of the different receptor types and their properties, see Box 1. The experience of pain usually starts with activation of nociceptors: receptors that fire specifically in response to potentially tissue-damaging stimuli. Most of the nociceptors are subtypes of either chemoreceptors or mechanoreceptors. When tissue is damaged or inflamed, certain chemical substances are released from the cells, and these substances activate the chemosensitive nociceptors. Mechanoreceptive nociceptors have a high threshold for activation—they respond to mechanical stimulation that is so intense it might damage the tissue.
Action Potentials in the Receptor Cells Travel as Nerve Impulses with Different Speeds
When you step on a pin, this activates a host of mechanoreceptors, many of which are nociceptors. You may have noticed that the sensation changes over time. First you feel a sharp stab that propels you to remove your foot, and only then do you feel a wave of more aching pain. The sharp stab is signaled via fast-conducting Aδ-fibers, which project to the somatosensory cortex. This part of the cortex is somatotopically organized—that is, the sensory signals are represented according to where in the body they stem from (see illustrations, Figure 8.5.2). The unpleasant ache you feel after the sharp pin stab is a separate, simultaneous signal sent from the nociceptors in your foot via thin, slow-conducting C-fibers to the insular cortex and other brain regions involved in processing of emotion and interoception (see Figure 8.5.3 for a schematic representation of this pathway). The experience of stepping on a pin is, in other words, composed of two separate signals: one discriminatory signal allowing us to localize the touch stimulus and distinguish whether it’s a blunt or a sharp stab; and one affective signal that lets us know that stepping on the pin is bad. It is common to divide pain into sensory–discriminatory and affective–motivational aspects (Auvray, Myin, & Spence, 2010). This distinction corresponds, at least partly, to how this information travels from the peripheral to the central nervous system and how it is processed in the brain (Price, 2000).
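The two-wave experience described above follows directly from the fibers' conduction speeds. As a rough back-of-the-envelope illustration (the velocities below are typical textbook ranges for myelinated Aδ-fibers and unmyelinated C-fibers, not figures from this module, and the one-meter foot-to-brain distance is a convenient assumption):

```python
# Rough arrival-time estimate for the "first" (sharp) and "second" (aching)
# pain signals from the foot. Typical textbook conduction velocities:
# myelinated A-delta fibers, roughly 5-30 m/s; unmyelinated C-fibers,
# roughly 0.5-2 m/s.

def arrival_time(distance_m, velocity_m_per_s):
    """Time (in seconds) for a nerve impulse to travel a given distance."""
    return distance_m / velocity_m_per_s

distance = 1.0  # assumed foot-to-brain conduction distance, in meters

t_a_delta = arrival_time(distance, 15.0)  # mid-range A-delta velocity
t_c = arrival_time(distance, 1.0)         # mid-range C-fiber velocity

print(f"A-delta signal arrives after ~{t_a_delta * 1000:.0f} ms")
print(f"C-fiber signal arrives after ~{t_c * 1000:.0f} ms")
```

With these mid-range values, the aching "second pain" lags the sharp "first pain" by the better part of a second, which is why the stab and the ache are felt as two distinct events.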
Affective Aspects of Touch Are Important for Development and Relationships
Touch senses are not just there for discrimination or detection of potentially painful events, as Harlow and Suomi (1970) demonstrated in a series of heartbreaking experiments where baby monkeys were taken from their mothers. The infant monkeys could choose between two artificial surrogate mothers—one “warm” mother without food but with a furry, soft cover; and one cold, steel mother with food. The monkey babies spent most of their time clinging to the soft mother, and only briefly moved over to the hard, steel mother to feed, indicating that touch is of “overpowering importance” to the infant (Harlow & Suomi, 1970, p. 161). Gentle touch is central for creating and maintaining social relationships in primates; they groom each other by stroking the fur and removing parasites—an activity important not only for their individual well-being but also for group cohesion (Dunbar, 2010; Keverne, Martensz, & Tuite, 1989). Although people don’t groom each other in the same way, gentle touch is important for us, too.
The sense of touch is the first to develop while one is in the womb, and human infants crave touch from the moment they’re born. From studies of human orphans, we know that touch is also crucial for human development. In Romanian orphanages where the babies were fed but not given regular attention or physical contact, the children suffered cognitive and neurodevelopmental delay (Simons & Land, 1987). Physical contact helps a crying baby calm down, and the soothing touch a mother gives to her child is thought to reduce the levels of stress hormones such as cortisol. High levels of cortisol have negative effects on neural development, and they can even lead to cell loss (Feldman, Singer, & Zagoory, 2010; Fleming, O'Day, & Kraemer, 1999; Pechtel & Pizzagalli, 2011). Thus, stress reduction through hugs and caresses might be important not only for children’s well-being, but also for the development of the infant brain.
The skin senses are similar across species, likely reflecting the evolutionary advantage of being able to tell what is touching you, where it’s happening, and whether or not it’s likely to cause tissue damage. An intriguing line of touch research suggests that humans, cats, and other animals have a special, evolutionarily preserved system that promotes gentle touch because it carries social and emotional significance. On a peripheral level, this system consists of a subtype of C-fibers that responds not to painful stimuli, but rather to gentle stroking touch—called C-tactile fibers. The firing rate of the C-tactile fibers correlates closely with how pleasant the stroking feels—suggesting they are coding specifically for the gentle caresses typical of social affiliative touch (Löken, Wessberg, Morrison, McGlone, & Olausson, 2009). This finding has led to the social touch hypothesis, which proposes that C-tactile fibers form a system for touch perception that supports social bonding (Morrison, Löken, & Olausson, 2010; Olausson, Wessberg, Morrison, McGlone, & Vallbo, 2010). The discovery of the C-tactile system suggests that touch is organized in a similar way to pain; fast-conducting A-fibers contribute to sensory–discriminatory aspects, while thin C-fibers contribute to affective–motivational aspects (Löken, Wessberg, Morrison, McGlone, & Olausson, 2009). However, while these “hard-wired” afferent systems often provide us with accurate information about our environment and our bodies, how we experience touch or pain depends very much on top-down sources like motivation, expectation, mood, fear, and stress.
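The claim that C-tactile firing "correlates closely" with pleasantness can be made concrete with a toy calculation. The numbers below are invented for illustration only (they are not data from Löken et al., 2009); they merely mimic the reported inverted-U pattern, where both firing rate and pleasantness peak at gentle, caress-like stroking speeds:

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Invented illustrative values: mean C-tactile firing rate (impulses/s) and
# pleasantness ratings at five stroking velocities. Both follow an
# inverted U, peaking at gentle caress speeds (roughly 1-10 cm/s).
velocities_cm_s = [0.1, 1, 3, 10, 30]
firing_rate = [20, 40, 55, 45, 15]
pleasantness = [1.0, 3.5, 4.0, 3.0, 1.5]

print(round(pearson_r(firing_rate, pleasantness), 2))
```

A coefficient near 1 in a computation like this is what "the firing rate correlates closely with how pleasant the stroking feels" means in quantitative terms.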
Modulation
Pain Is Necessary for Survival, but Our Brain Can Stop It if It Needs To
In April 2003, the climber Aron Ralston found himself at the floor of Blue John Canyon in Utah, forced to make an appalling choice: face a slow but certain death—or amputate his right arm. Five days earlier he had fallen down the canyon—since then he had been stuck with his right arm trapped between an 800-lb boulder and the steep sandstone wall. Weak from lack of food and water and close to giving up, it occurred to him like an epiphany that if he broke the two bones in his forearm he could manage to cut off the rest with his pocket knife. The thought of freeing himself and surviving made him so excited he spent the next 40 minutes completely engrossed in the task: first snapping his bones using his body as a lever, then sticking his fingers into the arm, pinching bundles of muscle fibers and severing them one by one, before cutting the blue arteries and the pale “noodle-like” nerves. The pain was unimportant. Only cutting through the thick white main nerve made him stop for a minute—the flood of pain, he describes, was like thrusting his entire arm “into a cauldron of magma.” Finally free, he rappelled down a cliff and walked another 7 miles until he was rescued by some hikers (Ralston, 2010). How is it possible to do something so excruciatingly painful to yourself, and still manage to walk, talk, and think rationally afterwards? The answer lies within the brain, where signals from the body are interpreted. When we perceive somatosensory and nociceptive signals from the body, the experience is highly subjective and malleable by motivation, attention, emotion, and context.
The Motivation–Decision Model and Descending Modulation of Pain
According to the motivation–decision model, the brain automatically and continuously evaluates the pros and cons of any situation—weighing impending threats and available rewards (Fields, 2004, 2006). Anything more important for survival than avoiding the pain activates the brain’s descending pain modulatory system—a top-down system involving several parts of the brain and brainstem, which inhibits nociceptive signaling so that the more important actions can be attended to. In Aron’s extreme case, his actions were likely based on such an unconscious decision process—taking into account his homeostatic state (his hunger, thirst, the inflammation and decay of his crushed hand slowly affecting the rest of his body), the sensory input available (the sweet smell of his dissolving skin, the silence around him indicating his solitude), and his knowledge about the threats facing him (death, or excruciating pain that won’t kill him) versus the potential rewards (survival, seeing his family again). Aron’s story illustrates the evolutionary advantage of being able to shut off pain: The descending pain modulatory system allows us to go through with potentially life-saving actions. However, when one has reached safety or obtained the reward, healing is more important. The very same descending system can then “crank up” nociception from the body to promote healing and motivate us to avoid potentially painful actions. To facilitate or inhibit nociceptive signals from the body, the descending pain modulatory system uses a set of ON- and OFF-cells in the brainstem, which regulate how much of the nociceptive signal reaches the brain. The descending system is dependent on opioid signaling, and analgesics like morphine relieve pain via this circuit (Petrovic, Kalso, Petersson, & Ingvar, 2002).
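The logic of this model can be caricatured as a gain control on nociceptive input: competing demands turn the signal down so action can proceed, while "healing mode" turns it up to enforce rest. The sketch below is a toy illustration of that idea only; the function names and numbers are invented for this example and have no physiological meaning:

```python
def descending_gain(competing_demands=0.0, healing_mode=False):
    """Toy gain factor standing in for the descending pain modulatory system.

    Competing demands (escaping a threat, pursuing a vital reward) push the
    gain below 1 -- OFF-cell-like inhibition, so the more important action
    can be attended to. In healing mode the gain rises above 1 --
    ON-cell-like facilitation, motivating rest and protection of the injury.
    Purely illustrative numbers.
    """
    gain = 1.0 / (1.0 + competing_demands)
    if healing_mode:
        gain *= 1.5
    return gain

def perceived_pain(nociceptive_input, competing_demands=0.0, healing_mode=False):
    """Nociceptive signal scaled by the descending modulatory gain."""
    return nociceptive_input * descending_gain(competing_demands, healing_mode)

# Same nociceptive input, very different experiences:
print(perceived_pain(10.0, competing_demands=9.0))  # survival at stake: strongly inhibited
print(perceived_pain(10.0, healing_mode=True))      # safe and injured: facilitated
```

The point of the caricature is that the nociceptive input itself is unchanged in both calls; only the brain's weighting of what else matters differs, which is how the same tissue damage can feel unimportant to a trapped climber and unbearable to a patient resting at home.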
The Analgesic Power of Reward
Thinking about the good things, like his loved ones and the life ahead of him, was probably pivotal to Aron’s survival. The promise of a reward can be enough to relieve pain. Expecting pain relief (getting less pain is often the best possible outcome if you’re in pain, i.e., it is a reward) from a medical treatment contributes to the placebo effect—where pain relief is due at least partly to your brain’s descending modulation circuit, and such relief depends on the brain’s own opioid system (Eippert et al., 2009; Eippert, Finsterbusch, Bingel, & Buchel, 2009; Levine, Gordon, & Fields, 1978). Eating tasty food, listening to good music, or feeling pleasant touch on your skin also decreases pain in both animals and humans, presumably through the same mechanism in the brain (Leknes & Tracey, 2008). In a now classic experiment, Dum and Herz (1984) either fed rats normal rat food or let them feast on highly rewarding chocolate-covered candy (rats love sweets) while standing on a metal plate until they learned exactly what to expect when placed there. When the plate was heated up to a noxious/painful level, the rats that expected candy endured the temperature for twice as long as the rats expecting normal chow. Moreover, this effect was completely abolished when the rats’ opioid (endorphin) system was blocked with a drug, indicating that the analgesic effect of reward anticipation was caused by endorphin release.
For Aron the climber, both the stress from knowing that death was impending and the anticipation of the reward it would be to survive probably flooded his brain with endorphins, contributing to the wave of excitement and euphoria he experienced while he carried out the amputation “like a five-year-old unleashed on his Christmas presents” (Ralston, 2010). This altered his experience of the pain from the extreme tissue damage he was causing and enabled him to focus on freeing himself. Our brain, it turns out, can modulate the perception of how unpleasant pain is, while still retaining the ability to experience the intensity of the sensation (Rainville, Duncan, Price, Carrier, & Bushnell, 1997; Rainville, Feine, Bushnell, & Duncan, 1992). Social rewards, like holding the hand of your boyfriend or girlfriend, have pain-reducing effects. Even looking at a picture of him/her can have similar effects—in fact, seeing a picture of a person we feel close to not only reduces subjective pain ratings, but also the activity in pain-related brain areas (Eisenberger et al., 2011). The most common things to do when wanting to help someone through a painful experience—being present and holding the person’s hand—thus seems to have a measurably positive effect.
When Touch Becomes Painful or Pain Becomes Chronic
Chances are you’ve been sunburned a few times in your life and have experienced how even the lightest pat on the back or the softest clothes can feel painful on your over-sensitive skin. This condition, where innocuous touch gives a burning, tender sensation, is similar to a chronic condition called allodynia—where neuronal disease or injury makes touch that is normally pleasant feel unpleasantly painful. In allodynia, neuronal injury in the spinal dorsal horn causes Aβ-afferents, which are activated by non-nociceptive touch, to access nociceptive pathways (Liljencrantz et al., 2013). The result is that even gentle touch is interpreted by the brain as painful. While an acute pain response to noxious stimuli has a vital protective function, allodynia and other chronic pain conditions constitute a tremendous source of unnecessary suffering that affects millions of people. Approximately 100 million Americans suffer from chronic pain, and the associated annual economic cost is estimated at $560–$635 billion (Committee on Advancing Pain Research, Care, & Institute of Medicine, 2011). Chronic pain conditions are highly diverse, and they can involve changes on peripheral, spinal, central, and psychological levels. The mechanisms are far from fully understood, and developing appropriate treatment remains a huge challenge for pain researchers.
Chronic pain conditions often begin with an injury to a peripheral nerve or the tissue surrounding it, releasing hormones and inflammatory molecules that sensitize nociceptors. This makes the nerve and neighboring afferents more excitable, so that even uninjured nerves become hyperexcitable and contribute to the persistence of pain. An injury might also make neurons fire nonstop regardless of external stimuli, providing near-constant input to the pain system. Sensitization can also happen in the brain and in the descending modulatory system of the brainstem (Zambreanu, Wise, Brooks, Iannetti, & Tracey, 2005). Exactly on which levels pain perception is altered in chronic pain patients can be extremely difficult to pinpoint, making treatment an often exhausting process of trial and error. Suffering from chronic pain has dramatic impacts on the lives of the afflicted. Being in pain over a longer time can lead to depression, anxiety (fear or anticipation of future pain), and immobilization, all of which may in turn exacerbate pain (Wiech & Tracey, 2009). Negative emotion and attention to pain can increase sensitization to pain, possibly by keeping the descending pain modulatory system in facilitation mode. Distraction is therefore a commonly used technique in hospitals where patients have to undergo painful treatments like changing bandages on large burns. For chronic pain patients, however, diverting attention is not a long-term solution. Positive factors like social support can reduce the risk of chronic pain after an injury and can help patients adjust to bodily changes resulting from injury. We have already talked about how having a hand to hold might alleviate suffering. Chronic pain treatment should target these emotional and social factors as well as the physiological ones.
The Power of the Mind
The context of pain and touch has a great impact on how we interpret it. Just imagine how different it would feel to Aron if someone had amputated his hand against his will and for no discernible reason. Prolonged pain from injuries can be easier to bear if the incident causing them provides a positive context—like a war wound that testifies to a soldier’s courage and commitment—or phantom pain from a hand that was cut off to enable life to carry on. The relative meaning of pain is illustrated by a recent experiment, in which the same moderately painful heat was administered to participants in two different contexts—one control context where the alternative was a nonpainful heat, and another where the alternative was an intensely painful heat. In the control context, where the moderate heat was the least preferable outcome, it was (unsurprisingly) rated as painful. In the other context it was the best possible outcome, and here the exact same moderately painful heat was actually rated as pleasant—because it meant the intensely painful heat had been avoided. This somewhat surprising change in perception—where pain becomes pleasant because it represents relief from something worse—highlights the importance of the meaning individuals ascribe to their pain, which can have decisive effects in pain treatment (Leknes et al., 2013). In the case of touch, knowing who or what is stroking your skin can make all the difference—try thinking about slugs the next time someone strokes your skin if you want an illustration of this point. In a recent study, a group of heterosexual males were told that they were about to receive sensual caresses on the leg by either a male experimenter or by an attractive female experimenter (Gazzola et al., 2012). The study participants could not see who was touching them. Although it was always the female experimenter who performed the caress, the heterosexual males rated the otherwise pleasant sensual caresses as clearly unpleasant when they believed the male experimenter did it. Moreover, brain responses to the “male touch” in somatosensory cortex were reduced, exemplifying how top-down regulation of touch resembles top-down pain inhibition.
Pain and pleasure not only share modulatory systems—another common attribute is that we don’t need to be on the receiving end of it ourselves in order to experience it. How did you feel when you read about Aron cutting through his own tissue, or “Thomas” destroying his own bones unknowingly? Did you cringe? It’s quite likely that some of your brain areas processing affective aspects of pain were active even though the nociceptors in your skin and deep tissue were not firing. Pain can be experienced vicariously, as can itch, pleasurable touch, and other sensations. Tania Singer and her colleagues found in an fMRI study that some of the same brain areas that were active when participants felt pain on their own skin (anterior cingulate and insula) were also active when they were given a signal that a loved one was feeling the pain. Those who were most “empathetic” also showed the largest brain responses (Singer et al., 2004). A similar effect has been found for pleasurable touch: The posterior insula of participants watching videos of someone else’s arm being gently stroked shows the same activation as if they were receiving the touch themselves (Morrison, Bjornsdotter, & Olausson, 2011).
Summary
Sensory experiences connect us to the people around us, to the rest of the world, and to our own bodies. Pleasant or unpleasant, they’re part of being human. In this module, we have seen how being able to inhibit pain responses is central to our survival—and in cases like that of climber Aron Ralston, that ability can allow us to do extreme things. We have also seen how important the ability to feel pain is to our health—illustrated by young “Thomas,” who keeps injuring himself because he simply doesn’t notice pain. While “Thomas” has to learn to avoid harmful activities without the sensory input that normally guides us, G. L. has had to learn to keep engaging with, and moving about in, a world she can hardly feel at all, with a body that is practically disconnected from her awareness. Neither too little sensation nor too much of it does us any good, no matter how pleasant or unpleasant the sensation usually feels. As long as we have normally functioning nervous systems, we are able to adjust the volume of sensory signals and our behavioral reactions according to the context we’re in. When it comes to sensory signals like touch and pain, we are interpreters, not measuring instruments. The quest to understand how our sensory-processing mechanisms can be modulated, psychologically and physiologically, promises to help researchers and clinicians find new ways to alleviate distress from chronic pain.
Outside Resources
Book: Butler, D. S., Moseley, G. L., & Sunyata. (2003). Explain pain (p. 19). Australia: Noigroup.
Book: Kringelbach, M. L., & Berridge, K. C. (Eds.). (2010). Pleasures of the brain (p. 343). Oxford, UK: Oxford University Press.
Book: Ralston, A. (2004). Between a rock and a hard place: The basis of the motion picture 127 Hours. New York, NY: Atria.
Book: Sacks, O. (1998). The man who mistook his wife for a hat: And other clinical tales. New York, NY: Simon & Schuster.
Video: BBC Documentary series “Human Senses,” Episode 3: Touch and Vision
watchdocumentary.org/watch/hu...f3e33c14a.html
Video: BBC Documentary “Pleasure and Pain with Michael Mosley”
http://www.bbc.co.uk/programmes/b00y377q
Video: TEDxAdelaide - Lorimer Moseley – “Why Things Hurt”
Video: Trailer for the film 127 Hours, directed by Danny Boyle and released in 2010
Web: Homepage for the International Association for the Study of Pain
http://www.iasp-pain.org
Web: Proceedings of the National Academy of Sciences Colloquium "The Neurobiology of Pain"
http://www.pnas.org/content/96/14.toc#COLLOQUIUM
Web: Stanford School of Medicine Pain Management Center
http://paincenter.stanford.edu/
Website resource aiming to communicate “advances and issues in the clinical sciences as they relate to the role of the brain and mind in chronic pain disorders,” led by Dr. Lorimer Moseley
www.bodyinmind.org/
Discussion Questions
1. Your friend has had an accident and there is a chance the injury might cause pain over a prolonged period. How would you support your friend? What would you say and do to ease the pain, and why do you think it would work?
2. We have learned that touch and pain sensations in many respects do not “objectively” reflect the outside world or the state of the body. Rather, these experiences are shaped by various top-down influences, and they can even occur without any peripheral activation. This is similar to the way other sensory systems work, e.g., the visual or auditory systems, and seems to reflect a general way the brain processes sensory events. Why do you think the brain interprets incoming sensory information instead of giving a one-to-one readout the way a thermometer or other measuring instrument would? Imagine you instead had “direct, unbiased access” between stimuli and sensation. What would be the advantages and disadvantages of this?
3. Feelings of pain or touch are subjective—they have a particular quality that you perceive subjectively. How can we know whether the pain you feel is similar to the pain I feel? Is it possible that modern scientists can objectively measure such subjective feelings?
Vocabulary
A-fibers
Fast-conducting sensory nerves with myelinated axons. Larger diameters and thicker myelin sheaths increase conduction speed. Aβ-fibers conduct touch signals from low-threshold mechanoreceptors with a velocity of 80 m/s and a diameter of 10 μm; Aδ-fibers have a diameter of 2.5 μm and conduct cold, noxious, and thermal signals at 12 m/s. The third and fastest-conducting A-fiber type is the Aα, which conducts proprioceptive information with a velocity of 120 m/s and a diameter of 20 μm.
Allodynia
Pain due to a stimulus that does not normally provoke pain, e.g., when a light, stroking touch feels painful.
Analgesia
Pain relief.
C-fibers
Slow-conducting, unmyelinated, thin sensory afferents with a diameter of 1 μm and a conduction velocity of approximately 1 m/s. C-pain fibers convey noxious, thermal, and heat signals; C-tactile fibers convey gentle touch and light stroking.
Chronic pain
Persistent or recurrent pain, beyond usual course of acute illness or injury; sometimes present without observable tissue damage or clear cause.
C-pain or Aδ-fibers
C-pain fibers convey noxious, thermal, and heat signals.
C-tactile fibers
C-tactile fibers convey gentle touch and light stroking.
Cutaneous senses
The senses of the skin: tactile, thermal, pruritic (itchy), painful, and pleasant.
Descending pain modulatory system
A top-down pain-modulating system able to inhibit or facilitate pain. The pathway produces analgesia by the release of endogenous opioids. Several brain structures and nuclei are part of this circuit, such as the frontal lobe areas of the anterior cingulate cortex, orbitofrontal cortex, and insular cortex; and nuclei in the amygdala and the hypothalamus, which all project to a structure in the midbrain called the periaqueductal grey (PAG). The PAG then controls ascending pain transmission from the afferent pain system indirectly through the rostral ventromedial medulla (RVM) in the brainstem, which uses ON- and OFF-cells to inhibit or facilitate nociceptive signals at the spinal dorsal horn.
Endorphin
An endogenous morphine-like peptide that binds to the opioid receptors in the brain and body; synthesized in the body’s nervous system.
Exteroception
The sense of the external world, of all stimulation originating from outside our own bodies.
Interoception
The sense of the physiological state of the body. Hunger, thirst, temperature, pain, and other sensations relevant to homeostasis. Visceral input such as heart rate, blood pressure, and digestive activity give rise to an experience of the body’s internal states and physiological reactions to external stimulation. This experience has been described as a representation of “the material me,” and it is hypothesized to be the foundation of subjective feelings, emotion, and self-awareness.
Nociception
The neural process of encoding noxious stimuli, the sensory input from nociceptors. Not necessarily painful, and crucially not necessary for the experience of pain.
Nociceptors
High-threshold sensory receptors of the peripheral somatosensory nervous system that are capable of transducing and encoding noxious stimuli. Nociceptors send information about actual or impending tissue damage to the brain. These signals can often lead to pain, but nociception and pain are not the same.
Noxious stimulus
A stimulus that is damaging or threatens damage to normal tissues.
Pain
Defined as “an unpleasant sensory and emotional experience associated with actual or potential tissue damage, or described in terms of such damage,” according to the International Association for the Study of Pain.
Phantom pain
Pain that appears to originate in an amputated limb.
Placebo effect
Effects from a treatment that are not caused by the physical properties of a treatment but by the meaning ascribed to it. These effects reflect the brain’s own activation of modulatory systems, which is triggered by positive expectation or desire for a successful treatment. Placebo analgesia is the most well-studied placebo effect and has been shown to depend, to a large degree, on opioid mechanisms. Placebo analgesia can be reversed by the pharmacological blocking of opioid receptors. The word “placebo” is probably derived from the Latin word “placebit” (“it will please”).
Sensitization
Increased responsiveness of nociceptive neurons to their normal input and/or recruitment of a response to normally subthreshold inputs. Clinically, sensitization may only be inferred indirectly from phenomena such as hyperalgesia or allodynia. Sensitization can occur in the central nervous system (central sensitization) or in the periphery (peripheral sensitization).
Social touch hypothesis
Proposes that social touch is a distinct domain of touch. C-tactile afferents form a special pathway that distinguishes social touch from other types of touch by selectively firing in response to touch of social-affective relevance; thus sending affective information parallel to the discriminatory information from the Aβ-fibers. In this way, the socially relevant touch stands out from the rest as having special positive emotional value and is processed further in affect-related brain areas such as the insula.
Somatosensory cortex
Consists of primary somatosensory cortex (S1) in the postcentral gyrus of the parietal lobes and secondary somatosensory cortex (S2), which is defined functionally and found in the upper bank of the lateral sulcus, called the parietal operculum. Somatosensory cortex also includes parts of the insular cortex.
Somatotopically organized
When the parts of the body that are represented in a particular brain region are organized topographically according to their physical location in the body (see Figure 8.5.2 illustration).
Spinothalamic tract
Runs through the spinal cord’s lateral column up to the thalamus. C-fibers enter the dorsal horn of the spinal cord and form a synapse with a neuron that then crosses over to the lateral column and becomes part of the spinothalamic tract.
Transduction
The mechanisms that convert stimuli into electrical signals that can be transmitted and processed by the nervous system. Physical or chemical stimulation creates action potentials in a receptor cell in the peripheral nervous system, which is then conducted along the axon to the central nervous system.
By Dora Angelaki and J. David Dickman
Baylor College of Medicine
The vestibular system functions to detect head motion and position relative to gravity and is primarily involved in the fine control of visual gaze, posture, orthostasis, spatial orientation, and navigation. Vestibular signals are highly processed in many regions of the brain and are involved in many essential functions. In this module, we provide an overview of how the vestibular system works and how vestibular signals are used to guide behavior.
Learning Objectives
• Define the basic structures of the vestibular receptor system.
• Describe the neuroanatomy of the vestibuloocular, vestibulospinal, and vestibulo-thalamo-cortical pathways.
• Describe the vestibular commissural system.
• Describe the different multisensory cortical areas for motion perception.
Introduction
Remember the dizzy feeling you got as a child after you jumped off the merry-go-round or spun around like a top? These feelings result from activation of the vestibular system, which detects our movements through space but is not a conscious sense like vision or hearing. In fact, most vestibular functions are imperceptible, but vestibular-related sensations such as motion sickness can pop up rapidly when riding a roller coaster, having a bumpy plane ride, or sailing a boat in rough seas. However, these sensations are really side effects; the vestibular system is actually extremely important for everyday activities, with vestibular signals being involved in much of the brain’s information processing that controls such fundamental functions as balance, posture, gaze stabilization, spatial orientation, and navigation, to name a few. In many regions of the brain, vestibular information is combined with signals from the other senses as well as with motor information to give rise to motion perception, body awareness, and behavioral control. Here, we will explore the workings of the vestibular system and consider some of the integrated computations the brain performs using vestibular signals to guide our common behavior.
Structure of the vestibular receptors
The vestibular receptors lie in the inner ear next to the auditory cochlea. They detect rotational motion (head turns), linear motion (translations), and tilts of the head relative to gravity and transduce these motions into neural signals that can be sent to the brain. There are five vestibular receptors in each ear (Hearing module, Figure 8.6.1 - http://noba.to/jry3cu78), including three semicircular canals (horizontal, anterior, and posterior) that transduce rotational angular accelerations and two otolith receptors (utricle and saccule) that transduce linear accelerations (Lindeman, 1969). Together, the semicircular canals and otolith organs can respond to head motion and to static head position relative to gravity in all directions in 3D space.
These receptors are contained in a series of interconnected fluid filled tubes that are protected by a dense, overlying bone (Iurato, 1967). Each of the three semicircular canals lies in a plane that is orthogonal to the other two. The horizontal semicircular canal lies in a roughly horizontal head plane, whereas the anterior and posterior semicircular canals lie vertically in the head (Blanks, Curthoys, Bennett, & Markham, 1985). The semicircular canal receptor cells, termed hair cells, are located only in the middle of the circular tubes in a special epithelium, covered by a gelatinous membrane that stretches across the tube to form a fluid-tight seal like the skin of a drum (Figures 1A and 1B). Hair cells are so named due to an array of nearly 100 staggered-height stereocilia (like a church pipe organ) that protrude from the top of the cell into the overlying gelatin membrane (Wersäll, 1956). The shortest stereocilia are at one end of the cell and the tallest at the other (Lindeman, 1969). When the head is rotated, the fluid in the semicircular canals lags behind the head motion and pushes on the gelatin membrane, which bends the stereocilia.
As shown in Figure 8.6.2, when the head moves toward the receptor hair cells (e.g., left head turns for the left horizontal semicircular canal), the stereocilia are bent toward the tallest end and special mechanically gated ion channels in the tips of the cilia open, which excites (depolarizes) the cell (Shotwell, Jacobs, & Hudspeth, 1981). Head motion in the opposite direction causes bending toward the smallest stereocilia, which closes the channels and inhibits (hyperpolarizes) the cell. The left and right ear semicircular canals have opposite polarity, so, for example, when you turn your head to the left, the receptors in the left horizontal semicircular canal will be excited while right ear horizontal canal receptors will be inhibited (Figure 8.6.3). The same relationship is true for the vertical semicircular canals. Vestibular afferent nerve fibers innervate the base of the hair cell and increase or decrease their neural firing rate as the receptor cell is excited or inhibited, respectively (Dickman & Correia, 1989), and then carry these signals regarding head rotational motion to the brain as part of the vestibulocochlear nerve (cranial nerve VIII). They enter the brainstem and terminate in the ipsilateral vestibular nuclei, cerebellum, and reticular formation (Carleton & Carpenter, 1984; Dickman & Fang, 1996). The primary vestibular hair cell and afferent neurotransmitters are glutamate and aspartate. Due to the mechanical properties of the vestibular receptor system, rotational accelerations of the head are integrated into velocity signals (Van Egmond, Groen, & Jongkess, 1949) that are then encoded by semicircular canal afferents (Fernandez & Goldberg, 1971).
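The last point can be made concrete with a toy calculation. The sketch below is ours, not from the text, and the numbers are arbitrary; it simply shows how a running integral turns an angular-acceleration profile into the velocity signal that canal afferents are described as encoding.

```python
# Simplified, hypothetical model: over the range of natural head movements,
# the fluid-cupula mechanics of a semicircular canal act roughly as an
# integrator, so the encoded signal tracks head velocity even though the
# physical stimulus is angular acceleration.
DT = 0.001  # simulation time step, seconds

def integrate(acceleration_samples, dt=DT):
    """Euler integration: angular acceleration (deg/s^2) -> velocity (deg/s)."""
    v, velocity = 0.0, []
    for a in acceleration_samples:
        v += a * dt
        velocity.append(v)
    return velocity

# A brief head turn: accelerate at 200 deg/s^2 for 0.25 s, then coast.
accel = [200.0] * 250 + [0.0] * 250
vel = integrate(accel)
print(round(vel[249], 3), round(vel[-1], 3))  # velocity ramps to ~50 deg/s and holds
```

Real canal dynamics are band-limited, so this velocity-like behavior holds only over natural movement frequencies; the toy model ignores that detail.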
Detection thresholds for rotational motion have shown that afferents can discriminate differences in head velocity on the order of 2 deg/sec, but also are sensitive to a broad range of natural head movements up to high head speeds in the hundreds of deg/sec (as you might experience when you make a fast head turn toward a loud sound, or are performing gymnastics; Sadeghi, Chacron, Taylor, & Cullen, 2007; Yu, Dickman, & Angelaki, 2012).
Otolith receptors are sensitive to linear accelerations and tilts of the head relative to gravity (Fernandez & Goldberg, 1976a). The utricle otolith receptor lies parallel to the horizontal semicircular canal and the saccule receptor lies vertical in the head (Hearing module, Figure 8.6.1- http://noba.to/jry3cu78). As shown in Figure 8.6.4, a special otolith epithelium contains receptor hair cells whose stereocilia extend into a gelatin membrane that is covered by a layer of calcium carbonate crystals, termed otoconia, like rocks piled up to form a jetty (Lindeman, 1969). Otoconia are not affected by fluid movements but instead are displaced by linear accelerations, including translations (e.g., forward/backward or upward/downward motions) or changes in head position relative to gravity. These linear accelerations produce displacements of the otoconia (due to their high mass), much like rocks rolling down a hill or your coffee cup falling off the car dashboard when you push the gas pedal. Movements of the otoconia bend the hair cell stereocilia and open/close channels in a similar way to that described for the semicircular canals. However, otolith hair cells are polarized such that the tallest stereocilia are pointing toward the center of the utricle and away from the center in the saccule, which effectively splits the receptors into two opposing groups (Flock, 1964; Lindeman, 1969). In this way, some hair cells are excited and some inhibited for each linear motion force or head tilt experienced, with the population of receptors and their innervating afferents being directionally tuned to all motions or head tilts in 3D space (Fernandez & Goldberg, 1976b).
All vestibular hair cells and afferents receive connections from vestibular efferents, fibers that project from the brain out to the vestibular receptor organs; their function is not well understood. It is thought that efferents control the sensitivity of the receptors (Boyle, Carey, & Highstein, 1991). The primary efferent neurotransmitter is acetylcholine (Anniko & Arnold, 1991).
The vestibular nuclei
The vestibular nuclei comprise a large set of neural elements in the brainstem that receive motion and other multisensory signals, then regulate movement responses and sensory experience. Many vestibular nuclei neurons have reciprocal connections with the cerebellum that form important regulatory mechanisms for the control of eye movements, head movements, and posture. There are four major vestibular nuclei that lie in the rostral medulla and caudal pons of the brainstem; all receive direct input from vestibular afferents (Brodal, 1984; Precht & Shimazu, 1965). Many of these nuclei neurons receive convergent motion information from the opposite ear through an inhibitory commissural pathway that uses gamma-aminobutyric acid (GABA) as a neurotransmitter (Kasahara & Uchino, 1974; Shimazu & Precht, 1966). The commissural pathway is highly organized such that cells receiving horizontal excitatory canal signals from the ipsilateral ear will also receive contralateral inhibitory horizontal canal signals from the opposite ear. This arrangement gives rise to a “push-pull” vestibular function, whereby directional sensitivity to head movement is coded by opposing receptor signals. Because vestibular nuclei neurons receive information from bilateral inner ear receptors and because they maintain a high spontaneous firing rate (nearly 100 impulses/sec), they are thought to act to “compare” the relative discharge rates of left vs. right canal afferent firing activity. For example, during a leftward head turn, left brainstem nuclei neurons receive high firing-rate information from the left horizontal canal and low firing-rate information from the right horizontal canal. This comparison of activity is interpreted as a left head turn. Similar nuclei neuron responses exist when the head is pitched or rolled, with the vertical semicircular canals being stimulated by the rotational motion in their sensitivity planes.
However, the opposing push-pull response from the vertical canals occurs with the anterior semicircular canal in one ear and the co-planar posterior semicircular canal of the opposite ear. Damage or disease that interrupts inner ear signal information from one side of the head can change the normal resting activity in the VIIIth nerve afferent fibers and will be interpreted by the brain as a head rotation, even though the head is stationary. These effects often lead to illusions of spinning or rotating that can be quite upsetting and may produce nausea or vomiting. However, over time the commissural fibers provide for vestibular compensation, a process by which the loss of unilateral vestibular receptor function is partially restored centrally and behavioral responses, such as the vestibuloocular reflex (VOR) and postural responses, mostly recover (Beraneck et al., 2003; Fetter & Zee, 1988; Newlands, Hesse, Haque, & Angelaki, 2001; Newlands & Perachio, 1990).
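The push-pull comparison, and why unilateral damage is misread as rotation, can be illustrated with a toy model. The 100 impulses/sec baseline comes from the text; the linear rate coding and the gain value are simplifying assumptions of ours, not part of the original account.

```python
# Toy model of the left/right "comparison" performed by vestibular nuclei.
BASELINE = 100.0  # spontaneous firing rate, impulses/sec (from the text)
GAIN = 0.5        # assumed modulation, impulses/sec per deg/s (illustrative)

def canal_afferent_rates(head_velocity_deg_s):
    """Left/right horizontal-canal afferent rates for a yaw head turn.
    Positive velocity = leftward turn: excites the left canal, inhibits
    the right one. Firing rates cannot drop below zero."""
    left = max(0.0, BASELINE + GAIN * head_velocity_deg_s)
    right = max(0.0, BASELINE - GAIN * head_velocity_deg_s)
    return left, right

def decoded_velocity(left_rate, right_rate):
    """Central comparison: the difference between the two sides."""
    return (left_rate - right_rate) / (2 * GAIN)

left, right = canal_afferent_rates(50.0)  # a 50 deg/s leftward turn
print(decoded_velocity(left, right))      # prints 50.0

# Unilateral damage: silence the right ear. A stationary head now decodes
# as an apparent leftward rotation -- the illusory spinning described above.
print(decoded_velocity(BASELINE, 0.0))    # prints 100.0
```

The second call shows why loss of input from one side is misinterpreted: the central comparison cannot distinguish a silenced ear from a genuine velocity signal.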
In addition to the commissural pathway, many vestibular nuclei neurons receive proprioceptive signals from the spinal cord regarding muscle movement and position, visual signals regarding spatial motion, other multisensory (e.g., trigeminal) signals, and higher order signals from the cortex. It is thought that the cortical inputs regulate fine gaze and postural control, as well as suppress the normal compensatory reflexes during motion in order to elicit volitional movements. Of special significance are convergent signals from the semicircular canal and otolith afferents that allow central vestibular neurons to compute specific properties of head motion (Dickman & Angelaki, 2002). For example, Einstein (1907) showed that linear accelerations are equivalent whether they arise from translational motion or from tilts of the head relative to gravity. The otolith receptors cannot discriminate between the two, so how is it that we can tell the difference between when we are translating forward and tilting backward, where the linear acceleration signaled by the otolith afferents is the same? Vestibular nuclei and cerebellar neurons use convergent signals from both the semicircular canals and the otolith receptors to discriminate between tilt and translation, and as a result, some cells encode head tilt (Zhou, 2006) while other cells encode translational motion (Angelaki, Shaikh, Green, & Dickman, 2004).
Vestibuloocular system
The vestibular system is responsible for controlling gaze stability during motion (Crane & Demer, 1997). For example, if we want to read the sign in a store window while walking by, we must maintain foveal fixation on the words while compensating for the combined rotational and translational head movements incurred during our stride. The vestibular system regulates compensatory eye, neck, spinal, and limb movements in order to maintain gaze (Keshner & Peterson, 1995). One of the major components contributing to gaze stability is the VOR, which produces reflexive eye movements that are equal in magnitude and opposite in direction to the perceived head motion in 3D space (Wilson et al., 1995). The VOR is so accurate and fast that it allows people to maintain visual fixation on objects of interest while experiencing demanding motion conditions, such as running, skiing, playing tennis, and driving. In fact, gaze stabilization in humans has been shown to be completely compensatory (essentially perfect) for most natural behaviors. To produce the VOR, vestibular neurons must control each of the six pairs of eye muscles in unison through a specific set of connections to the oculomotor nuclei (Ezure & Graf, 1984). The anterior and posterior semicircular canals along with the saccule control vertical and torsional (turning of the eye around the line of sight) eye movements, while the horizontal canals and the utricle control horizontal eye movements.
To understand how the VOR works, let’s take the example of the compensatory response for a leftward head turn while reading the words on a computer screen. The basic pathway consists of horizontal semicircular canal afferents that project to specific neurons in the vestibular nuclei. These nuclei cells, in turn, send an excitatory signal to the contralateral abducens nucleus, which projects through the sixth cranial nerve to innervate the lateral rectus muscle (Figure 8.6.5). Some abducens neurons send an excitatory projection back across the midline to a subdivision of cells in the ipsilateral oculomotor nucleus, which, in turn, projects through the third cranial nerve to innervate the right (ipsilateral) medial rectus muscle. When a leftward head turn is made, the left horizontal canal vestibular afferents will increase their firing rate and consequently increase the activity of vestibular nuclei neurons projecting to the opposite (contralateral) right abducens nucleus. The abducens neurons produce contraction of the right lateral rectus and, through a separate cell projection to the left oculomotor nucleus, excite the left medial rectus muscles. In addition, matching bilateral inhibitory connections relax the left lateral rectus and right medial rectus eye muscles. The resulting rightward eye movement for both eyes stabilizes the object of interest upon the retina for greatest visual acuity.
During linear translations, a different type of VOR also occurs (Paige & Tomko, 1991). For example, sideways motion to the left results in a horizontal rightward eye movement to maintain visual stability on an object of interest. In a similar manner, vertical up–down head movements (such as occur while walking or running) elicit oppositely directed vertical eye movements (Angelaki, McHenry, & Hess, 2000). For these reflexes, the amplitude of the translational VOR depends on viewing distance. This is due to the fact that the vergence angle (i.e., the angle between the lines of sight for each eye) varies as a function of the inverse of the distance to the viewed visual object (Schwarz, Busettini, & Miles, 1989). Visual objects that are far away (2 meters or more) require no vergence angle, but as the visual objects get closer (e.g., when holding your finger close to your nose), a large vergence angle is needed. During translational motion, the eyes will change their vergence angle as the visual object moves from close to farther away (or vice versa). These responses are a result of activation of the otolith receptors, with connections to the oculomotor nuclei similar to those described above for the rotational vestibuloocular reflex. With tilts of the head, the resulting eye movement is termed torsion, and consists of a rotational eye movement around the line of sight that is in the direction opposite to the head tilt. As mentioned above, there are major reciprocal connections between the vestibular nuclei and the cerebellum. It has been well established that these connections are crucial for adaptive motor learning in the vestibuloocular reflex (Lisberger, Pavelko, & Broussard, 1994).
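The inverse relation between vergence angle and viewing distance can be made concrete with simple geometry. The 6.4 cm interocular distance below is a typical adult value we assume for illustration; it is not given in the text.

```python
import math

INTEROCULAR_DISTANCE_M = 0.064  # assumed typical adult value

def vergence_angle_deg(target_distance_m):
    """Angle between the two lines of sight for a target straight ahead."""
    half_angle = math.atan((INTEROCULAR_DISTANCE_M / 2) / target_distance_m)
    return math.degrees(2 * half_angle)

for d in (2.0, 0.5, 0.1):
    print(f"{d:>4} m -> {vergence_angle_deg(d):5.2f} deg")
# A target 2 m away needs under 2 degrees of vergence, while one 10 cm
# from the nose needs about 35 degrees -- which is why the same sideways
# head translation demands a far larger compensatory eye movement for
# near targets than for distant ones.
```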
Vestibulo-spinal network
There are two vestibular descending pathways that regulate body muscle responses to motion and gravity: the lateral vestibulo-spinal tract (LVST) and the medial vestibulo-spinal tract (MVST). Reflexive control of head and neck muscles arises through the neurons of the MVST. These neurons comprise the rapid vestibulocollic reflex (VCR) that serves to stabilize the head in space and participates in gaze control (Peterson, Goldberg, Bilotto, & Fuller, 1985). The MVST neurons receive input from vestibular receptors and the cerebellum, and somatosensory information from the spinal cord. MVST neurons carry both excitatory and inhibitory signals to innervate neck flexor and extensor motor neurons in the spinal cord. For example, if one trips over a crack in the pavement while walking, MVST neurons will receive downward and forward linear acceleration signals from the otolith receptors and forward rotation acceleration signals from the vertical semicircular canals. The VCR will compensate by providing excitatory signals to the dorsal neck flexor muscles and inhibitory signals to the ventral neck extensor muscles, which moves the head upward and opposite to the falling motion to protect it from impact.
The LVST comprises a topographic organization of vestibular nuclei cells that receive substantial input from the cerebellum, proprioceptive inputs from the spinal cord, and convergent afferent signals from vestibular receptors. LVST fibers project ipsilaterally to many levels of motor neurons in the cord to provide coordination of different muscle groups for postural control (Shinoda, Sugiuchi, Futami, Ando, & Kawasaki, 1994). LVST neurons contain either acetylcholine or glutamate as a neurotransmitter and exert an excitatory influence upon extensor muscle motor neurons. For example, LVST fibers produce extension of the contralateral axial and limb musculature when the body is tilted sideways. These actions serve to stabilize the body’s center of gravity in order to preserve upright posture.
Vestibulo-autonomic control
Some vestibular nucleus neurons send projections to the reticular formation, dorsal pontine nuclei, and nucleus of the solitary tract. These connections regulate breathing and circulation through compensatory vestibular autonomic responses that stabilize respiration and blood pressure during body motion and changes relative to gravity. They may also be important for induction of motion sickness and emesis.
Vestibular signals in the thalamus and cortex
The cognitive perception of motion, spatial orientation, and navigation through space arises through multisensory information from vestibular, visual, and somatosensory signals in the thalamus and cortex (Figure 8.6.6). Vestibular nuclei neurons project bilaterally to several thalamic regions. Neurons in the ventral posterior group respond either to vestibular signals alone or to vestibular plus somatosensory signals, and project to primary somatosensory cortex (areas 3a and 2v), somatosensory association cortex, posterior parietal cortex (areas 5 and 7), and the insula of the temporal cortex (Marlinski & McCrea, 2008; Meng, May, Dickman, & Angelaki, 2007). The posterior nuclear group (PO), near the medial geniculate body, receives both vestibular and auditory signals as well as inputs from the superior colliculus and spinal cord, indicating an integration of multiple sensory signals. Some anterior pulvinar neurons also respond to motion stimuli and project to cortical area 3a, the posterior insula, and the temporo-parietal cortex (PIVC). In humans, electrical stimulation of these thalamic areas produces sensations of movement and sometimes dizziness.
Area 2v cells respond to motion, and electrical stimulation of this area in humans produces sensations of moving, spinning, or dizziness. Area 3a lies at the base of the central sulcus adjacent to the motor cortex and is thought to be involved in integrative motor control of the head and body (Guldin, Akbarian, & Grusser, 1992). Neurons in the PIVC are multisensory, responding to body motion, somatosensory, proprioceptive, and visual motion stimuli (Chen, DeAngelis, & Angelaki, 2011; Grusser, Pause, & Schreiter, 1982). PIVC and areas 3a and 2v are heavily interconnected. Vestibular neurons have also been observed in the posterior parietal cortex: in area 7, the ventral intraparietal area (VIP), the medial intraparietal area (MIP), and the medial superior temporal area (MST). VIP contains multimodal neurons involved in spatial coding. MIP and MST neurons respond to body motion through space by multisensory integration of visual motion and vestibular signals (Gu, DeAngelis, & Angelaki, 2007), and many MST cells are directly involved in heading perception (Gu, Watkins, Angelaki, & DeAngelis, 2006). Lesions of the parietal cortical areas can result in confusion in spatial awareness. Finally, areas involved with the control of saccades and pursuit eye movements, including area 6, area 8, and the superior frontal gyrus, receive vestibular signals (Fukushima, Sato, Fukushima, Shinmei, & Kaneko, 2000). How these different cortical regions contribute to our perception of motion and spatial orientation is still not well understood.
Spatial orientation and navigation
Our ability to know where we are and to navigate different spatial locations is essential for survival. It is believed that a cognitive map of our environment is created through exploration and then used for spatial orientation and navigation, such as driving to the store, or walking through a dark house (McNaughton, Battaglia, Jensen, Moser, & Moser, 2006). Cells in the limbic system and the hippocampus that contribute to these functions have been identified, including place cells, grid cells, and head direction cells (Figure 6B). Place cells in the hippocampus encode specific locations in the environment (O’Keefe, 1976). Grid cells in the entorhinal cortex encode spatial maps in a tessellated pattern (Hafting, Fyhn, Molden, Moser, & Moser, 2005). Head direction cells in the anterior-dorsal thalamus encode heading direction, independent of spatial location (Taube, 1995). It is thought that these cell types work together to provide for spatial orientation, spatial memory, and our ability to navigate. Both place cells and head direction cells depend upon a functioning vestibular system to maintain their directional and orientation information (Stackman, Clark, & Taube, 2002). The pathway by which vestibular signals reach the navigation network is not well understood; however, damage to the vestibular system, hippocampus, and dorsal thalamus regions often disrupts our ability to orient in familiar environments, navigate from place to place, or even to find our way home.
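The defining property of a head direction cell can be illustrated with a toy model: its firing depends on which way the head points, not on where the animal is. The sketch below is a minimal illustration in Python; the Gaussian tuning shape and all parameter values (preferred direction, peak rate, tuning width) are illustrative assumptions, not physiological measurements.

```python
import math

def head_direction_response(heading_deg, preferred_deg=90.0,
                            peak_rate=40.0, tuning_width=30.0):
    """Toy tuning curve for a model head-direction cell.

    Firing peaks when the animal faces the cell's preferred direction,
    independent of the animal's location. All numbers are illustrative.
    """
    # Smallest signed angular difference between heading and preferred direction
    diff = (heading_deg - preferred_deg + 180.0) % 360.0 - 180.0
    # Gaussian falloff around the preferred direction (spikes/s)
    return peak_rate * math.exp(-(diff ** 2) / (2.0 * tuning_width ** 2))

at_preferred = head_direction_response(90.0)   # peak firing rate
facing_away = head_direction_response(270.0)   # near zero
```

A place cell could be sketched the same way, with the Gaussian defined over location rather than heading; the key contrast is which variable the tuning curve is a function of.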
Motion sickness
Although a number of conditions can produce motion sickness, it is generally thought to be evoked by a mismatch between vestibular, visual, and proprioceptive cues (Yates, Miller, & Lucot, 1998). For example, reading a book in a car on a winding road can produce motion sickness because the accelerations experienced by the vestibular system do not match the visual input. However, if one looks out the window at the passing scenery during the same trip, no sickness occurs because the visual and vestibular cues are in alignment. Sea sickness, a form of motion sickness, appears to be a special case and arises from unusual vertical oscillatory and roll motion. Human studies have found that low-frequency oscillations of about 0.2 Hz at large amplitudes (such as those found in large seas during a storm) are most likely to cause motion sickness, with higher frequencies posing few problems.
Summary
Here, we have seen that the vestibular system transduces and encodes signals about head motion and position with respect to gravity, information that is then used by the brain for many essential functions and behaviors. We understand a great deal about vestibular contributions to fundamental reflexes, such as compensatory eye movements and balance during motion. More recent progress has been made toward understanding how vestibular signals combine with other sensory cues, such as vision, in the thalamus and cortex to give rise to motion perception. However, many complex cognitive abilities that we know require vestibular information, such as spatial orientation and navigation, are only just beginning to be investigated. Future research on vestibular function will likely focus on how the brain copes with the loss of vestibular signals. In fact, according to the National Institutes of Health, nearly 35% of Americans over the age of 40 (69 million people) have reported chronic vestibular-related problems. It is therefore of significant importance to human health to better understand how vestibular cues contribute to common brain functions and how better treatment options for vestibular dysfunction can be realized.
Outside Resources
Animated Video of the Vestibular System
http://sites.sinauer.com/neuroscienc...ions14.01.html
Discussion Questions
1. If a person sustains loss of the vestibular receptors in one ear due to disease or trauma, what symptoms would the person suffer? Would the symptoms be permanent?
2. Often motion sickness is relieved when a person looks at far distance objects, such as things located on the far horizon. Why does far distance viewing help in motion sickness while close distance view (like reading a map or book) make it worse?
3. Vestibular signals combine with visual signals in certain areas of cortex and assist in motion perception. What types of cues does the visual system provide for self motion through space? What types of vestibular signals would be consistent with rotational versus translational motion?
Vocabulary
Abducens nucleus
A group of excitatory motor neurons in the medial brainstem that send projections through the VIth cranial nerve to control the ipsilateral lateral rectus muscle. In addition, abducens interneurons send an excitatory projection across the midline to a subdivision of cells in the contralateral oculomotor nucleus, which project through the IIIrd cranial nerve to innervate the medial rectus muscle of the opposite eye.
Acetylcholine
An organic compound neurotransmitter consisting of acetic acid and choline. Depending upon the receptor type, acetylcholine can have excitatory, inhibitory, or modulatory effects.
Afferent nerve fibers
Single neurons that innervate the receptor hair cells and carry vestibular signals to the brain as part of the vestibulocochlear nerve (cranial nerve VIII).
Aspartate
An excitatory amino acid neurotransmitter that is widely used by vestibular receptors, afferents, and many neurons in the brain.
Compensatory reflexes
A stabilizing motor reflex that occurs in response to a perceived movement, such as the vestibuloocular reflex, or the postural responses that occur during running or skiing.
Depolarized
When a receptor hair cell’s mechanically gated channels open, the cell’s membrane voltage increases (depolarizes), producing a release of neurotransmitter that excites the innervating nerve fiber.
Detection thresholds
The smallest amount of head motion that can be reliably reported by an observer.
Directional tuning
The preferred direction of motion for a hair cell or afferent: the direction that elicits a peak excitatory response, in contrast to the least preferred direction, which elicits no response. Cells are said to be “tuned” for a best and worst direction of motion, with in-between directions eliciting a lesser but observable response.
Gamma-aminobutyric acid
A major inhibitory neurotransmitter in the vestibular commissural system.
Gaze stability
A combination of eye, neck, and head responses that are all coordinated to maintain visual fixation (fovea) upon a point of interest.
Glutamate
An excitatory amino acid neurotransmitter that is widely used by vestibular receptors, afferents, and many neurons in the brain.
Hair cells
The receptor cells of the vestibular system. They are termed hair cells due to the many hairlike cilia that extend from the apical surface of the cell into the gelatin membrane. Mechanically gated ion channels in the tips of the cilia open and close as the cilia bend, causing membrane voltage changes in the hair cell that are proportional to the intensity and direction of motion.
Hyperpolarizes
When a receptor hair cell’s mechanically gated channels close, the cell’s membrane voltage decreases (hyperpolarizes), reducing neurotransmitter release and thereby inhibiting the innervating nerve fiber.
Lateral rectus muscle
An eye muscle that rotates the eye outward in the horizontal plane.
Lateral vestibulo-spinal tract
Vestibular neurons that project to all levels of the spinal cord on the ipsilateral side to control posture and balance movements.
Mechanically gated ion channels
Ion channels located in the tips of the stereocilia on the receptor cells that open/close as the cilia bend toward the tallest/smallest cilia, respectively. These channels are permeable to potassium ions, which are abundant in the fluid bathing the top of the hair cells.
Medial vestibulo-spinal tract
Vestibular nucleus neurons project bilaterally to cervical spinal motor neurons for head and neck movement control. The tract principally functions in gaze direction and stability during motion.
Neurotransmitters
A chemical compound used to send signals from a receptor cell to a neuron, or from one neuron to another. Neurotransmitters can be excitatory, inhibitory, or modulatory and are packaged in small vesicles that are released from the end terminals of cells.
Oculomotor nuclei
Includes three neuronal groups in the brainstem, the abducens nucleus, the oculomotor nucleus, and the trochlear nucleus, whose cells send motor commands to the six pairs of eye muscles.
Oculomotor nucleus
A group of cells in the middle brainstem that contain subgroups of neurons that project to the medial rectus, inferior oblique, inferior rectus, and superior rectus muscles of the eyes through the 3rd cranial nerve.
Otoconia
Small calcium carbonate particles that are packed in a layer on top of the gelatin membrane that covers the otolith receptor hair cell stereocilia.
Otolith receptors
Two inner ear vestibular receptors (utricle and saccule) that transduce linear accelerations and head tilt relative to gravity into neural signals that are then transferred to the brain.
Proprioceptive
Sensory information regarding muscle position and movement arising from receptors in the muscles, tendons, and joints.
Semicircular canals
A set of three inner ear vestibular receptors (horizontal, anterior, posterior) that transduce head rotational accelerations into head rotational velocity signals that are then transferred to the brain. There are three semicircular canals in each ear, with the major planes of each canal being orthogonal to each other.
Stereocilia
Hairlike projections from the top of the receptor hair cells. The stereocilia are arranged in ascending height, and when displaced toward the tallest cilia, the mechanically gated channels open and the cell is excited (depolarized). When the stereocilia are displaced toward the smallest cilia, the channels close and the cell is inhibited (hyperpolarized).
Torsion
A rotational eye movement around the line of sight, in a clockwise or counterclockwise direction.
Vergence angle
The angle between the line of sight for the two eyes. Low vergence angles indicate far-viewing objects, whereas large angles indicate viewing of near objects.
Vestibular compensation
Following injury to the vestibular receptors or the vestibulocochlear nerve on one side, the central vestibular nuclei neurons gradually recover much of their function through plasticity mechanisms. The recovery is never complete, however, and extreme motion environments can still produce dizziness, nausea, balance problems, and spatial memory deficits.
Vestibular efferents
Nerve fibers originating from a nucleus in the brainstem that project from the brain to innervate the vestibular receptor hair cells and afferent nerve terminals. Efferents have a modulatory role on their targets, which is not well understood.
Vestibular system
Consists of a set of motion and gravity detection receptors in the inner ear, a set of primary nuclei in the brainstem, and a network of pathways carrying motion and gravity signals to many regions of the brain.
Vestibulocochlear nerve
The VIIIth cranial nerve that carries fibers innervating the vestibular receptors and the cochlea.
Vestibuloocular reflex
Eye movements produced by the vestibular brainstem that are equal in magnitude and opposite in direction to head motion. The VOR functions to maintain visual stability on a point of interest and is nearly perfect for all natural head movements.
By Lorin Lachs
California State University, Fresno
Most of the time, we perceive the world as a unified bundle of sensations from multiple sensory modalities. In other words, our perception is multimodal. This module provides an overview of multimodal perception, including information about its neurobiology and its psychological effects.
learning objectives
• Define the basic terminology and basic principles of multimodal perception.
• Describe the neuroanatomy of multisensory integration and name some of the regions of the cortex and midbrain that have been implicated in multisensory processing.
• Explain the difference between multimodal phenomena and crossmodal phenomena.
• Give examples of multimodal and crossmodal behavioral effects.
Perception: Unified
Although it has been traditional to study the various senses independently, most of the time, perception operates in the context of information supplied by multiple sensory modalities at the same time. For example, imagine if you witnessed a car collision. You could describe the stimulus generated by this event by considering each of the senses independently; that is, as a set of unimodal stimuli. Your eyes would be stimulated with patterns of light energy bouncing off the cars involved. Your ears would be stimulated with patterns of acoustic energy emanating from the collision. Your nose might even be stimulated by the smell of burning rubber or gasoline.
However, all of this information would be relevant to the same thing: your perception of the car collision. Indeed, unless someone was to explicitly ask you to describe your perception in unimodal terms, you would most likely experience the event as a unified bundle of sensations from multiple senses. In other words, your perception would be multimodal. The question is whether the various sources of information involved in this multimodal stimulus are processed separately by the perceptual system or not.
For the last few decades, perceptual research has pointed to the importance of multimodal perception: the effects on the perception of events and objects in the world that are observed when there is information from more than one sensory modality. Most of this research indicates that, at some point in perceptual processing, information from the various sensory modalities is integrated. In other words, the information is combined and treated as a unitary representation of the world.
Questions About Multimodal Perception
Several theoretical problems are raised by multimodal perception. After all, the world is a “blooming, buzzing confusion” that constantly bombards our perceptual system with light, sound, heat, pressure, and so forth. To make matters more complicated, these stimuli come from multiple events spread out over both space and time. To return to our example: Let’s say the car crash you observed happened on Main Street in your town. Your perception during the car crash might include a lot of stimulation that was not relevant to the car crash. For example, you might also overhear the conversation of a nearby couple, see a bird flying into a tree, or smell the delicious scent of freshly baked bread from a nearby bakery (or all three!). However, you would most likely not make the mistake of associating any of these stimuli with the car crash. In fact, we rarely combine the auditory stimuli associated with one event with the visual stimuli associated with another (although, under some unique circumstances—such as ventriloquism—we do). How is the brain able to take the information from separate sensory modalities and match it appropriately, so that stimuli that belong together stay together, while stimuli that do not belong together get treated separately? In other words, how does the perceptual system determine which unimodal stimuli must be integrated, and which must not?
Once unimodal stimuli have been appropriately integrated, we can further ask about the consequences of this integration: What are the effects of multimodal perception that would not be present if perceptual processing were only unimodal? Perhaps the most robust finding in the study of multimodal perception concerns this last question. No matter whether you are looking at the actions of neurons or the behavior of individuals, it has been found that responses to multimodal stimuli are typically greater than the combined response to either modality independently. In other words, if you presented the stimulus in one modality at a time and measured the response to each of these unimodal stimuli, you would find that adding them together would still not equal the response to the multimodal stimulus. This superadditive effect of multisensory integration indicates that there are consequences resulting from the integrated processing of multimodal stimuli.
The extent of the superadditive effect (sometimes referred to as multisensory enhancement) is determined by the strength of the response to the single stimulus modality with the biggest effect. To understand this concept, imagine someone speaking to you in a noisy environment (such as a crowded party). When discussing this type of multimodal stimulus, it is often useful to describe it in terms of its unimodal components: In this case, there is an auditory component (the sounds generated by the speech of the person speaking to you) and a visual component (the visual form of the face movements as the person speaks to you). In the crowded party, the auditory component of the person’s speech might be difficult to process (because of the surrounding party noise). The potential for visual information about speech—lipreading—to help in understanding the speaker’s message is, in this situation, quite large. However, if you were listening to that same person speak in a quiet library, the auditory portion would probably be sufficient for receiving the message, and the visual portion would help very little, if at all (Sumby & Pollack, 1954). In general, for a stimulus with multimodal components, if the response to each component (on its own) is weak, then the opportunity for multisensory enhancement is very large. However, if one component—by itself—is sufficient to evoke a strong response, then the opportunity for multisensory enhancement is relatively small. This finding is called the Principle of Inverse Effectiveness (Stein & Meredith, 1993) because the effectiveness of multisensory enhancement is inversely related to the unimodal response with the greatest effect.
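The logic of superadditivity and inverse effectiveness can be made concrete with a small numerical sketch. The Python snippet below computes a commonly used multisensory enhancement index: the percent increase of the multimodal response over the strongest unimodal response. The firing rates are hypothetical values chosen only to illustrate the principle, not data from any study.

```python
def enhancement_index(multimodal, unimodal_responses):
    """Multisensory enhancement: percent change of the multimodal response
    relative to the strongest unimodal response. Positive values above 0
    indicate enhancement; values above the summed unimodal responses
    indicate superadditivity."""
    best_unimodal = max(unimodal_responses)
    return 100.0 * (multimodal - best_unimodal) / best_unimodal

# Hypothetical firing rates (spikes/s), for illustration only.
# Weak unimodal responses leave large room for enhancement:
weak = enhancement_index(multimodal=20.0, unimodal_responses=[4.0, 5.0])    # 300.0
# A strong unimodal response leaves little room for enhancement:
strong = enhancement_index(multimodal=55.0, unimodal_responses=[30.0, 50.0])  # about 10
```

In the first case the multimodal response (20) exceeds even the sum of the unimodal responses (9), the superadditive pattern described above; in the second, the enhancement is small because one modality alone already drives a strong response, consistent with the Principle of Inverse Effectiveness.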
Another important theoretical question about multimodal perception concerns the neurobiology that supports it. After all, at some point, the information from each sensory modality is definitely separated (e.g., light comes in through the eyes, and sound comes in through the ears). How does the brain take information from different neural systems (optic, auditory, etc.) and combine it? If our experience of the world is multimodal, then it must be the case that at some point during perceptual processing, the unimodal information coming from separate sensory organs—such as the eyes, ears, skin—is combined. A related question asks where in the brain this integration takes place. We turn to these questions in the next section.
Biological Bases of Multimodal Perception
Multisensory Neurons and Neural Convergence
A surprisingly large number of brain regions in the midbrain and cerebral cortex are related to multimodal perception. These regions contain neurons that respond to stimuli from not just one, but multiple sensory modalities. For example, a region called the superior temporal sulcus contains single neurons that respond to both the visual and auditory components of speech (Calvert, 2001; Calvert, Hansen, Iversen, & Brammer, 2001). These multisensory convergence zones are interesting, because they are a kind of neural intersection of information coming from the different senses. That is, neurons that are devoted to the processing of one sense at a time—say vision or touch—send their information to the convergence zones, where it is processed together.
One of the most closely studied multisensory convergence zones is the superior colliculus (Stein & Meredith, 1993), which receives inputs from many different areas of the brain, including regions involved in the unimodal processing of visual and auditory stimuli (Edwards, Ginsburgh, Henkel, & Stein, 1979). Interestingly, the superior colliculus is involved in the “orienting response,” which is the behavior associated with moving one’s eye gaze toward the location of a seen or heard stimulus. Given this function for the superior colliculus, it is hardly surprising that there are multisensory neurons found there (Stein & Stanford, 2008).
Crossmodal Receptive Fields
The details of the anatomy and function of multisensory neurons help to answer the question of how the brain integrates stimuli appropriately. In order to understand the details, we need to discuss a neuron’s receptive field. All over the brain, neurons can be found that respond only to stimuli presented in a very specific region of the space immediately surrounding the perceiver. That region is called the neuron’s receptive field. If a stimulus is presented in a neuron’s receptive field, then that neuron responds by increasing or decreasing its firing rate. If a stimulus is presented outside of a neuron’s receptive field, then there is no effect on the neuron’s firing rate. Importantly, when two neurons send their information to a third neuron, the third neuron’s receptive field is the combination of the receptive fields of the two input neurons. This is called neural convergence, because the information from multiple neurons converges on a single neuron. In the case of multisensory neurons, the convergence arrives from different sensory modalities. Thus, the receptive fields of multisensory neurons are the combination of the receptive fields of neurons located in different sensory pathways.
Now, it could be the case that the neural convergence that results in multisensory neurons is set up in a way that ignores the locations of the input neurons’ receptive fields. Amazingly, however, these crossmodal receptive fields overlap. For example, a multisensory neuron in the superior colliculus might receive input from two unimodal neurons: one with a visual receptive field and one with an auditory receptive field. It has been found that the unimodal receptive fields refer to the same locations in space—that is, the two unimodal neurons respond to stimuli in the same region of space. Crucially, the overlap in the crossmodal receptive fields plays a vital role in the integration of crossmodal stimuli. When the information from the separate modalities is coming from within these overlapping receptive fields, then it is treated as having come from the same location—and the neuron responds with a superadditive (enhanced) response. So, part of the information that is used by the brain to combine multimodal inputs is the location in space from which the stimuli came.
This pattern is common across many multisensory neurons in multiple regions of the brain. Because of this, researchers have defined the spatial principle of multisensory integration: Multisensory enhancement is observed when the sources of stimulation are spatially related to one another. A related phenomenon concerns the timing of crossmodal stimuli. Enhancement effects are observed in multisensory neurons only when the inputs from different senses arrive within a short time of one another (e.g., Recanzone, 2003).
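The spatial and temporal principles above can be sketched as a toy decision rule: a model multisensory neuron binds a visual and an auditory input, and responds superadditively, only when the two stimuli fall within overlapping receptive fields (close in space) and arrive close together in time. The windows, rates, and the 1.5x enhancement factor below are hypothetical, chosen for illustration only.

```python
def integrates(vis_loc, aud_loc, vis_time, aud_time,
               spatial_window=10.0, temporal_window=0.1):
    """Toy binding rule: treat the two inputs as one event only if they
    are close in space (degrees) and in time (seconds)."""
    close_in_space = abs(vis_loc - aud_loc) <= spatial_window
    close_in_time = abs(vis_time - aud_time) <= temporal_window
    return close_in_space and close_in_time

def response(v, a, vis_loc, aud_loc, vis_time, aud_time):
    """Superadditive (enhanced) response when the inputs are bound;
    otherwise the neuron responds only to the stronger input."""
    if integrates(vis_loc, aud_loc, vis_time, aud_time):
        return 1.5 * (v + a)  # greater than the sum of the parts
    return max(v, a)

# Nearby in space and time -> bound, enhanced response (13.5 spikes/s)
r1 = response(5.0, 4.0, vis_loc=0.0, aud_loc=3.0, vis_time=0.00, aud_time=0.05)
# Far apart in space -> treated as separate events, no enhancement (5.0)
r2 = response(5.0, 4.0, vis_loc=0.0, aud_loc=40.0, vis_time=0.00, aud_time=0.05)
```

The rule captures the key observation: stimulus location and timing, not modality alone, determine whether the brain combines the inputs.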
Multimodal Processing in Unimodal Cortex
Multisensory neurons have also been observed outside of multisensory convergence zones, in areas of the brain that were once thought to be dedicated to the processing of a single modality (unimodal cortex). For example, the primary visual cortex was long thought to be devoted to the processing of exclusively visual information. The primary visual cortex is the first stop in the cortex for information arriving from the eyes, so it processes very low-level information like edges. Interestingly, neurons have been found in the primary visual cortex that receive information from the primary auditory cortex (where sound information from the auditory pathway is processed) and from the superior temporal sulcus (a multisensory convergence zone mentioned above). This is remarkable because it indicates that the processing of visual information is, from a very early stage, influenced by auditory information.
There may be two ways for these multimodal interactions to occur. First, it could be that the processing of auditory information in relatively late stages of processing feeds back to influence low-level processing of visual information in unimodal cortex (McDonald, Teder-Sälejärvi, Russo, & Hillyard, 2003). Alternatively, it may be that areas of unimodal cortex contact each other directly (Driver & Noesselt, 2008; Macaluso & Driver, 2005), such that multimodal integration is a fundamental component of all sensory processing.
In fact, the large number of multisensory neurons distributed all around the cortex—in multisensory convergence areas and in primary cortices—has led some researchers to propose that a drastic reconceptualization of the brain is necessary (Ghazanfar & Schroeder, 2006). They argue that the cortex should not be considered as being divided into isolated regions that process only one kind of sensory information. Rather, they propose that these areas only prefer to process information from specific modalities but engage in low-level multisensory processing whenever it is beneficial to the perceiver (Vasconcelos et al., 2011).
Behavioral Effects of Multimodal Perception
Although neuroscientists tend to study very simple interactions between neurons, the fact that they’ve found so many crossmodal areas of the cortex seems to hint that the way we experience the world is fundamentally multimodal. As discussed above, our intuitions about perception are consistent with this; it does not seem as though our perception of events is constrained to the perception of each sensory modality independently. Rather, we perceive a unified world, regardless of the sensory modality through which we perceive it.
It will probably require many more years of research before neuroscientists uncover all the details of the neural machinery involved in this unified experience. In the meantime, experimental psychologists have contributed to our understanding of multimodal perception through investigations of the behavioral effects associated with it. These effects fall into two broad classes. The first class—multimodal phenomena—concerns the binding of inputs from multiple sensory modalities and the effects of this binding on perception. The second class—crossmodal phenomena—concerns the influence of one sensory modality on the perception of another (Spence, Senkowski, & Roder, 2009).
Multimodal Phenomena
Audiovisual Speech
Multimodal phenomena concern stimuli that generate simultaneous (or nearly simultaneous) information in more than one sensory modality. As discussed above, speech is a classic example of this kind of stimulus. When an individual speaks, she generates sound waves that carry meaningful information. If the perceiver is also looking at the speaker, then that perceiver also has access to visual patterns that carry meaningful information. Of course, as anyone who has ever tried to lipread knows, there are limits on how informative visual speech information is. Even so, the visual speech pattern alone is sufficient for very robust speech perception. Most people assume that deaf individuals are much better at lipreading than individuals with normal hearing. It may come as a surprise to learn, however, that some individuals with normal hearing are also remarkably good at lipreading (sometimes called “speechreading”). In fact, there is a wide range of speechreading ability in both normal hearing and deaf populations (Andersson, Lyxell, Rönnberg, & Spens, 2001). However, the reasons for this wide range of performance are not well understood (Auer & Bernstein, 2007; Bernstein, 2006; Bernstein, Auer, & Tucker, 2001; Mohammed et al., 2005).
How does visual information about speech interact with auditory information about speech? One of the earliest investigations of this question examined the accuracy of recognizing spoken words presented in a noisy context, much like in the example above about talking at a crowded party. To study this phenomenon experimentally, some irrelevant noise (“white noise”—which sounds like a radio tuned between stations) was presented to participants. Embedded in the white noise were spoken words, and the participants’ task was to identify the words. There were two conditions: one in which only the auditory component of the words was presented (the “auditory-alone” condition), and one in which both the auditory and visual components were presented (the “audiovisual” condition). The noise levels were also varied, so that on some trials, the noise was very loud relative to the loudness of the words, and on other trials, the noise was very soft relative to the words. Sumby and Pollack (1954) found that the accuracy of identifying the spoken words was much higher for the audiovisual condition than it was in the auditory-alone condition. In addition, the pattern of results was consistent with the Principle of Inverse Effectiveness: The advantage gained by audiovisual presentation was highest when the auditory-alone condition performance was lowest (i.e., when the noise was loudest). At these noise levels, the audiovisual advantage was considerable: It was estimated that allowing the participant to see the speaker was equivalent to turning the volume of the noise down by over half. Clearly, the audiovisual advantage can have dramatic effects on behavior.
Another phenomenon using audiovisual speech is a very famous illusion called the “McGurk effect” (named after one of its discoverers). In the classic formulation of the illusion, a movie is recorded of a speaker saying the syllables “gaga.” Another movie is made of the same speaker saying the syllables “baba.” Then, the auditory portion of the “baba” movie is dubbed onto the visual portion of the “gaga” movie. This combined stimulus is presented to participants, who are asked to report what the speaker in the movie said. McGurk and MacDonald (1976) reported that 98 percent of their participants reported hearing the syllable “dada”—which was in neither the visual nor the auditory components of the stimulus. These results indicate that when visual and auditory information about speech is integrated, it can have profound effects on perception.
Tactile/Visual Interactions in Body Ownership
Not all multisensory integration phenomena concern speech, however. One particularly compelling multisensory illusion involves the integration of tactile and visual information in the perception of body ownership. In the “rubber hand illusion” (Botvinick & Cohen, 1998), an observer is situated so that one of his hands is not visible. A fake rubber hand is placed near the obscured hand, but in a visible location. The experimenter then uses a light paintbrush to simultaneously stroke the obscured hand and the rubber hand in the same locations. For example, if the middle finger of the obscured hand is being brushed, then the middle finger of the rubber hand will also be brushed. This sets up a correspondence between the tactile sensations (coming from the obscured hand) and the visual sensations (of the rubber hand). After a short time (around 10 minutes), participants report feeling as though the rubber hand “belongs” to them; that is, that the rubber hand is a part of their body. This feeling can be so strong that surprising the participant by hitting the rubber hand with a hammer often leads to a reflexive withdrawing of the obscured hand—even though it is in no danger at all. It appears, then, that our awareness of our own bodies may be the result of multisensory integration.
Crossmodal Phenomena
Crossmodal phenomena are distinguished from multimodal phenomena in that they concern the influence one sensory modality has on the perception of another.
Visual Influence on Auditory Localization
A famous (and commonly experienced) crossmodal illusion is referred to as “the ventriloquism effect.” When a ventriloquist appears to make a puppet speak, she fools the listener into thinking that the location of the origin of the speech sounds is at the puppet’s mouth. In other words, instead of localizing the auditory signal (coming from the mouth of a ventriloquist) to the correct place, our perceptual system localizes it incorrectly (to the mouth of the puppet).
Why might this happen? Consider the information available to the observer about the location of the two components of the stimulus: the sounds from the ventriloquist’s mouth and the visual movement of the puppet’s mouth. Whereas it is very obvious where the visual stimulus is coming from (because you can see it), it is much more difficult to pinpoint the location of the sounds. In other words, the very precise visual location of mouth movement apparently overrides the less well-specified location of the auditory information. More generally, it has been found that the location of a wide variety of auditory stimuli can be affected by the simultaneous presentation of a visual stimulus (Vroomen & De Gelder, 2004). In addition, the ventriloquism effect has been demonstrated for objects in motion: The motion of a visual object can influence the perceived direction of motion of a moving sound source (Soto-Faraco, Kingstone, & Spence, 2003).
Auditory Influence on Visual Perception
A related illusion demonstrates the opposite effect: sounds influencing visual perception. In the double flash illusion, a participant is asked to stare at a central point on a computer monitor. On the extreme edge of the participant’s vision, a white circle is briefly flashed once. There is also a simultaneous auditory event: either one beep or two beeps in rapid succession. Remarkably, participants report seeing two visual flashes when the flash is accompanied by two beeps; the same stimulus is seen as a single flash in the context of a single beep or no beep (Shams, Kamitani, & Shimojo, 2000). In other words, the number of heard beeps influences the number of seen flashes!
Another illusion involves the perception of collisions between two circles (called “balls”) moving toward each other and continuing through each other. Such stimuli can be perceived as either two balls moving through each other or as a collision between the two balls that then bounce off each other in opposite directions. Sekuler, Sekuler, and Lau (1997) showed that the presentation of an auditory stimulus at the time of contact between the two balls strongly influenced the perception of a collision event. In this case, the perceived sound influences the interpretation of the ambiguous visual stimulus.
Crossmodal Speech
Several crossmodal phenomena have also been discovered for speech stimuli. These crossmodal speech effects usually show altered perceptual processing of unimodal stimuli (e.g., acoustic patterns) by virtue of prior experience with the alternate unimodal stimulus (e.g., optical patterns). For example, Rosenblum, Miller, and Sanchez (2007) conducted an experiment examining the ability to become familiar with a person’s voice. Their first interesting finding was unimodal: Much like what happens when someone repeatedly hears a person speak, perceivers can become familiar with the “visual voice” of a speaker. That is, they can become familiar with the person’s speaking style simply by seeing that person speak. Even more astounding was their crossmodal finding: Familiarity with this visual information also led to increased recognition of the speaker’s auditory speech, to which participants had never had exposure.
Similarly, it has been shown that when perceivers see a speaking face, they can identify the (auditory-alone) voice of that speaker, and vice versa (Kamachi, Hill, Lander, & Vatikiotis-Bateson, 2003; Lachs & Pisoni, 2004a, 2004b, 2004c; Rosenblum, Smith, Nichols, Lee, & Hale, 2006). In other words, the visual form of a speaker engaged in the act of speaking appears to contain information about what that speaker should sound like. Perhaps more surprisingly, the auditory form of speech seems to contain information about what the speaker should look like.
Conclusion
In this module, we have reviewed some of the main evidence and findings concerning the role of multimodal perception in our experience of the world. It appears that our nervous system (and the cortex in particular) contains considerable architecture for the processing of information arriving from multiple senses. Given this neurobiological setup, and the diversity of behavioral phenomena associated with multimodal stimuli, it is likely that the investigation of multimodal perception will continue to be a topic of interest in the field of experimental perception for many years to come.
Outside Resources
Article: A review of the neuroanatomy and methods associated with multimodal perception:
http://dx.doi.org/10.1016/j.neubiorev.2011.04.015
Journal: Experimental Brain Research Special issue: Crossmodal processing
www.springerlink.com/content/0014-4819/198/2-3
TED Talk: Optical Illusions
http://www.ted.com/talks/beau_lotto_...how_how_we_see
Video: McGurk demo
Video: The Rubber Hand Illusion
Web: Double-flash illusion demo
http://www.cns.atr.jp/~kmtn/soundInd...llusoryFlash2/
Discussion Questions
1. The extensive network of multisensory areas and neurons in the cortex implies that much perceptual processing occurs in the context of multiple inputs. Could the processing of unimodal information ever be useful? Why or why not?
2. Some researchers have argued that the Principle of Inverse Effectiveness (PoIE) results from ceiling effects: Multisensory enhancement cannot take place when one modality is sufficient for processing because in such cases it is not possible for processing to be enhanced (because performance is already at the “ceiling”). On the other hand, other researchers claim that the PoIE stems from the perceptual system’s ability to assess the relative value of stimulus cues, and to use the most reliable sources of information to construct a representation of the outside world. What do you think? Could these two possibilities ever be teased apart? What kinds of experiments might one conduct to try to get at this issue?
3. In the late 17th century, a scientist named William Molyneux asked the famous philosopher John Locke a question relevant to modern studies of multisensory processing. The question was this: Imagine a person who has been blind since birth, and who is able, by virtue of the sense of touch, to identify three dimensional shapes such as spheres or pyramids. Now imagine that this person suddenly receives the ability to see. Would the person, without using the sense of touch, be able to identify those same shapes visually? Can modern research in multimodal perception help answer this question? Why or why not? How do the studies about crossmodal phenomena inform us about the answer to this question?
Vocabulary
Bouncing balls illusion
The tendency to perceive two circles as bouncing off each other if the moment of their contact is accompanied by an auditory stimulus.
Crossmodal phenomena
Effects that concern the influence of the perception of one sensory modality on the perception of another.
Crossmodal receptive field
A receptive field that can be stimulated by a stimulus from more than one sensory modality.
Crossmodal stimulus
A stimulus with components in multiple sensory modalities that interact with each other.
Double flash illusion
The false perception of two visual flashes when a single flash is accompanied by two auditory beeps.
Integrated
The process by which the perceptual system combines information arising from more than one modality.
McGurk effect
An effect in which conflicting visual and auditory components of a speech stimulus result in an illusory percept.
Multimodal
Of or pertaining to multiple sensory modalities.
Multimodal perception
The effects that concurrent stimulation in more than one sensory modality has on the perception of events and objects in the world.
Multimodal phenomena
Effects that concern the binding of inputs from multiple sensory modalities.
Multisensory convergence zones
Regions in the brain that receive input from multiple unimodal areas processing different sensory modalities.
Multisensory enhancement
See “superadditive effect of multisensory integration.”
Primary auditory cortex
A region of the cortex devoted to the processing of simple auditory information.
Primary visual cortex
A region of the cortex devoted to the processing of simple visual information.
Principle of Inverse Effectiveness
The finding that, in general, for a multimodal stimulus, if the response to each unimodal component (on its own) is weak, then the opportunity for multisensory enhancement is very large. However, if one component—by itself—is sufficient to evoke a strong response, then the effect on the response gained by simultaneously processing the other components of the stimulus will be relatively small.
Receptive field
The portion of the world to which a neuron will respond if an appropriate stimulus is present there.
Rubber hand illusion
The false perception of a fake hand as belonging to a perceiver, due to multimodal sensory information.
Sensory modalities
A type of sense; for example, vision or audition.
Spatial principle of multisensory integration
The finding that the superadditive effects of multisensory integration are observed when the sources of stimulation are spatially related to one another.
Superadditive effect of multisensory integration
The finding that responses to multimodal stimuli are typically greater than the sum of the independent responses to each unimodal component if it were presented on its own.
Unimodal
Of or pertaining to a single sensory modality.
Unimodal components
The parts of a stimulus relevant to one sensory modality at a time.
Unimodal cortex
A region of the brain devoted to the processing of information from a single sensory modality.
By Cara Laney and Elizabeth F. Loftus
Reed College, University of California, Irvine
Eyewitnesses can provide very compelling legal testimony, but rather than recording experiences flawlessly, their memories are susceptible to a variety of errors and biases. They (like the rest of us) can make errors in remembering specific details and can even remember whole events that did not actually happen. In this module, we discuss several of the common types of errors, and what they can tell us about human memory and its interactions with the legal system.
Learning Objectives
• Describe the kinds of mistakes that eyewitnesses commonly make and some of the ways that this can impede justice.
• Explain some of the errors that are common in human memory.
• Describe some of the important research that has demonstrated human memory errors and their consequences.
What Is Eyewitness Testimony?
Eyewitness testimony is what happens when a person witnesses a crime (or accident, or other legally important event) and later gets up on the stand and recalls for the court all the details of the witnessed event. It involves a more complicated process than might initially be presumed. It includes what happens during the actual crime to facilitate or hamper witnessing, as well as everything that happens from the time the event is over to the later courtroom appearance. The eyewitness may be interviewed by the police and numerous lawyers, describe the perpetrator to several different people, and make an identification of the perpetrator, among other things.
Why Is Eyewitness Testimony an Important Area of Psychological Research?
When an eyewitness stands up in front of the court and describes what happened from her own perspective, this testimony can be extremely compelling—it is hard for those hearing this testimony to take it “with a grain of salt,” or otherwise adjust its power. But to what extent is this necessary?
There is now a wealth of evidence, from research conducted over several decades, suggesting that eyewitness testimony is probably the most persuasive form of evidence presented in court, but in many cases, its accuracy is dubious. There is also evidence that mistaken eyewitness evidence can lead to wrongful conviction—sending people to prison for years or decades, even to death row, for crimes they did not commit. Faulty eyewitness testimony has been implicated in at least 75% of DNA exoneration cases—more than any other cause (Garrett, 2011). In a particularly famous case, a man named Ronald Cotton was identified by a rape victim, Jennifer Thompson, as her rapist, and was found guilty and sentenced to life in prison. After more than 10 years, he was exonerated (and the real rapist identified) based on DNA evidence. For details on this case and other (relatively) lucky individuals whose false convictions were subsequently overturned with DNA evidence, see the Innocence Project website (http://www.innocenceproject.org/).
There is also hope, though, that many of the errors may be avoidable if proper precautions are taken during the investigative and judicial processes. Psychological science has taught us what some of those precautions might involve, and we discuss some of that science now.
Misinformation
In an early study of eyewitness memory, undergraduate subjects first watched a slideshow depicting a small red car driving and then hitting a pedestrian (Loftus, Miller, & Burns, 1978). Some subjects were then asked leading questions about what had happened in the slides. For example, subjects were asked, “How fast was the car traveling when it passed the yield sign?” But this question was actually designed to be misleading, because the original slide included a stop sign rather than a yield sign.
Later, subjects were shown pairs of slides. One of the pair was the original slide containing the stop sign; the other was a replacement slide containing a yield sign. Subjects were asked which of the pair they had previously seen. Subjects who had been asked about the yield sign were likely to pick the slide showing the yield sign, even though they had originally seen the slide with the stop sign. In other words, the misinformation in the leading question led to inaccurate memory.
This phenomenon is called the misinformation effect, because the misinformation that subjects were exposed to after the event (here in the form of a misleading question) apparently contaminates subjects’ memories of what they witnessed. Hundreds of subsequent studies have demonstrated that memory can be contaminated by erroneous information that people are exposed to after they witness an event (see Frenda, Nichols, & Loftus, 2011; Loftus, 2005). The misinformation in these studies has led people to incorrectly remember everything from small but crucial details of a perpetrator’s appearance to objects as large as a barn that wasn’t there at all.
These studies have demonstrated that young adults (the typical research subjects in psychology) are often susceptible to misinformation, but that children and older adults can be even more susceptible (Bartlett & Memon, 2007; Ceci & Bruck, 1995). In addition, misinformation effects can occur easily, and without any intention to deceive (Allan & Gabbert, 2008). Even slight differences in the wording of a question can lead to misinformation effects. Subjects in one study were more likely to say yes when asked “Did you see the broken headlight?” than when asked “Did you see a broken headlight?” (Loftus, 1975).
Other studies have shown that misinformation can corrupt memory even more easily when it is encountered in social situations (Gabbert, Memon, Allan, & Wright, 2004). This is a problem particularly in cases where more than one person witnesses a crime. In these cases, witnesses tend to talk to one another in the immediate aftermath of the crime, including as they wait for police to arrive. But because different witnesses are different people with different perspectives, they are likely to see or notice different things, and thus remember different things, even when they witness the same event. So when they communicate about the crime later, they not only reinforce common memories for the event, they also contaminate each other’s memories for the event (Gabbert, Memon, & Allan, 2003; Paterson & Kemp, 2006; Takarangi, Parker, & Garry, 2006).
The misinformation effect has been modeled in the laboratory. Researchers had subjects watch a video in pairs. Both subjects sat in front of the same screen, but because they wore differently polarized glasses, they saw two different versions of a video, projected onto a screen. So, although they were both watching the same screen, and believed (quite reasonably) that they were watching the same video, they were actually watching two different versions of the video (Garry, French, Kinzett, & Mori, 2008).
In the video, Eric the electrician is seen wandering through an unoccupied house and helping himself to the contents thereof. A total of eight details were different between the two videos. After watching the videos, the “co-witnesses” worked together on 12 memory test questions. Four of these questions dealt with details that were different in the two versions of the video, so subjects had the chance to influence one another. Then subjects worked individually on 20 additional memory test questions. Eight of these were for details that were different in the two videos. Subjects’ accuracy was highly dependent on whether they had discussed the details previously. Their accuracy for items they had not previously discussed with their co-witness was 79%. But for items that they had discussed, their accuracy dropped markedly, to 34%. That is, subjects allowed their co-witnesses to corrupt their memories for what they had seen.
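The accuracy comparison in this co-witness design can be expressed as a simple tabulation. The toy responses below are invented; only the direction of the effect (discussed details remembered less accurately than undiscussed ones) follows the study.

```python
# Each entry: (discussed_with_cowitness, answered_correctly).
# These toy data are invented for illustration.
trials = [
    (True, False), (True, False), (True, True),    # discussed details
    (False, True), (False, True), (False, False),  # undiscussed details
]

def accuracy(trials, discussed):
    """Proportion correct among trials matching the discussed flag."""
    outcomes = [correct for d, correct in trials if d == discussed]
    return sum(outcomes) / len(outcomes)

# Discussed details suffer, mirroring the 34% vs. 79% pattern above.
assert accuracy(trials, discussed=True) < accuracy(trials, discussed=False)
```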
Identifying Perpetrators
In addition to correctly remembering many details of the crimes they witness, eyewitnesses often need to remember the faces and other identifying features of the perpetrators of those crimes. Eyewitnesses are often asked to describe that perpetrator to law enforcement and later to make identifications from books of mug shots or lineups. Here, too, there is a substantial body of research demonstrating that eyewitnesses can make serious, but often understandable and even predictable, errors (Caputo & Dunning, 2007; Cutler & Penrod, 1995).
In most jurisdictions in the United States, lineups are typically conducted with pictures, called photo spreads, rather than with actual people standing behind one-way glass (Wells, Memon, & Penrod, 2006). The eyewitness is given a set of small pictures of perhaps six or eight individuals who are dressed similarly and photographed in similar circumstances. One of these individuals is the police suspect, and the remainder are “foils” or “fillers” (people known to be innocent of the particular crime under investigation). If the eyewitness identifies the suspect, then the investigation of that suspect is likely to progress. If a witness identifies a foil or no one, then the police may choose to move their investigation in another direction.
This process is modeled in laboratory studies of eyewitness identifications. In these studies, research subjects witness a mock crime (often as a short video) and then are asked to make an identification from a photo or a live lineup. Sometimes the lineups are target present, meaning that the perpetrator from the mock crime is actually in the lineup, and sometimes they are target absent, meaning that the lineup is made up entirely of foils. The subjects, or mock witnesses, are given some instructions and asked to pick the perpetrator out of the lineup. The particular details of the witnessing experience, the instructions, and the lineup members can all influence the extent to which the mock witness is likely to pick the perpetrator out of the lineup, or indeed to make any selection at all. Mock witnesses (and indeed real witnesses) can make errors in two different ways. They can fail to pick the perpetrator out of a target present lineup (by picking a foil or by neglecting to make a selection), or they can pick a foil in a target absent lineup (wherein the only correct choice is to not make a selection).
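The two ways mock witnesses can err map onto a small outcome taxonomy, which the sketch below encodes. The labels ("hit," "foil pick," and so on) and the choice values are ours, chosen for illustration rather than taken from the studies cited.

```python
def lineup_outcome(target_present, choice):
    """Classify one lineup response.

    choice is "suspect", "foil", or None (no selection); target_present
    says whether the perpetrator actually appears in the lineup.
    """
    if target_present:
        if choice == "suspect":
            return "hit"               # picked the actual perpetrator
        if choice is None:
            return "miss"              # failed to make a selection
        return "foil pick"             # picked an innocent filler
    if choice is None:
        return "correct rejection"     # the only correct response here
    return "false identification"      # chose from an all-foil lineup

assert lineup_outcome(True, "suspect") == "hit"
assert lineup_outcome(False, "foil") == "false identification"
```

Tallying these outcomes across many mock witnesses is how laboratory studies estimate the two error rates the text describes.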
Some factors have been shown to make eyewitness identification errors particularly likely. These include poor vision or viewing conditions during the crime, particularly stressful witnessing experiences, too little time to view the perpetrator or perpetrators, too much delay between witnessing and identifying, and being asked to identify a perpetrator from a race other than one’s own (Bornstein, Deffenbacher, Penrod, & McGorty, 2012; Brigham, Bennett, Meissner, & Mitchell, 2007; Burton, Wilson, Cowan, & Bruce, 1999; Deffenbacher, Bornstein, Penrod, & McGorty, 2004).
It is hard for the legal system to do much about most of these problems. But there are some things that the justice system can do to help lineup identifications “go right.” For example, investigators can put together high-quality, fair lineups. A fair lineup is one in which the suspect and each of the foils is equally likely to be chosen by someone who has read an eyewitness description of the perpetrator but who did not actually witness the crime (Brigham, Ready, & Spier, 1990). This means that no one in the lineup should “stick out,” and that everyone should match the description given by the eyewitness. Other important recommendations that have come out of this research include better ways to conduct lineups, “double blind” lineups, unbiased instructions for witnesses, and conducting lineups in a sequential fashion (see Technical Working Group for Eyewitness Evidence, 1999; Wells et al., 1998; Wells & Olson, 2003).
Kinds of Memory Biases
Memory is also susceptible to a wide variety of other biases and errors. People can forget events that happened to them and people they once knew. They can mix up details across time and place. They can even remember whole complex events that never happened at all. Importantly, these errors, once made, can be very hard to unmake. A memory is no less “memorable” just because it is wrong.
Some small memory errors are commonplace, and you have no doubt experienced many of them. You set down your keys without paying attention, and then cannot find them later when you go to look for them. You try to come up with a person’s name but cannot find it, even though you have the sense that it is right at the tip of your tongue (psychologists actually call this the tip-of-the-tongue effect, or TOT) (Brown, 1991).
Other sorts of memory biases are more complicated and longer lasting. For example, it turns out that our expectations and beliefs about how the world works can have huge influences on our memories. Because many aspects of our everyday lives are full of redundancies, our memory systems take advantage of the recurring patterns by forming and using schemata, or memory templates (Alba & Hasher, 1983; Brewer & Treyens, 1981). Thus, we know to expect that a library will have shelves and tables and librarians, and so we don’t have to spend energy noticing these at the time. The result of this lack of attention, however, is that one is likely to remember schema-consistent information (such as tables), and to remember them in a rather generic way, whether or not they were actually present.
False Memory
Some memory errors are so “large” that they almost belong in a class of their own: false memories. Back in the early 1990s a pattern emerged whereby people would go into therapy for depression and other everyday problems, but over the course of the therapy develop memories for violent and horrible victimhood (Loftus & Ketcham, 1994). These patients’ therapists claimed that the patients were recovering genuine memories of real childhood abuse, buried deep in their minds for years or even decades. But some experimental psychologists believed that the memories were instead likely to be false—created in therapy. These researchers then set out to see whether it would indeed be possible for wholly false memories to be created by procedures similar to those used in these patients’ therapy.
In early false memory studies, undergraduate subjects’ family members were recruited to provide events from the students’ lives. The student subjects were told that the researchers had talked to their family members and learned about four different events from their childhoods. The researchers asked if the now undergraduate students remembered each of these four events—introduced via short hints. The subjects were asked to write about each of the four events in a booklet and then were interviewed two separate times. The trick was that one of the events came from the researchers rather than the family (and the family had actually assured the researchers that this event had not happened to the subject). In the first such study, this researcher-introduced event was a story about being lost in a shopping mall and rescued by an older adult. In this study, after just being asked whether they remembered these events occurring on three separate occasions, a quarter of subjects came to believe that they had indeed been lost in the mall (Loftus & Pickrell, 1995). In subsequent studies, similar procedures were used to get subjects to believe that they nearly drowned and had been rescued by a lifeguard, or that they had spilled punch on the bride’s parents at a family wedding, or that they had been attacked by a vicious animal as a child, among other events (Heaps & Nash, 1999; Hyman, Husband, & Billings, 1995; Porter, Yuille, & Lehman, 1999).
More recent false memory studies have used a variety of different manipulations to produce false memories in substantial minorities and even occasional majorities of manipulated subjects (Braun, Ellis, & Loftus, 2002; Lindsay, Hagen, Read, Wade, & Garry, 2004; Mazzoni, Loftus, Seitz, & Lynn, 1999; Seamon, Philbin, & Harrison, 2006; Wade, Garry, Read, & Lindsay, 2002). For example, one group of researchers used a mock-advertising study, wherein subjects were asked to review (fake) advertisements for Disney vacations, to convince subjects that they had once met the character Bugs Bunny at Disneyland—an impossible false memory because Bugs is a Warner Brothers character (Braun et al., 2002). Another group of researchers photoshopped childhood photographs of their subjects into a hot air balloon picture and then asked the subjects to try to remember and describe their hot air balloon experience (Wade et al., 2002). Other researchers gave subjects unmanipulated class photographs from their childhoods along with a fake story about a class prank, and thus enhanced the likelihood that subjects would falsely remember the prank (Lindsay et al., 2004).
Using a false feedback manipulation, we have been able to persuade subjects to falsely remember having a variety of childhood experiences. In these studies, subjects are told (falsely) that a powerful computer system has analyzed questionnaires that they completed previously and has concluded that they had a particular experience years earlier. Subjects apparently believe what the computer says about them and adjust their memories to match this new information. A variety of different false memories have been implanted in this way. In some studies, subjects are told they once got sick on a particular food (Bernstein, Laney, Morris, & Loftus, 2005). These memories can then spill out into other aspects of subjects’ lives, such that they often become less interested in eating that food in the future (Bernstein & Loftus, 2009b). Other false memories implanted with this methodology include having an unpleasant experience with the character Pluto at Disneyland and witnessing physical violence between one’s parents (Berkowitz, Laney, Morris, Garry, & Loftus, 2008; Laney & Loftus, 2008).
Importantly, once these false memories are implanted—whether through complex methods or simple ones—it is extremely difficult to tell them apart from true memories (Bernstein & Loftus, 2009a; Laney & Loftus, 2008).
Conclusion
To conclude, eyewitness testimony is very powerful and convincing to jurors, even though it is not particularly reliable. Identification errors occur, and these errors can lead to people being falsely accused and even convicted. Likewise, eyewitness memory can be corrupted by leading questions, misinterpretations of events, conversations with co-witnesses, and their own expectations for what should have happened. People can even come to remember whole events that never occurred.
The problems with memory in the legal system are real. But what can we do to start to fix them? A number of specific recommendations have already been made, and many of these are in the process of being implemented (e.g., Steblay & Loftus, 2012; Technical Working Group for Eyewitness Evidence, 1999; Wells et al., 1998). Some of these recommendations are aimed at specific legal procedures, including when and how witnesses should be interviewed, and how lineups should be constructed and conducted. Other recommendations call for appropriate education (often in the form of expert witness testimony) to be provided to jury members and others tasked with assessing eyewitness memory. Eyewitness testimony can be of great value to the legal system, but decades of research now argue that this testimony is often given far more weight than its accuracy justifies.
Outside Resources
Video 1: Eureka Foong's - The Misinformation Effect. This is a student-made video illustrating this phenomenon of altered memory. It was one of the winning entries in the 2014 Noba Student Video Award.
Video 2: Ang Rui Xia & Ong Jun Hao's - The Misinformation Effect. Another student-made video exploring the misinformation effect. Also an award winner from 2014.
Discussion Questions
1. Imagine that you are a juror in a murder case where an eyewitness testifies. In what ways might your knowledge of memory errors affect your use of this testimony?
2. How true to life do you think television shows such as CSI or Law & Order are in their portrayals of eyewitnesses?
3. Many jurisdictions in the United States use “show-ups,” where an eyewitness is brought to a suspect (who may be standing on the street or in handcuffs in the back of a police car) and asked, “Is this the perpetrator?” Is this a good or bad idea, from a psychological perspective? Why?
Vocabulary
False memories
Memory for an event that never actually occurred, implanted by experimental manipulation or other means.
Foils
Any member of a lineup (whether live or photograph) other than the suspect.
Misinformation effect
A memory error caused by exposure to incorrect information between the original event (e.g., a crime) and later memory test (e.g., an interview, lineup, or day in court).
Mock witnesses
A research subject who plays the part of a witness in a study.
Photo spreads
A selection of normally small photographs of faces given to a witness for the purpose of identifying a perpetrator.
Schema (plural: schemata)
A memory template, created through repeated exposure to a particular class of objects or events.
• 4.1: Factors Influencing Learning
Learning is a complex process that defies easy definition and description. This module reviews some of the philosophical issues involved with defining learning and describes in some detail the characteristics of learners and of encoding activities that seem to affect how well people can acquire new memories, knowledge, or skills. At the end, we consider a few basic principles that guide whether a particular attempt at learning will be successful or not.
• 4.2: Memory (Encoding, Storage, Retrieval)
“Memory” is a single term that reflects a number of different abilities: holding information briefly while working with it (working memory), remembering episodes of one’s life (episodic memory), and our general knowledge of facts of the world (semantic memory), among other types. Remembering episodes involves three processes: encoding information (learning it, by perceiving it and relating it to past knowledge), storing it (maintaining it over time), and then retrieving it (accessing the information when needed).
• 4.3: Conditioning and Learning
Basic principles of learning are always operating and always influencing human behavior. This module discusses the two most fundamental forms of learning -- classical (Pavlovian) and instrumental (operant) conditioning. This module describes some of the most important things you need to know about classical and instrumental conditioning, and it illustrates some of the many ways they help us understand normal and disordered behavior in humans.
• 4.4: Forgetting and Amnesia
This module explores the causes of everyday forgetting and considers pathological forgetting in the context of amnesia. Forgetting is viewed as an adaptive process that allows us to be efficient in terms of the information we retain.
04: Learning and Memory
By Aaron Benjamin
University of Illinois at Urbana-Champaign
Learning is a complex process that defies easy definition and description. This module reviews some of the philosophical issues involved with defining learning and describes in some detail the characteristics of learners and of encoding activities that seem to affect how well people can acquire new memories, knowledge, or skills. At the end, we consider a few basic principles that guide whether a particular attempt at learning will be successful or not.
Learning Objectives
• Consider what kinds of activities constitute learning.
• Name multiple forms of learning.
• List some individual differences that affect learning.
• Describe the effect of various encoding activities on learning.
• Describe three general principles of learning.
Introduction
What do you do when studying for an exam? Do you read your class notes and textbook (hopefully not for the very first time)? Do you try to find a quiet place without distraction? Do you use flash cards to test your knowledge? The choices you make reveal your theory of learning, but there is no reason for you to limit yourself to your own intuitions. There is a vast and vibrant science of learning, in which researchers from psychology, education, and neuroscience study basic principles of learning and memory.
In fact, learning is a much broader domain than you might think. Consider: Is listening to music a form of learning? More often, it seems listening to music is a way of avoiding learning. But we know that your brain’s response to auditory information changes with your experience with that information, a form of learning called auditory perceptual learning (Polley, Steinberg, & Merzenich, 2006). Each time we listen to a song, we hear it differently because of our experience. When we exhibit changes in behavior without having intended to learn something, that is called implicit learning (Seger, 1994), and when we exhibit changes in our behavior that reveal the influence of past experience even though we are not attempting to use that experience, that is called implicit memory (Richardson-Klavehn & Bjork, 1988).
Other well-studied forms of learning include the types of learning that are general across species. We can’t ask a slug to learn a poem or a lemur to learn to bat left-handed, but we can assess learning in other ways. For example, we can look for a change in our responses to things when we are repeatedly stimulated. If you live in a house with a grandfather clock, you know that what was once an annoying and intrusive sound is now probably barely audible to you. Similarly, poking an earthworm again and again is likely to lead to a reduction in its retraction from your touch. These phenomena are forms of nonassociative learning, in which repeated exposure to a single stimulus leads to a change in behavior (Pinsker, Kupfermann, Castellucci, & Kandel, 1970). When our response lessens with exposure, it is called habituation, and when it increases (like it might with a particularly annoying laugh), it is called sensitization. Animals can also learn about relationships between things, such as when an alley cat learns that the sound of janitors working in a restaurant precedes the dumping of delicious new garbage (an example of stimulus-stimulus learning called classical conditioning), or when a dog learns to roll over to get a treat (a form of stimulus-response learning called operant conditioning). These forms of learning will be covered in the module on Conditioning and Learning (http://noba.to/ajxhcqdr).
Here, we’ll review some of the conditions that affect learning, with an eye toward the type of explicit learning we do when trying to learn something. Jenkins (1979) classified experiments on learning and memory into four groups of factors (renamed here): learners, encoding activities, materials, and retrieval. In this module, we’ll focus on the first two categories; the module on Memory (http://noba.to/bdc4uger) will consider other factors more generally.
Learners
People bring numerous individual differences with them into memory experiments, and many of these variables affect learning. In the classroom, motivation matters (Pintrich, 2003), though experimental attempts to induce motivation with money yield only modest benefits (Heyer & O’Kelly, 1949). Learners are, however, quite able to allocate more effort to prioritized materials than to unimportant ones (Castel, Benjamin, Craik, & Watkins, 2002).
In addition, the organization and planning skills that a learner exhibits matter a lot (Garavalia & Gredler, 2002), suggesting that the efficiency with which one organizes self-guided learning is an important component of learning. We will return to this topic soon.
One well-studied and important variable is working memory capacity. Working memory describes the form of memory we use to hold onto information temporarily. Working memory is used, for example, to keep track of where we are in the course of a complicated math problem, and what the relevant outcomes of prior steps in that problem are. Higher scores on working memory measures are predictive of better reasoning skills (Kyllonen & Christal, 1990), reading comprehension (Daneman & Carpenter, 1980), and even better control of attention (Kane, Conway, Hambrick, & Engle, 2008).
Anxiety also affects the quality of learning. For example, people with math anxiety have a smaller capacity for remembering math-related information in working memory, such as the results of carrying a digit in arithmetic (Ashcraft & Kirk, 2001). Having students write about their specific anxiety seems to reduce the worry associated with tests and increases performance on math tests (Ramirez & Beilock, 2011).
One good place to end this discussion is to consider the role of expertise. Though there is probably a finite limit on our ability to store information (Landauer, 1986), in practice, this concept is misleading. In fact, because the usual bottleneck to remembering something is our ability to access information, not our space to store it, having more knowledge or expertise actually enhances our ability to learn new information. A classic example can be seen in comparing a chess master with a chess novice on their ability to learn and remember the positions of pieces on a chessboard (Chase & Simon, 1973). In that experiment, the master remembered the location of many more pieces than the novice, even after only a very short glance. Maybe chess masters are just smarter than the average chess beginner, and have better memory? No: The expert’s advantage was apparent only when the pieces were arranged in a plausible format for an ongoing chess game; when the pieces were placed randomly, both groups did equivalently poorly. Expertise allowed the master to chunk (Simon, 1974) multiple pieces into a smaller number of pieces of information—but only when that information was structured in such a way as to allow the application of that expertise.
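The arithmetic behind chunking is easy to sketch in code. The snippet below is a hypothetical illustration, not part of Chase and Simon's study; the roughly seven-unit capacity figure comes from Miller (1956), cited later in this chapter. It shows how grouping digits into larger familiar units multiplies the raw material that a fixed number of working-memory slots can cover.

```python
# Illustrative sketch (not from the chapter): how chunking stretches a
# fixed working-memory capacity. We assume a capacity of about 7 units
# (Miller, 1956) and compare single digits with familiar 4-digit chunks
# (e.g., memorable years such as "1945").

CAPACITY = 7  # approximate number of units working memory can hold

def digits_held(chunk_size, capacity=CAPACITY):
    """Raw digits covered when each remembered unit spans `chunk_size` digits."""
    return capacity * chunk_size

# Treating each digit as its own unit: about 7 digits.
unchunked = digits_held(chunk_size=1)

# Treating each 4-digit group as one unit: about 28 digits.
chunked = digits_held(chunk_size=4)

print(unchunked, chunked)  # 7 28
```

A chess master's chunks work the same way: each remembered unit is a familiar configuration of several pieces rather than a single piece, so the same number of slots covers far more of the board.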
Encoding Activities
What we do when we’re learning is very important. We’ve all had the experience of reading something and suddenly coming to the realization that we don’t remember a single thing, even the sentence that we just read. How we go about encoding information determines a lot about how much we remember.
You might think that the most important thing is to try to learn. Interestingly, this is not true, at least not completely. Trying to learn a list of words, as compared to just evaluating each word for its part of speech (i.e., noun, verb, adjective), does help you recall the words—that is, it helps you remember and write down more of the words later. But it actually impairs your ability to recognize the words—to judge on a later list which words are the ones that you studied (Eagle & Leiter, 1964). So this is a case in which incidental learning—that is, learning without the intention to learn—is better than intentional learning.
Such examples are not particularly rare and are not limited to recognition. Nairne, Pandeirada, and Thompson (2008) showed, for example, that survival processing—thinking about and rating each word in a list for its relevance in a survival scenario—led to much higher recall than intentional learning (and also higher, in fact, than other encoding activities that are also known to lead to high levels of recall). Clearly, merely intending to learn something is not enough. How a learner actively processes the material plays a large role; for example, reading words and evaluating their meaning leads to better learning than reading them and evaluating the way that the words look or sound (Craik & Lockhart, 1972). These results suggest that individual differences in motivation will not have a large effect on learning unless learners also have accurate ideas about how to effectively learn material when they care to do so.
So, do learners know how to effectively encode material? People allowed to freely allocate their time to study a list of words do remember those words better than a group that doesn’t have control over their own study time, though the advantage is relatively small and is limited to the subset of learners who choose to spend more time on the more difficult material (Tullis & Benjamin, 2011). In addition, learners who have an opportunity to review materials that they select for restudy often learn more than another group that is asked to restudy the materials that they didn’t select for restudy (Kornell & Metcalfe, 2006). However, this advantage also appears to be relatively modest (Kimball, Smith, & Muntean, 2012) and wasn’t apparent in a group of older learners (Tullis & Benjamin, 2012). Taken together, all of the evidence seems to support the claim that self-control of learning can be effective, but only when learners have good ideas about what an effective learning strategy is.
One factor that appears to have a big effect and that learners do not always appear to understand is the effect of scheduling repetitions of study. If you are studying for a final exam next week and plan to spend a total of five hours, what is the best way to distribute your study? The evidence is clear that spacing one’s repetitions apart in time is superior to massing them all together (Baddeley & Longman, 1978; Bahrick, Bahrick, Bahrick, & Bahrick, 1993; Melton, 1967). Increasing the spacing between consecutive presentations appears to benefit learning yet further (Landauer & Bjork, 1978).
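The difference between massed and spaced schedules can be made concrete with a short sketch. This is an illustration of the scheduling idea only; the function names, the one-day starting gap, and the doubling factor are assumptions for the example, not parameters taken from the cited studies.

```python
# Illustrative sketch (not from the cited studies): review dates under
# massed vs. expanding-interval (spaced) schedules for a fact first
# studied on day 0.

def massed_schedule(n_reviews):
    """All reviews crammed into the first day."""
    return [0] * n_reviews

def expanding_schedule(n_reviews, first_gap=1, factor=2):
    """Each gap between successive reviews is `factor` times the previous gap."""
    days = []
    day, gap = 0, first_gap
    for _ in range(n_reviews):
        day += gap
        days.append(day)
        gap *= factor
    return days

print(massed_schedule(4))     # [0, 0, 0, 0]
print(expanding_schedule(4))  # [1, 3, 7, 15]
```

Under the expanding schedule, each review arrives after a longer delay than the last, which is one way to implement the increasing spacing between consecutive presentations that Landauer and Bjork (1978) found beneficial.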
A similar advantage is evident for the practice of interleaving multiple skills to be learned: For example, baseball batters improved more when they faced a mix of different types of pitches than when they faced the same pitches blocked by type (Hall, Domingues, & Cavazos, 1994). Students also showed better performance on a test when different types of mathematics problems were interleaved rather than blocked during learning (Taylor & Rohrer, 2010).
One final factor that merits discussion is the role of testing. Educators and students often think about testing as a way of assessing knowledge, and this is indeed an important use of tests. But tests themselves affect memory, because retrieval is one of the most powerful ways of enhancing learning (Roediger & Butler, 2013). Self-testing is an underutilized and potent means of making learning more durable.
General Principles of Learning
We’ve only begun to scratch the surface here of the many variables that affect the quality and content of learning (Mullin, Herrmann, & Searleman, 1993). But even within this brief examination of the differences between people and the activities they engage in, we can see some basic principles of the learning process.
The value of effective metacognition
To be able to guide our own learning effectively, we must be able to evaluate the progress of our learning accurately and choose activities that enhance learning efficiently. It is of little use to study for a long time if a student cannot discern between what material she has or has not mastered, and if additional study activities move her no closer to mastery. Metacognition describes the knowledge and skills people have in monitoring and controlling their own learning and memory. We can work to acquire better metacognition by paying attention to our successes and failures in estimating what we do and don’t know, and by using testing often to monitor our progress.
Transfer-appropriate processing
Sometimes, it doesn’t make sense to talk about whether a particular encoding activity is good or bad for learning. Rather, we can talk about whether that activity is good for learning as revealed by a particular test. For example, although reading words for meaning leads to better performance on a test of recall or recognition than paying attention to the pronunciation of the word, it leads to worse performance on a test that taps knowledge of that pronunciation, such as whether a previously studied word rhymes with another word (Morris, Bransford, & Franks, 1977). The principle of transfer-appropriate processing states that memory is “better” when the test taps the same type of knowledge as the original encoding activity. When thinking about how to learn material, we should always be thinking about the situations in which we are likely to need access to that material. An emergency responder who needs access to learned procedures under conditions of great stress should learn differently from a hobbyist learning to use a new digital camera.
The value of forgetting
Forgetting is sometimes seen as the enemy of learning, but, in fact, forgetting is a highly desirable part of the learning process. The main bottleneck we face in using our knowledge is being able to access it. We have all had the experience of retrieval failure—that is, not being able to remember a piece of information that we know we have, and that we can access easily once the right set of cues is provided. Because access is difficult, it is important to jettison information that is not needed—that is, to forget it. Without forgetting, our minds would become cluttered with out-of-date or irrelevant information. And, just imagine how complicated life would be if we were unable to forget the names of past acquaintances, teachers, or romantic partners.
But the value of forgetting is even greater than that. There is lots of evidence that some forgetting is a prerequisite for more learning. For example, the previously discussed benefits of distributing practice opportunities may arise in part because of the greater forgetting that takes place between those spaced learning events. It is for this reason that some encoding activities that are difficult and lead to the appearance of slow learning actually lead to superior learning in the long run (Bjork, 2011). When we opt for learning activities that enhance learning quickly, we must be aware that these are not always the same techniques that lead to durable, long-term learning.
Conclusion
To wrap things up, let’s think back to the questions we began the module with. What might you now do differently when preparing for an exam? Hopefully, you will think about testing yourself frequently, developing an accurate sense of what you do and do not know, how you are likely to use the knowledge, and using the scheduling of tasks to your advantage. If you are learning a new skill or new material, using the scientific study of learning as a basis for the study and practice decisions you make is a good bet.
Outside Resources
Video: The First 20 hours – How to Learn Anything - Watch a video by Josh Kaufman about how we can get really good at almost anything with 20 hours of efficient practice.
Video: The Learning Scientists - Terrific YouTube Channel with videos covering such important topics as interleaving, spaced repetition, and retrieval practice.
https://www.youtube.com/channel/UCjbAmxL6GZXiaoXuNE7cIYg
Video: What we learn before we’re born - In this video, science writer Annie Murphy Paul answers the question “When does learning begin?” She covers new research showing how much we learn in the womb — from the lilt of our native language to our soon-to-be-favorite foods.
https://www.ted.com/talks/annie_murphy_paul_what_we_learn_before_we_re_born
Web: Neuroscience News - This is a science website dedicated to neuroscience research, with this page addressing fascinating new memory research.
http://neurosciencenews.com/neuroscience-terms/memory-research/
Web: The Learning Scientists - A website created by three psychologists who wanted to make scientific research on learning more accessible to students, teachers, and other educators.
http://www.learningscientists.org/
Discussion Questions
1. How would you best design a computer program to help someone learn a new foreign language? Think about some of the principles of learning outlined in this module and how those principles could be instantiated in “rules” in a computer program.
2. Would you rather have a really good memory or really good metacognition? How might you train someone to develop better metacognition if he or she doesn’t have a very good memory, and what would be the consequences of that training?
3. In what kinds of situations not discussed here might you find a benefit of forgetting on learning?
Vocabulary
Chunk
The process of grouping information together using our knowledge.
Classical conditioning
Describes stimulus-stimulus associative learning.
Encoding
The act of putting information into memory.
Habituation
Occurs when the response to a stimulus decreases with exposure.
Implicit learning
Occurs when we acquire information without intent that we cannot easily express.
Implicit memory
A type of long-term memory that does not require conscious thought to encode. It's the type of memory one makes without intent.
Incidental learning
Any type of learning that happens without the intention to learn.
Intentional learning
Any type of learning that happens when motivated by intention.
Metacognition
Describes the knowledge and skills people have in monitoring and controlling their own learning and memory.
Nonassociative learning
Occurs when repeated exposure to a single stimulus leads to a change in behavior.
Operant conditioning
Describes stimulus-response associative learning.
Perceptual learning
Occurs when aspects of our perception change as a function of experience.
Sensitization
Occurs when the response to a stimulus increases with exposure.
Transfer-appropriate processing
A principle that states that memory performance is superior when a test taps the same cognitive processes as the original encoding activity.
Working memory
The form of memory we use to hold onto information temporarily, usually for the purposes of manipulation.
By Kathleen B. McDermott and Henry L. Roediger III
Washington University in St. Louis
“Memory” is a single term that reflects a number of different abilities: holding information briefly while working with it (working memory), remembering episodes of one’s life (episodic memory), and our general knowledge of facts of the world (semantic memory), among other types. Remembering episodes involves three processes: encoding information (learning it, by perceiving it and relating it to past knowledge), storing it (maintaining it over time), and then retrieving it (accessing the information when needed). Failures can occur at any stage, leading to forgetting or to having false memories. The key to improving one’s memory is to improve processes of encoding and to use techniques that guarantee effective retrieval. Good encoding techniques include relating new information to what one already knows, forming mental images, and creating associations among information that needs to be remembered. The key to good retrieval is developing effective cues that will lead the rememberer back to the encoded information. Classic mnemonic systems, known since the time of the ancient Greeks and still used by some today, can greatly improve one’s memory abilities.
Learning Objectives
• Define and note differences between the following forms of memory: working memory, episodic memory, semantic memory, collective memory.
• Describe the three stages in the process of learning and remembering.
• Describe strategies that can be used to enhance the original learning or encoding of information.
• Describe strategies that can improve the process of retrieval.
• Describe why the classic mnemonic device, the method of loci, works so well.
Introduction
In 2013, Simon Reinhard sat in front of 60 people in a room at Washington University, where he memorized an increasingly long series of digits. On the first round, a computer generated 10 random digits—6 1 9 4 8 5 6 3 7 1—on a screen for 10 seconds. After the series disappeared, Simon typed them into his computer. His recollection was perfect. In the next phase, 20 digits appeared on the screen for 20 seconds. Again, Simon got them all correct. No one in the audience (mostly professors, graduate students, and undergraduate students) could recall the 20 digits perfectly. Then came 30 digits, studied for 30 seconds; once again, Simon didn’t misplace even a single digit. For a final trial, 50 digits appeared on the screen for 50 seconds, and again, Simon got them all right. In fact, Simon would have been happy to keep going. His record in this task—called “forward digit span”—is 240 digits!
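The structure of the forward digit span demonstration described above can be sketched as a simple trial generator. This is a hypothetical illustration of the procedure only, not the software actually used in the demonstration; the function names and the fixed seed are assumptions for the example.

```python
import random

# Hypothetical sketch of the forward digit span trials described above:
# each round shows a longer random digit string, displayed for as many
# seconds as it has digits (10 digits for 10 seconds, 20 for 20, ...),
# and the participant must reproduce it exactly.

def make_trial(length, rng):
    """One trial: a random digit string and its display time in seconds."""
    digits = [rng.randint(0, 9) for _ in range(length)]
    return {"digits": digits, "display_seconds": length}

def run_session(lengths=(10, 20, 30, 50), seed=0):
    """Generate the sequence of trials for one session, reproducibly."""
    rng = random.Random(seed)
    return [make_trial(n, rng) for n in lengths]

session = run_session()
print([t["display_seconds"] for t in session])  # [10, 20, 30, 50]
```

Each trial pairs a random digit string with a display time equal to its length in seconds, matching the pattern of the demonstration: the strings grow while the per-digit study time stays constant.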
When most of us witness a performance like that of Simon Reinhard, we think one of two things: First, maybe he’s cheating somehow. (No, he is not.) Second, Simon must have abilities more advanced than the rest of humankind. After all, psychologists established many years ago that the normal memory span for adults is about 7 digits, with some of us able to recall a few more and others a few less (Miller, 1956). That is why the first phone numbers were limited to 7 digits—psychologists determined that many errors occurred (costing the phone company money) when the number was increased to even 8 digits. But in normal testing, no one gets 50 digits correct in a row, much less 240. So, does Simon Reinhard simply have a photographic memory? He does not. Instead, Simon has taught himself simple strategies for remembering that have greatly increased his capacity for remembering virtually any type of material—digits, words, faces and names, poetry, historical dates, and so on. Twelve years earlier, before he started training his memory abilities, he had a digit span of 7, just like most of us. Simon has been training his abilities for about 10 years as of this writing, and has risen to be in the top two of “memory athletes.” In 2012, he came in second place in the World Memory Championships (composed of 11 tasks), held in London. He currently ranks second in the world, behind another German competitor, Johannes Mallow. In this module, we reveal what psychologists and others have learned about memory, and we also explain the general principles by which you can improve your own memory for factual material.
Varieties of Memory
For most of us, remembering digits relies on short-term memory, or working memory—the ability to hold information in our minds for a brief time and work with it (e.g., multiplying 24 x 17 without using paper would rely on working memory). Another type of memory is episodic memory—the ability to remember the episodes of our lives. If you were given the task of recalling everything you did 2 days ago, that would be a test of episodic memory; you would be required to mentally travel through the day in your mind and note the main events. Semantic memory is our storehouse of more-or-less permanent knowledge, such as the meanings of words in a language (e.g., the meaning of “parasol”) and the huge collection of facts about the world (e.g., there are 196 countries in the world, and 206 bones in your body). Collective memory refers to the kind of memory that people in a group share (whether family, community, schoolmates, or citizens of a state or a country). For example, residents of small towns often strongly identify with those towns, remembering the local customs and historical events in a unique way. That is, the community’s collective memory passes stories and recollections between neighbors and to future generations, forming a memory system unto itself.
Psychologists continue to debate the classification of types of memory, as well as which types rely on others (Tulving, 2007), but for this module we will focus on episodic memory. Episodic memory is usually what people think of when they hear the word “memory.” For example, when people say that an older relative is “losing her memory” due to Alzheimer’s disease, the type of memory-loss they are referring to is the inability to recall events, or episodic memory. (Semantic memory is actually preserved in early-stage Alzheimer’s disease.) Although remembering specific events that have happened over the course of one’s entire life (e.g., your experiences in sixth grade) can be referred to as autobiographical memory, we will focus primarily on the episodic memories of more recent events.
Three Stages of the Learning/Memory Process
Psychologists distinguish between three necessary stages in the learning and memory process: encoding, storage, and retrieval (Melton, 1963). Encoding is defined as the initial learning of information; storage refers to maintaining information over time; retrieval is the ability to access information when you need it. If you meet someone for the first time at a party, you need to encode her name (Lyn Goff) while you associate her name with her face. Then you need to maintain the information over time. If you see her a week later, you need to recognize her face and have it serve as a cue to retrieve her name. Any successful act of remembering requires that all three stages be intact. However, two types of errors can also occur. Forgetting is one type: you see the person you met at the party and you cannot recall her name. The other error is misremembering (false recall or false recognition): you see someone who looks like Lyn Goff and call the person by that name (false recognition of the face). Or, you might see the real Lyn Goff, recognize her face, but then call her by the name of another woman you met at the party (misrecall of her name).
Whenever forgetting or misremembering occurs, we can ask, at which stage in the learning/memory process was there a failure?—though it is often difficult to answer this question with precision. One reason for this difficulty is that the three stages are not as discrete as our description implies. Rather, all three stages depend on one another. How we encode information determines how it will be stored and what cues will be effective when we try to retrieve it. Moreover, the act of retrieval itself also changes the way information is subsequently remembered, usually aiding later recall of the retrieved information. The central point for now is that the three stages—encoding, storage, and retrieval—affect one another, and are inextricably bound together.
Encoding
Encoding refers to the initial experience of perceiving and learning information. Psychologists often study recall by having participants study a list of pictures or words. Encoding in these situations is fairly straightforward. However, “real life” encoding is much more challenging. When you walk across campus, for example, you encounter countless sights and sounds—friends passing by, people playing Frisbee, music in the air. The physical and mental environments are much too rich for you to encode all the happenings around you or the internal thoughts you have in response to them. So, an important first principle of encoding is that it is selective: we attend to some events in our environment and we ignore others. A second point about encoding is that it is prolific; we are always encoding the events of our lives—attending to the world, trying to understand it. Normally this presents no problem, as our days are filled with routine occurrences, so we don’t need to pay attention to everything. But if something does happen that seems strange—during your daily walk across campus, you see a giraffe—then we pay close attention and try to understand why we are seeing what we are seeing.
Right after your typical walk across campus (one without the appearance of a giraffe), you would be able to remember the events reasonably well if you were asked. You could say whom you bumped into, what song was playing from a radio, and so on. However, suppose someone asked you to recall the same walk a month later. You wouldn’t stand a chance. You would likely be able to recount the basics of a typical walk across campus, but not the precise details of that particular walk. Yet, if you had seen a giraffe during that walk, the event would have been fixed in your mind for a long time, probably for the rest of your life. You would tell your friends about it, and, on later occasions when you saw a giraffe, you might be reminded of the day you saw one on campus. Psychologists have long pinpointed distinctiveness—having an event stand out as quite different from a background of similar events—as a key to remembering events (Hunt, 2003).
In addition, when vivid memories are tinged with strong emotional content, they often seem to leave a permanent mark on us. Public tragedies, such as terrorist attacks, often create vivid memories in those who witnessed them. But even those of us not directly involved in such events may have vivid memories of them, including memories of first hearing about them. For example, many people are able to recall their exact physical location when they first learned about the assassination or accidental death of a national figure. The term flashbulb memory was originally coined by Brown and Kulik (1977) to describe this sort of vivid memory of finding out an important piece of news. The name refers to how some memories seem to be captured in the mind like a flash photograph; because of the distinctiveness and emotionality of the news, they seem to become permanently etched in the mind with exceptional clarity compared to other memories.
Take a moment and think back on your own life. Is there a particular memory that seems sharper than others? A memory where you can recall unusual details, like the colors of mundane things around you, or the exact positions of surrounding objects? Although people have great confidence in flashbulb memories like these, the truth is, our objective accuracy with them is far from perfect (Talarico & Rubin, 2003). That is, even though people may have great confidence in what they recall, their memories are not as accurate (e.g., what the actual colors were; where objects were truly placed) as they tend to imagine. Nonetheless, all other things being equal, distinctive and emotional events are well-remembered.
Details do not leap perfectly from the world into a person’s mind. We might say that we went to a party and remember it, but what we remember is (at best) what we encoded. As noted above, the process of encoding is selective, and in complex situations, relatively few of many possible details are noticed and encoded. The process of encoding always involves recoding—that is, taking the information from the form it is delivered to us and then converting it in a way that we can make sense of it. For example, you might try to remember the colors of a rainbow by using the acronym ROY G BIV (red, orange, yellow, green, blue, indigo, violet). The process of recoding the colors into a name can help us to remember. However, recoding can also introduce errors—when we accidentally add information during encoding, then remember that new material as if it had been part of the actual experience (as discussed below).
Psychologists have studied many recoding strategies that can be used during study to improve retention. First, research advises that, as we study, we should think of the meaning of the events (Craik & Lockhart, 1972), and we should try to relate new events to information we already know. This helps us form associations that we can use to retrieve information later. Second, imagining events also makes them more memorable; creating vivid images out of information (even verbal information) can greatly improve later recall (Bower & Reitman, 1972). Creating imagery is part of the technique Simon Reinhard uses to remember huge numbers of digits, but we can all use images to encode information more effectively. The basic concept behind good encoding strategies is to form distinctive memories (ones that stand out), and to form links or associations among memories to help later retrieval (Hunt & McDaniel, 1993). Using study strategies such as the ones described here is challenging, but the effort is well worth the benefits of enhanced learning and retention.
We emphasized earlier that encoding is selective: people cannot encode all information they are exposed to. However, recoding can add information that was not even seen or heard during the initial encoding phase. Several of the recoding processes, like forming associations between memories, can happen without our awareness. This is one reason people can sometimes remember events that did not actually happen—because during the process of recoding, details got added. One common way of inducing false memories in the laboratory employs a word-list technique (Deese, 1959; Roediger & McDermott, 1995). Participants hear lists of 15 words, like door, glass, pane, shade, ledge, sill, house, open, curtain, frame, view, breeze, sash, screen, and shutter. Later, participants are given a test in which they are shown a list of words and asked to pick out the ones they’d heard earlier. This second list contains some words from the first list (e.g., door, pane, frame) and some words not from the list (e.g., arm, phone, bottle). In this example, one of the words on the test is window, which—importantly—does not appear in the first list, but which is related to other words in that list. When subjects were tested, they were reasonably accurate with the studied words (door, etc.), recognizing them 72% of the time. However, when window was on the test, they falsely recognized it as having been on the list 84% of the time (Stadler, Roediger, & McDermott, 1999). The same thing happened with many other lists the authors used. This phenomenon is referred to as the DRM (for Deese-Roediger-McDermott) effect. One explanation for such results is that, while students listened to items in the list, the words triggered the students to think about window, even though window was never presented. In this way, people seem to encode events that are not actually part of their experience.
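The scoring logic of such a recognition test can be sketched in a few lines. This is a toy illustration, not the actual Stadler et al. materials or data: the word sets are shortened and the single participant's "old"/"new" judgments are invented to show how a hit rate and a false alarm to the critical lure would be computed.

```python
# Toy sketch of scoring a DRM-style recognition test.
# The lists and responses below are illustrative stand-ins, not real data.

studied = {"door", "glass", "pane", "shade", "ledge", "sill"}
critical_lure = "window"            # related to the list but never presented
unrelated = {"arm", "phone", "bottle"}

# Hypothetical "old"/"new" judgments from one participant.
responses = {
    "door": "old", "pane": "old", "glass": "new",
    "window": "old",                # false recognition of the lure
    "arm": "new", "phone": "new",
}

def hit_rate(responses, studied):
    """Proportion of tested studied words correctly called 'old'."""
    tested = [w for w in responses if w in studied]
    return sum(responses[w] == "old" for w in tested) / len(tested)

def false_alarm(responses, word):
    """Whether an unstudied word was (incorrectly) called 'old'."""
    return responses.get(word) == "old"

print(hit_rate(responses, studied))           # 2 of 3 studied words called 'old'
print(false_alarm(responses, critical_lure))  # True: the lure was "recognized"
```

In the real experiment the same arithmetic, averaged over many participants, yields the 72% hit rate for studied words and the 84% false-alarm rate for the critical lure reported above.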
Because humans are creative, we are always going beyond the information we are given: we automatically make associations and infer from them what is happening. But, as with the word association mix-up above, sometimes we make false memories from our inferences—remembering the inferences themselves as if they were actual experiences. To illustrate this, Brewer (1977) gave people sentences to remember that were designed to elicit pragmatic inferences. Inferences, in general, refer to instances when something is not explicitly stated, but we are still able to guess the undisclosed intention. For example, if your friend told you that she didn’t want to go out to eat, you may infer that she doesn’t have the money to go out, or that she’s too tired. With pragmatic inferences, there is usually one particular inference you’re likely to make. Consider the statement Brewer (1977) gave her participants: “The karate champion hit the cinder block.” After hearing or seeing this sentence, participants who were given a memory test tended to remember the statement as having been, “The karate champion broke the cinder block.” This remembered statement is not necessarily a logical inference (i.e., it is perfectly reasonable that a karate champion could hit a cinder block without breaking it). Nevertheless, the pragmatic conclusion from hearing such a sentence is that the block was likely broken. The participants remembered this inference they made while hearing the sentence in place of the actual words that were in the sentence (see also McDermott & Chan, 2006).
Encoding—the initial registration of information—is essential in the learning and memory process. Unless an event is encoded in some fashion, it will not be successfully remembered later. However, just because an event is encoded (even if it is encoded well), there’s no guarantee that it will be remembered later.
Storage
Every experience we have changes our brains. That may seem like a bold, even strange, claim at first, but it’s true. We encode each of our experiences within the structures of the nervous system, making new impressions in the process—and each of those impressions involves changes in the brain. Psychologists (and neurobiologists) say that experiences leave memory traces, or engrams (the two terms are synonyms). Memories have to be stored somewhere in the brain, so in order to do so, the brain biochemically alters itself and its neural tissue. Just like you might write yourself a note to remind you of something, the brain “writes” a memory trace, changing its own physical composition to do so. The basic idea is that events (occurrences in our environment) create engrams through a process of consolidation: the neural changes that occur after learning to create the memory trace of an experience. Although neurobiologists are concerned with exactly what neural processes change when memories are created, for psychologists, the term memory trace simply refers to the physical change in the nervous system (whatever that may be, exactly) that represents our experience.
Although the concept of engram or memory trace is extremely useful, we shouldn’t take the term too literally. It is important to understand that memory traces are not perfect little packets of information that lie dormant in the brain, waiting to be called forward to give an accurate report of past experience. Memory traces are not like video or audio recordings, capturing experience with great accuracy; as discussed earlier, we often have errors in our memory, which would not exist if memory traces were perfect packets of information. Thus, it is wrong to think that remembering involves simply “reading out” a faithful record of past experience. Rather, when we remember past events, we reconstruct them with the aid of our memory traces—but also with our current belief of what happened. For example, if you were trying to recall for the police who started a fight at a bar, you may not have a memory trace of who pushed whom first. However, let’s say you remember that one of the guys held the door open for you. When thinking back to the start of the fight, this knowledge (of how one guy was friendly to you) may unconsciously influence your memory of what happened in favor of the nice guy. Thus, memory is a construction of what you actually recall and what you believe happened. In a phrase, remembering is reconstructive (we reconstruct our past with the aid of memory traces) not reproductive (a perfect reproduction or recreation of the past).
Psychologists refer to the time between learning and testing as the retention interval. Memories can consolidate during that time, aiding retention. However, experiences can also occur that undermine the memory. For example, think of what you had for lunch yesterday—a pretty easy task. However, if you had to recall what you had for lunch 17 days ago, you may well fail (assuming you don’t eat the same thing every day). The 16 lunches you’ve had since that one have created retroactive interference. Retroactive interference refers to new activities (i.e., the subsequent lunches) during the retention interval (i.e., the time between the lunch 17 days ago and now) that interfere with retrieving the specific, older memory (i.e., the lunch details from 17 days ago). But just as newer things can interfere with remembering older things, so can the opposite happen. Proactive interference is when past memories interfere with the encoding of new ones. For example, if you have ever studied a second language, the grammar and vocabulary of your native language will often pop into your head, impairing your fluency in the foreign language.
Retroactive interference is one of the main causes of forgetting (McGeoch, 1932). In the module Eyewitness Testimony and Memory Biases (http://noba.to/uy49tm37), Elizabeth Loftus describes her fascinating work on eyewitness memory, in which she shows how memory for an event can be changed via misinformation supplied during the retention interval. For example, if you witnessed a car crash but subsequently heard people describing it from their own perspective, this new information may interfere with or disrupt your own personal recollection of the crash. In fact, you may even come to remember the event happening exactly as the others described it! This misinformation effect in eyewitness memory represents a type of retroactive interference that can occur during the retention interval (see Loftus [2005] for a review). Of course, if correct information is given during the retention interval, the witness’s memory will usually be improved.
Although interference may arise between the occurrence of an event and the attempt to recall it, the effect itself is always expressed when we retrieve memories, the topic to which we turn next.
Retrieval
Endel Tulving argued that “the key process in memory is retrieval” (1991, p. 91). Why should retrieval be given more prominence than encoding or storage? For one thing, if information were encoded and stored but could not be retrieved, it would be useless. As discussed previously in this module, we encode and store thousands of events—conversations, sights and sounds—every day, creating memory traces. However, we later access only a tiny portion of what we’ve taken in. Most of our memories will never be used—in the sense of being brought back to mind, consciously. This fact seems so obvious that we rarely reflect on it. All those events that happened to you in the fourth grade that seemed so important then? Now, many years later, you would struggle to remember even a few. You may wonder if the traces of those memories still exist in some latent form. Unfortunately, with currently available methods, it is impossible to know.
Psychologists distinguish information that is available in memory from that which is accessible (Tulving & Pearlstone, 1966). Available information is the information that is stored in memory—but precisely how much and what types are stored cannot be known. That is, all we can know is what information we can retrieve—accessible information. The assumption is that accessible information represents only a tiny slice of the information available in our brains. Most of us have had the experience of trying to remember some fact or event, giving up, and then—all of a sudden!—it comes to us at a later time, even after we’ve stopped trying to remember it. Similarly, we all know the experience of failing to recall a fact, but then, if we are given several choices (as in a multiple-choice test), we are easily able to recognize it.
What factors determine what information can be retrieved from memory? One critical factor is the type of hints, or cues, in the environment. You may hear a song on the radio that suddenly evokes memories of an earlier time in your life, even if you were not trying to remember it when the song came on. Nevertheless, the song is closely associated with that time, so it brings the experience to mind.
The general principle that underlies the effectiveness of retrieval cues is the encoding specificity principle (Tulving & Thomson, 1973): when people encode information, they do so in specific ways. For example, take the song on the radio: perhaps you heard it while you were at a terrific party, having a great, philosophical conversation with a friend. Thus, the song became part of that whole complex experience. Years later, even though you haven’t thought about that party in ages, when you hear the song on the radio, the whole experience rushes back to you. In general, the encoding specificity principle states that, to the extent a retrieval cue (the song) matches or overlaps the memory trace of an experience (the party, the conversation), it will be effective in evoking the memory. A classic experiment on the encoding specificity principle had participants memorize a set of words in a unique setting. Later, the participants were tested on the word sets, either in the same location they learned the words or a different one. As a result of encoding specificity, the students who took the test in the same place they learned the words were actually able to recall more words (Godden & Baddeley, 1975) than the students who took the test in a new setting. In this instance, the physical context itself provided cues for retrieval. This is why it’s good to study for midterms and finals in the same room you’ll be taking them in.
One caution with this principle, though, is that, for the cue to work, it can’t match too many other experiences (Nairne, 2002; Watkins, 1975). Consider a lab experiment. Suppose you study 100 items; 99 are words, and one is a picture—of a penguin, item 50 in the list. Afterwards, the cue “recall the picture” would evoke “penguin” perfectly. No one would miss it. However, if the word “penguin” were placed in the same spot among the other 99 words, its memorability would be far worse. This outcome shows the power of distinctiveness that we discussed in the section on encoding: one picture is perfectly recalled from among 99 words because it stands out. Now consider what would happen if the experiment were repeated, but there were 25 pictures distributed within the 100-item list. Although the picture of the penguin would still be there, the probability that the cue “recall the picture” (at item 50) would be useful for the penguin would drop correspondingly. Watkins (1975) referred to this outcome as demonstrating the cue overload principle. That is, to be effective, a retrieval cue cannot be overloaded with too many memories. For the cue “recall the picture” to be effective, it should only match one item in the target set (as in the one-picture, 99-word case).
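The arithmetic behind cue overload can be sketched with a deliberately simple assumption that is not part of the original papers: if a cue matches n stored memories equally well, the chance it brings back any particular one is 1/n.

```python
# Toy model of the cue overload principle, under the simplifying
# assumption that a cue retrieves each of its matching memories
# with equal probability.

def cue_effectiveness(n_matching_memories: int) -> float:
    """Probability the cue evokes one specific target memory."""
    if n_matching_memories < 1:
        raise ValueError("a cue must match at least one memory")
    return 1.0 / n_matching_memories

print(cue_effectiveness(1))   # 1.0  -> one picture among 99 words: a perfect cue
print(cue_effectiveness(25))  # 0.04 -> 25 pictures: the same cue is overloaded
```

Real retrieval is not equiprobable in this way, but the sketch captures the direction of the effect: each additional memory attached to a cue dilutes its usefulness for any single target.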
To sum up how memory cues function: for a retrieval cue to be effective, a match must exist between the cue and the desired target memory; furthermore, to produce the best retrieval, the cue-target relationship should be distinctive. Next, we will see how the encoding specificity principle can work in practice.
Psychologists measure memory performance by using production tests (involving recall) or recognition tests (involving the selection of correct from incorrect information, e.g., a multiple-choice test). For example, with our list of 100 words, one group of people might be asked to recall the list in any order (a free recall test), while a different group might be asked to circle the 100 studied words out of a mix with another 100, unstudied words (a recognition test). In this situation, the recognition test would likely produce better performance from participants than the recall test.
We usually think of recognition tests as being quite easy, because the cue for retrieval is a copy of the actual event that was presented for study. After all, what could be a better cue than the exact target (memory) the person is trying to access? In most cases, this line of reasoning is true; nevertheless, recognition tests do not provide perfect indexes of what is stored in memory. That is, you can fail to recognize a target staring you right in the face, yet be able to recall it later with a different set of cues (Watkins & Tulving, 1975). For example, suppose you had the task of recognizing the surnames of famous authors. At first, you might think that being given the actual last name would always be the best cue. However, research has shown this not necessarily to be true (Muter, 1984). When given names such as Tolstoy, Shaw, Shakespeare, and Lee, subjects might well say that Tolstoy and Shakespeare are famous authors, whereas Shaw and Lee are not. But, when given a cued recall test using first names, people often recall items (produce them) that they had failed to recognize before. For example, in this instance, a cue like George Bernard ________ often leads to a recall of “Shaw,” even though people initially failed to recognize Shaw as a famous author’s name. Yet, when given the cue “William,” people may not come up with Shakespeare, because William is a common name that matches many people (the cue overload principle at work). This strange fact—that recall can sometimes lead to better performance than recognition—can be explained by the encoding specificity principle. As a cue, George Bernard _________ matches the way the famous writer is stored in memory better than his surname, Shaw, does (even though it is the target). Further, the match is quite distinctive with George Bernard ___________, but the cue William _________________ is much more overloaded (Prince William, William Yeats, William Faulkner, will.i.am).
The phenomenon we have been describing is called the recognition failure of recallable words (Tulving & Thomson, 1973). The cues that work best to evoke retrieval are those that recreate the event or name to be remembered; sometimes even the target itself, such as Shaw in the above example, is not the best cue. Which cue will be most effective depends on how the information has been encoded.
Whenever we think about our past, we engage in the act of retrieval. We usually think that retrieval is an objective act because we tend to imagine that retrieving a memory is like pulling a book from a shelf, and after we are done with it, we return the book to the shelf just as it was. However, research shows this assumption to be false; far from being a static repository of data, the memory is constantly changing. In fact, every time we retrieve a memory, it is altered. For example, the act of retrieval itself (of a fact, concept, or event) makes the retrieved memory much more likely to be retrieved again, a phenomenon called the testing effect or the retrieval practice effect (Pyc & Rawson, 2009; Roediger & Karpicke, 2006). However, retrieving some information can actually cause us to forget other information related to it, a phenomenon called retrieval-induced forgetting (Anderson, Bjork, & Bjork, 1994). Thus the act of retrieval can be a double-edged sword—strengthening the memory just retrieved (usually by a large amount) but harming related information (though this effect is often relatively small).
As discussed earlier, retrieval of distant memories is reconstructive. We weave the concrete bits and pieces of events in with assumptions and preferences to form a coherent story (Bartlett, 1932). For example, if during your 10th birthday, your dog got to your cake before you did, you would likely tell that story for years afterward. Say, then, in later years you misremember where the dog actually found the cake, but repeat that error over and over during subsequent retellings of the story. Over time, that inaccuracy would become a basic fact of the event in your mind. Just as retrieval practice (repetition) enhances accurate memories, so will it strengthen errors or false memories (McDermott, 2006). Sometimes memories can even be manufactured just from hearing a vivid story. Consider the following episode, recounted by Jean Piaget, the famous developmental psychologist, from his childhood:
One of my first memories would date, if it were true, from my second year. I can still see, most clearly, the following scene, in which I believed until I was about 15. I was sitting in my pram . . . when a man tried to kidnap me. I was held in by the strap fastened round me while my nurse bravely tried to stand between me and the thief. She received various scratches, and I can still vaguely see those on her face. . . . When I was about 15, my parents received a letter from my former nurse saying that she had been converted to the Salvation Army. She wanted to confess her past faults, and in particular to return the watch she had been given as a reward on this occasion. She had made up the whole story, faking the scratches. I therefore must have heard, as a child, this story, which my parents believed, and projected it into the past in the form of a visual memory. . . . Many real memories are doubtless of the same order. (Norman & Schacter, 1997, pp. 187–188)
Piaget’s vivid account represents a case of a pure reconstructive memory. He heard the tale told repeatedly, and doubtless told it (and thought about it) himself. The repeated telling cemented the events as though they had really happened, just as we are all open to the possibility of having “many real memories ... of the same order.” The fact that one can remember precise details (the location, the scratches) does not necessarily indicate that the memory is true, a point that has been confirmed in laboratory studies, too (e.g., Norman & Schacter, 1997).
Putting It All Together: Improving Your Memory
A central theme of this module has been the importance of the encoding and retrieval processes, and their interaction. To recap: to improve learning and memory, we need to encode information in conjunction with excellent cues that will bring back the remembered events when we need them. But how do we do this? Keep in mind the two critical principles we have discussed: to maximize retrieval, we should construct meaningful cues that remind us of the original experience, and those cues should be distinctive and not associated with other memories. These two conditions are critical in maximizing cue effectiveness (Nairne, 2002).
So, how can these principles be adapted for use in many situations? Let’s go back to how we started the module, with Simon Reinhard’s ability to memorize huge numbers of digits. Although it was not obvious, he applied these same general memory principles, but in a more deliberate way. In fact, all mnemonic devices, or memory aids/tricks, rely on these fundamental principles. In a typical case, the person learns a set of cues and then applies these cues to learn and remember information. Consider the set of 20 items below that are easy to learn and remember (Bower & Reitman, 1972).
1 is a gun. 11 is penny-one, hot dog bun.
2 is a shoe. 12 is penny-two, airplane glue.
3 is a tree. 13 is penny-three, bumble bee.
4 is a door. 14 is penny-four, grocery store.
5 is knives. 15 is penny-five, big beehive.
6 is sticks. 16 is penny-six, magic tricks.
7 is oven. 17 is penny-seven, go to heaven.
8 is plate. 18 is penny-eight, golden gate.
9 is wine. 19 is penny-nine, ball of twine.
10 is hen. 20 is penny-ten, ballpoint pen.
It would probably take you less than 10 minutes to learn this list and practice recalling it several times (remember to use retrieval practice!). If you were to do so, you would have a set of peg words on which you could “hang” memories. In fact, this mnemonic device is called the peg word technique. If you then needed to remember some discrete items—say a grocery list, or points you wanted to make in a speech—this method would let you do so in a very precise yet flexible way. Suppose you had to remember bread, peanut butter, bananas, lettuce, and so on. The way to use the method is to form a vivid image of what you want to remember and imagine it interacting with your peg words (as many as you need). For example, for these items, you might imagine a large gun (the first peg word) shooting a loaf of bread, then a jar of peanut butter inside a shoe, then large bunches of bananas hanging from a tree, then a door slamming on a head of lettuce with leaves flying everywhere. The idea is to provide good, distinctive cues (the weirder the better!) for the information you need to remember while you are learning it. If you do this, then retrieving it later is relatively easy. You know your cues perfectly (one is gun, etc.), so you simply go through your cue word list and “look” in your mind’s eye at the image stored there (bread, in this case).
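The mechanics of the peg word technique can be sketched as a simple mapping. The pegs below come from the Bower and Reitman list above; the grocery items are the ones from the text, with "milk" added to round out the example.

```python
# Sketch of the peg word technique: pair each to-be-remembered item
# with a pre-memorized peg word, then retrieve by position in any order.

pegs = ["gun", "shoe", "tree", "door", "knives"]
items = ["bread", "peanut butter", "bananas", "lettuce", "milk"]

# Encoding: form one vivid image linking each peg with its item
# (modeled here, unromantically, as a dictionary entry).
images = {peg: item for peg, item in zip(pegs, items)}

# Retrieval: each peg is a direct, distinctive cue for one item.
print(images["gun"])          # bread   (item 1)
print(images["tree"])         # bananas (item 3)

# Backwards recall is just as easy, since every peg gives direct access.
for peg in reversed(pegs):
    print(images[peg])
```

The dictionary makes the key property of the method concrete: because each peg cues exactly one item, retrieval does not depend on serial order, which is why the list can be recalled forwards, backwards, or from any starting point.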
This peg word method may sound strange at first, but it works quite well, even with little training (Roediger, 1980). One word of warning, though, is that the items to be remembered need to be presented relatively slowly at first, until you have practice associating each with its cue word. People get faster with time. Another interesting aspect of this technique is that it’s just as easy to recall the items in backwards order as forwards. This is because the peg words provide direct access to the memorized items, regardless of order.
How did Simon Reinhard remember those digits? Essentially he has a much more complex system based on these same principles. In his case, he uses “memory palaces” (elaborate scenes with discrete places) combined with huge sets of images for digits. For example, imagine mentally walking through the home where you grew up and identifying as many distinct areas and objects as possible. Simon has hundreds of such memory palaces that he uses. Next, for remembering digits, he has memorized a set of 10,000 images. Every four-digit number for him immediately brings forth a mental image. So, for example, 6187 might recall Michael Jackson. When Simon hears all the numbers coming at him, he places an image for every four digits into locations in his memory palace. He can do this at an incredibly rapid rate, faster than 4 digits per 4 seconds when they are flashed visually, as in the demonstration at the beginning of the module. As noted, his record is 240 digits, recalled in exact order. Simon also holds the world record in an event called “speed cards,” which involves memorizing the precise order of a shuffled deck of cards. Simon was able to do this in 21.19 seconds! Again, he uses his memory palaces, and he encodes groups of cards as single images.
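Simon's digit system can be sketched schematically: chunk the incoming digits into groups of four, look each group up in a memorized table of images, and deposit the images at successive palace locations. The image table (apart from the Michael Jackson example from the text) and the palace locations below are invented placeholders, not his actual mappings.

```python
# Schematic sketch of a digit memory-palace system (not Simon Reinhard's
# actual mappings): chunk digits into groups of 4, map each chunk to a
# memorized image, and place the images at successive palace locations.

image_table = {            # hypothetical entries from a 10,000-image table
    "6187": "Michael Jackson",     # this pairing is the example from the text
    "4023": "a red bicycle",
    "9951": "a boiling kettle",
}
loci = ["front door", "hallway", "kitchen"]   # invented palace locations

def encode(digits: str):
    """Place the image for each 4-digit chunk at the next location."""
    chunks = [digits[i:i + 4] for i in range(0, len(digits), 4)]
    return [(locus, image_table[c]) for locus, c in zip(loci, chunks)]

def recall(placements):
    """Walk the palace in order and read the digits back off the images."""
    digits_for = {img: code for code, img in image_table.items()}
    return "".join(digits_for[img] for _, img in placements)

placed = encode("618740239951")
print(placed)           # [('front door', 'Michael Jackson'), ...]
print(recall(placed))   # 618740239951 -- exact order preserved
```

Because the palace locations have a fixed order, walking them at recall reproduces the digit string in its exact original sequence, which is what the events Simon competes in require.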
Many books exist on how to improve memory using mnemonic devices, but all involve forming distinctive encoding operations and then having an infallible set of memory cues. We should add that to develop and use these memory systems beyond the basic peg system outlined above takes a great amount of time and concentration. The World Memory Championships are held every year and the records keep improving. However, for most common purposes, just keep in mind that to remember well you need to encode information in a distinctive way and to have good cues for retrieval. You can adapt a system that will meet most any purpose.
Outside Resources
Book: Brown, P. C., Roediger, H. L., & McDaniel, M. A. (2014). Make it stick: The science of successful learning. Cambridge, MA: Harvard University Press.
www.amazon.com/Make-Stick-Sc.../dp/0674729013
Student Video 1: Eureka Foong's - The Misinformation Effect. This is a student-made video illustrating this phenomenon of altered memory. It was one of the winning entries in the 2014 Noba Student Video Award.
Student Video 2: Kara McCord's - Flashbulb Memories. This is a student-made video illustrating this phenomenon of autobiographical memory. It was one of the winning entries in the 2014 Noba Student Video Award.
Student Video 3: Ang Rui Xia & Ong Jun Hao's - The Misinformation Effect. Another student-made video exploring the misinformation effect. Also an award winner from 2014.
Video: Simon Reinhard breaking the world record in speedcards.
Web: Retrieval Practice, a website with research, resources, and tips for both educators and learners around the memory-strengthening skill of retrieval practice.
http://www.retrievalpractice.org/
Discussion Questions
1. Mnemonists like Simon Reinhard develop mental “journeys,” which enable them to use the method of loci. Develop your own journey, which contains 20 places, in order, that you know well. One example might be: the front walkway to your parents’ apartment; their doorbell; the couch in their living room; etc. Be sure to use a set of places that you know well and that have a natural order to them (e.g., the walkway comes before the doorbell). Now you are more than halfway toward being able to memorize a set of 20 nouns, in order, rather quickly. As an optional second step, have a friend make a list of 20 such nouns and read them to you, slowly (e.g., one every 5 seconds). Use the method to attempt to remember the 20 items.
2. Recall a recent argument or misunderstanding you have had about memory (e.g., a debate over whether your girlfriend/boyfriend had agreed to something). In light of what you have just learned about memory, how do you think about it? Is it possible that the disagreement can be understood by one of you making a pragmatic inference?
3. Think about what you’ve learned in this module and about how you study for tests. On the basis of what you have learned, is there something you want to try that might help your study habits?
Vocabulary
Autobiographical memory
Memory for the events of one’s life.
Consolidation
The process occurring after encoding that is believed to stabilize memory traces.
Cue overload principle
The principle stating that the more memories that are associated to a particular retrieval cue, the less effective the cue will be in prompting retrieval of any one memory.
Distinctiveness
The principle that unusual events (in a context of similar events) will be recalled and recognized better than uniform (nondistinctive) events.
Encoding
The initial experience of perceiving and learning events.
Encoding specificity principle
The hypothesis that a retrieval cue will be effective to the extent that information encoded from the cue overlaps or matches information in the engram or memory trace.
Engrams
A term indicating the change in the nervous system representing an event; also, memory trace.
Episodic memory
Memory for events in a particular time and place.
Flashbulb memory
Vivid personal memories of receiving the news of some momentous (and usually emotional) event.
Memory traces
A term indicating the change in the nervous system representing an event.
Misinformation effect
When erroneous information occurring after an event is remembered as having been part of the original event.
Mnemonic devices
A strategy for remembering large amounts of information, usually involving imagining events occurring on a journey or with some other set of memorized cues.
Recoding
The ubiquitous process during learning of taking information in one form and converting it to another form, usually one more easily remembered.
Retrieval
The process of accessing stored information.
Retroactive interference
The phenomenon whereby events that occur after some particular event of interest will usually cause forgetting of the original event.
Semantic memory
The more or less permanent store of knowledge that people have.
Storage
The stage in the learning/memory process that bridges encoding and retrieval; the persistence of memory over time.
By Mark E. Bouton
University of Vermont
Basic principles of learning are always operating and always influencing human behavior. This module discusses the two most fundamental forms of learning -- classical (Pavlovian) and instrumental (operant) conditioning. Through them, we respectively learn to associate 1) stimuli in the environment, or 2) our own behaviors, with significant events, such as rewards and punishments. The two types of learning have been intensively studied because they have powerful effects on behavior, and because they provide methods that allow scientists to analyze learning processes rigorously. This module describes some of the most important things you need to know about classical and instrumental conditioning, and it illustrates some of the many ways they help us understand normal and disordered behavior in humans. The module concludes by introducing the concept of observational learning, which is a form of learning that is largely distinct from classical and operant conditioning.
Learning Objectives
• Distinguish between classical (Pavlovian) conditioning and instrumental (operant) conditioning.
• Understand some important facts about each that tell us how they work.
• Understand how they work separately and together to influence human behavior in the world outside the laboratory.
• List the four aspects of observational learning according to Social Learning Theory.
Two Types of Conditioning
Although Ivan Pavlov won a Nobel Prize for studying digestion, he is much more famous for something else: working with a dog, a bell, and a bowl of saliva. Many people are familiar with the classic study of “Pavlov’s dog,” but rarely do they understand the significance of its discovery. In fact, Pavlov’s work helps explain why some people get anxious just looking at a crowded bus, why the sound of a morning alarm is so hated, and even why we swear off certain foods we’ve only tried once. Classical (or Pavlovian) conditioning is one of the fundamental ways we learn about the world around us. But it is far more than just a theory of learning; it is also arguably a theory of identity. For, once you understand classical conditioning, you’ll recognize that your favorite music, clothes, even political candidate, might all be a result of the same process that makes a dog drool at the sound of a bell.
Around the turn of the 20th century, scientists who were interested in understanding the behavior of animals and humans began to appreciate the importance of two very basic forms of learning. One, which was first studied by the Russian physiologist Ivan Pavlov, is known as classical, or Pavlovian conditioning. In his famous experiment, Pavlov rang a bell and then gave a dog some food. After repeating this pairing multiple times, the dog eventually treated the bell as a signal for food, and began salivating in anticipation of the treat. This kind of result has been reproduced in the lab using a wide range of signals (e.g., tones, light, tastes, settings) paired with many different events besides food (e.g., drugs, shocks, illness; see below).
We now believe that this same learning process is engaged, for example, when humans associate a drug they’ve taken with the environment in which they’ve taken it; when they associate a stimulus (e.g., a symbol for vacation, like a big beach towel) with an emotional event (like a burst of happiness); and when they associate the flavor of a food with getting food poisoning. Although classical conditioning may seem “old” or “too simple” a theory, it is still widely studied today for at least two reasons: First, it is a straightforward test of associative learning that can be used to study other, more complex behaviors. Second, because classical conditioning is always occurring in our lives, its effects on behavior have important implications for understanding normal and disordered behavior in humans.
In a general way, classical conditioning occurs whenever neutral stimuli are associated with psychologically significant events. With food poisoning, for example, although having fish for dinner may not normally be something to be concerned about (i.e., a “neutral stimulus”), if it causes you to get sick, you will now likely associate that neutral stimulus (the fish) with the psychologically significant event of getting sick. These paired events are often described using terms that can be applied to any situation.
The dog food in Pavlov’s experiment is called the unconditioned stimulus (US) because it elicits an unconditioned response (UR). That is, without any kind of “training” or “teaching,” the stimulus produces a natural or instinctual reaction. In Pavlov’s case, the food (US) automatically makes the dog drool (UR). Other examples of unconditioned stimuli include loud noises (US) that startle us (UR), or a hot shower (US) that produces pleasure (UR).
On the other hand, a conditioned stimulus produces a conditioned response. A conditioned stimulus (CS) is a signal that has no importance to the organism until it is paired with something that does have importance. For example, in Pavlov’s experiment, the bell is the conditioned stimulus. Before the dog has learned to associate the bell (CS) with the presence of food (US), hearing the bell means nothing to the dog. However, after multiple pairings of the bell with the presentation of food, the dog starts to drool at the sound of the bell. This drooling in response to the bell is the conditioned response (CR). Although it can be confusing, the conditioned response is almost always the same as the unconditioned response. However, it is called the conditioned response because it is conditional on (or, depends on) being paired with the conditioned stimulus (e.g., the bell). To help make this clearer, consider becoming really hungry when you see the logo for a fast food restaurant. There’s a good chance you’ll start salivating. Although it is the actual eating of the food (US) that normally produces the salivation (UR), simply seeing the restaurant’s logo (CS) can trigger the same reaction (CR).
Another example you are probably very familiar with involves your alarm clock. If you’re like most people, waking up early usually makes you unhappy. In this case, waking up early (US) produces a natural sensation of grumpiness (UR). Rather than waking up early on your own, though, you likely have an alarm clock that plays a tone to wake you. Before setting your alarm to that particular tone, let’s imagine you had neutral feelings about it (i.e., the tone had no prior meaning for you). However, now that you use it to wake up every morning, you psychologically “pair” that tone (CS) with your feelings of grumpiness in the morning (UR). After enough pairings, this tone (CS) will automatically produce your natural response of grumpiness (CR). Thus, this linkage between the unconditioned stimulus (US; waking up early) and the conditioned stimulus (CS; the tone) is so strong that the unconditioned response (UR; being grumpy) will become a conditioned response (CR; e.g., hearing the tone at any point in the day—whether waking up or walking down the street—will make you grumpy). Modern studies of classical conditioning use a very wide range of CSs and USs and measure a wide range of conditioned responses.
Although classical conditioning is a powerful explanation for how we learn many different things, there is a second form of conditioning that also helps explain how we learn. First studied by Edward Thorndike, and later extended by B. F. Skinner, this second type of conditioning is known as instrumental or operant conditioning. Operant conditioning occurs when a behavior (as opposed to a stimulus) is associated with the occurrence of a significant event. In the best-known example, a rat in a laboratory learns to press a lever in a cage (called a “Skinner box”) to receive food. Because the rat has no “natural” association between pressing a lever and getting food, the rat has to learn this connection. At first, the rat may simply explore its cage, climbing on top of things, burrowing under things, in search of food. Eventually while poking around its cage, the rat accidentally presses the lever, and a food pellet drops in. This voluntary behavior is called an operant behavior, because it “operates” on the environment (i.e., it is an action that the animal itself makes).
Now, once the rat recognizes that it receives a piece of food every time it presses the lever, the behavior of lever-pressing becomes reinforced. That is, the food pellets serve as reinforcers because they strengthen the rat’s desire to engage with the environment in this particular manner. In a parallel example, imagine that you’re playing a street-racing video game. As you drive through one city course multiple times, you try a number of different streets to get to the finish line. On one of these trials, you discover a shortcut that dramatically improves your overall time. You have learned this new path through operant conditioning. That is, by engaging with your environment (operant responses), you performed a sequence of behaviors that was positively reinforced (i.e., you found the shortest distance to the finish line). And now that you’ve learned how to drive this course, you will perform that same sequence of driving behaviors (just as the rat presses on the lever) to receive your reward of a faster finish.
Operant conditioning research studies how the effects of a behavior influence the probability that it will occur again. For example, the effects of the rat’s lever-pressing behavior (i.e., receiving a food pellet) influences the probability that it will keep pressing the lever. For, according to Thorndike’s law of effect, when a behavior has a positive (satisfying) effect or consequence, it is likely to be repeated in the future. However, when a behavior has a negative (painful/annoying) consequence, it is less likely to be repeated in the future. Effects that increase behaviors are referred to as reinforcers, and effects that decrease them are referred to as punishers.
An everyday example that helps to illustrate operant conditioning is striving for a good grade in class—which could be considered a reward for students (i.e., it produces a positive emotional response). In order to get that reward (similar to the rat learning to press the lever), the student needs to modify his/her behavior. For example, the student may learn that speaking up in class gets him/her participation points (a reinforcer), so the student speaks up repeatedly. However, the student also learns that s/he shouldn’t speak up about just anything; talking about topics unrelated to school actually costs points. Therefore, through the student’s freely chosen behaviors, s/he learns which behaviors are reinforced and which are punished.
An important distinction of operant conditioning is that it provides a method for studying how consequences influence “voluntary” behavior. The rat’s decision to press the lever is voluntary, in the sense that the rat is free to make and repeat that response whenever it wants. Classical conditioning, on the other hand, is just the opposite—depending instead on “involuntary” behavior (e.g., the dog doesn’t choose to drool; it just does). So, whereas the rat must actively participate and perform some kind of behavior to attain its reward, the dog in Pavlov’s experiment is a passive participant. One of the lessons of operant conditioning research, then, is that voluntary behavior is strongly influenced by its consequences.
The illustration on the left summarizes the basic elements of classical and instrumental conditioning. The two types of learning differ in many ways. However, modern thinkers often emphasize the fact that they differ—as illustrated here—in what is learned. In classical conditioning, the animal behaves as if it has learned to associate a stimulus with a significant event. In operant conditioning, the animal behaves as if it has learned to associate a behavior with a significant event. Another difference is that the response in the classical situation (e.g., salivation) is elicited by a stimulus that comes before it, whereas the response in the operant case is not elicited by any particular stimulus. Instead, operant responses are said to be emitted. The word “emitted” further conveys the idea that operant behaviors are essentially voluntary in nature.
Understanding classical and operant conditioning provides psychologists with many tools for understanding learning and behavior in the world outside the lab. This is in part because the two types of learning occur continuously throughout our lives. It has been said that “much like the laws of gravity, the laws of learning are always in effect” (Spreat & Spreat, 1982).
Useful Things to Know about Classical Conditioning
Classical Conditioning Has Many Effects on Behavior
A classical CS (e.g., the bell) does not merely elicit a simple, unitary reflex. Pavlov emphasized salivation because that was the only response he measured. But his bell almost certainly elicited a whole system of responses that functioned to get the organism ready for the upcoming US (food) (see Timberlake, 2001). For example, in addition to salivation, CSs (such as the bell) that signal that food is near also elicit the secretion of gastric acid, pancreatic enzymes, and insulin (which gets blood glucose into cells). All of these responses prepare the body for digestion. Additionally, the CS elicits approach behavior and a state of excitement. And presenting a CS for food can also cause animals whose stomachs are full to eat more food if it is available. In fact, food CSs are so prevalent in modern society that humans are likewise inclined to eat or feel hungry in response to cues associated with food, such as the sound of a bag of potato chips opening, the sight of a well-known logo (e.g., Coca-Cola), or the feel of the couch in front of the television.
Classical conditioning is also involved in other aspects of eating. Flavors associated with certain nutrients (such as sugar or fat) can become preferred without arousing any awareness of the pairing. For example, protein is a US that your body automatically craves more of once you start to consume it (UR): since proteins are highly concentrated in meat, the flavor of meat becomes a CS (or cue, that proteins are on the way), which perpetuates the cycle of craving for yet more meat (this automatic bodily reaction is now a CR).
In a similar way, flavors associated with stomach pain or illness become avoided and disliked. For example, a person who gets sick after drinking too much tequila may acquire a profound dislike of the taste and odor of tequila—a phenomenon called taste aversion conditioning. The fact that flavors are often associated with so many consequences of eating is important for animals (including rats and humans) that are frequently exposed to new foods. And it is clinically relevant. For example, drugs used in chemotherapy often make cancer patients sick. As a consequence, patients often acquire aversions to foods eaten just before treatment, or even aversions to such things as the waiting room of the chemotherapy clinic itself (see Bernstein, 1991; Scalera & Bavieri, 2009).
Classical conditioning occurs with a variety of significant events. If an experimenter sounds a tone just before applying a mild shock to a rat’s feet, the tone will elicit fear or anxiety after one or two pairings. Similar fear conditioning plays a role in creating many anxiety disorders in humans, such as phobias and panic disorders, where people associate cues (such as closed spaces, or a shopping mall) with panic or other emotional trauma (see Mineka & Zinbarg, 2006). Here, rather than a physical response (like drooling), the CS triggers an emotion.
Another interesting effect of classical conditioning can occur when we ingest drugs. That is, when a drug is taken, it can be associated with the cues that are present at the same time (e.g., rooms, odors, drug paraphernalia). In this regard, if someone associates a particular smell with the sensation induced by the drug, whenever that person smells the same odor afterward, it may cue responses (physical and/or emotional) related to taking the drug itself. But drug cues have an even more interesting property: They elicit responses that often “compensate” for the upcoming effect of the drug (see Siegel, 1989). For example, morphine itself suppresses pain; however, if someone is used to taking morphine, a cue that signals the “drug is coming soon” can actually make the person more sensitive to pain. Because the person knows a pain suppressant will soon be administered, the body becomes more sensitive, anticipating that “the drug will soon take care of it.” Remarkably, such conditioned compensatory responses in turn decrease the impact of the drug on the body—because the body has become more sensitive to pain.
This conditioned compensatory response has many implications. For instance, a drug user will be most “tolerant” to the drug in the presence of cues that have been associated with it (because such cues elicit compensatory responses). As a result, overdose is usually not due to an increase in dosage, but to taking the drug in a new place without the familiar cues—which would have otherwise allowed the user to tolerate the drug (see Siegel, Hinson, Krank, & McCully, 1982). Conditioned compensatory responses (which include heightened pain sensitivity and decreased body temperature, among others) might also cause discomfort, thus motivating the drug user to continue usage of the drug to reduce them. This is one of several ways classical conditioning might be a factor in drug addiction and dependence.
A final effect of classical cues is that they motivate ongoing operant behavior (see Balleine, 2005). For example, if a rat has learned via operant conditioning that pressing a lever will give it a drug, in the presence of cues that signal the “drug is coming soon” (like the sound of the lever squeaking), the rat will work harder to press the lever than if those cues weren’t present (i.e., there is no squeaking lever sound). Similarly, in the presence of food-associated cues (e.g., smells), a rat (or an overeater) will work harder for food. And finally, even in the presence of negative cues (like something that signals fear), a rat, a human, or any other organism will work harder to avoid those situations that might lead to trauma. Classical CSs thus have many effects that can contribute to significant behavioral phenomena.
The Learning Process
As mentioned earlier, classical conditioning provides a method for studying basic learning processes. Somewhat counterintuitively, though, studies show that pairing a CS and a US together is not sufficient for an association to be learned between them. Consider an effect called blocking (see Kamin, 1969). In this effect, an animal first learns to associate one CS—call it stimulus A—with a US. In the illustration above, the sound of a bell (stimulus A) is paired with the presentation of food. Once this association is learned, in a second phase, a second stimulus—stimulus B—is presented alongside stimulus A, such that the two stimuli are paired with the US together. In the illustration, a light is added and turned on at the same time the bell is rung. However, because the animal has already learned the association between stimulus A (the bell) and the food, the animal doesn’t learn an association between stimulus B (the light) and the food. That is, the conditioned response only occurs during the presentation of stimulus A, because the earlier conditioning of A “blocks” the conditioning of B when B is added to A. The reason? Stimulus A already predicts the US, so the US is not surprising when it occurs with Stimulus B.
Learning depends on such a surprise, or a discrepancy between what occurs on a conditioning trial and what is already predicted by cues that are present on the trial. To learn something through classical conditioning, there must first be some prediction error, or the chance that a conditioned stimulus won’t lead to the expected outcome. With the example of the bell and the light, because the bell always leads to the reward of food, there’s no “prediction error” that the addition of the light helps to correct. However, if the researcher suddenly requires that the bell and the light both occur in order to receive the food, the bell alone will produce a prediction error that the animal has to learn.
Blocking and other related effects indicate that the learning process tends to take in the most valid predictors of significant events and ignore the less useful ones. This is common in the real world. For example, imagine that your supermarket puts big star-shaped stickers on products that are on sale. Quickly, you learn that items with the big star-shaped stickers are cheaper. However, imagine you go into a similar supermarket that not only uses these stickers, but also uses bright orange price tags to denote a discount. Because of blocking (i.e., you already know that the star-shaped stickers indicate a discount), you don’t have to learn the color system, too. The star-shaped stickers tell you everything you need to know (i.e. there’s no prediction error for the discount), and thus the color system is irrelevant.
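The prediction-error account of blocking described above was formalized in the Rescorla-Wagner model (Rescorla & Wagner, 1972, cited later in this module). The sketch below is a minimal, illustrative simulation of Kamin's blocking procedure under that model; the learning-rate value, trial counts, and stimulus names are assumptions chosen for illustration, not taken from the text:

```python
# Minimal Rescorla-Wagner simulation of blocking (after Kamin, 1969).
# On each trial, every stimulus present gains associative strength in
# proportion to the prediction error: the outcome (lam; 1 = US present)
# minus the total strength of all stimuli present. Parameter values here
# are illustrative, not empirical estimates.

def train(trials, V, alpha=0.3):
    for stimuli, lam in trials:
        v_total = sum(V[s] for s in stimuli)
        error = lam - v_total            # prediction error on this trial
        for s in stimuli:
            V[s] += alpha * error        # each present cue shares the update
    return V

V = {"bell": 0.0, "light": 0.0}
# Phase 1: bell alone is paired with food (20 trials).
train([(["bell"], 1.0)] * 20, V)
# Phase 2: bell and light together are paired with food (20 trials).
train([(["bell", "light"], 1.0)] * 20, V)
print(V)  # the light gains almost no strength: it is "blocked"
```

By the start of phase 2 the bell already predicts the food almost perfectly, so the prediction error is near zero and there is nothing left to drive learning about the light, which is the model's explanation of blocking.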
Classical conditioning is strongest if the CS and US are intense or salient. It is also best if the CS and US are relatively new and the organism hasn’t been frequently exposed to them before. And it is especially strong if the organism’s biology has prepared it to associate a particular CS and US. For example, rats and humans are naturally inclined to associate an illness with a flavor, rather than with a light or tone. Because foods are most commonly experienced by taste, if there is a particular food that makes us ill, associating the flavor (rather than the appearance—which may be similar to other foods) with the illness will more greatly ensure we avoid that food in the future, and thus avoid getting sick. This sorting tendency, which is set up by evolution, is called preparedness.
There are many factors that affect the strength of classical conditioning, and these have been the subject of much research and theory (see Rescorla & Wagner, 1972; Pearce & Bouton, 2001). Behavioral neuroscientists have also used classical conditioning to investigate many of the basic brain processes that are involved in learning (see Fanselow & Poulos, 2005; Thompson & Steinmetz, 2009).
Erasing Classical Learning
After conditioning, the response to the CS can be eliminated if the CS is presented repeatedly without the US. This effect is called extinction, and the response is said to become “extinguished.” For example, if Pavlov kept ringing the bell but never gave the dog any food afterward, eventually the dog’s CR (drooling) would no longer happen when it heard the CS (the bell), because the bell would no longer be a predictor of food. Extinction is important for many reasons. For one thing, it is the basis for many therapies that clinical psychologists use to eliminate maladaptive and unwanted behaviors. Take the example of a person who has a debilitating fear of spiders: one approach might include systematic exposure to spiders. Whereas, initially the person has a CR (e.g., extreme fear) every time s/he sees the CS (e.g., the spider), after repeatedly being shown pictures of spiders in neutral conditions, pretty soon the CS no longer predicts the CR (i.e., the person doesn’t have the fear reaction when seeing spiders, having learned that spiders no longer serve as a “cue” for that fear). Here, repeated exposure to spiders without an aversive consequence causes extinction.
Psychologists must accept one important fact about extinction, however: it does not necessarily destroy the original learning (see Bouton, 2004). For example, imagine you strongly associate the smell of chalkboards with the agony of middle school detention. Now imagine that, after years of encountering chalkboards, the smell of them no longer recalls the agony of detention (an example of extinction). However, one day, after entering a new building for the first time, you suddenly catch a whiff of a chalkboard and WHAM!, the agony of detention returns. This is called spontaneous recovery: following a lapse in exposure to the CS after extinction has occurred, sometimes re-exposure to the CS (e.g., the smell of chalkboards) can evoke the CR again (e.g., the agony of detention).
Another related phenomenon is the renewal effect: After extinction, if the CS is tested in a new context, such as a different room or location, the CR can also return. In the chalkboard example, the action of entering a new building—where you don’t expect to smell chalkboards—suddenly renews the sensations associated with detention. These effects have been interpreted to suggest that extinction inhibits rather than erases the learned behavior, and this inhibition is mainly expressed in the context in which it is learned (see “context” in the Key Vocabulary section below).
This does not mean that extinction is a bad treatment for behavior disorders. Instead, clinicians can increase its effectiveness by using basic research on learning to help defeat these relapse effects (see Craske et al., 2008). For example, conducting extinction therapies in contexts where patients might be most vulnerable to relapsing (e.g., at work), might be a good strategy for enhancing the therapy’s success.
Useful Things to Know about Instrumental Conditioning
Most of the things that affect the strength of classical conditioning also affect the strength of instrumental learning—whereby we learn to associate our actions with their outcomes. As noted earlier, the “bigger” the reinforcer (or punisher), the stronger the learning. And, if an instrumental behavior is no longer reinforced, it will also be extinguished. Most of the rules of associative learning that apply to classical conditioning also apply to instrumental learning, but other facts about instrumental learning are also worth knowing.
Instrumental Responses Come Under Stimulus Control
As you know, the classic operant response in the laboratory is lever-pressing in rats, reinforced by food. However, things can be arranged so that lever-pressing only produces pellets when a particular stimulus is present. For example, lever-pressing can be reinforced only when a light in the Skinner box is turned on; when the light is off, no food is released from lever-pressing. The rat soon learns to discriminate between the light-on and light-off conditions, and presses the lever only in the presence of the light (responses in light-off are extinguished). In everyday life, think about waiting in the turn lane at a traffic light. Although you know that green means go, only when you have the green arrow do you turn. In this regard, the operant behavior is now said to be under stimulus control. And, as is the case with the traffic light, in the real world, stimulus control is probably the rule.
The stimulus controlling the operant response is called a discriminative stimulus. It can be associated directly with the response, or the reinforcer (see below). However, it usually does not elicit the response the way a classical CS does. Instead, it is said to “set the occasion for” the operant response. For example, a canvas put in front of an artist does not elicit painting behavior or compel her to paint. It allows, or sets the occasion for, painting to occur.
Stimulus-control techniques are widely used in the laboratory to study perception and other psychological processes in animals. For example, the rat would not be able to respond appropriately to light-on and light-off conditions if it could not see the light. Following this logic, experiments using stimulus-control methods have tested how well animals see colors, hear ultrasounds, and detect magnetic fields. That is, researchers arrange for a response the animal has already mastered (such as pressing the lever) to be reinforced only when the stimulus in question is present. In this way, the researchers can test whether the animals learn to press the lever only when, for example, an ultrasound is played.
These methods can also be used to study “higher” cognitive processes. For example, pigeons can learn to peck at different buttons in a Skinner box when pictures of flowers, cars, chairs, or people are shown on a miniature TV screen (see Wasserman, 1995). Pecking button 1 (and no other) is reinforced in the presence of a flower image, button 2 in the presence of a chair image, and so on. Pigeons can learn the discrimination readily, and, under the right conditions, will even peck the correct buttons associated with pictures of new flowers, cars, chairs, and people they have never seen before. The birds have learned to categorize the sets of stimuli. Stimulus-control methods can be used to study how such categorization is learned.
Operant Conditioning Involves Choice
Another thing to know about operant conditioning is that the response always requires choosing one behavior over others. The student who goes to the bar on Thursday night chooses to drink instead of staying at home and studying. The rat chooses to press the lever instead of sleeping or scratching its ear in the back of the box. The alternative behaviors are each associated with their own reinforcers. And the tendency to perform a particular action depends on both the reinforcers earned for it and the reinforcers earned for its alternatives.
To investigate this idea, choice has been studied in the Skinner box by making two levers available for the rat (or two buttons available for the pigeon), each of which has its own reinforcement or payoff rate. A thorough study of choice in situations like this has led to a rule called the quantitative law of effect (see Herrnstein, 1970), which can be understood without going into quantitative detail: The law acknowledges the fact that the effects of reinforcing one behavior depend crucially on how much reinforcement is earned for the behavior’s alternatives. For example, if a pigeon learns that pecking one light yields two food pellets, whereas pecking the other yields only one, the pigeon will peck only the first light. However, what happens if the first light is more strenuous to reach than the second one? Will the cost of energy outweigh the bonus of food? Or will the extra food be worth the work? In general, a given reinforcer will be less reinforcing if there are many alternative reinforcers in the environment. For this reason, alcohol, sex, or drugs may be less powerful reinforcers if the person’s environment is full of other sources of reinforcement, such as achievement at work or love from family members.
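The quantitative detail that Herrnstein's law sets aside can be sketched with his hyperbolic equation, B = kR / (R + Re): the rate of a behavior B rises with the reinforcement rate R it earns, but falls as reinforcement for all alternative behaviors (Re) rises. The function below is a minimal illustrative sketch; the function name, the value of k, and the reinforcement rates are hypothetical, not values from Herrnstein's experiments:

```python
def response_rate(r_target, r_alternatives, k=100.0):
    """Herrnstein's quantitative law of effect (hyperbolic form).

    B = k * R / (R + Re), where R is reinforcement earned for the
    target behavior, Re is reinforcement earned for all alternative
    behaviors, and k is the maximum possible response rate.
    All parameter values here are illustrative.
    """
    return k * r_target / (r_target + r_alternatives)

# The same reinforcement rate supports less behavior in an
# environment rich in alternative reinforcers:
rich_environment = response_rate(r_target=20, r_alternatives=80)   # -> 20.0
sparse_environment = response_rate(r_target=20, r_alternatives=5)  # -> 80.0
assert rich_environment < sparse_environment
```

This captures the module's point directly: alcohol or drugs are predicted to control less behavior when `r_alternatives` (work, family, other reinforcers) is large.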
Cognition in Instrumental Learning
Modern research also indicates that reinforcers do more than merely strengthen or “stamp in” the behaviors they are a consequence of, as was Thorndike’s original view. Instead, animals learn about the specific consequences of each behavior, and will perform a behavior depending on how much they currently want—or “value”—its consequence.
This idea is best illustrated by a phenomenon called the reinforcer devaluation effect (see Colwill & Rescorla, 1986). A rat is first trained to perform two instrumental actions (e.g., pressing a lever on the left and a lever on the right), each paired with a different reinforcer (e.g., a sweet sucrose solution and a food pellet). At the end of this training, the rat tends to press both levers, alternating between the sucrose solution and the food pellet. In a second phase, one of the reinforcers (e.g., the sucrose) is then separately paired with illness. This conditions a taste aversion to the sucrose. In a final test, the rat is returned to the Skinner box and allowed to press either lever freely. No reinforcers are presented during this test (i.e., no sucrose or food comes from pressing the levers), so behavior during testing can only result from the rat’s memory of what it has learned earlier. Importantly here, the rat chooses not to perform the response that once produced the reinforcer that it now has an aversion to (e.g., it won’t press the sucrose lever). This means that the rat has learned and remembered the reinforcer associated with each response, and can combine that knowledge with the knowledge that the reinforcer is now “bad.” Reinforcers do not merely stamp in responses; the animal learns much more than that. The behavior is said to be “goal-directed” (see Dickinson & Balleine, 1994), because it is influenced by the current value of its associated goal (i.e., how much the rat wants/doesn’t want the reinforcer).
Things can get more complicated, however, if the rat performs the instrumental actions frequently and repeatedly. That is, if the rat has spent many months learning the value of pressing each of the levers, the act of pressing them becomes automatic and routine. And here, this once goal-directed action (i.e., the rat pressing the lever for the goal of getting sucrose/food) can become a habit. Thus, if a rat spends many months performing the lever-pressing behavior (turning such behavior into a habit), even when sucrose is again paired with illness, the rat will continue to press that lever (see Holland, 2004). After all the practice, the instrumental response (pressing the lever) is no longer sensitive to reinforcer devaluation. The rat continues to respond automatically, regardless of the fact that the sucrose from this lever makes it sick.
Habits are very common in human experience, and can be useful. You do not need to relearn each day how to make your coffee in the morning or how to brush your teeth. Instrumental behaviors can eventually become habitual, letting us get the job done while being free to think about other things.
Putting Classical and Instrumental Conditioning Together
Classical and operant conditioning are usually studied separately. But outside of the laboratory they almost always occur at the same time. For example, a person who is reinforced for drinking alcohol or eating excessively learns these behaviors in the presence of certain stimuli—a pub, a set of friends, a restaurant, or possibly the couch in front of the TV. These stimuli are also available for association with the reinforcer. In this way, classical and operant conditioning are always intertwined.
The figure below summarizes this idea, and helps review what we have discussed in this module. Generally speaking, any reinforced or punished operant response (R) is paired with an outcome (O) in the presence of some stimulus or set of stimuli (S).
The figure illustrates the types of associations that can be learned in this very general scenario. For one thing, the organism will learn to associate the response and the outcome (R – O). This is instrumental conditioning. The learning process here is probably similar to classical conditioning, with all its emphasis on surprise and prediction error. And, as we discussed while considering the reinforcer devaluation effect, once R – O is learned, the organism will be ready to perform the response if the outcome is desired or valued. The value of the reinforcer can also be influenced by other reinforcers earned for other behaviors in the situation. These factors are at the heart of instrumental learning.
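The surprise-and-prediction-error learning process mentioned above is often formalized with the Rescorla-Wagner model, in which the change in associative strength on each trial is proportional to the prediction error (λ − V): large when the outcome is surprising, shrinking as the stimulus comes to predict it. A minimal sketch, with an illustrative learning rate rather than a fitted parameter:

```python
def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """Track associative strength V across conditioning trials.

    On each trial, V changes in proportion to the prediction error
    (lam - V), so learning is fastest when the outcome is most
    surprising and slows as V approaches lam. The learning rate
    alpha is an illustrative value.
    """
    v = 0.0
    history = []
    for _ in range(trials):
        v += alpha * (lam - v)  # prediction-error-driven update
        history.append(v)
    return history

strengths = rescorla_wagner(10)
# Associative strength grows toward its asymptote, and the size of
# each trial's increment (the prediction error) steadily declines:
assert all(b > a for a, b in zip(strengths, strengths[1:]))
```

The same update rule illustrates blocking: if an established stimulus already drives V near λ, the prediction error on compound trials is close to zero, so a newly added stimulus gains little strength.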
Second, the organism can also learn to associate the stimulus with the reinforcing outcome (S – O). This is the classical conditioning component, and as we have seen, it can have many consequences on behavior. For one thing, the stimulus will come to evoke a system of responses that help the organism prepare for the reinforcer (not shown in the figure): The drinker may undergo changes in body temperature; the eater may salivate and have an increase in insulin secretion. In addition, the stimulus will evoke approach (if the outcome is positive) or retreat (if the outcome is negative). Presenting the stimulus will also prompt the instrumental response.
The third association in the diagram is the one between the stimulus and the response (S – R). As discussed earlier, after a lot of practice, the stimulus may begin to elicit the response directly. This is habit learning, whereby the response occurs relatively automatically, without much mental processing of the relation between the action and the outcome and the outcome’s current value.
The final link in the figure is between the stimulus and the response-outcome association [S – (R – O)]. More than just entering into a simple association with the R or the O, the stimulus can signal that the R – O relationship is now in effect. This is what we mean when we say that the stimulus can “set the occasion” for the operant response: It sets the occasion for the response-reinforcer relationship. Through this mechanism, the painter might begin to paint when given the right tools and a blank canvas: the canvas signals that the behavior of painting will now be reinforced by positive consequences.
The figure provides a framework that you can use to understand almost any learned behavior you observe in yourself, your family, or your friends. If you would like to understand it more deeply, consider taking a course on learning in the future, which will give you a fuller appreciation of how classical learning, instrumental learning, habit learning, and occasion setting actually work and interact.
Observational Learning
Not all forms of learning are accounted for entirely by classical and operant conditioning. Imagine a child walking up to a group of children playing a game on the playground. The game looks fun, but it is new and unfamiliar. Rather than joining the game immediately, the child opts to sit back and watch the other children play a round or two. Observing the others, the child takes note of the ways in which they behave while playing the game. By watching the behavior of the other kids, the child can figure out the rules of the game and even some strategies for doing well at the game. This is called observational learning.
Observational learning is a component of Albert Bandura’s Social Learning Theory (Bandura, 1977), which posits that individuals can learn novel responses via observation of key others’ behaviors. Observational learning does not necessarily require reinforcement, but instead hinges on the presence of others, referred to as social models. Social models are typically of higher status or authority compared to the observer, examples of which include parents, teachers, and police officers. In the example above, the children who already know how to play the game could be thought of as being authorities—and are therefore social models—even though they are the same age as the observer. By observing how the social models behave, an individual is able to learn how to act in a certain situation. Other examples of observational learning might include a child learning to place her napkin in her lap by watching her parents at the dinner table, or a customer learning where to find the ketchup and mustard after observing other customers at a hot dog stand.
Bandura theorizes that the observational learning process consists of four parts. The first is attention—as, quite simply, one must pay attention to what s/he is observing in order to learn. The second part is retention: to learn one must be able to retain the behavior s/he is observing in memory. The third part of observational learning, initiation, acknowledges that the learner must be able to execute (or initiate) the learned behavior. Lastly, the observer must possess the motivation to engage in observational learning. In our vignette, the child must want to learn how to play the game in order to properly engage in observational learning.
Researchers have conducted countless experiments designed to explore observational learning, the most famous of which is Albert Bandura’s “Bobo doll experiment.”
In this experiment (Bandura, Ross, & Ross, 1961), Bandura had children individually observe an adult social model interact with a clown doll (“Bobo”). For one group of children, the adult interacted aggressively with Bobo: punching it, kicking it, throwing it, and even hitting it in the face with a toy mallet. Another group of children watched the adult interact with other toys, displaying no aggression toward Bobo. In both instances, the adult then left and the children were allowed to interact with Bobo on their own. Bandura found that children exposed to the aggressive social model were significantly more likely to behave aggressively toward Bobo, hitting and kicking him, compared to those exposed to the non-aggressive model. The researchers concluded that the children in the aggressive group used their observations of the adult social model’s behavior to determine that aggressive behavior toward Bobo was acceptable.
While reinforcement was not required to elicit the children’s behavior in Bandura’s first experiment, it is important to acknowledge that consequences do play a role within observational learning. A future adaptation of this study (Bandura, Ross, & Ross, 1963) demonstrated that children in the aggression group showed less aggressive behavior if they witnessed the adult model receive punishment for aggressing against Bobo. Bandura referred to this process as vicarious reinforcement, as the children did not experience the reinforcement or punishment directly, yet were still influenced by observing it.
Conclusion
We have covered three primary explanations for how we learn to behave and interact with the world around us. Considering your own experiences, how well do these theories apply to you? Maybe when reflecting on your personal sense of fashion, you realize that you tend to select clothes others have complimented you on (operant conditioning). Or maybe, thinking back on a new restaurant you tried recently, you realize you chose it because its commercials play happy music (classical conditioning). Or maybe you are now always on time with your assignments, because you saw how others were punished when they were late (observational learning). Regardless of the activity, behavior, or response, there’s a good chance your “decision” to do it can be explained based on one of the theories presented in this module.
Outside Resources
Article: Rescorla, R. A. (1988). Pavlovian conditioning: It’s not what you think it is. American Psychologist, 43, 151–160.
Book: Bouton, M. E. (2007). Learning and behavior: A contemporary synthesis. Sunderland, MA: Sinauer Associates.
Book: Bouton, M. E. (2009). Learning theory. In B. J. Sadock, V. A. Sadock, & P. Ruiz (Eds.), Kaplan & Sadock’s comprehensive textbook of psychiatry (9th ed., Vol. 1, pp. 647–658). New York, NY: Lippincott Williams & Wilkins.
Book: Domjan, M. (2010). The principles of learning and behavior (6th ed.). Belmont, CA: Wadsworth.
Video: Albert Bandura discusses the Bobo Doll Experiment.
Discussion Questions
1. Describe three examples of Pavlovian (classical) conditioning that you have seen in your own behavior, or that of your friends or family, in the past few days.
2. Describe three examples of instrumental (operant) conditioning that you have seen in your own behavior, or that of your friends or family, in the past few days.
3. Drugs can be potent reinforcers. Discuss how Pavlovian conditioning and instrumental conditioning can work together to influence drug taking.
4. In the modern world, processed foods are highly available and have been engineered to be highly palatable and reinforcing. Discuss how Pavlovian and instrumental conditioning can work together to explain why people often eat too much.
5. How does blocking challenge the idea that pairings of a CS and US are sufficient to cause Pavlovian conditioning? What is important in creating Pavlovian learning?
6. How does the reinforcer devaluation effect challenge the idea that reinforcers merely “stamp in” the operant response? What does the effect tell us that animals actually learn in operant conditioning?
7. With regards to social learning do you think people learn violence from observing violence in movies? Why or why not?
8. What do you think you have learned through social learning? Who are your social models?
Vocabulary
Blocking
In classical conditioning, the finding that no conditioning occurs to a stimulus if it is combined with a previously conditioned stimulus during conditioning trials. Suggests that information, surprise value, or prediction error is important in conditioning.
Categorize
To sort or arrange different items into classes or categories.
Classical conditioning
The procedure in which an initially neutral stimulus (the conditioned stimulus, or CS) is paired with an unconditioned stimulus (or US). The result is that the conditioned stimulus begins to elicit a conditioned response (CR). Classical conditioning is nowadays considered important as both a behavioral phenomenon and as a method to study simple associative learning. Same as Pavlovian conditioning.
Conditioned compensatory response
In classical conditioning, a conditioned response that opposes, rather than is the same as, the unconditioned response. It functions to reduce the strength of the unconditioned response. Often seen in conditioning when drugs are used as unconditioned stimuli.
Conditioned response (CR)
The response that is elicited by the conditioned stimulus after classical conditioning has taken place.
Conditioned stimulus (CS)
An initially neutral stimulus (like a bell, light, or tone) that elicits a conditioned response after it has been associated with an unconditioned stimulus.
Context
Stimuli that are in the background whenever learning occurs. For instance, the Skinner box or room in which learning takes place is the classic example of a context. However, “context” can also be provided by internal stimuli, such as the sensory effects of drugs (e.g., being under the influence of alcohol has stimulus properties that provide a context) and mood states (e.g., being happy or sad). It can also be provided by a specific period in time—the passage of time is sometimes said to change the “temporal context.”
Discriminative stimulus
In operant conditioning, a stimulus that signals whether the response will be reinforced. It is said to “set the occasion” for the operant response.
Extinction
Decrease in the strength of a learned behavior that occurs when the conditioned stimulus is presented without the unconditioned stimulus (in classical conditioning) or when the behavior is no longer reinforced (in instrumental conditioning). The term describes both the procedure (the US or reinforcer is no longer presented) as well as the result of the procedure (the learned response declines). Behaviors that have been reduced in strength through extinction are said to be “extinguished.”
Fear conditioning
A type of classical or Pavlovian conditioning in which the conditioned stimulus (CS) is associated with an aversive unconditioned stimulus (US), such as a foot shock. As a consequence of learning, the CS comes to evoke fear. The phenomenon is thought to be involved in the development of anxiety disorders in humans.
Goal-directed behavior
Instrumental behavior that is influenced by the animal’s knowledge of the association between the behavior and its consequence and the current value of the consequence. Sensitive to the reinforcer devaluation effect.
Habit
Instrumental behavior that occurs automatically in the presence of a stimulus and is no longer influenced by the animal’s knowledge of the value of the reinforcer. Insensitive to the reinforcer devaluation effect.
Instrumental conditioning
Process in which animals learn about the relationship between their behaviors and their consequences. Also known as operant conditioning.
Law of effect
The idea that instrumental or operant responses are influenced by their effects. Responses that are followed by a pleasant state of affairs will be strengthened and those that are followed by discomfort will be weakened. Nowadays, the term refers to the idea that operant or instrumental behaviors are lawfully controlled by their consequences.
Observational learning
Learning by observing the behavior of others.
Operant
A behavior that is controlled by its consequences. The simplest example is the rat’s lever-pressing, which is controlled by the presentation of the reinforcer.
Operant conditioning
See instrumental conditioning.
Pavlovian conditioning
See classical conditioning.
Prediction error
When the outcome of a conditioning trial is different from that which is predicted by the conditioned stimuli that are present on the trial (i.e., when the US is surprising). Prediction error is necessary to create Pavlovian conditioning (and associative learning generally). As learning occurs over repeated conditioning trials, the conditioned stimulus increasingly predicts the unconditioned stimulus, and prediction error declines. Conditioning works to correct or reduce prediction error.
Preparedness
The idea that an organism’s evolutionary history can make it easy to learn a particular association. Because of preparedness, you are more likely to associate the taste of tequila, and not the circumstances surrounding drinking it, with getting sick. Similarly, humans are more likely to associate images of spiders and snakes than flowers and mushrooms with aversive outcomes like shocks.
Punisher
A stimulus that decreases the strength of an operant behavior when it is made a consequence of the behavior.
Quantitative law of effect
A mathematical rule that states that the effectiveness of a reinforcer at strengthening an operant response depends on the amount of reinforcement earned for all alternative behaviors. A reinforcer is less effective if there is a lot of reinforcement in the environment for other behaviors.
Reinforcer
Any consequence of a behavior that strengthens the behavior or increases the likelihood that it will be performed again.
Reinforcer devaluation effect
The finding that an animal will stop performing an instrumental response that once led to a reinforcer if the reinforcer is separately made aversive or undesirable.
Renewal effect
Recovery of an extinguished response that occurs when the context is changed after extinction. Especially strong when the change of context involves return to the context in which conditioning originally occurred. Can occur after extinction in either classical or instrumental conditioning.
Social Learning Theory
The theory that people can learn new responses and behaviors by observing the behavior of others.
Social models
Authorities that are the targets for observation and who model behaviors.
Spontaneous recovery
Recovery of an extinguished response that occurs with the passage of time after extinction. Can occur after extinction in either classical or instrumental conditioning.
Stimulus control
When an operant behavior is controlled by a stimulus that precedes it.
Taste aversion learning
The phenomenon in which a taste is paired with sickness, and this causes the organism to reject—and dislike—that taste in the future.
Unconditioned response (UR)
In classical conditioning, an innate response that is elicited by a stimulus before (or in the absence of) conditioning.
Unconditioned stimulus (US)
In classical conditioning, the stimulus that elicits the response before conditioning occurs.
Vicarious reinforcement
Learning that occurs by observing the reinforcement or punishment of another person.
By Nicole Dudukovic and Brice Kuhl
New York University
This module explores the causes of everyday forgetting and considers pathological forgetting in the context of amnesia. Forgetting is viewed as an adaptive process that allows us to be efficient in terms of the information we retain.
learning objectives
• Identify five reasons we forget and give examples of each.
• Describe how forgetting can be viewed as an adaptive process.
• Explain the difference between anterograde and retrograde amnesia.
Introduction
Chances are that you have experienced memory lapses and been frustrated by them. You may have had trouble remembering the definition of a key term on an exam or found yourself unable to recall the name of an actor from one of your favorite TV shows. Maybe you forgot to call your aunt on her birthday or you routinely forget where you put your cell phone. Oftentimes, the bit of information we are searching for comes back to us, but sometimes it does not. Clearly, forgetting seems to be a natural part of life. Why do we forget? And is forgetting always a bad thing?
Causes of Forgetting
One very common and obvious reason why you cannot remember a piece of information is because you did not learn it in the first place. If you fail to encode information into memory, you are not going to remember it later on. Usually, encoding failures occur because we are distracted or are not paying attention to specific details. For example, people have a lot of trouble recognizing an actual penny out of a set of drawings of very similar pennies, or lures, even though most of us have had a lifetime of experience handling pennies (Nickerson & Adams, 1979). However, few of us have studied the features of a penny in great detail, and since we have not attended to those details, we fail to recognize them later. Similarly, it has been well documented that distraction during learning impairs later memory (e.g., Craik, Govoni, Naveh-Benjamin, & Anderson, 1996). Most of the time this is not problematic, but in certain situations, such as when you are studying for an exam, failures to encode due to distraction can have serious repercussions.
Another proposed reason why we forget is that memories fade, or decay, over time. It has been known since the pioneering work of Hermann Ebbinghaus (1885/1913) that as time passes, memories get harder to recall. Ebbinghaus created more than 2,000 nonsense syllables, such as dax, bap, and rif, and studied his own memory for them, learning as many as 420 lists of 16 nonsense syllables for one experiment. He found that his memories diminished as time passed, with the most forgetting happening early on after learning. His observations and subsequent research suggested that if we do not rehearse a memory and the neural representation of that memory is not reactivated over a long period of time, the memory representation may disappear entirely or fade to the point where it can no longer be accessed. As you might imagine, it is hard to definitively prove that a memory has decayed as opposed to it being inaccessible for another reason. Critics argued that forgetting must be due to processes other than simply the passage of time, since disuse of a memory does not always guarantee forgetting (McGeoch, 1932). More recently, some memory theorists have proposed that recent memory traces may be degraded or disrupted by new experiences (Wixted, 2004). Memory traces need to be consolidated, or transferred from the hippocampus to more durable representations in the cortex, in order for them to last (McGaugh, 2000). When the consolidation process is interrupted by the encoding of other experiences, the memory trace for the original experience does not get fully developed and thus is forgotten.
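Ebbinghaus's observation that most forgetting happens soon after learning is commonly summarized with an exponential forgetting curve, R = e^(−t/s), where retention R falls with time t and s reflects memory strength. A minimal sketch, assuming a hypothetical decay constant rather than a value fitted to Ebbinghaus's data:

```python
import math

def retention(t_hours, s=1.84):
    """Ebbinghaus-style forgetting curve: retention decays roughly
    exponentially with time since learning. R = exp(-t/s); the
    strength constant s used here is illustrative, not a value
    fitted to Ebbinghaus's nonsense-syllable data.
    """
    return math.exp(-t_hours / s)

# Most forgetting happens early, after which the curve flattens:
early_drop = retention(0) - retention(1)    # loss in the first hour
late_drop = retention(24) - retention(25)   # loss a day later
assert early_drop > late_drop
```

The steep-then-flat shape is the key qualitative prediction: rehearsing or reactivating a memory effectively restarts the curve, which is one way to think about why spaced review slows forgetting.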
Both encoding failures and decay account for more permanent forms of forgetting, in which the memory trace does not exist, but forgetting may also occur when a memory exists yet we temporarily cannot access it. This type of forgetting may occur when we lack the appropriate retrieval cues for bringing the memory to mind. You have probably had the frustrating experience of forgetting your password for an online site. Usually, the password has not been permanently forgotten; instead, you just need the right reminder to remember what it is. For example, if your password was “pizza0525,” and you received the password hints “favorite food” and “Mom’s birthday,” you would easily be able to retrieve it. Retrieval hints can bring back to mind seemingly forgotten memories (Tulving & Pearlstone, 1966). One real-life illustration of the importance of retrieval cues comes from a study showing that whereas people have difficulty recalling the names of high school classmates years after graduation, they are easily able to recognize the names and match them to the appropriate faces (Bahrick, Bahrick, & Wittlinger, 1975). The names are powerful enough retrieval cues that they bring back the memories of the faces that went with them. The fact that the presence of the right retrieval cues is critical for remembering adds to the difficulty in proving that a memory is permanently forgotten as opposed to temporarily unavailable.
Retrieval failures can also occur because other memories are blocking or getting in the way of recalling the desired memory. This blocking is referred to as interference. For example, you may fail to remember the name of a town you visited with your family on summer vacation because the names of other towns you visited on that trip or on other trips come to mind instead. Those memories then prevent the desired memory from being retrieved. Interference is also relevant to the example of forgetting a password: passwords that we have used for other websites may come to mind and interfere with our ability to retrieve the desired password. Interference can be either proactive, in which old memories block the learning of new related memories, or retroactive, in which new memories block the retrieval of old related memories. For both types of interference, competition between memories seems to be key (Mensink & Raaijmakers, 1988). Your memory for a town you visited on vacation is unlikely to interfere with your ability to remember an Internet password, but it is likely to interfere with your ability to remember a different town’s name. Competition between memories can also lead to forgetting in a different way. Recalling a desired memory in the face of competition may result in the inhibition of related, competing memories (Levy & Anderson, 2002). You may have difficulty recalling the name of Kennebunkport, Maine, because other Maine towns, such as Bar Harbor, Winterport, and Camden, come to mind instead. However, if you are able to recall Kennebunkport despite strong competition from the other towns, this may actually change the competitive landscape, weakening memory for those other towns’ names, leading to forgetting of them instead.
Finally, some memories may be forgotten because we deliberately attempt to keep them out of mind. Over time, by actively trying not to remember an event, we can sometimes successfully keep the undesirable memory from being retrieved either by inhibiting the undesirable memory or generating diversionary thoughts (Anderson & Green, 2001). Imagine that you slipped and fell in your high school cafeteria during lunch time, and everyone at the surrounding tables laughed at you. You would likely wish to avoid thinking about that event and might try to prevent it from coming to mind. One way that you could accomplish this is by thinking of other, more positive, events that are associated with the cafeteria. Eventually, this memory may be suppressed to the point that it would only be retrieved with great difficulty (Hertel & Calcaterra, 2005).
Adaptive Forgetting
We have explored five different causes of forgetting. Together they can account for the day-to-day episodes of forgetting that each of us experience. Typically, we think of these episodes in a negative light and view forgetting as a memory failure. Is forgetting ever good? Most people would reason that forgetting that occurs in response to a deliberate attempt to keep an event out of mind is a good thing. No one wants to be constantly reminded of falling on their face in front of all of their friends. However, beyond that, it can be argued that forgetting is adaptive, allowing us to be efficient and hold onto only the most relevant memories (Bjork, 1989; Anderson & Milson, 1989). Shereshevsky, or “S,” the mnemonist studied by Alexander Luria (1968), was a man who almost never forgot. His memory appeared to be virtually limitless. He could memorize a table of 50 numbers in under 3 minutes and recall the numbers in rows, columns, or diagonals with ease. He could recall lists of words and passages that he had memorized over a decade before. Yet Shereshevsky found it difficult to function in his everyday life because he was constantly distracted by a flood of details and associations that sprung to mind. His case history suggests that remembering everything is not always a good thing. You may occasionally have trouble remembering where you parked your car, but imagine if every single former parking space came to mind every time you had to find your car. Sorting through all of those irrelevant memories would make the task impossibly difficult. Thus, forgetting is adaptive in that it makes us more efficient. The price of that efficiency is those moments when our memories seem to fail us (Schacter, 1999).
Amnesia
Clearly, remembering everything would be maladaptive, but what would it be like to remember nothing? We will now consider a profound form of forgetting called amnesia that is distinct from more ordinary forms of forgetting. Most of us have had exposure to the concept of amnesia through popular movies and television. Typically, in these fictionalized portrayals of amnesia, a character suffers some type of blow to the head and suddenly has no idea who they are and can no longer recognize their family or remember any events from their past. After some period of time (or another blow to the head), their memories come flooding back to them. Unfortunately, this portrayal of amnesia is not very accurate. What does amnesia typically look like?
The most widely studied amnesic patient was known by his initials H. M. (Scoville & Milner, 1957). As a teenager, H. M. suffered from severe epilepsy, and in 1953, he underwent surgery to have both of his medial temporal lobes removed to relieve his epileptic seizures. The medial temporal lobes encompass the hippocampus and surrounding cortical tissue. Although the surgery was successful in reducing H. M.’s seizures and his general intelligence was preserved, the surgery left H. M. with a profound and permanent memory deficit. From the time of his surgery until his death in 2008, H. M. was unable to learn new information, a memory impairment called anterograde amnesia. H. M. could not remember any event that occurred after his surgery, including highly significant ones, such as the death of his father. He could not remember a conversation he had a few minutes prior or recognize the face of someone who had visited him that same day. He could keep information in his short-term, or working, memory, but when his attention turned to something else, that information was lost for good. It is important to note that H. M.’s memory impairment was restricted to declarative memory, or conscious memory for facts and events. H. M. could learn new motor skills and showed improvement on motor tasks even in the absence of any memory for having performed the task before (Corkin, 2002).
In addition to anterograde amnesia, H. M. also suffered from temporally graded retrograde amnesia. Retrograde amnesia refers to an inability to retrieve old memories that occurred before the onset of amnesia. Extensive retrograde amnesia in the absence of anterograde amnesia is very rare (Kopelman, 2000). More commonly, retrograde amnesia co-occurs with anterograde amnesia and shows a temporal gradient, in which memories closest in time to the onset of amnesia are lost, but more remote memories are retained (Hodges, 1994). In the case of H. M., he could remember events from his childhood, but he could not remember events that occurred a few years before the surgery.
Amnesic patients with damage to the hippocampus and surrounding medial temporal lobes typically manifest a clinical profile similar to that of H. M. The degree of anterograde and retrograde amnesia depends on the extent of the medial temporal lobe damage, with greater damage associated with more extensive impairment (Reed & Squire, 1998). Anterograde amnesia provides evidence for the role of the hippocampus in the formation of long-lasting declarative memories, as damage to the hippocampus results in an inability to create this type of new memory. Similarly, temporally graded retrograde amnesia can be seen as providing further evidence for the importance of memory consolidation (Squire & Alvarez, 1995). A memory depends on the hippocampus until it is consolidated and transferred into a more durable form that is stored in the cortex. According to this theory, an amnesic patient like H. M. could remember events from his remote past because those memories were fully consolidated and no longer depended on the hippocampus.
The classic amnesic syndrome we have considered here is sometimes referred to as organic amnesia, and it is distinct from functional, or dissociative, amnesia. Functional amnesia involves a loss of memory that cannot be attributed to brain injury or any obvious brain disease and is typically classified as a mental disorder rather than a neurological disorder (Kihlstrom, 2005). The clinical profile of dissociative amnesia is very different from that of patients who suffer from amnesia due to brain damage or deterioration. Individuals who experience dissociative amnesia often have a history of trauma. Their amnesia is retrograde, encompassing autobiographical memories from a portion of their past. In an extreme version of this disorder, people enter a dissociative fugue state, in which they lose most or all of their autobiographical memories and their sense of personal identity. They may be found wandering in a new location, unaware of who they are and how they got there. Dissociative amnesia is controversial, as both its causes and its very existence have been called into question. The memory loss associated with dissociative amnesia is much less likely to be permanent than it is in organic amnesia.
Conclusion
Just as the case study of the mnemonist Shereshevsky illustrates what a life with a near-perfect memory would be like, amnesic patients show us what a life without memory would be like. Each of the mechanisms we discussed that explain everyday forgetting—encoding failures, decay, insufficient retrieval cues, interference, and intentional attempts to forget—helps to keep us highly efficient, retaining the important information and, for the most part, forgetting the unimportant. Amnesic patients allow us a glimpse into what life would be like if we suffered from profound forgetting and perhaps show us that our everyday lapses in memory are not so bad after all.
Outside Resources
Web: Brain Case Study: Patient HM
https://bigpictureeducation.com/brain-case-study-patient-hm
Web: Self-experiment, Penny demo
www.indiana.edu/~p1013447/dictionary/penny.htm
Web: The Man Who Couldn’t Remember
http://www.pbs.org/wgbh/nova/body/corkin-hm-memory.html
Discussion Questions
1. Is forgetting good or bad? Do you agree with the authors that forgetting is an adaptive process? Why or why not?
2. Can we ever prove that something is forgotten? Why or why not?
3. Which of the five reasons for forgetting do you think explains the majority of incidences of forgetting? Why?
4. How is real-life amnesia different than amnesia that is portrayed on TV and in film?
Vocabulary
Anterograde amnesia
Inability to form new memories for facts and events after the onset of amnesia.
Consolidation
Process by which a memory trace is stabilized and transformed into a more durable form.
Decay
The fading of memories with the passage of time.
Declarative memory
Conscious memories for facts and events.
Dissociative amnesia
Loss of autobiographical memories from a period in the past in the absence of brain injury or disease.
Encoding
Process by which information gets into memory.
Interference
Other memories get in the way of retrieving a desired memory.
Medial temporal lobes
Inner region of the temporal lobes that includes the hippocampus.
Retrieval
Process by which information is accessed from memory and utilized.
Retrograde amnesia
Inability to retrieve memories for facts and events acquired before the onset of amnesia.
Temporally graded retrograde amnesia
Inability to retrieve memories from just prior to the onset of amnesia with intact memory for more remote events.
• 5.1: Attention
We use the term “attention“ all the time, but what processes or abilities does that concept really refer to? This module will focus on how attention allows us to select certain parts of our environment and ignore other parts, and what happens to the ignored information. A key concept is the idea that we are limited in how much we can do at any one time. So we will also consider what happens when someone tries to do several things at once, such as driving while using electronic devices.
• 5.2: Intelligence
Intelligence is among the oldest and longest studied topics in all of psychology. The development of assessments to measure this concept is at the core of the development of psychological science itself. This module introduces key historical figures, major theories of intelligence, and common assessment strategies related to intelligence. This module will also discuss controversies related to the study of group differences in intelligence.
• 5.3: Judgement and Decision Making
Humans are not perfect decision makers. Not only are we not perfect, but we depart from perfection or rationality in systematic and predictable ways. The understanding of these systematic and predictable departures is core to the field of judgment and decision making. By understanding these limitations, we can also identify strategies for making better and more effective decisions.
05: Cognition and Language
By Frances Friedrich
University of Utah
We use the term “attention“ all the time, but what processes or abilities does that concept really refer to? This module will focus on how attention allows us to select certain parts of our environment and ignore other parts, and what happens to the ignored information. A key concept is the idea that we are limited in how much we can do at any one time. So we will also consider what happens when someone tries to do several things at once, such as driving while using electronic devices.
learning objectives
• Understand why selective attention is important and how it can be studied.
• Learn about different models of when and how selection can occur.
• Understand how divided attention or multitasking is studied, and implications of multitasking in situations such as distracted driving.
What is Attention?
Before we begin exploring attention in its various forms, take a moment to consider how you think about the concept. How would you define attention, or how do you use the term? We certainly use the word very frequently in our everyday language: “ATTENTION! USE ONLY AS DIRECTED!” warns the label on the medicine bottle, meaning be alert to possible danger. “Pay attention!” pleads the weary seventh-grade teacher, not warning about danger (with possible exceptions, depending on the teacher) but urging the students to focus on the task at hand. We may refer to a child who is easily distracted as having an attention disorder, although we also are told that Americans have an attention span of about 8 seconds, down from 12 seconds in 2000, suggesting that we all have trouble sustaining concentration for any amount of time (from www.Statisticbrain.com). How that number was determined is not clear from the Web site, nor is it clear how attention span in the goldfish—9 seconds!—was measured, but the fact that our average span reportedly is less than that of a goldfish is intriguing, to say the least.
William James wrote extensively about attention in the late 1800s. An often quoted passage (James, 1890/1983) beautifully captures how intuitively obvious the concept of attention is, while it remains very difficult to define in measurable, concrete terms:
Everyone knows what attention is. It is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalization, concentration of consciousness are of its essence. It implies withdrawal from some things in order to deal effectively with others. (pp. 381–382)
Notice that this description touches on the conscious nature of attention, as well as the notion that what is in consciousness is often controlled voluntarily but can also be determined by events that capture our attention. Implied in this description is the idea that we seem to have a limited capacity for information processing, and that we can only attend to or be consciously aware of a small amount of information at any given time.
Many aspects of attention have been studied in the field of psychology. In some respects, we define different types of attention by the nature of the task used to study it. For example, a crucial issue in World War II was how long an individual could remain highly alert and accurate while watching a radar screen for enemy planes, and this problem led psychologists to study how attention works under such conditions. When watching for a rare event, it is easy to allow concentration to lag. (This continues to be a challenge today for TSA agents, charged with looking at images of the contents of your carry-on items in search of knives, guns, or shampoo bottles larger than 3 oz.) Attention in the context of this type of search task refers to the level of sustained attention or vigilance one can maintain. In contrast, divided attention tasks allow us to determine how well individuals can attend to many sources of information at once. Spatial attention refers specifically to how we focus on one part of our environment and how we move attention to other locations in the environment. These are all examples of different aspects of attention, but an implied element of most of these ideas is the concept of selective attention; some information is attended to while other information is intentionally blocked out. This module will focus on important issues in selective and divided attention, addressing these questions:
• Can we pay attention to several sources of information at once, or do we have a limited capacity for information?
• How do we select what to pay attention to?
• What happens to information that we try to ignore?
• Can we learn to divide attention between multiple tasks?
Selective Attention
The Cocktail Party
Selective attention is the ability to select certain stimuli in the environment to process, while ignoring distracting information. One way to get an intuitive sense of how attention works is to consider situations in which attention is used. A party provides an excellent example for our purposes. Many people may be milling around; there is a dazzling variety of colors, sounds, and smells; the buzz of many conversations is striking. There are so many conversations going on; how is it possible to select just one and follow it? You don’t have to be looking at the person talking; you may be listening with great interest to some gossip while pretending not to hear. However, once you are engaged in conversation with someone, you quickly become aware that you cannot also listen to other conversations at the same time. You also are probably not aware of how tight your shoes feel or of the smell of a nearby flower arrangement. On the other hand, if someone behind you mentions your name, you typically notice it immediately and may start attending to that (much more interesting) conversation. This situation highlights an interesting set of observations. We have an amazing ability to select and track one voice, visual object, etc., even when a million things are competing for our attention, but at the same time, we seem to be limited in how much we can attend to at one time, which in turn suggests that attention is crucial in selecting what is important. How does it all work?
Dichotic Listening Studies
This cocktail party scenario is the quintessential example of selective attention, and it is essentially what some early researchers tried to replicate under controlled laboratory conditions as a starting point for understanding the role of attention in perception (e.g., Cherry, 1953; Moray, 1959). In particular, they used dichotic listening and shadowing tasks to evaluate the selection process. Dichotic listening simply refers to the situation when two messages are presented simultaneously to an individual, with one message in each ear. In order to control which message the person attends to, the individual is asked to repeat back or “shadow” one of the messages as he hears it. For example, let’s say that a story about a camping trip is presented to John’s left ear, and a story about Abe Lincoln is presented to his right ear. The typical dichotic listening task would have John repeat the story presented to one ear as he hears it. Can he do that without being distracted by the information in the other ear?
People can become pretty good at the shadowing task, and they can easily report the content of the message that they attend to. But what happens to the ignored message? Typically, people can tell you if the ignored message was a man’s or a woman’s voice, or other physical characteristics of the speech, but they cannot tell you what the message was about. In fact, many studies have shown that people in a shadowing task were not aware of a change in the language of the message (e.g., from English to German; Cherry, 1953), and they didn't even notice when the same word was repeated in the unattended ear more than 35 times (Moray, 1959)! Only the basic physical characteristics, such as the pitch of the unattended message, could be reported.
On the basis of these types of experiments, it seems that we can answer the first question about how much information we can attend to very easily: not very much. We clearly have a limited capacity for processing information for meaning, making the selection process all the more important. The question becomes: How does this selection process work?
Models of Selective Attention
Broadbent’s Filter Model. Many researchers have investigated how selection occurs and what happens to ignored information. Donald Broadbent was one of the first to try to characterize the selection process. His Filter Model was based on the dichotic listening tasks described above as well as other types of experiments (Broadbent, 1958). He found that people select information on the basis of physical features: the sensory channel (or ear) that a message was coming in, the pitch of the voice, the color or font of a visual message. People seemed vaguely aware of the physical features of the unattended information, but had no knowledge of the meaning. As a result, Broadbent argued that selection occurs very early, with no additional processing for the unselected information. In a flowchart of the model, all incoming information is registered briefly, an early filter selects one channel on the basis of its physical characteristics, and only the selected channel receives further processing for meaning.
Treisman’s Attenuation Model
Broadbent’s model makes sense, but if you think about it you already know that it cannot account for all aspects of the Cocktail Party Effect. What doesn’t fit? The fact is that you tend to hear your own name when it is spoken by someone, even if you are deeply engaged in a conversation. We mentioned earlier that people in a shadowing experiment were unaware of a word in the unattended ear that was repeated many times—and yet many people noticed their own name in the unattended ear even if it occurred only once.
Anne Treisman (1960) carried out a number of dichotic listening experiments in which she presented two different stories to the two ears. As usual, she asked people to shadow the message in one ear. As the stories progressed, however, she switched the stories to the opposite ears. Treisman found that individuals spontaneously followed the story, or the content of the message, when it shifted from the left ear to the right ear. Then they realized they were shadowing the wrong ear and switched back.
Results like this, and the fact that you tend to hear meaningful information even when you aren’t paying attention to it, suggest that we do monitor the unattended information to some degree on the basis of its meaning. Therefore, the filter theory can’t be right to suggest that unattended information is completely blocked at the sensory analysis level. Instead, Treisman suggested that selection starts at the physical or perceptual level, but that the unattended information is not blocked completely, it is just weakened or attenuated. As a result, highly meaningful or pertinent information in the unattended ear will get through the filter for further processing at the level of meaning. The figure below shows information going in both ears, and in this case there is no filter that completely blocks nonselected information. Instead, selection of the left ear information strengthens that material, while the nonselected information in the right ear is weakened. However, if the preliminary analysis shows that the nonselected information is especially pertinent or meaningful (such as your own name), then the Attenuation Control will instead strengthen the more meaningful information.
Late Selection Models
Other selective attention models have been proposed as well. A late selection, or response selection, model proposed by Deutsch and Deutsch (1963) suggests that all information in the unattended ear is processed on the basis of meaning, not just the selected or highly pertinent information. However, only the information that is relevant for the task response gets into conscious awareness. This model is consistent with ideas of subliminal perception; in other words, that you don’t have to be aware of or attending to a message for it to be fully processed for meaning.
You might notice that this figure looks a lot like that of the Early Selection model—only the location of the selective filter has changed, with the assumption that analysis of meaning occurs before selection occurs, but only the selected information becomes conscious.
Multimode Model
Why did researchers keep coming up with different models? Because no model really seemed to account for all the data, some of which indicates that nonselected information is blocked completely, whereas other studies suggest that it can be processed for meaning. The multimode model addresses this apparent inconsistency, suggesting that the stage at which selection occurs can change depending on the task. Johnston and Heinz (1978) demonstrated that under some conditions, we can select what to attend to at a very early stage and we do not process the content of the unattended message very much at all. Analyzing physical information, such as attending to information based on whether it is a male or female voice, is relatively easy; it occurs automatically, rapidly, and doesn’t take much effort. Under the right conditions, we can select what to attend to on the basis of the meaning of the messages. However, the late selection option—processing the content of all messages before selection—is more difficult and requires more effort. The benefit, though, is that we have the flexibility to change how we deploy our attention depending upon what we are trying to accomplish, which is one of the greatest strengths of our cognitive system.
This discussion of selective attention has focused on experiments using auditory material, but the same principles hold for other perceptual systems as well. Neisser (1979) investigated some of the same questions with visual materials by superimposing two semi-transparent video clips and asking viewers to attend to just one series of actions. As with the auditory materials, viewers often were unaware of what went on in the other clearly visible video. Twenty years later, Simons and Chabris (1999) explored and expanded these findings using similar techniques, and triggered a flood of new work in an area referred to as inattentional blindness. We touch on those ideas below, and you can also refer to another Noba Module, Failures of Awareness: The Case of Inattentional Blindness for a more complete discussion.
Focus Topic 1: Subliminal Perception
The idea of subliminal perception—that stimuli presented below the threshold for awareness can influence thoughts, feelings, or actions—is a fascinating and kind of creepy one. Can messages you are unaware of, embedded in movies or ads or the music playing in the grocery store, really influence what you buy? Many such claims of the power of subliminal perception have been made. One of the most famous came from a market researcher who claimed that the message “Eat Popcorn” briefly flashed throughout a movie increased popcorn sales by more than 50%, although he later admitted that the study was made up (Merikle, 2000). Psychologists have worked hard to investigate whether this is a valid phenomenon. Studying subliminal perception is more difficult than it might seem, because of the difficulty of establishing what the threshold for consciousness is or of even determining what type of threshold is important; for example, Cheesman and Merikle (1984, 1986) make an important distinction between objective and subjective thresholds. The bottom line is that there is some evidence that individuals can be influenced by stimuli they are not aware of, but how complex the stimuli can be or the extent to which unconscious material can affect behavior is not settled (e.g., Bargh & Morsella, 2008; Greenwald, 1992; Merikle, 2000).
Divided Attention and Multitasking
In spite of the evidence of our limited capacity, we all like to think that we can do several things at once. Some people claim to be able to multitask without any problem: reading a textbook while watching television and talking with friends; talking on the phone while playing computer games; texting while driving. The fact is that we sometimes can seem to juggle several things at once, but the question remains whether dividing attention in this way impairs performance.
Is it possible to overcome the limited capacity that we experience when engaging in cognitive tasks? We know that with extensive practice, we can acquire skills that do not appear to require conscious attention. As we walk down the street, we don’t need to think consciously about what muscle to contract in order to take the next step. Indeed, paying attention to automated skills can lead to a breakdown in performance, or “choking” (e.g., Beilock & Carr, 2001). But what about higher level, more mentally demanding tasks: Is it possible to learn to perform two complex tasks at the same time?
Divided Attention Tasks
In a classic study that examined this type of divided attention task, two participants were trained to take dictation for spoken words while reading unrelated material for comprehension (Spelke, Hirst, & Neisser, 1976). In divided attention tasks such as these, each task is evaluated separately, in order to determine baseline performance when the individual can allocate as many cognitive resources as necessary to one task at a time. Then performance is evaluated when the two tasks are performed simultaneously. A decrease in performance for either task would suggest that even if attention can be divided or switched between the tasks, the cognitive demands are too great to avoid disruption of performance. (We should note here that divided attention tasks are designed, in principle, to see if two tasks can be carried out simultaneously. A related research area looks at task switching and how well we can switch back and forth among different tasks [e.g., Monsell, 2003]. It turns out that switching itself is cognitively demanding and can impair performance.)
The focus of the Spelke et al. (1976) study was whether individuals could learn to perform two relatively complex tasks concurrently, without impairing performance. The participants received plenty of practice—the study lasted 17 weeks and they had a 1-hour session each day, 5 days a week. These participants were able to learn to take dictation for lists of words and read for comprehension without affecting performance in either task, and the authors suggested that perhaps there are not fixed limits on our attentional capacity. However, changing the tasks somewhat, such as reading aloud rather than silently, impaired performance initially, so this multitasking ability may be specific to these well-learned tasks. Indeed, not everyone could learn to perform two complex tasks without performance costs (Hirst, Neisser, & Spelke, 1978), although the fact that some can is impressive.
Distracted Driving
More relevant to our current lifestyles are questions about multitasking while texting or having cell phone conversations. Research designed to investigate, under controlled conditions, multitasking while driving has revealed some surprising results. Certainly there are many possible types of distractions that could impair driving performance, such as applying makeup using the rearview mirror, attempting (usually in vain) to stop the kids in the backseat from fighting, fiddling with the CD player, trying to negotiate a handheld cell phone, a cigarette, and a soda all at once, eating a bowl of cereal while driving (!). But we tend to have a strong sense that we CAN multitask while driving, and cars are being built with more and more technological capabilities that encourage multitasking. How good are we at dividing attention in these cases?
Most people acknowledge the distraction caused by texting while driving, and the reason seems obvious: Your eyes are off the road, and at least one hand (often both) is engaged while texting. However, the problem is not simply one of occupied hands or eyes, but rather that the cognitive demands on our limited capacity systems can seriously impair driving performance (Strayer, Watson, & Drews, 2011). The effect of a cell phone conversation on performance (such as not noticing someone’s brake lights or responding more slowly to them) is just as significant when the individual is having a conversation with a hands-free device as with a handheld phone; the same impairments do not occur when listening to the radio or a book on tape (Strayer & Johnston, 2001). Moreover, studies using eye-tracking devices have shown that drivers are less likely to later recognize objects that they did look at when using a cell phone while driving (Strayer & Drews, 2007). These findings demonstrate that cognitive distractions such as cell phone conversations can produce inattentional blindness, or a lack of awareness of what is right before your eyes (see also, Simons & Chabris, 1999). Sadly, although we all like to think that we can multitask while driving, in fact the percentage of people who can truly perform cognitive tasks without impairing their driving performance is estimated to be about 2% (Watson & Strayer, 2010).
Summary
It may be useful to think of attention as a mental resource, one that is needed to focus on and fully process important information, especially when there is a lot of distracting “noise” threatening to obscure the message. Our selective attention system allows us to find or track an object or conversation in the midst of distractions. Whether the selection process occurs early or late in the analysis of those events has been the focus of considerable research, and in fact how selection occurs may very well depend on the specific conditions. With respect to divided attention, in general we can only perform one cognitively demanding task at a time, and we may not even be aware of unattended events even though they might seem too obvious to miss (check out some examples in the Outside Resources below). This type of inattentional blindness can occur even in well-learned tasks, such as driving while talking on a cell phone. Understanding how attention works is clearly important, even for our everyday lives.
Outside Resources
Video: Here's a wild example of how much we fail to notice when our attention is captured by one element of a scene.
Video: Try this test to see how well you can focus on a task in the face of a lot of distraction.
Discussion Questions
1. Discuss the implications of the different models of selective attention for everyday life. For instance, what advantages and disadvantages would be associated with being able to filter out all unwanted information at a very early stage in processing? What are the implications of processing all ignored information fully, even if you aren't consciously aware of that information?
2. Think of examples of when you feel you can successfully multitask and when you can’t. Discuss what aspects of the tasks or the situation seem to influence divided attention performance. How accurate do you think you are in judging your own multitasking ability?
3. What are the public policy implications of current evidence of inattentional blindness as a result of distracted driving? Should this evidence influence traffic safety laws? What additional studies of distracted driving would you propose?
Vocabulary
Dichotic listening
An experimental task in which two messages are presented to different ears.
Divided attention
The ability to flexibly allocate attentional resources between two or more concurrent tasks.
Inattentional blindness
The failure to notice a fully visible object when attention is devoted to something else.
Limited capacity
The notion that humans have limited mental resources that can be used at a given time.
Selective attention
The ability to select certain stimuli in the environment to process, while ignoring distracting information.
Shadowing
A task in which the individual is asked to repeat an auditory message as it is presented.
Subliminal perception
The ability to process information for meaning when the individual is not consciously aware of that information.
By Robert Biswas-Diener
Portland State University
Intelligence is among the oldest and longest studied topics in all of psychology. The development of assessments to measure this concept is at the core of the development of psychological science itself. This module introduces key historical figures, major theories of intelligence, and common assessment strategies related to intelligence. This module will also discuss controversies related to the study of group differences in intelligence.
learning objectives
• List at least two common strategies for measuring intelligence.
• Name at least one “type” of intelligence.
• Define intelligence in simple terms.
• Explain the controversy relating to differences in intelligence between groups.
Introduction
Every year hundreds of grade school students converge on Washington, D.C., for the annual Scripps National Spelling Bee. The “bee” is an elite event in which children as young as 8 square off to spell words like “cymotrichous” and “appoggiatura.” Most people who watch the bee think of these kids as being “smart” and you likely agree with this description.
What makes a person intelligent? Is it heredity (two of the 2014 contestants in the bee have siblings who have previously won; National Spelling Bee, 2014a)? Is it interest (the most frequently listed favorite subject among spelling bee competitors is math; NSB, 2014b)? In this module we will cover these and other fascinating aspects of intelligence. By the end of the module you should be able to define intelligence and discuss some common strategies for measuring intelligence. In addition, we will tackle the politically thorny issue of whether there are differences in intelligence between groups such as men and women.
Defining and Measuring Intelligence
When you think of “smart people” you likely have an intuitive sense of the qualities that make them intelligent. Maybe you think they have a good memory, or that they can think quickly, or that they simply know a whole lot of information. Indeed, people who exhibit such qualities appear very intelligent. That said, it seems that intelligence must be more than simply knowing facts and being able to remember them. One point in favor of this argument is the idea of animal intelligence. It will come as no surprise to you that a dog, which can learn commands and tricks, seems smarter than a snake that cannot. In fact, researchers and lay people generally agree with one another that primates—monkeys and apes (including humans)—are among the most intelligent animals. Apes such as chimpanzees are capable of complex problem solving and sophisticated communication (Kohler, 1924).
Scientists point to the social nature of primates as one evolutionary source of their intelligence. Primates live together in troops or family groups and are, therefore, highly social creatures. As such, primates tend to have brains that are better developed for communication and long term thinking than most other animals. For instance, the complex social environment has led primates to develop deception, altruism, numerical concepts, and “theory of mind” (a sense of the self as a unique individual separate from others in the group; Gallup, 1982; Hauser, MacNeilage & Ware, 1996).[Also see Noba module Theory of Mind noba.to/a8wpytg3]
The question of what constitutes human intelligence is one of the oldest inquiries in psychology. When we talk about intelligence we typically mean intellectual ability. This broadly encompasses the ability to learn, remember and use new information, to solve problems and to adapt to novel situations. An early scholar of intelligence, Charles Spearman, proposed the idea that intelligence was one thing, a “general factor” sometimes known as simply “g.” He based this conclusion on the observation that people who perform well in one intellectual area such as verbal ability also tend to perform well in other areas such as logic and reasoning (Spearman, 1904).
A contemporary of Spearman’s named Francis Galton—himself a cousin of Charles Darwin—was among those who pioneered psychological measurement (Hunt, 2009). For three pence Galton would measure various physical characteristics such as grip strength but also some psychological attributes such as the ability to judge distance or discriminate between colors. This is an example of one of the earliest systematic measures of individual ability. Galton was particularly interested in intelligence, which he thought was heritable in much the same way that height and eye color are. He conceived of several rudimentary methods for assessing whether his hypothesis was true. For example, he carefully tracked the family tree of the top-scoring Cambridge students over the previous 40 years. Although he found specific families disproportionately produced top scholars, intellectual achievement could still be the product of economic status, family culture or other non-genetic factors. Galton was also, possibly, the first to popularize the idea that the heritability of psychological traits could be studied by looking at identical and fraternal twins. Although his methods were crude by modern standards, Galton established intelligence as a variable that could be measured (Hunt, 2009).
The person best known for formally pioneering the measurement of intellectual ability is Alfred Binet. Like Galton, Binet was fascinated by individual differences in intelligence. For instance, he blindfolded chess players and saw that some of them had the ability to continue playing using only their memory to keep the many positions of the pieces in mind (Binet, 1894). Binet was particularly interested in the development of intelligence, a fascination that led him to observe children carefully in the classroom setting.
Along with his colleague Theodore Simon, Binet created a test of children’s intellectual capacity. They created individual test items that should be answerable by children of given ages. For instance, a child who is three should be able to point to her mouth and eyes, a child who is nine should be able to name the months of the year in order, and a twelve-year-old ought to be able to name sixty words in three minutes. Their assessment became the first “IQ test.”
“IQ” or “intelligence quotient” is a name given to the score of the Binet-Simon test. The score is derived by dividing a child’s mental age (the score from the test) by their chronological age and multiplying the result by 100 to create an overall quotient. These days, the phrase “IQ” does not apply specifically to the Binet-Simon test and is used to generally denote intelligence or a score on any intelligence test. In the early 1900s the Binet-Simon test was adapted by a Stanford professor named Lewis Terman to create what is, perhaps, the most famous intelligence test in the world, the Stanford-Binet (Terman, 1916). The major advantage of this new test was that it was standardized. Based on a large sample of children Terman was able to plot the scores in a normal distribution, shaped like a “bell curve” (see Fig. 7.5.1). To understand a normal distribution think about the height of people. Most people are average in height with relatively fewer being tall or short, and fewer still being extremely tall or extremely short. Terman (1916) laid out intelligence scores in exactly the same way, allowing for easy and reliable categorizations and comparisons between individuals.
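The ratio-IQ arithmetic described above is simple enough to sketch in a couple of lines. By convention the quotient is multiplied by 100, so a child performing exactly at age level scores 100:

```python
def ratio_iq(mental_age, chronological_age):
    """Classic ratio IQ: mental age divided by chronological age, times 100."""
    return 100 * mental_age / chronological_age

# A 10-year-old who performs at the level of a typical 12-year-old:
print(ratio_iq(12, 10))  # 120.0
# A child performing exactly at age level:
print(ratio_iq(8, 8))    # 100.0
```

Modern tests such as the Stanford-Binet and WAIS no longer use this ratio; they instead report deviation scores referenced against the normal distribution of a standardization sample.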
Looking at another modern intelligence test—the Wechsler Adult Intelligence Scale (WAIS)—can provide clues to a definition of intelligence itself. Motivated by several criticisms of the Stanford-Binet test, psychologist David Wechsler sought to create a superior measure of intelligence. He was critical of the way that the Stanford-Binet relied so heavily on verbal ability and was also suspicious of using a single score to capture all of intelligence. To address these issues Wechsler created a test that tapped a wide range of intellectual abilities. This understanding of intelligence—that it is made up of a pool of specific abilities—is a notable departure from Spearman’s concept of general intelligence. The WAIS assesses people's ability to remember, compute, understand language, reason well, and process information quickly (Wechsler, 1955).
One interesting by-product of measuring intelligence for so many years is that we can chart changes over time. It might seem strange that intelligence can change over the decades, but that appears to have happened over the roughly 80 years during which it has been measured. Here’s how we know: IQ tests have an average score of 100. When new waves of people are asked to take older tests they tend to outperform the original sample from years ago on which the test was normed. This gain is known as the “Flynn Effect,” named after James Flynn, the researcher who first identified it (Flynn, 1987). Several hypotheses have been put forth to explain the Flynn Effect, including better nutrition (healthier brains!), greater familiarity with testing in general, and more exposure to visual stimuli. Today, there is no perfect agreement among psychological researchers with regards to the causes of increases in average scores on intelligence tests. Perhaps if you choose a career in psychology you will be the one to discover the answer!
Types of Intelligence
David Wechsler’s approach to testing intellectual ability was based on the fundamental idea that there are, in essence, many aspects to intelligence. Other scholars have echoed this idea by going so far as to suggest that there are actually even different types of intelligence. You likely have heard distinctions made between “street smarts” and “book learning.” The former refers to practical wisdom accumulated through experience while the latter indicates formal education. A person high in street smarts might have a superior ability to catch a person in a lie, to persuade others, or to think quickly under pressure. A person high in book learning, by contrast, might have a large vocabulary and be able to remember a large number of references to classic novels. Although psychologists don’t use street smarts or book smarts as professional terms they do believe that intelligence comes in different types.
There are many ways to parse apart the concept of intelligence. Many scholars believe that Carroll’s (1993) review of more than 400 data sets provides the best currently existing single source for organizing various concepts related to intelligence. Carroll divided intelligence into three levels, or strata, descending from the most abstract down to the most specific (see Fig. 7.5.2). To understand this way of categorizing simply think of a “car.” Car is a general word that denotes all types of motorized vehicles. At the more specific level under “car” might be various types of cars such as sedans, sports cars, SUVs, pick-up trucks, station wagons, and so forth. More specific still would be certain models of each such as a Honda Civic or Ferrari Enzo. In the same manner, Carroll called the highest level (stratum III) the general intelligence factor “g.” Under this were more specific stratum II categories such as fluid intelligence and visual perception and processing speed. Each of these, in turn, can be sub-divided into very specific components such as spatial scanning, reaction time, and word fluency.
Thinking of intelligence as Carroll (1993) does, as a collection of specific mental abilities, has helped researchers conceptualize this topic in new ways. For example, Horn and Cattell (1966) distinguish between “fluid” and “crystallized” intelligence, both of which show up on stratum II of Carroll’s model. Fluid intelligence is the ability to “think on your feet;” that is, to solve problems. Crystallized intelligence, on the other hand, is the ability to use language, skills and experience to address problems. The former is associated more with youth while the latter increases with age. You may have noticed the way in which younger people can adapt to new situations and use trial and error to quickly figure out solutions. By contrast, older people tend to rely on their relatively superior store of knowledge to solve problems.
Harvard professor Howard Gardner is another figure in psychology who is well-known for championing the notion that there are different types of intelligence. Gardner’s theory is, appropriately, called “multiple intelligences.” Gardner’s theory is based on the idea that people process information through different “channels” and these are relatively independent of one another. He has identified 8 common intelligences including 1) logic-math, 2) visual-spatial, 3) music-rhythm, 4) verbal-linguistic, 5) bodily-kinesthetic, 6) interpersonal, 7) intrapersonal, and 8) naturalistic (Gardner, 1985). Many people are attracted to Gardner’s theory because it suggests that people each learn in unique ways. There are now many Gardner-influenced schools in the world.
Another type of intelligence is emotional intelligence. Unlike traditional models of intelligence that emphasize cognition (thinking), the idea of emotional intelligence emphasizes the experience and expression of emotion. Some researchers argue that emotional intelligence is a set of skills in which an individual can accurately understand the emotions of others, can identify and label their own emotions, and can use emotions (Mayer & Salovey, 1997). Other researchers believe that emotional intelligence is a mixture of abilities, such as stress management, and personality, such as a person’s predisposition for certain moods (Bar-On, 2006). Regardless of the specific definition of emotional intelligence, studies have shown a link between this concept and job performance (Lopes, Grewal, Kadis, Gall, & Salovey, 2006). In fact, emotional intelligence is similar to more traditional notions of cognitive intelligence with regards to workplace benefits. Schmidt and Hunter (1998), for example, reviewed research on intelligence in the workplace context and show that intelligence is the single best predictor of doing well in job training programs and of learning on the job. They also report that general intelligence is moderately correlated with all types of jobs but especially with managerial and complex, technical jobs.
There is one last point that is important to bear in mind about intelligence. It turns out that the way an individual thinks about his or her own intelligence is also important because it predicts performance. Researcher Carol Dweck has made a career out of looking at the differences between high IQ children who perform well and those who do not, so-called “under achievers.” Among her most interesting findings is that it is not gender or social class that sets apart the high and low performers. Instead, it is their mindset. The children who believe that their abilities in general—and their intelligence specifically—are fixed traits tend to underperform. By contrast, kids who believe that intelligence is changeable and evolving tend to handle failure better and perform better (Dweck, 1986). Dweck refers to this as a person’s “mindset” and having a growth mindset appears to be healthier.
Correlates of Intelligence
The research on mindset is interesting but there can also be a temptation to interpret it as suggesting that every human has an unlimited potential for intelligence and that becoming smarter is only a matter of positive thinking. There is some evidence that genetics is an important factor in the intelligence equation. For instance, a number of studies on genetics in adults have yielded the result that intelligence is largely, but not totally, inherited (Bouchard, 2004). Having a healthy attitude about the nature of smarts and working hard can both definitely help intellectual performance but it also helps to have the genetic leaning toward intelligence.
Carol Dweck’s research on the mindset of children also brings one of the most interesting and controversial issues surrounding intelligence research to the fore: group differences. From the very beginning of the study of intelligence researchers have wondered about differences between groups of people such as men and women. With regards to potential differences between the sexes some people have noticed that women are under-represented in certain fields. In 1976, for example, women comprised just 1% of all faculty members in engineering (Ceci, Williams & Barnett, 2009).
Even today women make up between 3% and 15% of all faculty in math-intensive fields at the 50 top universities. This phenomenon could be explained in many ways: it might be the result of inequalities in the educational system, it might be due to differences in socialization wherein young girls are encouraged to develop other interests, it might be the result of the fact that women are—on average—responsible for a larger portion of childcare obligations and therefore make different types of professional decisions, or it might be due to innate differences between these groups, to name just a few possibilities. The possibility of innate differences is the most controversial because many people see it as either the product of or the foundation for sexism. In today’s political landscape it is easy to see that asking certain questions such as “are men smarter than women?” would be inflammatory. In a comprehensive review of research on intellectual abilities and sex, Ceci and colleagues (2009) argue against the hypothesis that biological and genetic differences account for much of the sex differences in intellectual ability. Instead, they believe that a complex web of influences ranging from societal expectations to test taking strategies to individual interests account for many of the sex differences found in math and similar intellectual abilities.
A more interesting question, and perhaps a more sensitive one, might be to inquire in which ways men and women might differ in intellectual ability, if at all. That is, researchers should not seek to prove that one group or another is better but might examine the ways that they might differ and offer explanations for any differences that are found. Researchers have investigated sex differences in intellectual ability. In a review of the research literature Halpern (1997) found that women appear, on average, superior to men on measures of fine motor skill, acquired knowledge, reading comprehension, decoding non-verbal expression, and generally have higher grades in school. Men, by contrast, appear, on average, superior to women on measures of fluid reasoning related to math and science, perceptual tasks that involve moving objects, and tasks that require transformations in working memory such as mental rotations of physical spaces. Halpern also notes that men are disproportionately represented on the low end of cognitive functioning including in mental retardation, dyslexia, and attention deficit disorders (Halpern, 1997).
Other researchers have examined various explanatory hypotheses for why sex differences in intellectual ability occur. Some studies have provided mixed evidence for genetic factors while others point to evidence for social factors (Neisser et al., 1996; Nisbett et al., 2012). One interesting phenomenon that has received research scrutiny is the idea of stereotype threat. Stereotype threat is the idea that mental access to a particular stereotype can have real-world impact on a member of the stereotyped group. In one study (Spencer, Steele, & Quinn, 1999), for example, women who were informed that women tend to fare poorly on math exams just before taking a math test actually performed worse relative to a control group who did not hear the stereotype. One possible antidote to stereotype threat, at least in the case of women, is to make a self-affirmation (such as listing positive personal qualities) before the threat occurs. In one study, for instance, Martens and her colleagues (2006) had women write about personal qualities that they valued before taking a math test. The affirmation largely erased the effect of stereotype by improving math scores for women relative to a control group but similar affirmations had little effect for men (Martens, Johns, Greenberg, & Schimel, 2006).
These types of controversies compel many lay people to wonder if there might be a problem with intelligence measures. It is natural to wonder if they are somehow biased against certain groups. Psychologists typically answer such questions by pointing out that bias in the testing sense of the word is different than how people use the word in everyday speech. Common use of bias denotes a prejudice based on group membership. Scientific bias, on the other hand, is related to the psychometric properties of the test such as validity and reliability. Validity is the idea that an assessment measures what it claims to measure and that it can predict future behaviors or performance. To this end, intelligence tests are not biased because they are fairly accurate measures and predictors. There are, however, real biases, prejudices, and inequalities in the social world that might benefit some advantaged groups while hindering disadvantaged others.
Conclusion
Although you might not be able to spell “esquamulose” or “staphylococci” – indeed, you might not even know what they mean—you don’t need to count yourself out in the intelligence department. Now that we have examined intelligence in depth we can return to our intuitive view of those students who compete in the National Spelling Bee. Are they smart? Certainly, they seem to have high verbal intelligence. There is also the possibility that they benefit from either a genetic boost in intelligence, a supportive social environment, or both. Watching them spell difficult words there is also much we do not know about them. We cannot tell, for instance, how emotionally intelligent they are or how they might use bodily-kinesthetic intelligence. This highlights the fact that intelligence is a complicated issue. Fortunately, psychologists continue to research this fascinating topic and their studies continue to yield new insights.
Outside Resources
Blog: Dr. Jonathan Wai has an excellent blog on Psychology Today discussing many of the most interesting issues related to intelligence.
http://www.psychologytoday.com/blog/...-next-einstein
Video: Hank Green gives a fun and interesting overview of the concept of intelligence in this installment of the Crash Course series.
Discussion Questions
1. Do you think that people get smarter as they get older? In what ways might people gain or lose intellectual abilities as they age?
2. When you meet someone who strikes you as being smart what types of cues or information do you typically attend to in order to arrive at this judgment?
3. How do you think socio-economic status affects an individual taking an intellectual abilities test?
4. Should psychologists be asking about group differences in intellectual ability? What do you think?
5. Which of Howard Gardner’s 8 types of intelligence do you think describes the way you learn best?
Vocabulary
G
Short for “general factor” and is often used to be synonymous with intelligence itself.
Intelligence
An individual’s cognitive capability. This includes the ability to acquire, process, recall and apply information.
IQ
Short for “intelligence quotient.” This is a score, typically obtained from a widely used measure of intelligence that is meant to rank a person’s intellectual ability against that of others.
Norm
Assessments are given to a representative sample of a population to determine the range of scores for that population. These “norms” are then used to place an individual who takes that assessment on a range of scores in which he or she is compared to the population at large.
Standardize
Assessments that are given in the exact same manner to all people. With regards to intelligence tests, standardized scores are individual scores that are computed to be referenced against normative scores for a population (see “norm”).
Stereotype threat
The phenomenon in which people are concerned that they will conform to a stereotype or that their performance does conform to that stereotype, especially in instances in which the stereotype is brought to their conscious awareness. | textbooks/socialsci/Psychology/Introductory_Psychology/Psychology_as_a_Biological_Science_(Noba)/05%3A_Cognition_and_Language/5.02%3A_Intelligence.txt |
By Max H. Bazerman
Harvard University
Humans are not perfect decision makers. Not only are we not perfect, but we depart from perfection or rationality in systematic and predictable ways. The understanding of these systematic and predictable departures is core to the field of judgment and decision making. By understanding these limitations, we can also identify strategies for making better and more effective decisions.
learning objectives
• Understand the systematic biases that affect our judgment and decision making.
• Develop strategies for making better decisions.
• Experience some of the biases through sample decisions.
Introduction
Every day you have the opportunity to make countless decisions: should you eat dessert, cheat on a test, or attend a sports event with your friends. If you reflect on your own history of choices you will realize that they vary in quality; some are rational and some are not. This module provides an overview of decision making and includes discussion of many of the common biases involved in this process.
In his Nobel Prize–winning work, psychologist Herbert Simon (1957; March & Simon, 1958) argued that our decisions are bounded in their rationality. According to the bounded rationality framework, human beings try to make rational decisions (such as weighing the costs and benefits of a choice) but our cognitive limitations prevent us from being fully rational. Time and cost constraints limit the quantity and quality of the information that is available to us. Moreover, we only retain a relatively small amount of information in our usable memory. And limitations on intelligence and perceptions constrain the ability of even very bright decision makers to accurately make the best choice based on the information that is available.
About 15 years after the publication of Simon’s seminal work, Tversky and Kahneman (1973, 1974; Kahneman & Tversky, 1979) produced their own Nobel Prize–winning research, which provided critical information about specific systematic and predictable biases, or mistakes, that influence judgment (Kahneman received the prize after Tversky’s death). The work of Simon, Tversky, and Kahneman paved the way to our modern understanding of judgment and decision making. And their two Nobel prizes signaled the broad acceptance of the field of behavioral decision research as a mature area of intellectual study.
What Would a Rational Decision Look Like?
Imagine that during your senior year in college, you apply to a number of doctoral programs, law schools, or business schools (or another set of programs in whatever field most interests you). The good news is that you receive many acceptance letters. So, how should you decide where to go? Bazerman and Moore (2013) outline the following six steps that you should take to make a rational decision: (1) define the problem (i.e., selecting the right graduate program), (2) identify the criteria necessary to judge the multiple options (location, prestige, faculty, etc.), (3) weight the criteria (rank them in terms of importance to you), (4) generate alternatives (the schools that admitted you), (5) rate each alternative on each criterion (rate each school on each criterion that you identified), and (6) compute the optimal decision. Acting rationally would require that you follow these six steps in a fully rational manner.
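Steps 2 through 6 of this model amount to a weighted-sum calculation. The sketch below illustrates the arithmetic; the school names, criteria weights, and ratings are invented for the example, not taken from the module:

```python
# Step 2-3: criteria and their importance weights (must sum to 1 here).
criteria_weights = {"location": 0.2, "prestige": 0.5, "faculty": 0.3}

# Step 4-5: alternatives, each rated 1-10 on every criterion.
ratings = {
    "School A": {"location": 9, "prestige": 6, "faculty": 7},
    "School B": {"location": 5, "prestige": 9, "faculty": 8},
}

def weighted_score(school_ratings, weights):
    """Step 6: combine ratings into one score per alternative."""
    return sum(weights[c] * school_ratings[c] for c in weights)

scores = {s: weighted_score(r, criteria_weights) for s, r in ratings.items()}
best = max(scores, key=scores.get)
print(scores, "->", best)  # School B wins, 7.9 to 6.9
```

Whichever alternative earns the highest weighted score is the "optimal" choice under this model; the hard part in practice is entering unbiased weights and ratings in the first place.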
I strongly advise people to think through important decisions such as this in a manner similar to this process. Unfortunately, we often don’t. Many of us rely on our intuitions far more than we should. And when we do try to think systematically, the way we enter data into such formal decision-making processes is often biased.
Fortunately, psychologists have learned a great deal about the biases that affect our thinking. This knowledge about the systematic and predictable mistakes that even the best and the brightest make can help you identify flaws in your thought processes and reach better decisions.
Biases in Our Decision Process
Simon’s concept of bounded rationality taught us that judgment deviates from rationality, but it did not tell us how judgment is biased. Tversky and Kahneman’s (1974) research helped to diagnose the specific systematic, directional biases that affect human judgment. These biases are created by the tendency to short-circuit a rational decision process by relying on a number of simplifying strategies, or rules of thumb, known as heuristics. Heuristics allow us to cope with the complex environment surrounding our decisions. Unfortunately, they also lead to systematic and predictable biases.
To highlight some of these biases please answer the following three quiz items:
Problem 1 (adapted from Alpert & Raiffa, 1969):
Listed below are 10 uncertain quantities. Do not look up any information on these items. For each, write down your best estimate of the quantity. Next, put a lower and upper bound around your estimate, such that you are 98 percent confident that your range surrounds the actual quantity. Respond to each of these items even if you admit to knowing very little about these quantities.
1. The first year the Nobel Peace Prize was awarded
2. The date the French celebrate "Bastille Day"
3. The distance from the Earth to the Moon
4. The height of the Leaning Tower of Pisa
5. Number of students attending Oxford University (as of 2014)
6. Number of people who have traveled to space (as of 2013)
7. 2012-2013 annual budget for the University of Pennsylvania
8. Average life expectancy in Bangladesh (as of 2012)
9. World record for pull-ups in a 24-hour period
10. Number of colleges and universities in the Boston metropolitan area
Problem 2 (adapted from Joyce & Biddle, 1981):
We know that executive fraud occurs and that it has been associated with many recent financial scandals. And, we know that many cases of management fraud go undetected even when annual audits are performed. Do you think that the incidence of significant executive-level management fraud is more than 10 in 1,000 firms (that is, 1 percent) audited by Big Four accounting firms?
1. Yes, more than 10 in 1,000 Big Four clients have significant executive-level management fraud.
2. No, fewer than 10 in 1,000 Big Four clients have significant executive-level management fraud.
What is your estimate of the number of Big Four clients per 1,000 that have significant executive-level management fraud? (Fill in the blank below with the appropriate number.)
________ in 1,000 Big Four clients have significant executive-level management fraud.
Problem 3 (adapted from Tversky & Kahneman, 1981):
Imagine that the United States is preparing for the outbreak of an unusual avian disease that is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows.
1. Program A: If Program A is adopted, 200 people will be saved.
2. Program B: If Program B is adopted, there is a one-third probability that 600 people will be saved and a two-thirds probability that no people will be saved.
Which of the two programs would you favor?
Overconfidence
On the first problem, if you set your ranges so that you were justifiably 98 percent confident, you should expect that approximately 9.8 of your 10 ranges (in practice, nine or ten) would include the actual value. So, let’s look at the correct answers:
1. 1901
2. 14th of July
3. 384,403 km (238,857 mi)
4. 56.67 m (183 ft)
5. 22,384 (as of 2014)
6. 536 people (as of 2013)
7. $6.007 billion
8. 70.3 years (as of 2012)
9. 4,321
10. 52
Count the number of your 98% ranges that actually surrounded the true quantities. If you surrounded nine to 10, you were appropriately confident in your judgments. But most readers surround only between three (30%) and seven (70%) of the correct answers, despite claiming 98% confidence that each range would surround the true value. As this problem shows, humans tend to be overconfident in their judgments.
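The calibration check described above can be sketched in a few lines of code. The true values come from the answer key; the intervals below are one hypothetical reader's 98% ranges, invented purely for illustration.

```python
# Calibration check: how many 98% confidence ranges surround the true values?

# True answers to the ten questions (from the answer key above).
true_values = [1901, 14, 384403, 56.67, 22384, 536, 6.007e9, 70.3, 4321, 52]

# One hypothetical reader's 98% ranges (lower, upper) -- illustrative only.
ranges = [(1890, 1910), (1, 10), (100000, 500000), (30, 100), (5000, 15000),
          (200, 1000), (1e9, 1e10), (50, 65), (500, 2000), (40, 80)]

def calibration_hits(ranges, truths):
    """Count how many (lower, upper) ranges contain the true value."""
    return sum(lo <= t <= hi for (lo, hi), t in zip(ranges, truths))

hits = calibration_hits(ranges, true_values)
# A well-calibrated 98% judge should surround roughly 9.8 of 10 true values;
# this hypothetical reader surrounds only 6, illustrating overconfidence.
print(f"{hits} of {len(true_values)} ranges contain the true value")
# prints: 6 of 10 ranges contain the true value
```

Surrounding six of ten values while claiming 98% confidence is a 60% hit rate, exactly the kind of gap between stated and actual accuracy that defines overconfidence.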
Anchoring
Regarding the second problem, people vary a great deal in their final assessment of the level of executive-level management fraud, but most think that 10 out of 1,000 is too low. When I run this exercise in class, half of the students respond to the question that I asked you to answer. The other half receive a similar problem, but instead are asked whether the correct answer is higher or lower than 200 rather than 10. Most people think that 200 is too high. But, again, most people claim that this “anchor” does not affect their final estimate. Yet, on average, people who are presented with the question that focuses on the number 10 (out of 1,000) give answers that are about one-half the size of the estimates of those facing questions that use an anchor of 200. When we are making decisions, any initial anchor that we face is likely to influence our judgments, even if the anchor is arbitrary. That is, we insufficiently adjust our judgments away from the anchor.
Framing
Turning to Problem 3, most people choose Program A, which saves 200 lives for sure, over Program B. But, again, if I were in front of a classroom, only half of my students would receive this problem. The other half would have received the same set-up, but with the following two options:
1. Program C: If Program C is adopted, 400 people will die.
2. Program D: If Program D is adopted, there is a one-third probability that no one will die and a two-thirds probability that 600 people will die.
Which of the two programs would you favor?
Careful review of the two versions of this problem clarifies that they are objectively the same. Saving 200 people (Program A) means losing 400 people (Program C), and Programs B and D are also objectively identical. Yet, in one of the most famous problems in judgment and decision making, most individuals choose Program A in the first set and Program D in the second set (Tversky & Kahneman, 1981). People respond very differently to saving versus losing lives—even when the difference is based just on the “framing” of the choices.
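The objective equivalence of the two framings can be verified with simple expected-value arithmetic, using only the numbers given in the problem (exact fractions avoid any floating-point noise):

```python
# Expected lives saved under each program, showing A == C and B == D.
from fractions import Fraction

TOTAL = 600  # people expected to die if nothing is done
p = Fraction(1, 3)  # the one-third probability in Programs B and D

# "Gain" framing
expected_saved_A = 200                     # Program A: 200 saved for sure
expected_saved_B = p * TOTAL + (1 - p) * 0 # Program B: 1/3 chance all 600 saved

# "Loss" framing, converted to lives saved
expected_saved_C = TOTAL - 400             # Program C: 400 die for sure
expected_saved_D = p * (TOTAL - 0) + (1 - p) * (TOTAL - 600)  # Program D

assert expected_saved_A == expected_saved_C == 200
assert expected_saved_B == expected_saved_D == 200
print("All four programs have the same expected outcome: 200 of 600 lives saved")
```

Despite this equivalence, the modal choice flips between framings, which is precisely the point of the demonstration.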
The problem that I asked you to respond to was framed in terms of saving lives, and the implied reference point was the worst outcome of 600 deaths. Most of us, when we make decisions that concern gains, are risk averse; as a consequence, we lock in the possibility of saving 200 lives for sure. In the alternative version, the problem is framed in terms of losses. Now the implicit reference point is the best outcome of no deaths due to the avian disease. And in this case, most people are risk seeking when making decisions regarding losses.
These are just three of the many biases that affect even the smartest among us. Other research shows that we are biased in favor of information that is easy for our minds to retrieve, are insensitive to the importance of base rates and sample sizes when we are making inferences, assume that random events will always look random, search for information that confirms our expectations even when disconfirming information would be more informative, claim a priori knowledge that didn’t exist due to the hindsight bias, and are subject to a host of other effects that continue to be developed in the literature (Bazerman & Moore, 2013).
Contemporary Developments
Bounded rationality served as the integrating concept of the field of behavioral decision research for 40 years. Then Thaler (2000) suggested that decision making is bounded in two ways not precisely captured by the concept of bounded rationality. First, he argued that our willpower is bounded and that, as a consequence, we give greater weight to present concerns than to future concerns. Our immediate motivations are often inconsistent with our long-term interests in a variety of ways, such as the common failure to save adequately for retirement or the difficulty many people have staying on a diet. Second, Thaler suggested that our self-interest is bounded such that we care about the outcomes of others. Sometimes we positively value the outcomes of others—giving them more of a commodity than is necessary out of a desire to be fair, for example. And, in unfortunate contexts, we sometimes are willing to forgo our own benefits out of a desire to harm others.
My colleagues and I have recently added two other important bounds to the list. First, Chugh, Banaji, and Bazerman (2005) and Banaji and Bhaskar (2000) introduced the concept of bounded ethicality, which refers to the notion that our ethics are limited in ways we are not even aware of ourselves. Second, Chugh and Bazerman (2007) developed the concept of bounded awareness to refer to the broad array of focusing failures that affect our judgment, specifically the many ways in which we fail to notice obvious and important information that is available to us.
A final development is the application of judgment and decision-making research to the areas of behavioral economics, behavioral finance, and behavioral marketing, among others. In each case, these fields have been transformed by applying and extending research from the judgment and decision-making literature.
Fixing Our Decisions
Ample evidence documents that even smart people are routinely impaired by biases. Early research demonstrated, unfortunately, that awareness of these problems does little to reduce bias (Fischhoff, 1982). The good news is that more recent research documents interventions that do help us overcome our faulty thinking (Bazerman & Moore, 2013).
One critical path to fixing our biases is provided in Stanovich and West’s (2000) distinction between System 1 and System 2 decision making. System 1 processing is our intuitive system, which is typically fast, automatic, effortless, implicit, and emotional. System 2 refers to decision making that is slower, conscious, effortful, explicit, and logical. The six logical steps of decision making outlined earlier describe a System 2 process.
Clearly, a complete System 2 process is not required for every decision we make. In most situations, our System 1 thinking is quite sufficient; it would be impractical, for example, to logically reason through every choice we make while shopping for groceries. But, preferably, System 2 logic should influence our most important decisions. Nonetheless, we use our System 1 processes for most decisions in life, relying on them even when making important decisions.
The key to reducing the effects of bias and improving our decisions is to transition from trusting our intuitive System 1 thinking toward engaging more in deliberative System 2 thought. Unfortunately, the busier and more rushed people are, and the more they have on their minds, the more likely they are to rely on System 1 thinking; the frantic pace of professional life suggests that executives often do exactly that (Chugh, 2004).
Fortunately, it is possible to identify conditions where we rely on intuition at our peril and substitute more deliberative thought. One fascinating example of this substitution comes from journalist Michael Lewis’ (2003) account of how Billy Beane, the general manager of the Oakland Athletics, improved the outcomes of the failing baseball team after recognizing that the intuition of baseball executives was limited and systematically biased, and that relying on it for important decisions had produced enormous mistakes. Lewis (2003) documents that baseball professionals tend to overgeneralize from their personal experiences, be overly influenced by players’ very recent performances, and overweigh what they see with their own eyes, despite the fact that players’ multiyear records provide far better data. By substituting valid predictors of future performance (System 2 thinking), the Athletics were able to outperform expectations given their very limited payroll.
Another important direction for improving decisions comes from Thaler and Sunstein’s (2008) book Nudge: Improving Decisions about Health, Wealth, and Happiness. Rather than setting out to debias human judgment, Thaler and Sunstein outline a strategy for how “decision architects” can change environments in ways that account for human bias and trigger better decisions as a result. For example, Beshears, Choi, Laibson, and Madrian (2008) have shown that simple changes to defaults can dramatically improve people’s decisions. They tackle the failure of many people to save for retirement and show that a simple change can significantly influence enrollment in 401(k) programs. In most companies, when you start your job, you need to proactively sign up to join the company’s retirement savings plan. Many people take years before getting around to doing so. When, instead, companies automatically enroll their employees in 401(k) programs and give them the opportunity to “opt out,” the net enrollment rate rises significantly. By changing defaults, we can counteract the human tendency to live with the status quo.
Similarly, Johnson and Goldstein’s (2003) cross-European organ donation study reveals that countries with opt-in organ donation policies, where the default is not to harvest people’s organs without their prior consent, sacrifice thousands of lives in comparison to countries with opt-out policies, where the default is to harvest organs. The United States and too many other countries require that citizens opt in to organ donation through a proactive effort; as a consequence, consent rates range from 4.25% to 44% across these countries. In contrast, changing the decision architecture to an opt-out policy raises consent rates to between 85.9% and 99.98%. Designing the donation system with knowledge of the power of defaults can dramatically change donation rates without changing the options available to citizens. In contrast, a more intuitive strategy, such as the one in place in the United States, inspires defaults that result in many unnecessary deaths.
Concluding Thoughts
Our days are filled with decisions ranging from the small (what should I wear today?) to the important (should we get married?). Many have real-world consequences for our health, finances, and relationships. Simon, Kahneman, and Tversky created a field that highlights the surprising and predictable deficiencies of the human mind when making decisions. As we understand more about our own biases and the shortcomings of our thinking, we can begin to take them into account or to avoid them. Only now have we reached the frontier of using this knowledge to help people make better decisions.
Outside Resources
Book: Bazerman, M. H., & Moore, D. (2013). Judgment in managerial decision making (8th ed.). John Wiley & Sons Inc.
Book: Kahneman, D. (2011) Thinking, Fast and Slow. New York, NY: Farrar, Straus and Giroux.
Book: Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving Decisions about Health, Wealth, and Happiness. New Haven, CT: Yale University Press.
Discussion Questions
1. Are the biases in this module a problem in the real world?
2. How would you use this module to be a better decision maker?
3. Can you see any biases in today’s newspaper?
Vocabulary
Anchoring
The bias to be affected by an initial anchor, even if the anchor is arbitrary, and to insufficiently adjust our judgments away from that anchor.
Biases
The systematic and predictable mistakes that influence the judgment of even very talented human beings.
Bounded awareness
The systematic ways in which we fail to notice obvious and important information that is available to us.
Bounded ethicality
The systematic ways in which our ethics are limited in ways we are not even aware of ourselves.
Bounded rationality
Model of human behavior that suggests that humans try to make rational decisions but are bounded due to cognitive limitations.
Bounded self-interest
The systematic and predictable ways in which we care about the outcomes of others.
Bounded willpower
The tendency to place greater weight on present concerns rather than future concerns.
Framing
The bias to be systematically affected by the way in which information is presented, while holding the objective information constant.
Heuristics
Cognitive (or thinking) strategies that simplify decision making by using mental shortcuts.
Overconfident
The bias to have greater confidence in your judgment than is warranted based on a rational assessment.
System 1
Our intuitive decision-making system, which is typically fast, automatic, effortless, implicit, and emotional.
System 2
Our more deliberative decision-making system, which is slower, conscious, effortful, explicit, and logical.
• 6.1: Cognitive Development in Childhood
This module examines what cognitive development is, major theories about how it occurs, the roles of nature and nurture, whether it is continuous or discontinuous, and how research in the area is being used to improve education.
• 6.2: Adolescent Development
Adolescence is a period that begins with puberty and ends with the transition to adulthood (approximately ages 10-20). Physical changes associated with puberty are triggered by hormones. Cognitive changes include improvements in complex and abstract thought. Development also happens at different rates in distinct parts of the brain, which increases adolescents’ propensity for risky behavior, because increases in sensation-seeking and reward motivation precede increases in cognitive control.
• 6.3: Aging
Traditionally, research on aging described only the lives of people over age 65 and the very old. Contemporary theories and research recognize that biogenetic and psychological processes of aging are complex and lifelong. We consider contemporary questions about cognitive aging and changes in personality, self-related beliefs, social relationships, and subjective well-being. These four aspects of psychosocial aging are related to health and longevity.
06: Development
By Robert Siegler
Carnegie Mellon University
This module examines what cognitive development is, major theories about how it occurs, the roles of nature and nurture, whether it is continuous or discontinuous, and how research in the area is being used to improve education.
learning objectives
• Be able to identify and describe the main areas of cognitive development.
• Be able to describe major theories of cognitive development and what distinguishes them.
• Understand how nature and nurture work together to produce cognitive development.
• Understand why cognitive development is sometimes viewed as discontinuous and sometimes as continuous.
• Know some ways in which research on cognitive development is being used to improve education.
Introduction
By the time you reach adulthood you have learned a few things about how the world works. You know, for instance, that you can’t walk through walls or leap into the tops of trees. You know that although you cannot see your car keys they’ve got to be around here someplace. What’s more, you know that if you want to communicate complex ideas like ordering a triple-shot soy vanilla latte with chocolate sprinkles it’s better to use words with meanings attached to them rather than simply gesturing and grunting. People accumulate all this useful knowledge through the process of cognitive development, which involves a multitude of factors, both inherent and learned.
Cognitive development refers to the development of thinking across the lifespan. Defining thinking can be problematic, because no clear boundaries separate thinking from other mental activities. Thinking obviously involves the higher mental processes: problem solving, reasoning, creating, conceptualizing, categorizing, remembering, planning, and so on. However, thinking also involves other mental processes that seem more basic and at which even toddlers are skilled—such as perceiving objects and events in the environment, acting skillfully on objects to obtain goals, and understanding and producing language. Yet other areas of human development that involve thinking are not usually associated with cognitive development, because thinking isn’t a prominent feature of them—such as personality and temperament.
As the name suggests, cognitive development is about change. Children’s thinking changes in dramatic and surprising ways. Consider DeVries’s (1969) study of whether young children understand the difference between appearance and reality. To find out, she brought an unusually even-tempered cat named Maynard to a psychology laboratory and allowed the 3- to 6-year-old participants in the study to pet and play with him. DeVries then put a mask of a fierce dog on Maynard’s head, and asked the children what Maynard was. Despite all of the children having identified Maynard previously as a cat, now most 3-year-olds said that he was a dog and claimed that he had a dog’s bones and a dog’s stomach. In contrast, the 6-year-olds weren’t fooled; they had no doubt that Maynard remained a cat. Understanding how children’s thinking changes so dramatically in just a few years is one of the fascinating challenges in studying cognitive development.
There are several main types of theories of child development. Stage theories, such as Piaget’s stage theory, focus on whether children progress through qualitatively different stages of development. Sociocultural theories, such as that of Lev Vygotsky, emphasize how other people and the attitudes, values, and beliefs of the surrounding culture, influence children’s development. Information processing theories, such as that of David Klahr, examine the mental processes that produce thinking at any one time and the transition processes that lead to growth in that thinking.
At the heart of all of these theories, and indeed of all research on cognitive development, are two main questions: (1) How do nature and nurture interact to produce cognitive development? (2) Does cognitive development progress through qualitatively distinct stages? In the remainder of this module, we examine the answers that are emerging regarding these questions, as well as ways in which cognitive developmental research is being used to improve education.
Nature and Nurture
The most basic question about child development is how nature and nurture together shape development. Nature refers to our biological endowment, the genes we receive from our parents. Nurture refers to the environments, social as well as physical, that influence our development, everything from the womb in which we develop before birth to the homes in which we grow up, the schools we attend, and the many people with whom we interact.
The nature-nurture issue is often presented as an either-or question: Is our intelligence (for example) due to our genes or to the environments in which we live? In fact, however, every aspect of development is produced by the interaction of genes and environment. At the most basic level, without genes, there would be no child, and without an environment to provide nurture, there also would be no child.
The way in which nature and nurture work together can be seen in findings on visual development. Many people view vision as something that people either are born with or that is purely a matter of biological maturation, but it also depends on the right kind of experience at the right time. For example, development of depth perception, the ability to actively perceive the distance from oneself to objects in the environment, depends on seeing patterned light and having normal brain activity in response to the patterned light, in infancy (Held, 1993). If no patterned light is received, for example when a baby has severe cataracts or blindness that is not surgically corrected until later in development, depth perception remains abnormal even after the surgery.
Adding to the complexity of the nature-nurture interaction, children’s genes lead to their eliciting different treatment from other people, which influences their cognitive development. For example, infants’ physical attractiveness and temperament are influenced considerably by their genetic inheritance, but it is also the case that parents provide more sensitive and affectionate care to easygoing and attractive infants than to difficult and less attractive ones, which can contribute to the infants’ later cognitive development (Langlois et al., 1995; van den Boom & Hoeksma, 1994).
Also contributing to the complex interplay of nature and nurture is the role of children in shaping their own cognitive development. From the first days out of the womb, children actively choose to attend more to some things and less to others. For example, even 1-month-olds choose to look at their mother’s face more than at the faces of other women of the same age and general level of attractiveness (Bartrip, Morton, & de Schonen, 2001). Children’s contributions to their own cognitive development grow larger as they grow older (Scarr & McCartney, 1983). When children are young, their parents largely determine their experiences: whether they will attend day care, the children with whom they will have play dates, the books to which they have access, and so on. In contrast, older children and adolescents choose their environments to a larger degree. Their parents’ preferences largely determine how 5-year-olds spend time, but 15-year-olds’ own preferences largely determine when, if ever, they set foot in a library. Children’s choices often have large consequences. To cite one example, the more that children choose to read, the more that their reading improves in future years (Baker, Dreher, & Guthrie, 2000). Thus, the issue is not whether cognitive development is a product of nature or nurture; rather, the issue is how nature and nurture work together to produce cognitive development.
Does Cognitive Development Progress Through Distinct Stages?
Some aspects of the development of living organisms, such as the growth of the width of a pine tree, involve quantitative changes, with the tree getting a little wider each year. Other changes, such as the life cycle of a ladybug, involve qualitative changes, with the creature becoming a totally different type of entity after a transition than before (Figure 6.2.1). The existence of both gradual, quantitative changes and relatively sudden, qualitative changes in the world has led researchers who study cognitive development to ask whether changes in children’s thinking are gradual and continuous or sudden and discontinuous.
The great Swiss psychologist Jean Piaget proposed that children’s thinking progresses through a series of four discrete stages. By “stages,” he meant periods during which children reasoned similarly about many superficially different problems, with the stages occurring in a fixed order and the thinking within different stages differing in fundamental ways. The four stages that Piaget hypothesized were the sensorimotor stage (birth to 2 years), the preoperational reasoning stage (2 to 6 or 7 years), the concrete operational reasoning stage (6 or 7 to 11 or 12 years), and the formal operational reasoning stage (11 or 12 years and throughout the rest of life).
During the sensorimotor stage, children’s thinking is largely realized through their perceptions of the world and their physical interactions with it. Their mental representations are very limited. Consider Piaget’s object permanence task, which is one of his most famous problems. If an infant younger than 9 months of age is playing with a favorite toy, and another person removes the toy from view, for example by putting it under an opaque cover and not letting the infant immediately reach for it, the infant is very likely to make no effort to retrieve it and to show no emotional distress (Piaget, 1954). This is not due to their being uninterested in the toy or unable to reach for it; if the same toy is put under a clear cover, infants below 9 months readily retrieve it (Munakata, McClelland, Johnson, & Siegler, 1997). Instead, Piaget claimed that infants less than 9 months do not understand that objects continue to exist even when out of sight.
During the preoperational stage, according to Piaget, children can solve not only this simple problem (which they actually can solve after 9 months) but show a wide variety of other symbolic-representation capabilities, such as those involved in drawing and using language. However, such 2- to 7-year-olds tend to focus on a single dimension, even when solving problems would require them to consider multiple dimensions. This is evident in Piaget’s (1952) conservation problems. For example, if a glass of water is poured into a taller, thinner glass, children below age 7 generally say that there now is more water than before. Similarly, if a clay ball is reshaped into a long, thin sausage, they claim that there is now more clay, and if a row of coins is spread out, they claim that there are now more coins. In all cases, the children are focusing on one dimension, while ignoring the changes in other dimensions (for example, the greater width of the glass and the clay ball).
Children overcome this tendency to focus on a single dimension during the concrete operations stage, and think logically in most situations. However, according to Piaget, they still cannot think in systematic scientific ways, even when such thinking would be useful. Thus, if asked to find out which variables influence the period that a pendulum takes to complete its arc, and given weights that they can attach to strings in order to experiment, most children younger than age 12 perform biased experiments from which no conclusion can be drawn, and then conclude that whatever they originally believed is correct. For example, if a boy believed that weight was the only variable that mattered, he might put the heaviest weight on the shortest string and push it the hardest, and then conclude that just as he thought, weight is the only variable that matters (Inhelder & Piaget, 1958).
Finally, in the formal operations period, children attain the reasoning power of mature adults, which allows them to solve the pendulum problem and a wide range of other problems. However, this formal operations stage tends not to occur without exposure to formal education in scientific reasoning, and appears to be largely or completely absent from some societies that do not provide this type of education.
Although Piaget’s theory has been very influential, it has not gone unchallenged. Many more recent researchers have obtained findings indicating that cognitive development is considerably more continuous than Piaget claimed. For example, Diamond (1985) found that on the object permanence task described above, infants show earlier knowledge if the waiting period is shorter. At age 6 months, they retrieve the hidden object if the wait is no longer than 2 seconds; at 7 months, they retrieve it if the wait is no longer than 4 seconds; and so on. Even earlier, at 3 or 4 months, infants show surprise in the form of longer looking times if objects suddenly appear to vanish with no obvious cause (Baillargeon, 1987). Similarly, children’s specific experiences can greatly influence when developmental changes occur. Children of pottery makers in Mexican villages, for example, know that reshaping clay does not change the amount of clay at much younger ages than children who do not have similar experiences (Price-Williams, Gordon, & Ramirez, 1969).
So, is cognitive development fundamentally continuous or fundamentally discontinuous? A reasonable answer seems to be, “It depends on how you look at it and how often you look.” For example, under relatively facilitative circumstances, infants show early forms of object permanence by 3 or 4 months, and they gradually extend the range of times for which they can remember hidden objects as they grow older. However, on Piaget’s original object permanence task, infants do change quite quickly, toward the end of their first year, from not reaching for hidden toys to reaching for them, even after they’ve experienced a substantial delay before being allowed to reach. Thus, the debate between those who emphasize discontinuous, stage-like changes in cognitive development and those who emphasize gradual continuous changes remains a lively one.
Applications to Education
Understanding how children think and learn has proven useful for improving education. One example comes from the area of reading. Cognitive developmental research has shown that phonemic awareness—that is, awareness of the component sounds within words—is a crucial skill in learning to read. To measure awareness of the component sounds within words, researchers ask children to decide whether two words rhyme, to decide whether the words start with the same sound, to identify the component sounds within words, and to indicate what would be left if a given sound were removed from a word. Kindergartners’ performance on these tasks is the strongest predictor of reading achievement in third and fourth grade, even stronger than IQ or social class background (Nation, 2008). Moreover, teaching these skills to randomly chosen 4- and 5-year-olds results in their being better readers years later (National Reading Panel, 2000).
Another educational application of cognitive developmental research involves the area of mathematics. Even before they enter kindergarten, the mathematical knowledge of children from low-income backgrounds lags far behind that of children from more affluent backgrounds. Ramani and Siegler (2008) hypothesized that this difference is due to the children in middle- and upper-income families engaging more frequently in numerical activities, for example playing numerical board games such as Chutes and Ladders. Chutes and Ladders is a game with a number in each square; children start at the number one and spin a spinner or roll a die to determine how far to move their token. Playing this game seemed likely to teach children about numbers, because in it, larger numbers are associated with greater values on a variety of dimensions. In particular, the higher the number that a child’s token reaches, the greater the distance the token will have traveled from the starting point, the greater the number of physical movements the child will have made in moving the token from one square to another, the greater the number of number-words the child will have said and heard, and the more time will have passed since the beginning of the game. These spatial, kinesthetic, verbal, and time-based cues provide a broad-based, multisensory foundation for knowledge of numerical magnitudes (the sizes of numbers), a type of knowledge that is closely related to mathematics achievement test scores (Booth & Siegler, 2006).
Playing this numerical board game for roughly 1 hour, distributed over a 2-week period, improved low-income children’s knowledge of numerical magnitudes, ability to read printed numbers, and skill at learning novel arithmetic problems. The gains lasted for months after the game-playing experience (Ramani & Siegler, 2008; Siegler & Ramani, 2009). An advantage of this type of educational intervention is that it has minimal if any cost—a parent could just draw a game on a piece of paper.
Understanding of cognitive development is advancing on many different fronts. One exciting area is linking changes in brain activity to changes in children’s thinking (Nelson et al., 2006). Although many people believe that brain maturation is something that occurs before birth, the brain actually continues to change in large ways for many years thereafter. For example, a part of the brain called the prefrontal cortex, which is located at the front of the brain and is particularly involved with planning and flexible problem solving, continues to develop throughout adolescence (Blakemore & Choudhury, 2006). Such new research domains, as well as enduring issues such as nature and nurture, continuity and discontinuity, and how to apply cognitive development research to education, ensure that cognitive development will continue to be an exciting area of research in the coming years.
Conclusion
Research into cognitive development has shown us that minds don’t just form according to a uniform blueprint or innate intellect, but through a combination of influences. For instance, if we want our children to have a strong grasp of language, we could concentrate on phonemic awareness early on. If we want them to be good at math and science, we could engage them in numerical games and activities from a young age. Perhaps most importantly, we no longer think of brains as empty vessels waiting to be filled with knowledge but as adaptable organs that continue to develop all the way through early adulthood.
Outside Resources
Book: Frye, D., Baroody, A., Burchinal, M., Carver, S. M., Jordan, N. C., & McDowell, J. (2013). Teaching math to young children: A practice guide. Washington, DC: National Center for Education Evaluation and Regional Assistance (NCEE), Institute of Education Sciences, U.S. Department of Education.
Book: Goswami, U. G. (2010). The Blackwell Handbook of Childhood Cognitive Development. New York: John Wiley and Sons.
Book: Kuhn, D., & Siegler, R. S. (Vol. Eds.). (2006). Volume 2: Cognition, perception, and language. In W. Damon & R. M. Lerner (Series Eds.), Handbook of child psychology (6th ed.). Hoboken, NJ: Wiley.
Book: Miller, P. H. (2011). Theories of developmental psychology (5th ed.). New York: Worth.
Book: Siegler, R. S., & Alibali, M. W. (2004). Children's thinking (4th ed.). Upper Saddle River, NJ: Prentice-Hall.
Discussion Questions
1. Why are there different theories of cognitive development? Why don’t researchers agree on which theory is the right one?
2. Do children’s natures differ, or do differences among children only reflect differences in their experiences?
3. Do you see development as more continuous or more discontinuous?
4. Can you think of ways other than those described in the module in which research on cognitive development could be used to improve education?
Vocabulary
Chutes and Ladders
A numerical board game that seems to be useful for building numerical knowledge.
Concrete operations stage
Piagetian stage between ages 7 and 12 when children can think logically about concrete situations but not engage in systematic scientific reasoning.
Conservation problems
Problems pioneered by Piaget in which physical transformation of an object or set of objects changes a perceptually salient dimension but not the quantity that is being asked about.
Continuous development
Ways in which development occurs in a gradual incremental manner, rather than through sudden jumps.
Depth perception
The ability to perceive the distance of objects in the environment from oneself.
Discontinuous development
Ways in which development occurs through distinct stages, with sudden qualitative shifts rather than gradual, incremental change.
Formal operations stage
Piagetian stage starting at age 12 years and continuing for the rest of life, in which adolescents may gain the reasoning powers of educated adults.
Information processing theories
Theories that focus on describing the cognitive processes that underlie thinking at any one age and cognitive growth over time.
Nature
The genes that children bring with them to life and that influence all aspects of their development.
Numerical magnitudes
The sizes of numbers.
Nurture
The environments, starting with the womb, that influence all aspects of children’s development.
Object permanence task
The Piagetian task in which infants below about 9 months of age fail to search for an object that is removed from their sight and, if not allowed to search immediately for the object, act as if they do not know that it continues to exist.
Phonemic awareness
Awareness of the component sounds within words.
Piaget’s theory
Theory that development occurs through a sequence of discontinuous stages: the sensorimotor, preoperational, concrete operational, and formal operational stages.
Preoperational reasoning stage
Period within Piagetian theory from age 2 to 7 years, in which children can represent objects through drawing and language but cannot solve logical reasoning problems, such as the conservation problems.
Qualitative changes
Large, fundamental change, as when a caterpillar changes into a butterfly; stage theories such as Piaget’s posit that each stage reflects qualitative change relative to previous stages.
Quantitative changes
Gradual, incremental change, as in the growth of a pine tree’s girth.
Sensorimotor stage
Period within Piagetian theory from birth to age 2 years, during which children come to represent the enduring reality of objects.
Sociocultural theories
Theory founded in large part by Lev Vygotsky that emphasizes how other people and the attitudes, values, and beliefs of the surrounding culture influence children’s development.
By Jennifer Lansford
Duke University
Adolescence is a period that begins with puberty and ends with the transition to adulthood (approximately ages 10–20). Physical changes associated with puberty are triggered by hormones. Cognitive changes include improvements in complex and abstract thought, as well as development that happens at different rates in distinct parts of the brain and increases adolescents’ propensity for risky behavior because increases in sensation-seeking and reward motivation precede increases in cognitive control. Adolescents’ relationships with parents go through a period of redefinition in which adolescents become more autonomous, and aspects of parenting, such as distal monitoring and psychological control, become more salient. Peer relationships are important sources of support and companionship during adolescence yet can also promote problem behaviors. Same-sex peer groups evolve into mixed-sex peer groups, and adolescents’ romantic relationships tend to emerge from these groups. Identity formation occurs as adolescents explore and commit to different roles and ideological positions. Nationality, gender, ethnicity, socioeconomic status, religious background, sexual orientation, and genetic factors shape how adolescents behave and how others respond to them, and are sources of diversity in adolescence.
learning objectives
• Describe major features of physical, cognitive, and social development during adolescence.
• Understand why adolescence is a period of heightened risk taking.
• Be able to explain sources of diversity in adolescent development.
Adolescence Defined
Adolescence is a developmental stage that has been defined as starting with puberty and ending with the transition to adulthood (approximately ages 10–20). Adolescence has evolved historically, with evidence indicating that this stage is lengthening as individuals start puberty earlier and transition to adulthood later than in the past. Puberty today begins, on average, at age 10–11 years for girls and 11–12 years for boys. This average age of onset has decreased gradually over time since the 19th century by 3–4 months per decade, which has been attributed to a range of factors including better nutrition, obesity, increased father absence, and other environmental factors (Steinberg, 2013). Completion of formal education, financial independence from parents, marriage, and parenthood have all been markers of the end of adolescence and beginning of adulthood, and all of these transitions happen, on average, later now than in the past. In fact, the prolonging of adolescence has prompted the introduction of a new developmental period called emerging adulthood that captures these developmental changes out of adolescence and into adulthood, occurring from approximately ages 18 to 29 (Arnett, 2000).
This module will outline changes that occur during adolescence in three domains: physical, cognitive, and social. Within the social domain, changes in relationships with parents, peers, and romantic partners will be considered. Next, the module turns to adolescents’ psychological and behavioral adjustment, including identity formation, aggression and antisocial behavior, anxiety and depression, and academic achievement. Finally, the module summarizes sources of diversity in adolescents’ experiences and development.
Physical Changes
Physical changes of puberty mark the onset of adolescence (Lerner & Steinberg, 2009). For both boys and girls, these changes include a growth spurt in height, growth of pubic and underarm hair, and skin changes (e.g., pimples). Boys also experience growth in facial hair and a deepening of their voice. Girls experience breast development and begin menstruating. These pubertal changes are driven by hormones, particularly an increase in testosterone for boys and estrogen for girls.
Cognitive Changes
Major changes in the structure and functioning of the brain occur during adolescence and result in cognitive and behavioral developments (Steinberg, 2008). Cognitive changes during adolescence include a shift from concrete to more abstract and complex thinking. Such changes are fostered by improvements during early adolescence in attention, memory, processing speed, and metacognition (ability to think about thinking and therefore make better use of strategies like mnemonic devices that can improve thinking). Early in adolescence, changes in the brain’s dopaminergic system contribute to increases in adolescents’ sensation-seeking and reward motivation. Later in adolescence, the brain’s cognitive control centers in the prefrontal cortex develop, increasing adolescents’ self-regulation and future orientation. The difference in timing of the development of these different regions of the brain contributes to more risk taking during middle adolescence because adolescents are motivated to seek thrills that sometimes come from risky behavior, such as reckless driving, smoking, or drinking, and have not yet developed the cognitive control to resist impulses or focus equally on the potential risks (Steinberg, 2008). One of the world’s leading experts on adolescent development, Laurence Steinberg, likens this to engaging a powerful engine before the braking system is in place. The result is that adolescents are more prone to risky behaviors than are children or adults.
Social Changes
Parents
Although peers take on greater importance during adolescence, family relationships remain important too. One of the key changes during adolescence involves a renegotiation of parent–child relationships. As adolescents strive for more independence and autonomy during this time, different aspects of parenting become more salient. For example, parents’ distal supervision and monitoring become more important as adolescents spend more time away from parents and in the presence of peers. Parental monitoring encompasses a wide range of behaviors such as parents’ attempts to set rules and know their adolescents’ friends, activities, and whereabouts, in addition to adolescents’ willingness to disclose information to their parents (Stattin & Kerr, 2000). Psychological control, which involves manipulation and intrusion into adolescents’ emotional and cognitive world through invalidating adolescents’ feelings and pressuring them to think in particular ways (Barber, 1996), is another aspect of parenting that becomes more salient during adolescence and is related to more problematic adolescent adjustment.
Peers
As children become adolescents, they usually begin spending more time with their peers and less time with their families, and these peer interactions are increasingly unsupervised by adults. Children’s notions of friendship often focus on shared activities, whereas adolescents’ notions of friendship increasingly focus on intimate exchanges of thoughts and feelings. During adolescence, peer groups evolve from primarily single-sex to mixed-sex. Adolescents within a peer group tend to be similar to one another in behavior and attitudes, which has been explained as being a function of homophily (adolescents who are similar to one another choose to spend time together in a “birds of a feather flock together” way) and influence (adolescents who spend time together shape each other’s behavior and attitudes). One of the most widely studied aspects of adolescent peer influence is known as deviant peer contagion (Dishion & Tipsord, 2011), which is the process by which peers reinforce problem behavior by laughing or showing other signs of approval that then increase the likelihood of future problem behavior.
Peers can serve both positive and negative functions during adolescence. Negative peer pressure can lead adolescents to make riskier decisions or engage in more problematic behavior than they would alone or in the presence of their family. For example, adolescents are much more likely to drink alcohol, use drugs, and commit crimes when they are with their friends than when they are alone or with their family. However, peers also serve as an important source of social support and companionship during adolescence, and adolescents with positive peer relationships are happier and better adjusted than those who are socially isolated or have conflictual peer relationships.
Crowds are an emerging level of peer relationships in adolescence. In contrast to friendships (which are reciprocal dyadic relationships) and cliques (which refer to groups of individuals who interact frequently), crowds are characterized more by shared reputations or images than actual interactions (Brown & Larson, 2009). These crowds reflect different prototypic identities (such as jocks or brains) and are often linked with adolescents’ social status and peers’ perceptions of their values or behaviors.
Romantic relationships
Adolescence is the developmental period during which romantic relationships typically first emerge. Initially, same-sex peer groups that were common during childhood expand into mixed-sex peer groups that are more characteristic of adolescence. Romantic relationships often form in the context of these mixed-sex peer groups (Connolly, Furman, & Konarski, 2000). Although romantic relationships during adolescence are often short-lived rather than long-term committed partnerships, their importance should not be minimized. Adolescents spend a great deal of time focused on romantic relationships, and their positive and negative emotions are more tied to romantic relationships (or lack thereof) than to friendships, family relationships, or school (Furman & Shaffer, 2003). Romantic relationships contribute to adolescents’ identity formation, changes in family and peer relationships, and adolescents’ emotional and behavioral adjustment.
Furthermore, romantic relationships are centrally connected to adolescents’ emerging sexuality. Parents, policymakers, and researchers have devoted a great deal of attention to adolescents’ sexuality, in large part because of concerns related to sexual intercourse, contraception, and preventing teen pregnancies. However, sexuality involves more than this narrow focus. For example, adolescence is often when individuals who are lesbian, gay, bisexual, or transgender come to perceive themselves as such (Russell, Clarke, & Clary, 2009). Thus, romantic relationships are a domain in which adolescents experiment with new behaviors and identities.
Behavioral and Psychological Adjustment
Identity formation
Theories of adolescent development often focus on identity formation as a central issue. For example, in Erikson’s (1968) classic theory of developmental stages, identity formation was highlighted as the primary indicator of successful development during adolescence (in contrast to role confusion, which would be an indicator of not successfully meeting the task of adolescence). Marcia (1966) described identity formation during adolescence as involving both decision points and commitments with respect to ideologies (e.g., religion, politics) and occupations. He described four identity statuses: foreclosure, identity diffusion, moratorium, and identity achievement. Foreclosure occurs when an individual commits to an identity without exploring options. Identity diffusion occurs when adolescents neither explore nor commit to any identities. Moratorium is a state in which adolescents are actively exploring options but have not yet made commitments. Identity achievement occurs when individuals have explored different options and then made identity commitments. Building on this work, other researchers have investigated more specific aspects of identity. For example, Phinney (1989) proposed a model of ethnic identity development that included stages of unexplored ethnic identity, ethnic identity search, and achieved ethnic identity.
Aggression and antisocial behavior
Several major theories of the development of antisocial behavior treat adolescence as an important period. Patterson’s (1982) early versus late starter model of the development of aggressive and antisocial behavior distinguishes youths whose antisocial behavior begins during childhood (early starters) versus adolescence (late starters). According to the theory, early starters are at greater risk for long-term antisocial behavior that extends into adulthood than are late starters. Late starters who become antisocial during adolescence are theorized to experience poor parental monitoring and supervision, aspects of parenting that become more salient during adolescence. Poor monitoring and lack of supervision contribute to increasing involvement with deviant peers, which in turn promotes adolescents’ own antisocial behavior. Late starters desist from antisocial behavior when changes in the environment make other options more appealing. Similarly, Moffitt’s (1993) life-course persistent versus adolescent-limited model distinguishes between antisocial behavior that begins in childhood versus adolescence. Moffitt regards adolescent-limited antisocial behavior as resulting from a “maturity gap” between adolescents’ dependence on and control by adults and their desire to demonstrate their freedom from adult constraint. However, as they continue to develop, and legitimate adult roles and privileges become available to them, there are fewer incentives to engage in antisocial behavior, leading to desistance in these antisocial behaviors.
Anxiety and depression
Developmental models of anxiety and depression also treat adolescence as an important period, especially in terms of the emergence of gender differences in prevalence rates that persist through adulthood (Rudolph, 2009). Starting in early adolescence, compared with males, females have rates of anxiety that are about twice as high and rates of depression that are 1.5 to 3 times as high (American Psychiatric Association, 2013). Although the rates vary across specific anxiety and depression diagnoses, rates for some disorders are markedly higher in adolescence than in childhood or adulthood. For example, prevalence rates for specific phobias are about 5% in children and 3%–5% in adults but 16% in adolescents. Anxiety and depression are particularly concerning because suicide is one of the leading causes of death during adolescence. Developmental models focus on interpersonal contexts in both childhood and adolescence that foster depression and anxiety (e.g., Rudolph, 2009). Family adversity, such as abuse and parental psychopathology, during childhood sets the stage for social and behavioral problems during adolescence. Adolescents with such problems generate stress in their relationships (e.g., by resolving conflict poorly and excessively seeking reassurance) and select into more maladaptive social contexts (e.g., “misery loves company” scenarios in which depressed youths select other depressed youths as friends and then frequently co-ruminate as they discuss their problems, exacerbating negative affect and stress). These processes are intensified for girls compared with boys because girls have more relationship-oriented goals related to intimacy and social approval, leaving them more vulnerable to disruption in these relationships. Anxiety and depression then exacerbate problems in social relationships, which in turn contribute to the stability of anxiety and depression over time.
Academic achievement
Adolescents spend more waking time in school than in any other context (Eccles & Roeser, 2011). Academic achievement during adolescence is predicted by interpersonal (e.g., parental engagement in adolescents’ education), intrapersonal (e.g., intrinsic motivation), and institutional (e.g., school quality) factors. Academic achievement is important in its own right as a marker of positive adjustment during adolescence but also because academic achievement sets the stage for future educational and occupational opportunities. The most serious consequence of school failure, particularly dropping out of school, is the high risk of unemployment or underemployment in adulthood that follows. High achievement can set the stage for college or future vocational training and opportunities.
Diversity
Adolescent development does not necessarily follow the same pathway for all individuals. Certain features of adolescence, particularly with respect to biological changes associated with puberty and cognitive changes associated with brain development, are relatively universal. But other features of adolescence depend largely on circumstances that are more environmentally variable. For example, adolescents growing up in one country might have different opportunities for risk taking than adolescents in a different country, and supports and sanctions for different behaviors in adolescence depend on laws and values that might be specific to where adolescents live. Likewise, different cultural norms regarding family and peer relationships shape adolescents’ experiences in these domains. For example, in some countries, adolescents’ parents are expected to retain control over major decisions, whereas in other countries, adolescents are expected to begin sharing in or taking control of decision making.
Even within the same country, adolescents’ gender, ethnicity, immigrant status, religion, sexual orientation, socioeconomic status, and personality can shape both how adolescents behave and how others respond to them, creating diverse developmental contexts for different adolescents. For example, early puberty (that occurs before most other peers have experienced puberty) appears to be associated with worse outcomes for girls than boys, likely in part because girls who enter puberty early tend to associate with older boys, which in turn is associated with early sexual behavior and substance use. For adolescents who are ethnic or sexual minorities, discrimination sometimes presents a set of challenges that nonminorities do not face.
Finally, genetic variations contribute an additional source of diversity in adolescence. Current approaches emphasize gene × environment interactions, which often follow a differential susceptibility model (Belsky & Pluess, 2009). That is, particular genetic variations are considered riskier than others, but genetic variations also can make adolescents more or less susceptible to environmental factors. For example, the association between the CHRM2 genotype and adolescent externalizing behavior (aggression and delinquency) has been found in adolescents whose parents are low in monitoring behaviors (Dick et al., 2011). Thus, it is important to bear in mind that individual differences play an important role in adolescent development.
Conclusions
Adolescent development is characterized by biological, cognitive, and social changes. Social changes are particularly notable as adolescents become more autonomous from their parents, spend more time with peers, and begin exploring romantic relationships and sexuality. Adjustment during adolescence is reflected in identity formation, which often involves a period of exploration followed by commitments to particular identities. Adolescence is characterized by risky behavior, which is made more likely by changes in the brain in which reward-processing centers develop more rapidly than cognitive control systems, making adolescents more sensitive to rewards than to possible negative consequences. Despite these generalizations, factors such as country of residence, gender, ethnicity, and sexual orientation shape development in ways that lead to diversity of experiences across adolescence.
Outside Resources
Podcasts: Society for Research on Adolescence website with links to podcasts on a variety of topics, from autonomy-relatedness in adolescence, to the health ramifications of growing up in the United States.
www.s-r-a.org/sra-news/podcasts
Study: The National Longitudinal Study of Adolescent to Adult Health (Add Health) is a longitudinal study of a nationally representative sample of adolescents in grades 7-12 in the United States during the 1994-95 school year. Add Health combines data on respondents’ social, economic, psychological and physical well-being with contextual data on the family, neighborhood, community, school, friendships, peer groups, and romantic relationships.
http://www.cpc.unc.edu/projects/addhealth
Video: This is a series of TED talks on topics from the mysterious workings of the adolescent brain, to videos about surviving anxiety in adolescence.
http://tinyurl.com/lku4a3k
Web: UNICEF website on adolescents around the world. UNICEF provides videos and other resources as part of an initiative to challenge common preconceptions about adolescence.
http://www.unicef.org/adolescence/index.html
Discussion Questions
1. What can parents do to promote their adolescents’ positive adjustment?
2. In what ways do changes in brain development and cognition make adolescents particularly susceptible to peer influence?
3. How could interventions designed to prevent or reduce adolescents’ problem behavior be developed to take advantage of what we know about adolescent development?
4. Reflecting on your own adolescence, provide examples of times when you think your experience was different from those of your peers as a function of something unique about you.
5. In what ways was your experience of adolescence different from your parents’ experience of adolescence? How do you think adolescence may be different 20 years from now?
Vocabulary
Crowds
Adolescent peer groups characterized by shared reputations or images.
Deviant peer contagion
The spread of problem behaviors within groups of adolescents.
Differential susceptibility
Genetic factors that make individuals more or less responsive to environmental experiences.
Foreclosure
Individuals commit to an identity without exploration of options.
Homophily
Adolescents tend to associate with peers who are similar to themselves.
Identity achievement
Individuals have explored different options and then made commitments.
Identity diffusion
Adolescents neither explore nor commit to any roles or ideologies.
Moratorium
State in which adolescents are actively exploring options but have not yet made identity commitments.
Psychological control
Parents’ manipulation of and intrusion into adolescents’ emotional and cognitive world through invalidating adolescents’ feelings and pressuring them to think in particular ways.
By Tara Queen and Jacqui Smith
University of Michigan
Traditionally, research on aging described only the lives of people over age 65 and the very old. Contemporary theories and research recognize that biogenetic and psychological processes of aging are complex and lifelong. Functioning in each period of life is influenced by what happened earlier and, in turn, affects subsequent change. We all age in specific social and historical contexts. Together, these multiple influences on aging make it difficult to define when middle-age or old age begins. This module describes central concepts and research about adult development and aging. We consider contemporary questions about cognitive aging and changes in personality, self-related beliefs, social relationships, and subjective well-being. These four aspects of psychosocial aging are related to health and longevity.
learning objectives
• Explain research approaches to studying aging.
• Describe cognitive, psychosocial, and physical changes that occur with age.
• Provide examples of how age-related changes in these domains are observed in the context of everyday life.
Introduction
We are currently living in an aging society (Rowe, 2009). Indeed, by 2030 when the last of the Baby Boomers reach age 65, the U.S. older population will be double that of 2010. Furthermore, because of increases in average life expectancy, each new generation can expect to live longer than their parents’ generation and certainly longer than their grandparents’ generation. As a consequence, it is time for individuals of all ages to rethink their personal life plans and consider prospects for a long life. When is the best time to start a family? Will the education gained up to age 20 be sufficient to cope with future technological advances and marketplace needs? What is the right balance between work, family, and leisure throughout life? What's the best age to retire? How can I age successfully and enjoy life to the fullest when I'm 80 or 90? In this module we will discuss several different domains of psychological research on aging that will help answer these important questions.
Overview: Life Span and Life Course Perspectives on Aging
Just as young adults differ from one another, older adults are also not all the same. In each decade of adulthood, we observe substantial heterogeneity in cognitive functioning, personality, social relationships, lifestyle, beliefs, and satisfaction with life. This heterogeneity reflects differences in rates of biogenetic and psychological aging and the sociocultural contexts and history of people's lives (Bronfenbrenner, 1979; Fingerman, Berg, Smith, & Antonucci, 2011). Theories of aging describe how these multiple factors interact and change over time. They describe why functioning differs on average between young, middle-aged, young-old, and very old adults and why there is heterogeneity within these age groups. Life course theories, for example, highlight the effects of social expectations and the normative timing of life events and social roles (e.g., becoming a parent, retirement). They also consider the lifelong cumulative effects of membership in specific cohorts (generations) and sociocultural subgroups (e.g., race, gender, socioeconomic status) and exposure to historical events (e.g., war, revolution, natural disasters; Elder, Johnson, & Crosnoe, 2003; Settersten, 2005). Life span theories complement the life-course perspective with a greater focus on processes within the individual (e.g., the aging brain). This approach emphasizes the patterning of lifelong intra- and inter-individual differences in the shape (gain, maintenance, loss), level, and rate of change (Baltes, 1987, 1997). Both life course and life span researchers generally rely on longitudinal studies to examine hypotheses about different patterns of aging associated with the effects of biogenetic, life history, social, and personal factors. Cross-sectional studies provide information about age-group differences, but these are confounded with cohort, time of study, and historical effects.
Cognitive Aging
Researchers have identified areas of both losses and gains in cognition in older age. Cognitive ability and intelligence are often measured using standardized tests and validated measures. The psychometric approach has identified two categories of intelligence that show different rates of change across the life span (Schaie & Willis, 1996). Fluid intelligence refers to information processing abilities, such as logical reasoning, remembering lists, spatial ability, and reaction time. Crystallized intelligence encompasses abilities that draw upon experience and knowledge. Measures of crystallized intelligence include vocabulary tests, solving number problems, and understanding texts.
With age, systematic declines are observed on cognitive tasks requiring self-initiated, effortful processing, without the aid of supportive memory cues (Park, 2000). Older adults tend to perform worse than young adults on memory tasks that involve recall of information, where individuals must retrieve information they learned previously without the help of a list of possible choices. For example, older adults may have more difficulty recalling facts such as names or contextual details about where or when something happened (Craik, 2000). What might explain these deficits as we age? As we age, working memory, or our ability to simultaneously store and use information, becomes less efficient (Craik & Bialystok, 2006). The ability to process information quickly also decreases with age. This slowing of processing speed may explain age differences on many different cognitive tasks (Salthouse, 2004). Some researchers have argued that inhibitory functioning, or the ability to focus on certain information while suppressing attention to less pertinent information, declines with age and may explain age differences in performance on cognitive tasks (Hasher & Zacks, 1988). Finally, it is well established that our hearing and vision decline as we age. Longitudinal research suggests that deficits in sensory functioning explain age differences in a variety of cognitive abilities (Baltes & Lindenberger, 1997).
Fewer age differences are observed when memory cues are available, such as for recognition memory tasks, or when individuals can draw upon acquired knowledge or experience. For example, older adults often perform as well as, if not better than, young adults on tests of word knowledge or vocabulary. With age often comes expertise, and research has pointed to areas where aging experts perform as well as or better than younger individuals. For example, older typists were found to compensate for age-related declines in speed by looking farther ahead at printed text (Salthouse, 1984). Compared to younger players, older chess experts are able to focus on a smaller set of possible moves, leading to greater cognitive efficiency (Charness, 1981). Accrued knowledge of everyday tasks, such as grocery prices, can help older adults to make better decisions than young adults (Tentori, Osherson, Hasher, & May, 2001).
How do changes or maintenance of cognitive ability affect older adults’ everyday lives? Researchers have studied cognition in the context of several different everyday activities. One example is driving. Although older adults often have more years of driving experience, cognitive declines related to reaction time or attentional processes may pose limitations under certain circumstances (Park & Gutchess, 2000). Research on interpersonal problem solving suggests that older adults use more effective strategies than younger adults to navigate through social and emotional problems (Blanchard-Fields, 2007). In the context of work, researchers rarely find that older individuals perform worse on the job (Park & Gutchess, 2000). Similar to everyday problem solving, older workers may develop more efficient strategies and rely on expertise to compensate for cognitive decline.
Personality and Self-Related Processes
Research on adult personality examines normative age-related increases and decreases in the expression of the so-called "Big Five" traits—extraversion, neuroticism, conscientiousness, agreeableness, and openness to new experience. Does personality change throughout adulthood? Previously the answer was no, but contemporary research shows that although some people’s personalities are relatively stable over time, others’ are not (Lucas & Donnellan, 2011; Roberts & Mroczek, 2008). Longitudinal studies reveal average changes during adulthood in the expression of some traits (e.g., neuroticism and openness decrease with age and conscientiousness increases) and individual differences in these patterns due to idiosyncratic life events (e.g., divorce, illness). Longitudinal research also suggests that adult personality traits, such as conscientiousness, predict important life outcomes including job success, health, and longevity (Friedman, Tucker, Tomlinson-Keasey, Schwartz, Wingard, & Criqui, 1993; Roberts, Kuncel, Shiner, Caspi, & Goldberg, 2007).
In contrast to the relative stability of personality traits, theories about the aging self propose changes in self-related knowledge, beliefs, and autobiographical narratives. Responses to questions such as “Tell me something about yourself. Who are you?” or “What are your hopes for the future?” provide insight into the characteristics and life themes that an individual considers uniquely distinguish him or herself from others. These self-descriptions enhance self-esteem and guide behavior (Markus & Nurius, 1986; McAdams, 2006). Theory suggests that as we age, themes that were relatively unimportant in young and middle adulthood gain in salience (e.g., generativity, health) and that people view themselves as improving over time (Ross & Wilson, 2003). Reorganizing personal life narratives and self-descriptions is a major task of midlife and young-old age due to transformations in professional and family roles and obligations. In advanced old age, self-descriptions are often characterized by a life review and reflections about having lived a long life. Birren and Schroots (2006), for example, found the process of life review in late life helped individuals confront and cope with the challenges of old age.
One aspect of the self that particularly interests life span and life course psychologists is the individual’s perception and evaluation of their own aging and identification with an age group. Subjective age is a multidimensional construct that indicates how old (or young) a person feels and into which age group a person categorizes him- or herself. After early adulthood, most people say that they feel younger than their chronological age and the gap between subjective age and actual age generally increases. On average, after age 40 people report feeling 20% younger than their actual age—a 60-year-old, for example, might report feeling closer to 48 (e.g., Rubin & Berntsen, 2006). Asking people how satisfied they are with their own aging assesses an evaluative component of age identity. Whereas some aspects of age identity are positively valued (e.g., acquiring seniority in a profession or becoming a grandparent), others may be less valued, depending on societal context. Perceived physical age (i.e., the age one looks in a mirror) is one aspect that requires considerable self-related adaptation in social and cultural contexts that value young bodies. Feeling younger and being satisfied with one’s own aging are expressions of positive self-perceptions of aging. They reflect the operation of self-related processes that enhance well-being. Levy (2009) found that older individuals who are able to adapt to and accept changes in their appearance and physical capacity in a positive way report higher well-being, have better health, and live longer.
Social Relationships
Social ties to family, friends, mentors, and peers are primary resources of information, support, and comfort. Individuals develop and age together with family and friends and interact with others in the community. Across the life course, social ties are accumulated, lost, and transformed. Already in early life, there are multiple sources of heterogeneity in the characteristics of each person's social network of relationships (e.g., size, composition, and quality). Life course and life span theories and research about age-related patterns in social relationships focus on understanding changes in the processes underlying social connections. Antonucci's Convoy Model of Social Relations (2001; Kahn & Antonucci, 1980), for example, suggests that the social connections that people accumulate are held together by exchanges in social support (e.g., tangible and emotional). The frequency, types, and reciprocity of the exchanges change with age and in response to need, and in turn, these exchanges impact the health and well-being of the givers and receivers in the convoy. In many relationships, it is not the actual objective exchange of support that is critical but instead the perception that support is available if needed (Uchino, 2009). Carstensen’s Socioemotional Selectivity Theory (1993; Carstensen, Isaacowitz, & Charles, 1999) focuses on changes in motivation for actively seeking social contact with others. She proposes that with increasing age our motivational goals change from information gathering to emotion regulation. To optimize the experience of positive affect, older adults actively restrict their social life to prioritize time spent with emotionally close significant others. In line with this, older marriages are found to be characterized by enhanced positive and reduced negative interactions and older partners show more affectionate behavior during conflict discussions than do middle-aged partners (Carstensen, Gottman, & Levenson, 1995). 
Research showing that older adults have smaller networks compared to young adults and tend to avoid negative interactions also supports this theory. Similar selective processes are also observed when time horizons for interactions with close partners shrink temporarily for young adults (e.g., impending geographical separations).
Much research focuses on the associations between long-term social relationships and health in later life. Older married individuals who receive positive social and emotional support from their partner generally report better health than their unmarried peers (Antonucci, 2001; Umberson, Williams, Powers, Liu, & Needham, 2006; Waite & Gallagher, 2000). Despite the overall positive health effects of being married in old age (compared with being widowed, divorced, or single), living as a couple can have a "dark side" if the relationship is strained or if one partner is the primary caregiver. The consequences of positive and negative aspects of relationships are complex (Birditt & Antonucci, 2008; Rook, 1998; Uchino, 2009). For example, in some circumstances, criticism from a partner may be perceived as valid and useful feedback whereas in others it is considered unwarranted and hurtful. In long-term relationships, habitual negative exchanges might have diminished effects. Parent-child and sibling relationships are often the most long-term and emotion-laden social ties. Across the life span, the parent-child tie, for example, is characterized by a paradox of solidarity, conflict, and ambivalence (Fingerman, Chen, Hay, Cichy, & Lefkowitz, 2006).
Emotion and Well-being
As we get older, the likelihood of losing loved ones or experiencing declines in health increases. Does the experience of such losses result in decreases in well-being in older adulthood? Researchers have found that well-being differs across the life span and that the patterns of these differences depend on how well-being is measured.
Measures of global subjective well-being assess individuals’ overall perceptions of their lives. This can include questions about life satisfaction or judgments of whether individuals are currently living the best life possible. What factors may contribute to how people respond to these questions? Age, health, personality, social support, and life experiences have been shown to influence judgments of global well-being. It is important to note that predictors of well-being may change as we age. What is important to life satisfaction in young adulthood can be different in later adulthood (George, 2010). Early research on well-being argued that life events such as marriage or divorce can temporarily influence well-being, but people quickly adapt and return to a neutral baseline (called the hedonic treadmill; Diener, Lucas, & Scollon, 2006). More recent research suggests otherwise. Using longitudinal data, researchers have examined well-being prior to, during, and after major life events such as widowhood, marriage, and unemployment (Lucas, 2007). Different life events influence well-being in different ways, and individuals often do not adapt back to baseline levels of well-being. Events such as unemployment may have a lasting negative influence on well-being as people age. Research suggests that global well-being is highest in early and later adulthood and lowest in midlife (Stone, Schwartz, Broderick, & Deaton, 2010).
Hedonic well-being refers to the emotional component of well-being and includes measures of positive (e.g., happiness, contentment) and negative affect (e.g., stress, sadness). The pattern of positive affect across the adult life span is similar to that of global well-being, with experiences of positive emotions such as happiness and enjoyment being highest in young and older adulthood. Experiences of negative affect, particularly stress and anger, tend to decrease with age. Experiences of sadness are lowest in early and later adulthood compared to midlife (Stone et al., 2010). Other research finds that older adults report more positive and less negative affect than middle age and younger adults (Magai, 2008; Mroczek, 2001). It should be noted that both global well-being and positive affect tend to taper off during late older adulthood and these declines may be accounted for by increases in health-related losses during these years (Charles & Carstensen, 2010).
Psychological well-being aims to evaluate the positive aspects of psychosocial development, as opposed to factors of ill-being, such as depression or anxiety. Ryff’s model of psychological well-being proposes six core dimensions of positive well-being. Compared with younger individuals, older adults tend to report higher environmental mastery (feelings of competence and control in managing everyday life) and autonomy (independence), lower personal growth and purpose in life, and similar levels of positive relations with others (Ryff, 1995). Links between health and interpersonal flourishing, or having high-quality connections with others, may be important in understanding how to optimize quality of life in old age (Ryff & Singer, 2000).
Successful Aging and Longevity
Increases in average life expectancy in the 20th century and evidence from twin studies that suggests that genes account for only 25% of the variance in human life spans have opened new questions about implications for individuals and society (Christensen, Doblhammer, Rau, & Vaupel, 2009). What environmental and behavioral factors contribute to a healthy long life? Is it possible to intervene to slow processes of aging or to minimize cognitive decline, prevent dementia, and ensure life quality at the end of life (Fratiglioni, Paillard-Borg, & Winblad, 2004; Hertzog, Kramer, Wilson, & Lindenberger, 2009; Lang, Baltes, & Wagner, 2007)? Should interventions focus on late life, midlife, or indeed begin in early life? Suggestions that pathological change (e.g., dementia) is not an inevitable component of aging and that pathology could at least be delayed until the very end of life led to theories about successful aging and proposals about targets for intervention. Rowe and Kahn (1997) defined three criteria of successful aging: (a) the relative avoidance of disease, disability, and risk factors like high blood pressure, smoking, or obesity; (b) the maintenance of high physical and cognitive functioning; and (c) active engagement in social and productive activities. Although such definitions of successful aging are value-laden, research and behavioral interventions have subsequently been guided by this model. For example, research has suggested that age-related declines in cognitive functioning across the adult life span may be slowed through physical exercise and lifestyle interventions (Kramer & Erickson, 2007). It is recognized, however, that societal and environmental factors also play a role and that there is much room for social change and technical innovation to accommodate the needs of the Baby Boomers and later generations as they age in the next decades.
Outside Resources
Web: Columbia Aging Society
http://www.agingsocietynetwork.org/
Web: Columbia International Longevity Center
www.mailman.columbia.edu/acad...ledge-transfer
Web: National Institute on Aging
http://www.nia.nih.gov/
Web: Stanford Center on Longevity
http://longevity3.stanford.edu/
Discussion Questions
1. How do age stereotypes and intergenerational social interactions shape quality of life in older adults? What are the implications of the research of Levy and others?
2. Researchers suggest that there is both stability and change in Big Five personality traits after age 30. What is stable? What changes?
3. Describe the Social Convoy Model of Antonucci. What are the implications of this model for older adults?
4. Memory declines during adulthood. Is this statement correct? What does research show?
5. Is dementia inevitable in old age? What factors are currently thought to be protective?
6. What are the components of successful aging described by Rowe and Kahn (1998) and others? What outcomes are used to evaluate successful aging?
Vocabulary
Age identity
How old or young people feel compared to their chronological age; after early adulthood, most people feel younger than their chronological age.
Autobiographical narratives
A qualitative research method used to understand characteristics and life themes that an individual considers to uniquely distinguish him- or herself from others.
Average life expectancy
Mean number of years that 50% of people in a specific birth cohort are expected to survive. This is typically calculated from birth but is also sometimes re-calculated for people who have already reached a particular age (e.g., 65).
Cohort
Group of people typically born in the same year or historical period, who share common experiences over time; sometimes called a generation (e.g., Baby Boom Generation).
Convoy Model of Social Relations
Theory that proposes that the frequency, types, and reciprocity of social exchanges change with age. These social exchanges impact the health and well-being of the givers and receivers in the convoy.
Cross-sectional studies
Research method that provides information about age group differences; age differences are confounded with cohort differences and effects related to history and time of study.
Crystallized intelligence
Type of intellectual ability that relies on the application of knowledge, experience, and learned information.
Fluid intelligence
Type of intelligence that relies on the ability to use information processing resources to reason logically and solve novel problems.
Global subjective well-being
Individuals’ perceptions of and satisfaction with their lives as a whole.
Hedonic well-being
Component of well-being that refers to emotional experiences, often including measures of positive (e.g., happiness, contentment) and negative affect (e.g., stress, sadness).
Heterogeneity
Inter-individual and subgroup differences in level and rate of change over time.
Inhibitory functioning
Ability to focus on a subset of information while suppressing attention to less relevant information.
Intra- and inter-individual differences
Different patterns of development observed within an individual (intra-) or between individuals (inter-).
Life course theories
Theory of development that highlights the effects of social expectations of age-related life events and social roles; additionally considers the lifelong cumulative effects of membership in specific cohorts and sociocultural subgroups and exposure to historical events.
Life span theories
Theory of development that emphasizes the patterning of lifelong within- and between-person differences in the shape, level, and rate of change trajectories.
Longitudinal studies
Research method that collects information from individuals at multiple time points over time, allowing researchers to track cohort differences in age-related change to determine cumulative effects of different life experiences.
Processing speed
The time it takes individuals to perform cognitive operations (e.g., process information, react to a signal, switch attention from one task to another, find a specific target object in a complex picture).
Psychometric approach
Approach to studying intelligence that examines performance on tests of intellectual functioning.
Recall
Type of memory task where individuals are asked to remember previously learned information without the help of external cues.
Recognition
Type of memory task where individuals are asked to remember previously learned information with the assistance of cues.
Self-perceptions of aging
An individual’s perceptions of their own aging process; positive perceptions of aging have been shown to be associated with greater longevity and health.
Social network
Network of people with whom an individual is closely connected; social networks provide emotional, informational, and material support and offer opportunities for social engagement.
Socioemotional Selectivity Theory
Theory proposed to explain the reduction of social partners in older adulthood; posits that older adults focus on meeting emotional over information-gathering goals, and adaptively select social partners who meet this need.
Subjective age
A multidimensional construct that indicates how old (or young) a person feels and into which age group a person categorizes him- or herself
Successful aging
Includes three components: avoiding disease, maintaining high levels of cognitive and physical functioning, and having an actively engaged lifestyle.
Working memory
Memory system that allows for information to be simultaneously stored and utilized or manipulated.
• 7.1: Social Neuroscience
This module provides an overview of the new field of social neuroscience, which combines the use of neuroscience methods and theories to understand how other people influence our thoughts, feelings, and behavior. The module reviews research measuring neural and hormonal responses to understand how we make judgments about other people and react to stress.
• 7.2: Aggression and Violence
This module discusses the causes and consequences of human aggression and violence. Both internal and external causes are considered. Effective and ineffective techniques for reducing aggression are also discussed.
• 7.3: Helping and Prosocial Behavior
The focus of this module is on helping—prosocial acts in dyadic situations in which one person is in need and another provides the necessary assistance to eliminate the other’s need. Although people are often in need, help is not always given. Why not? In this module, we will try to understand how the decision to help is made by answering the question: Who helps when and why?
07: Social
By Tiffany A. Ito and Jennifer T. Kubota
University of Colorado Boulder, New York University
This module provides an overview of the new field of social neuroscience, which combines the use of neuroscience methods and theories to understand how other people influence our thoughts, feelings, and behavior. The module reviews research measuring neural and hormonal responses to understand how we make judgments about other people and react to stress. Through these examples, it illustrates how social neuroscience addresses three different questions: (1) how our understanding of social behavior can be expanded when we consider neural and physiological responses, (2) what the actual biological systems are that implement social behavior (e.g., what specific brain areas are associated with specific social tasks), and (3) how biological systems are impacted by social processes.
learning objectives
• Define social neuroscience and describe its three major goals.
• Describe how measures of brain activity such as EEG and fMRI are used to make inferences about social processes.
• Discuss how social categorization occurs.
• Describe how simulation may be used to make inferences about others.
• Discuss the ways in which other people can cause stress and also protect us against stress.
Psychology has a long tradition of using our brains and body to better understand how we think and act. For example, in 1939 Heinrich Klüver and Paul Bucy removed (i.e., lesioned) the temporal lobes in some rhesus monkeys and observed the effect on behavior. Included in these lesions was a subcortical area of the brain called the amygdala. After surgery, the monkeys experienced profound behavioral changes, including loss of fear. These results provided initial evidence that the amygdala plays a role in emotional responses, a finding that has since been confirmed by subsequent studies (Phelps & LeDoux, 2005; Whalen & Phelps, 2009).
What Is Social Neuroscience?
Social neuroscience similarly uses the brain and body to understand how we think and act, with a focus on how we think about and act toward other people. More specifically, we can think of social neuroscience as an interdisciplinary field that uses a range of neuroscience measures to understand how other people influence our thoughts, feelings, and behavior. As such, social neuroscience studies the same topics as social psychology, but does so from a multilevel perspective that includes the study of the brain and body. Figure 12.9.1 shows the scope of social neuroscience with respect to the older fields of social psychology and neuroscience. Although the field is relatively new – the term first appeared in 1992 (Cacioppo & Berntson, 1992) – it has grown rapidly, thanks to technological advances making measures of the brain and body cheaper and more powerful than ever before, and to the recognition that neural and physiological information are critical to understanding how we interact with other people.
Social neuroscience can be thought of as both a methodological approach (using measures of the brain and body to study social processes) and a theoretical orientation (seeing the benefits of integrating neuroscience into the study of social psychology). The overall approach in social neuroscience is to understand the psychological processes that underlie our social behavior. Because those psychological processes are intrapsychic phenomena that cannot be directly observed, social neuroscientists rely on a combination of measurable or observable neural and physiological responses as well as actual overt behavior to make inferences about psychological states (see Figure 1). Using this approach, social neuroscientists have been able to pursue three different types of questions: (1) What more can we learn about social behavior when we consider neural and physiological responses? (2) What are the actual biological systems that implement social behavior (e.g., what specific brain areas are associated with specific social tasks)? and (3) How are biological systems impacted by social processes?
In this module, we review three research questions that have been addressed with social neuroscience that illustrate the different goals of the field. These examples also expose you to some of the frequently used measures.
How Automatically Do We Judge Other People?
Social categorization is the act of mentally classifying someone as belonging in a group. Why do we do this? It is an effective mental shortcut. Rather than effortfully thinking about every detail of every person we encounter, social categorization allows us to rely on information we already know about the person’s group. For example, by classifying your restaurant server as a man, you can quickly activate all the information you have stored about men and use it to guide your behavior. But this shortcut comes with potentially high costs. The stored group beliefs might not be very accurate, and even when they do accurately describe some group members, they are unlikely to be true for every member you encounter. In addition, many beliefs we associate with groups – called stereotypes – are negative. This means that relying on social categorization can often lead people to make negative assumptions about others.
The potential costs of social categorization make it important to understand how social categorization occurs. Is it rare or does it occur often? Is it something we can easily stop, or is it hard to override? One difficulty answering these questions is that people are not always consciously aware of what they are doing. In this case, we might not always realize when we are categorizing someone. Another concern is that even when people are aware of their behavior, they can be reluctant to accurately report it to an experimenter. In the case of social categorization, subjects might worry they will look bad if they accurately report classifying someone into a group associated with negative stereotypes. For instance, many racial groups are associated with some negative stereotypes, and subjects may worry that admitting to classifying someone into one of those groups means they believe and use those negative stereotypes.
Social neuroscience has been useful for studying how social categorization occurs without having to rely on self-report measures, instead measuring brain activity differences that occur when people encounter members of different social groups. Much of this work has been conducted using the electroencephalogram, or EEG. EEG is a measure of electrical activity generated by the brain’s neurons. Comparing this electrical activity at a given point in time against what a person is thinking and doing at that same time allows us to make inferences about brain activity associated with specific psychological states. One particularly nice feature of EEG is that it provides very precise timing information about when brain activity occurs. EEG is measured non-invasively with small electrodes that rest on the surface of the scalp. This is often done with a stretchy elastic cap, like the one shown in Figure 12.9.2, into which the small electrodes are sewn. Researchers simply pull the cap onto the subject’s head to get the electrodes into place; wearing it is similar to wearing a swim cap. The subject can then be asked to think about different topics or engage in different tasks as brain activity is measured.
To study social categorization, subjects have been shown pictures of people who belong to different social groups. Brain activity recorded from many individual trials (e.g., looking at lots of different Black individuals) is then averaged together to get an overall idea of how the brain responds when viewing individuals who belong to a particular social group. These studies suggest that social categorization is an automatic process – something that happens with little conscious awareness or control – especially for dimensions like gender, race, and age (Ito & Urland, 2003; Mouchetant-Rostaing & Giard, 2003). The studies specifically show that brain activity differs when subjects view members of different social groups (e.g., men versus women, Blacks versus Whites), suggesting that the group differences are being encoded and processed by the perceiver. One interesting finding is that these brain changes occur both when subjects are purposely asked to categorize the people into social groups (e.g., to judge whether the person is Black or White), and also when they are asked to do something that draws attention away from group classifications (e.g., making a personality judgment about the person) (Ito & Urland, 2005). This tells us that we do not have to intend to make group classifications in order for them to happen. It is also very interesting to consider how quickly the changes in brain responses occur. Brain activity is altered by viewing members of different groups within 200 milliseconds of seeing a person’s face. That is just two-tenths of a second. Such a fast response lends further support to the idea that social categorization occurs automatically and may not depend on conscious intention.
Overall, this research suggests that we engage in social categorization very frequently. In fact, it appears to happen automatically (i.e., without us consciously intending for it to happen) in most situations for dimensions like gender, age, and race. Since classifying someone into a group is the first step to activating a group stereotype, this research provides important information about how easily stereotypes can be activated. And because it is hard for people to accurately report on things that happen so quickly, this issue has been difficult to study using more traditional self-report measures. Using EEGs has, therefore, been helpful in providing interesting new insights into social behavior.
Do We Use Our Own Behavior to Help Us Understand Others?
Classifying someone into a social group and then activating the associated stereotype is one way to make inferences about others. However, it is not the only method. Another strategy is to imagine what our own thoughts, feelings, and behaviors would be in a similar situation. Then we can use our simulated reaction as a best guess about how someone else will respond (Goldman, 2005). After all, we are experts in our own feelings, thoughts, and tendencies. It might be hard to know what other people are feeling and thinking, but we can always ask ourselves how we would feel and act if we were in their shoes.
There has been some debate about whether simulation is used to get into the minds of others (Carruthers & Smith, 1996; Gallese & Goldman, 1998). Social neuroscience research has addressed this question by looking at the brain areas used when people think about themselves and others. If the same brain areas are active for the two types of judgments, it lends support to the idea that the self may be used to make inferences about others via simulation.
We know that an area in the prefrontal cortex called the medial prefrontal cortex (mPFC) – located in the middle of the frontal lobe – is active when people think about themselves (Kelley, Macrae, Wyland, Caglar, Inati, & Heatherton, 2002). This conclusion comes from studies using functional magnetic resonance imaging, or fMRI. While EEG measures the brain’s electrical activity, fMRI measures changes in the oxygenation of blood flowing in the brain. When neurons become more active, blood flow to the area increases to bring more oxygen and glucose to the active cells. fMRI allows us to image these changes in oxygenation by placing people in an fMRI machine or scanner (Figure 12.9.3), which consists of large magnets that create strong magnetic fields. The magnets affect the alignment of the oxygen molecules within the blood (i.e., how they are tilted). As the oxygen molecules move in and out of alignment with the magnetic fields, their nuclei produce energy that can be detected with special sensors placed close to the head. Recording fMRI involves having the subject lie on a small bed that is then rolled into the scanner. While fMRI does require subjects to lie still within the small scanner and the large magnets involved are noisy, the scanning itself is safe and painless. As with EEG, the subject can be asked to think about different topics or engage in different tasks as brain activity is measured. If we know what a person is thinking or doing when fMRI detects a blood flow increase to a particular brain area, we can infer that part of the brain is involved with the thought or action. fMRI is particularly useful for identifying which particular brain areas are active at a given point in time.
The conclusion that the mPFC is associated with the self comes from studies measuring fMRI while subjects think about themselves (e.g., saying whether traits are descriptive of themselves). Using this knowledge, other researchers have looked at whether the same brain area is active when people make inferences about others. Mitchell, Macrae, and Banaji (2005) showed subjects pictures of strangers and had them judge either how pleased the person was to have his or her picture taken or how symmetrical the face appeared. Judging whether someone is pleased about being photographed requires making an inference about someone’s internal feelings – we call this mentalizing. By contrast, facial symmetry judgments are based solely on physical appearances and do not involve mentalizing. A comparison of brain activity during the two types of judgments shows more activity in the mPFC when making the mental versus physical judgments, suggesting this brain area is involved when inferring the internal beliefs of others.
There are two other notable aspects of this study. First, mentalizing about others also increased activity in a variety of regions important for many aspects of social processing, including a region important in representing biological motion (superior temporal sulcus or STS), an area critical for emotional processing (amygdala), and a region also involved in thinking about the beliefs of others (temporal parietal junction, TPJ) (Gobbini & Haxby, 2007; Schultz, Imamizu, Kawato, & Frith, 2004) (Figure 12.9.4). This finding shows that a distributed and interacting set of brain areas is likely to be involved in social processing. Second, activity in the most ventral part of the mPFC (the part closer to the belly rather than toward the top of the head), which has been most consistently associated with thinking about the self, was particularly active when subjects mentalized about people they rated as similar to themselves. Simulation is thought to be most likely for similar others, so this finding lends support to the conclusion that we use simulation to mentalize about others. After all, if you encounter someone who has the same musical taste as you, you will probably assume you have other things in common with him. By contrast, if you learn that someone loves music that you hate, you might expect him to differ from you in other ways (Srivastava, Guglielmo, & Beer, 2010). Using a simulation of our own feelings and thoughts will be most accurate if we have reason to think the person’s internal experiences are like our own. Thus, we may be most likely to use simulation to make inferences about others if we think they are similar to us.
This research is a good example of how social neuroscience is revealing the functional neuroanatomy of social behavior. That is, it tells us which brain areas are involved with social behavior. The mPFC (as well as other areas such as the STS, amygdala, and TPJ) is involved in making judgments about the self and others. This research also provides new information about how inferences are made about others. Whereas some have doubted the widespread use of simulation as a means for making inferences about others, the activation of the mPFC when mentalizing about others, and the sensitivity of this activation to similarity between self and other, provides evidence that simulation occurs.
What Is the Cost of Social Stress?
Stress is an unfortunately frequent experience for many of us. Stress – which can be broadly defined as a threat or challenge to our well-being – can result from everyday events like a course exam or more extreme events such as experiencing a natural disaster. When faced with a stressor, sympathetic nervous system activity increases in order to prepare our body to respond to the challenge. This produces what Selye (1950) called a fight or flight response. The release of hormones, which act as messengers from one part of an organism (e.g., a cell or gland) to another part of the organism, is part of the stress response.
A small amount of stress can actually help us stay alert and active. In comparison, sustained stressors, or chronic stress, detrimentally affect our health and impair performance (Al’Absi, Hugdahl, & Lovallo, 2002; Black, 2002; Lazarus, 1974). This happens in part through the chronic secretion of stress-related hormones (e.g., Davidson, Pizzagalli, Nitschke, & Putnam, 2002; Dickerson, Gable, Irwin, Aziz, & Kemeny, 2009). In particular, stress activates the hypothalamic-pituitary-adrenal (HPA) axis to release cortisol (see Figure 12.9.5 for a discussion). Chronic stress, by way of increases in cortisol, impairs attention, memory, and self-control (Arnsten, 2009). Cortisol levels can be measured non-invasively in bodily fluids, including blood and saliva. Researchers often collect a cortisol sample before and after a potentially stressful task. In one common collection method, subjects place polymer swabs under their tongue for 1 to 2 minutes to soak up saliva. The saliva samples are then stored and analyzed later to determine the level of cortisol present at each time point.
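The pre/post sampling procedure just described boils down to a simple computation: each subject's cortisol reactivity is the post-task level minus the pre-task level, and condition means of those reactivities are then compared. The sketch below illustrates that arithmetic; the condition labels, subject counts, and cortisol values (in nmol/L) are all invented for the example and do not come from any study cited here.

```python
# Hypothetical pre/post cortisol pairs per subject, by condition.
# All numbers are invented purely to illustrate the computation.
pre_post = {
    "speech_with_audience": [(10.2, 16.8), (9.5, 15.1), (11.0, 17.9), (8.8, 14.2)],
    "speech_alone":         [(10.0, 10.6), (9.7, 10.1), (10.9, 11.4), (9.1, 9.0)],
}

def mean_reactivity(samples):
    """Average post-minus-pre cortisol change across subjects."""
    return sum(post - pre for pre, post in samples) / len(samples)

for condition, samples in pre_post.items():
    print(condition, round(mean_reactivity(samples), 2))
```

In real studies this comparison is done with inferential statistics rather than a raw difference of means, but the logic of "change from baseline, compared across conditions" is the same.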
Whereas early stress researchers studied the effects of physical stressors like loud noises, social neuroscientists have been instrumental in studying how our interactions with other people can cause stress. This question has been addressed through neuroendocrinology, or the study of how the brain and hormones act in concert to coordinate the physiology of the body. One contribution of this work has been in understanding the conditions under which other people can cause stress. In one study, Dickerson, Mycek, and Zaldivar (2008) asked undergraduates to deliver a speech either alone or to two other people. When the students gave the speech in front of others, there was a marked increase in cortisol compared with when they were asked to give a speech alone. This suggests that, like chronic physical stress, everyday social stressors, such as having your performance judged by others, induce a stress response. Interestingly, simply giving a speech in the same room with someone who is doing something else did not induce a stress response. This suggests that the mere presence of others is not stressful, but rather it is the potential for them to judge us that induces stress.
Worrying about what other people think of us is not the only source of social stress in our lives. Other research has shown that interacting with people who belong to different social groups than us – what social psychologists call outgroup members – can increase physiological stress responses. For example, cardiovascular responses associated with stress like contractility of the heart ventricles and the amount of blood pumped by the heart (what is called cardiac output) are increased when interacting with outgroup as compared with ingroup members (i.e., people who belong to the same social group we do) (Mendes, Blascovich, Lickel, & Hunter, 2002). This stress may derive from the expectation that interactions with dissimilar others will be uncomfortable (Stephan & Stephan, 1985) or concern about being judged as unfriendly and prejudiced if the interaction goes poorly (Plant & Devine, 2003).
The research just reviewed shows that events in our social lives can be stressful, but are social interactions always bad for us? No. In fact, while others can be the source of much stress, they are also a major buffer against stress. Research on social support shows that relying on a network of individuals in tough times gives us tools for dealing with stress and can ward off loneliness (Cacioppo & Patrick, 2008). For instance, people who report greater social support show a smaller increase in cortisol when performing a speech in front of two evaluators (Eisenberger, Taylor, Gable, Hilmert, & Lieberman, 2007).
What determines whether others will increase or decrease stress? What matters is the context of the social interaction. When it has the potential to reflect badly on the self, social interaction can be stressful, but when it provides support and comfort, it can protect us from the negative effects of stress. Neuroendocrinology, by measuring hormonal changes in the body, has helped researchers better understand how social factors impact our body and ultimately our health.
Conclusions
Human beings are intensely social creatures – our lives are intertwined with other people and our health and well-being depend on others. Social neuroscience helps us to understand the critical function of how we make sense of and interact with other people. This module provides an introduction to what social neuroscience is and what we have already learned from it, but there is much still to understand. As we move forward, one exciting future direction will be to better understand how different parts of the brain and body interact to produce the numerous and complex patterns of social behavior that humans display. We hinted at some of this complexity when we reviewed research showing that while the mPFC is involved in mentalizing, other areas such as the STS, amygdala, and TPJ are as well. Still other brain areas are likely involved, interacting in ways we do not yet fully understand. These brain areas in turn control other aspects of the body to coordinate our responses during social interactions. Social neuroscience will continue to investigate these questions, revealing new information about how social processes occur, while also increasing our understanding of basic neural and physiological processes.
Outside Resources
Society for Social Neuroscience
http://www.s4sn.org
Video: See a demonstration of fMRI data being collected.
Video: See an example of EEG data being collected.
Video: View two tasks frequently used in the lab to create stress – giving a speech in front of strangers, and doing math computations out loud in front of others. Notice how some subjects show obvious signs of stress, but in some situations, cortisol changes suggest that even people who appear calm are experiencing a physiological response associated with stress.
Video: Watch a video used by Fritz Heider and Marianne Simmel in a landmark study on social perception published in 1944. Their goal was to investigate how we perceive other people, and they studied it by seeing how readily we apply people-like interpretations to non-social stimuli.
intentionperception.org/wp-co...ider_Flash.swf
Discussion Questions
1. Categorizing someone as a member of a social group can activate group stereotypes. EEG research suggests that social categorization occurs quickly and often automatically. What does this tell us about the likelihood of stereotyping occurring? How can we use this information to develop ways to stop stereotyping from happening?
2. Watch this video, similar to what was used by Fritz Heider and Marianne Simmel in a landmark study on social perception published in 1944, and imagine telling a friend what happened in the video. intentionperception.org/wp-co...ider_Flash.swf. After watching the video, think about the following: Did you describe the motion of the objects solely in geometric terms (e.g., a large triangle moved from the left to the right), or did you describe the movements as actions of animate beings, maybe even of people (e.g., the circle goes into the house and shuts the door)? In the original research, 33 of 34 subjects described the action of the shapes using human terms. What does this tell us about our tendency to mentalize?
3. Consider the types of things you find stressful. How many of them are social in nature (e.g., are related to your interactions with other people)? Why do you think our social relations have such potential for stress? In what ways can social relations be beneficial and serve as a buffer for stress?
Vocabulary
Amygdala
A region located deep within the brain in the medial area (toward the center) of the temporal lobes (parallel to the ears). If you could draw a line through your eye sloping toward the back of your head and another line between your two ears, the amygdala would be located at the intersection of these lines. The amygdala is involved in detecting relevant stimuli in our environment and has been implicated in emotional responses.
Automatic process
When a thought, feeling, or behavior occurs with little or no mental effort. Typically, automatic processes are described as involuntary or spontaneous, often resulting from a great deal of practice or repetition.
Cortisol
A hormone made by the cortex of the adrenal glands. Cortisol helps the body maintain blood pressure and immune function. Cortisol increases when the body is under stress.
Electroencephalogram
A measure of electrical activity generated by the brain’s neurons.
Fight or flight response
The physiological response that occurs in response to a perceived threat, preparing the body for actions needed to deal with the threat.
Functional magnetic resonance imaging
A measure of changes in the oxygenation of blood flowing in the brain as areas become active.
Functional neuroanatomy
Classifying how regions within the nervous system relate to psychology and behavior.
Hormones
Chemicals released by cells in the brain or body that affect cells in other parts of the brain or body.
Hypothalamic-pituitary-adrenal (HPA) axis
A system that involves the hypothalamus (within the brain), the pituitary gland (within the brain), and the adrenal glands (at the top of the kidneys). This system helps maintain homeostasis (keeping the body’s systems within normal ranges) by regulating digestion, immune function, mood, temperature, and energy use. Through this, the HPA regulates the body’s response to stress and injury.
Ingroup
A social group to which an individual identifies or belongs.
Lesions
Damage or tissue abnormality due, for example, to an injury, surgery, or a vascular problem.
Medial prefrontal cortex
An area of the brain located in the middle of the frontal lobes (at the front of the head), active when people mentalize about the self and others.
Mentalizing
The act of representing the mental states of oneself and others. Mentalizing allows humans to interpret the intentions, beliefs, and emotional states of others.
Neuroendocrinology
The study of how the brain and hormones act in concert to coordinate the physiology of the body.
Outgroup
A social group to which an individual does not identify or belong.
Simulation
Imaginary or real imitation of other people’s behavior or feelings.
Social categorization
The act of mentally classifying someone into a social group (e.g., as female, elderly, a librarian).
Social support
A subjective feeling of psychological or physical comfort provided by family, friends, and others.
Stereotypes
The beliefs or attributes we associate with a specific social group. Stereotyping refers to the act of assuming that because someone is a member of a particular group, he or she possesses the group’s attributes. For example, stereotyping occurs when we assume someone is unemotional just because he is a man, or particularly athletic just because she is African American.
Stress
A threat or challenge to our well-being. Stress can have both a psychological component, which consists of our subjective thoughts and feelings about being threatened or challenged, as well as a physiological component, which consists of our body’s response to the threat or challenge (see “fight or flight response”).
Superior temporal sulcus
The sulcus (a fissure in the surface of the brain) that separates the superior temporal gyrus from the middle temporal gyrus. Located in the temporal lobes (parallel to the ears), it is involved in perception of biological motion or the movement of animate objects.
Sympathetic nervous system
A branch of the autonomic nervous system that controls many of the body’s internal organs. Activity of the SNS generally mobilizes the body’s fight or flight response.
Temporal parietal junction
The area where the temporal lobes (parallel to the ears) and parietal lobes (at the top of the head toward the back) meet. This area is important in mentalizing and distinguishing between the self and others.
By Brad J. Bushman
The Ohio State University
This module discusses the causes and consequences of human aggression and violence. Both internal and external causes are considered. Effective and ineffective techniques for reducing aggression are also discussed.
learning objectives
• Explain the important components of the definition of aggression, and explain how aggression differs from violence.
• Explain whether people think the world is less violent now than in the past, whether it actually is less violent, and how any discrepancy between perception and reality can be resolved.
• Identify the internal and external causes of aggression, and compare and contrast them.
• Identify effective and ineffective approaches to reducing aggression.
Introduction
"Beware of the dark side. Anger, fear, aggression; the dark side of the Force are they."
-Yoda, renowned Jedi master in the Star Wars universe
Aggression is indeed the dark side of human nature. Although aggression may have been adaptive in our ancient past, it hardly seems adaptive today. For example, on 14 December 2012 Adam Lanza, age 20, first killed his mother in their home, and then went to an elementary school in Newtown, Connecticut and began shooting, killing 20 children and 6 school employees, before killing himself. When incidents such as these happen, we want to know what caused them. Although it is impossible to know what motivated a particular individual such as Lanza to commit the Newtown school shooting, for decades researchers have studied the internal and external factors that influence aggression and violence. We consider some of these factors in this module.
Before we get too far, let’s begin by defining the term “aggression.” Laypeople and researchers often use the term “aggression” differently. Laypeople might describe a salesperson that tries really hard to sell them something as “aggressive.” The salesperson does not, however, want to harm potential customers. Most researchers define aggression as any behavior intended to harm another person who does not want to be harmed (Baron & Richardson, 1994). This definition includes three important features. First, aggression is a behavior—you can see it. Aggression is not an internal response, such as having angry feelings or aggressive thoughts (although such internal responses can increase the likelihood of actual aggression). Second, aggression is intentional rather than accidental. For example, a dentist might intentionally give a patient a shot of Novocain (which hurts!), but the goal is to help rather than harm the patient. Third, the victim wants to avoid the harm. Thus, suicide and sadomasochistic sex play would not be called aggression because the victim actively seeks to be harmed.
Researchers and laypeople also differ in their use of the term violence. A meteorologist might call a storm “violent” if it has intense winds, rain, thunder, lightning, or hail. Researchers define violence as aggression intended to cause extreme physical harm (e.g., injury, death). Thus, all violent acts are aggressive, but not all aggressive acts are violent. For example, screaming and swearing at another person is aggressive, but not violent.
The good news is that the level of violence in the world is decreasing over time—across millennia, centuries, and even decades (Pinker, 2011). Studies of body counts, such as the proportion of prehistoric skeletons with axe and arrowhead wounds, suggest that prehistoric societies were far more violent than those today. Estimates show that if the wars of the 20th century had killed the same proportion of the population as ancient tribal wars did, then the death toll would have been 20 times higher—2 billion rather than 100 million. More recent data show that murder rates in Europe have decreased dramatically since the Middle Ages. For example, estimated murders in England dropped from 24 per 100,000 in the 14th century to 0.6 per 100,000 by the early 1960s. The major decline in violence occurred in the 17th century during the “Age of Reason,” which began in the Netherlands and England and then spread to other European countries. Global violence has also steadily decreased since the middle of the 20th century. For example, the number of battle deaths in interstate wars has declined from more than 65,000 per year in the 1950s to fewer than 2,000 per year in the 2000s. There have also been global declines in the number of armed conflicts and combat deaths, the number of military coups, and the number of deadly violence campaigns waged against civilians. For example, Figure 12.8.1 shows the number of battle deaths per 100,000 people per year over 60 years (see Pinker, 2011, p. 301). As can be seen, battle deaths of all types (civil, colonial, interstate, internationalized civil) have decreased over time. The claim that violence has decreased dramatically over time may seem hard to believe in today’s digital age when we are constantly bombarded by scenes of violence in the media. In the news media, the top stories are the most violent ones—“If it bleeds it leads,” so the saying goes.
Citizen journalists around the world also use social media to “show and tell” the world about unjustified acts of violence. Because violent images are more available to us now than ever before, we incorrectly assume that violence levels are also higher. Our tendency to overestimate the amount of violence in the world is due to the availability heuristic, which is the tendency to judge the frequency or likelihood of an event by the ease with which relevant instances come to mind. Because we are frequently exposed to scenes of violence in the mass media, acts of violence are readily accessible in memory and come to mind easily, so we assume violence is more common than it actually is.
Human aggression is very complex and is caused by multiple factors. We will consider a few of the most important internal and external causes of aggression. Internal causes include anything the individual brings to the situation that increases the probability of aggression. External causes include anything in the environment that increases the probability of aggression. Finally, we will consider a few strategies for reducing aggression.
Internal Factors
Age
At what age are people most aggressive? You might be surprised to learn that toddlers 1 to 3 years old are most aggressive. Toddlers often rely on physical aggression to resolve conflict and get what they want. In free play situations, researchers have found that 25 percent of their interactions are aggressive (Tremblay, 2000). No other group of individuals (e.g., Mafia, street gangs) resorts to aggression 25 percent of the time. Fortunately for the rest of us, most toddler aggression isn’t severe enough to qualify as violence because they don’t use weapons, such as guns and knives. As children grow older, they learn to inhibit their aggressive impulses and resolve conflict using nonaggressive means, such as compromise and negotiation. Although most people become less aggressive over time, a small subset of people becomes more aggressive over time. The most dangerous years for this small subset of people (and for society as a whole) are late adolescence and early adulthood. For example, 18- to 24-year-olds commit most murders in the U.S. (U.S. Federal Bureau of Investigation, 2012).
Gender
At all ages, males tend to be more physically aggressive than females. However, it would be wrong to think that females are never physically aggressive. Females do use physical aggression, especially when they are provoked by other females (Collins, Quigley, & Leonard, 2007). Among heterosexual partners, women are actually slightly more likely than men to use physical aggression (Archer, 2000). However, when men do use physical aggression, they are more likely than women to cause serious injuries and even death to their partners. When people are strongly provoked, gender differences in aggression shrink (Bettencourt & Miller, 1996).
Females are much more likely than males to engage in relational aggression, defined as intentionally harming another person’s social relationships, feelings of acceptance, or inclusion within a group (Crick & Grotpeter, 1995). Examples of relational aggression include gossiping, spreading rumors, withdrawing affection to get what you want, excluding someone from your circle of friends, and giving someone the “silent treatment.”
Personality
Some people seem to be cranky and aggressive almost all the time. Aggressiveness is almost as stable as intelligence over time (Olweus, 1979). Individual differences in aggressiveness are often assessed using self-report questionnaires such as the “Aggression Questionnaire” (Buss & Perry, 1992), which includes items such as “I get into fights a little more than the average person” and “When frustrated, I let my irritation show.” Scores on these questionnaires are positively related to actual aggressive and violent behaviors (Anderson & Bushman, 1997).
The components of the “Dark Triad of Personality”—narcissism, psychopathy, and Machiavellianism—are also related to aggression (Paulhus & Williams, 2002). The term “narcissism” comes from the mythical Greek character Narcissus who fell in love with his own image reflected in the water. Narcissists have inflated egos, and they lash out aggressively against others when their inflated egos are threatened (e.g., Bushman & Baumeister, 1998). It is a common myth that aggressive people have low self-esteem (Bushman et al., 2009). Psychopaths are callous individuals who lack empathy for others. One of the strongest deterrents of aggression is empathy, which psychopaths lack. The term “Machiavellianism” comes from the Italian philosopher and writer Niccolò Machiavelli, who advocated using any means necessary to gain raw political power, including aggression and violence.
Hostile Cognitive Biases
One key to keeping aggression in check is to give people the benefit of the doubt. Some people, however, do just the opposite. There are three hostile cognitive biases. The hostile attribution bias is the tendency to perceive ambiguous actions by others as hostile actions (Dodge, 1980). For example, if a person bumps into you, a hostile attribution would be that the person did it on purpose and wants to hurt you. The hostile perception bias is the tendency to perceive social interactions in general as being aggressive (Dill et al., 1997). For example, if you see two people talking in an animated fashion, a hostile perception would be that they are fighting with each other. The hostile expectation bias is the tendency to expect others to react to potential conflicts with aggression (Dill et al., 1997). For example, if you bump into another person, a hostile expectation would be that the person will assume that you did it on purpose and will attack you in return. People with hostile cognitive biases view the world as a hostile place.
External Factors
Frustration and Other Unpleasant Events
One of the earliest theories of aggression proposed that aggression is caused by frustration, which was defined as blocking goal-directed behavior (Dollard et al., 1939). For example, if you are standing in a long line to purchase a ticket, it is frustrating when someone crowds in front of you. This theory was later expanded to say that all unpleasant events, not just frustrations, cause aggression (Berkowitz, 1989). Unpleasant events such as frustrations, provocations, social rejections, hot temperatures, loud noises, bad air (e.g., pollution, foul odors, secondhand smoke), and crowding can all cause aggression. Unpleasant events automatically trigger a fight or flight response.
Weapons
Obviously, using a weapon can increase aggression and violence, but can just seeing a weapon increase aggression? To find out, researchers sat angry participants at a table that had a shotgun and a revolver on it—or, in the control condition, badminton racquets and shuttlecocks (Berkowitz & LePage, 1967). The items on the table were supposedly part of a different study, but the researcher had forgotten to put them away. The participant was supposed to decide what level of electric shock to deliver to a person pretending to be another participant, and the electric shocks were used to measure aggression. The experimenter told participants to ignore the items on the table, but apparently they could not. Participants who saw the guns gave more shocks than did participants who saw the sports items. Several other studies have replicated this so-called weapons effect, including some conducted outside the lab (Carlson, Marcus-Newhall, & Miller, 1990). For example, one study found that motorists were more likely to honk their horns at another driver stalled in a pickup truck with a rifle visible in his rear window than in response to the same delay from the same truck, but with no gun (Turner, Layton, & Simons, 1975). When you think about it, you would have to be pretty stupid to honk your horn at a driver with a rifle in his truck. However, drivers were probably responding in an automatic rather than a deliberate manner. Other research has shown drivers who have guns in their vehicles are more aggressive drivers than those without guns in their vehicles (Hemenway, Vriniotis, & Miller, 2006).
Violent Media
There are plenty of aggressive cues in the mass media, such as in TV programs, films, and video games. In the U.S., the Surgeon General warns the public about threats to their physical and mental health. Most Americans know that the U.S. Surgeon General issued a warning about cigarettes in 1964: “Warning: The Surgeon General Has Determined That Cigarette Smoking Is Dangerous to Your Health.” However, most Americans do not know that the U.S. Surgeon General issued a warning regarding violent TV programs in 1972: “It is clear to me that the causal relationship between televised violence and antisocial behavior is sufficient to warrant appropriate and immediate remedial action. . . . There comes a time when the data are sufficient to justify action. That time has come” (Steinfeld, 1972). Since then, hundreds of additional studies have shown that all forms of violent media can increase aggression (e.g., Anderson & Bushman, 2002). Violent video games might even be more harmful than violent TV programs, for at least three reasons. First, playing a video game is active, whereas watching a TV program is passive. Active involvement enhances learning. One study found that boys who played a violent video game were more aggressive afterward than were boys who merely watched the same game (Polman, Orobio de Castro, & van Aken, 2008). Second, video game players are more likely to identify with a violent character than TV watchers. If the game involves a first-person shooter, players have the same visual perspective as the killer. If the game is third person, the player controls the character’s actions from a more distant visual perspective. In either case, the player is linked to a violent character. Research has shown that people are more aggressive when they identify with a violent character (e.g., Konijn, Nije Bijvank, & Bushman, 2007). Third, violent games directly reward players for violent behavior by awarding points or by allowing them to advance in the game. 
In some games, players are also rewarded through verbal praise, such as hearing “Impressive!” after killing an enemy. In TV programs, reward is not directly tied to the viewer’s behavior. It is well known that rewarding behavior increases its frequency. One study found that players were more aggressive after playing a violent game that rewarded violent actions than after playing the same game that punished violent actions (Carnagey & Anderson, 2005). The evidence linking violent video games to aggression is compelling. A comprehensive review found that violent games increase aggressive thoughts, angry feelings, and aggressive behaviors and decrease empathic feelings and prosocial behaviors (Anderson et al., 2010). Similar effects were obtained for males and females, regardless of their age, and regardless of what country they were from.
Alcohol
Alcohol has long been associated with aggression and violence. In fact, sometimes alcohol is deliberately used to promote aggression. It has been standard practice for many centuries to issue soldiers some alcohol before they went into battle, both to increase aggression and reduce fear (Keegan, 1993). There is ample evidence of a link between alcohol and aggression, including evidence from experimental studies showing that consuming alcohol can cause an increase in aggression (e.g., Lipsey, Wilson, Cohen, & Derzon, 1997). Most theories of intoxicated aggression fall into one of two categories: (a) pharmacological theories that focus on how alcohol disrupts cognitive processes, and (b) expectancy theories that focus on how social attitudes about alcohol facilitate aggression. Normally, people have strong inhibitions against behaving aggressively, and pharmacological models focus on how alcohol reduces these inhibitions. To use a car analogy, alcohol increases aggression by cutting the brake line rather than by stepping on the gas. How does alcohol cut the brake line? Alcohol disrupts cognitive executive functions that help us organize, plan, achieve goals, and inhibit inappropriate behaviors (Giancola, 2000). Alcohol also reduces glucose, which provides energy to the brain for self-control (Gailliot & Baumeister, 2007). Alcohol has a “myopic” effect on attention—it causes people to focus attention only on the most salient features of a situation and not pay attention to more subtle features (Steele & Josephs, 1990). In some places where alcohol is consumed (e.g., crowded bar), provocations can be salient. Alcohol also reduces self-awareness, which decreases attention to internal standards against behaving aggressively (Hull, 1981).
According to expectancy theories, alcohol increases aggression because people expect it to. In our brains, alcohol and aggression are strongly linked together. Indeed, research shows that subliminally exposing people to alcohol-related words (e.g., vodka) can make them more aggressive, even though they do not drink one drop of alcohol (Subra et al., 2010). In many cultures, drinking occasions are culturally agreed-on “time out” periods where people are not held responsible for their actions (MacAndrew & Edgerton, 1969). Those who behave aggressively when intoxicated sometimes “blame the bottle” for their aggressive actions.
Does this research evidence mean that aggression is somehow contained in alcohol? No. Alcohol increases rather than causes aggressive tendencies. Factors that normally increase aggression (e.g., frustrations and other unpleasant events, aggressive cues) have a stronger effect on intoxicated people than on sober people (Bushman, 1997). In other words, alcohol mainly seems to increase aggression in combination with other factors. If someone insults or attacks you, your response will probably be more aggressive if you are drunk than sober. When there is no provocation, however, the effect of alcohol on aggression may be negligible. Plenty of people enjoy an occasional drink without becoming aggressive.
Reducing Aggression
Most people are greatly concerned about the amount of aggression in society. Aggression directly interferes with our basic needs of safety and security. Thus, it is urgent to find ways to reduce aggression. Because there is no single cause for aggression, it is difficult to design effective treatments. A treatment that works for one individual may not work for another individual. And some extremely aggressive people, such as psychopaths, are considered to be untreatable. Indeed, many people have started to accept the fact that aggression and violence have become an inevitable, intrinsic part of our society. This being said, there certainly are things that can be done to reduce aggression and violence. Before discussing some effective methods for reducing aggression, two ineffective methods need to be debunked: catharsis and punishment.
Catharsis
The term catharsis dates back to Aristotle and means to cleanse or purge. Aristotle taught that viewing tragic plays gave people emotional release from negative emotions. In Greek tragedy, the heroes didn't just grow old and retire—they were often murdered. Sigmund Freud revived the ancient notion of catharsis by proposing that people should express their bottled-up anger. Freud believed that if people repressed their anger, negative emotions would build up inside them and surface as psychological disorders. According to catharsis theory, acting aggressively or even viewing aggression purges angry feelings and aggressive impulses into harmless channels. Unfortunately for catharsis theory, research shows the opposite often occurs (e.g., Geen & Quanty, 1977).
If venting anger doesn't get rid of it, what does? All emotions, including anger, consist of bodily states (e.g., arousal) and mental meanings. To get rid of anger, you can focus on either of those. Anger can be reduced by getting rid of the arousal state, such as by relaxing, listening to calming music, or counting to 10 before responding. Mental tactics can also reduce anger, such as reframing the situation or distracting oneself and turning one's attention to more pleasant topics. Incompatible behaviors can also help get rid of anger. For example, petting a puppy, watching a comedy, kissing your lover, or helping someone in need can all reduce anger, because those acts are incompatible with anger and, therefore, make the angry state impossible to sustain (e.g., Baron, 1976). Viewing the provocative situation from a more distant perspective, such as that of a fly on the wall, also helps (Mischkowski, Kross, & Bushman, 2012).
Punishment
Most cultures assume that punishment is an effective way to deter aggression and violence. Punishment is defined as inflicting pain or removing pleasure for a misdeed. Punishment can range in intensity from spanking a child to executing a convicted killer. Parents use it, organizations use it, and governments use it, but does it work? Today, aggression researchers have their doubts. Punishment is most effective when it is: (a) intense, (b) prompt, (c) applied consistently and with certainty, (d) perceived as justified, and (e) possible to replace the undesirable punished behavior with a desirable alternative behavior (Berkowitz, 1993). Even if punishment occurs under these ideal conditions, it may only suppress aggressive behavior temporarily, and it has several undesirable long-term consequences. Most important, punishment models the aggressive behavior it seeks to prevent. Longitudinal studies have shown that children who are physically punished by their parents at home are more aggressive outside the home, such as in school (e.g., Lefkowitz, Huesmann, & Eron, 1978). Because punishment is unpleasant, it can also trigger aggression just like other unpleasant events.
Successful Interventions
Although specific aggression intervention strategies cannot be discussed in any detail here, there are two important general points to be made. First, successful interventions target as many causes of aggression as possible and attempt to tackle them collectively. Interventions that are narrowly focused on removing a single cause of aggression, however well conducted, are bound to fail. In general, external causes are easier to change than internal causes. For example, one can reduce exposure to violent media or alcohol consumption, and make unpleasant situations more tolerable (e.g., use air conditioners when it is hot, reduce crowding in stressful environments such as prisons and psychiatric wards).
Second, aggression problems are best treated in early development, when people are still malleable. As was mentioned previously, aggression is very stable over time, almost as stable as intelligence. If young children display excessive levels of aggression (often in the form of hitting, biting, or kicking), it places them at high risk for becoming violent adolescents and even violent adults. It is much more difficult to alter aggressive behaviors when they are part of an adult personality than when they are still in development.
Yoda warned that anger, fear, and aggression are the dark side of the Force. They are also the dark side of human nature. Fortunately, aggression and violence are decreasing over time, and this trend should continue. We also know a lot more now than ever before about what factors increase aggression and how to treat aggressive behavior problems. When Luke Skywalker was going to enter the dark cave on Dagobah (the fictional Star Wars planet), Yoda said, "Your weapons, you will not need them." Hopefully, there will come a time in the not-too-distant future when people all over the world will no longer need weapons.
Outside Resources
Book: Bushman, B. J., & Huesmann, L. R. (2010). Aggression. In S. T. Fiske, D. T. Gilbert, & G. Lindzey (Eds.), Handbook of social psychology (5th ed.) (pp. 833-863). New York: John Wiley & Sons.
TED Talk: Zak Ebrahim
https://www.ted.com/talks/zak_ebrahim_i_am_the_son_of_a_terrorist_here_s_how_i_chose_peace?language=en#t-528075
Video: From the Inquisitive Mind website, Brad Bushman conducts a short review of terminology and important research concerning aggression and violence.
Discussion Questions
1. Discuss whether different examples (hypothetical and real) meet the definition of aggression and the definition of violence.
2. Why do people deny the harmful effects of violent media when the research evidence linking violent media to aggression is so conclusive?
3. Consider the various causes of aggression described in this module and elsewhere, and discuss whether they can be changed to reduce aggression, and if so how.
Vocabulary
Aggression
Any behavior intended to harm another person who does not want to be harmed.
Availability heuristic
The tendency to judge the frequency or likelihood of an event by the ease with which relevant instances come to mind.
Catharsis
Greek term that means to cleanse or purge. Applied to aggression, catharsis is the belief that acting aggressively or even viewing aggression purges angry feelings and aggressive impulses into harmless channels.
Hostile attribution bias
The tendency to perceive ambiguous actions by others as aggressive.
Hostile expectation bias
The tendency to assume that people will react to potential conflicts with aggression.
Hostile perception bias
The tendency to perceive social interactions in general as being aggressive.
Punishment
Inflicting pain or removing pleasure for a misdeed. Punishment decreases the likelihood that a behavior will be repeated.
Relational aggression
Intentionally harming another person’s social relationships, feelings of acceptance, or inclusion within a group.
Violence
Aggression intended to cause extreme physical harm, such as injury or death.
Weapons effect
The increase in aggression that occurs as a result of the mere presence of a weapon.
By Dennis L. Poepsel and David A. Schroeder
Truman State University, University of Arkansas
People often act to benefit other people, and these acts are examples of prosocial behavior. Such behaviors may come in many guises: helping an individual in need; sharing personal resources; volunteering time, effort, and expertise; cooperating with others to achieve some common goals. The focus of this module is on helping—prosocial acts in dyadic situations in which one person is in need and another provides the necessary assistance to eliminate the other’s need. Although people are often in need, help is not always given. Why not? The decision of whether or not to help is not as simple and straightforward as it might seem, and many factors need to be considered by those who might help. In this module, we will try to understand how the decision to help is made by answering the question: Who helps when and why?
learning objectives
• Learn which situational and social factors affect when a bystander will help another in need.
• Understand which personality and individual difference factors make some people more likely to help than others.
• Discover whether we help others out of a sense of altruistic concern for the victim, for more self-centered and egoistic motives, or both.
Introduction
Go to YouTube and search for episodes of “Primetime: What Would You Do?” You will find video segments in which apparently innocent individuals are victimized, while onlookers typically fail to intervene. The events are all staged, but they are very real to the bystanders on the scene. The entertainment offered is the nature of the bystanders’ responses, and viewers are outraged when bystanders fail to intervene. They are convinced that they would have helped. But would they? Viewers are overly optimistic in their beliefs that they would play the hero. Helping may occur frequently, but help is not always given to those in need. So when do people help, and when do they not? All people are not equally helpful—who helps? Why would a person help another in the first place? Many factors go into a person’s decision to help—a fact that the viewers do not fully appreciate. This module will answer the question: Who helps when and why?
When Do People Help?
Social psychologists began trying to answer this question following the unfortunate murder of Kitty Genovese in 1964 (Dovidio, Piliavin, Schroeder, & Penner, 2006; Penner, Dovidio, Piliavin, & Schroeder, 2005). A knife-wielding assailant attacked Kitty repeatedly as she was returning to her apartment early one morning. At least 38 people may have been aware of the attack, but no one came to save her. More recently, in 2010, Hugo Alfredo Tale-Yax was stabbed when he apparently tried to intervene in an argument between a man and woman. As he lay dying in the street, only one man checked his status, but many others simply glanced at the scene and continued on their way. (One passerby did stop to take a cellphone photo, however.) Unfortunately, failures to come to the aid of someone in need are not unique, as the segments on “What Would You Do?” show. Help is not always forthcoming for those who may need it the most. Trying to understand why people do not always help became the focus of bystander intervention research (e.g., Latané & Darley, 1970).
To answer the question regarding when people help, researchers have focused on
1. how bystanders come to define emergencies,
2. when they decide to take responsibility for helping, and
3. how the costs and benefits of intervening affect their decisions of whether to help.
Defining the situation: The role of pluralistic ignorance
The decision to help is not a simple yes/no proposition. In fact, a series of questions must be addressed before help is given—even in emergencies in which time may be of the essence. Sometimes help comes quickly; an onlooker recently jumped from a Philadelphia subway platform to help a stranger who had fallen on the track. Help was clearly needed and was quickly given. But some situations are ambiguous, and potential helpers may have to decide whether a situation is one in which help, in fact, needs to be given.
To define ambiguous situations (including many emergencies), potential helpers may look to the action of others to decide what should be done. But those others are looking around too, also trying to figure out what to do. Everyone is looking, but no one is acting! Relying on others to define the situation and to then erroneously conclude that no intervention is necessary when help is actually needed is called pluralistic ignorance (Latané & Darley, 1970). When people use the inactions of others to define their own course of action, the resulting pluralistic ignorance leads to less help being given.
Do I have to be the one to help?: Diffusion of responsibility
Simply being with others may facilitate or inhibit whether we get involved in other ways as well. In situations in which help is needed, the presence or absence of others may affect whether a bystander will assume personal responsibility to give the assistance. If the bystander is alone, personal responsibility to help falls solely on the shoulders of that person. But what if others are present? Although it might seem that having more potential helpers around would increase the chances of the victim getting help, the opposite is often the case. Knowing that someone else could help seems to relieve bystanders of personal responsibility, so bystanders do not intervene. This phenomenon is known as diffusion of responsibility (Darley & Latané, 1968).
On the other hand, watch the video of the race officials following the 2013 Boston Marathon after two bombs exploded as runners crossed the finish line. Despite the presence of many spectators, the yellow-jacketed race officials immediately rushed to give aid and comfort to the victims of the blast. Each one no doubt felt a personal responsibility to help by virtue of their official capacity in the event; fulfilling the obligations of their roles overrode the influence of the diffusion of responsibility effect.
There is an extensive body of research showing the negative impact of pluralistic ignorance and diffusion of responsibility on helping (Fisher et al., 2011), in both emergencies and everyday need situations. These studies show the tremendous importance potential helpers place on the social situation in which unfortunate events occur, especially when it is not clear what should be done and who should do it. Other people provide important social information about how we should act and what our personal obligations might be. But does knowing a person needs help and accepting responsibility to provide that help mean the person will get assistance? Not necessarily.
The costs and rewards of helping
The nature of the help needed plays a crucial role in determining what happens next. Specifically, potential helpers engage in a cost–benefit analysis before getting involved (Dovidio et al., 2006). If the needed help is of relatively low cost in terms of time, money, resources, or risk, then help is more likely to be given. Lending a classmate a pencil is easy; confronting the knife-wielding assailant who attacked Kitty Genovese is an entirely different matter. As the unfortunate case of Hugo Alfredo Tale-Yax demonstrates, intervening may cost the life of the helper.
The potential rewards of helping someone will also enter into the equation, perhaps offsetting the cost of helping. Thanks from the recipient of help may be a sufficient reward. If helpful acts are recognized by others, helpers may receive social rewards of praise or monetary rewards. Even avoiding feelings of guilt if one does not help may be considered a benefit. Potential helpers consider how much helping will cost and compare those costs to the rewards that might be realized; it is the economics of helping. If costs outweigh the rewards, helping is less likely. If rewards are greater than cost, helping is more likely.
Who Helps?
Do you know someone who always seems to be ready, willing, and able to help? Do you know someone who never helps out? It seems there are personality and individual differences in the helpfulness of others. To answer the question of who chooses to help, researchers have examined 1) the role that sex and gender play in helping, 2) what personality traits are associated with helping, and 3) the characteristics of the “prosocial personality.”
Who are more helpful—men or women?
In terms of individual differences that might matter, one obvious question is whether men or women are more likely to help. In one of the “What Would You Do?” segments, a man takes a woman’s purse from the back of her chair and then leaves the restaurant. Initially, no one responds, but as soon as the woman asks about her missing purse, a group of men immediately rush out the door to catch the thief. So, are men more helpful than women? The quick answer is “not necessarily.” It all depends on the type of help needed. To be very clear, the general level of helpfulness may be pretty much equivalent between the sexes, but men and women help in different ways (Becker & Eagly, 2004; Eagly & Crowley, 1986). What accounts for these differences?
Two factors help to explain sex and gender differences in helping. The first is related to the cost–benefit analysis process discussed previously. Physical differences between men and women may come into play (e.g., Wood & Eagly, 2002); the fact that men tend to have greater upper body strength than women makes the cost of intervening in some situations less for a man. Confronting a thief is a risky proposition, and some strength may be needed in case the perpetrator decides to fight. A bigger, stronger bystander is less likely to be injured and more likely to be successful.
The second explanation is simple socialization. Men and women have traditionally been raised to play different social roles that prepare them to respond differently to the needs of others, and people tend to help in ways that are most consistent with their gender roles. Female gender roles encourage women to be compassionate, caring, and nurturing; male gender roles encourage men to take physical risks, to be heroic and chivalrous, and to be protective of those less powerful. As a consequence of social training and the gender roles that people have assumed, men may be more likely to jump onto subway tracks to save a fallen passenger, but women are more likely to give comfort to a friend with personal problems (Diekman & Eagly, 2000; Eagly & Crowley, 1986). There may be some specialization in the types of help given by the two sexes, but it is nice to know that there is someone out there—man or woman—who is able to give you the help that you need, regardless of what kind of help it might be.
A trait for being helpful: Agreeableness
Graziano and his colleagues (e.g., Graziano & Tobin, 2009; Graziano, Habishi, Sheese, & Tobin, 2007) have explored how agreeableness—one of the Big Five personality dimensions (e.g., Costa & McCrae, 1988)—plays an important role in prosocial behavior. Agreeableness is a core trait that includes such dispositional characteristics as being sympathetic, generous, forgiving, and helpful, and behavioral tendencies toward harmonious social relations and likeability. At the conceptual level, a positive relationship between agreeableness and helping may be expected, and research by Graziano et al. (2007) has found that those higher on the agreeableness dimension are, in fact, more likely than those low on agreeableness to help siblings, friends, strangers, or members of some other group. Agreeable people seem to expect that others will be similarly cooperative and generous in interpersonal relations, and they, therefore, act in helpful ways that are likely to elicit positive social interactions.
Searching for the prosocial personality
Rather than focusing on a single trait, Penner and his colleagues (Penner, Fritzsche, Craiger, & Freifeld, 1995; Penner & Orom, 2010) have taken a somewhat broader perspective and identified what they call the prosocial personality orientation. Their research indicates that two major characteristics are related to the prosocial personality and prosocial behavior. The first characteristic is called other-oriented empathy: People high on this dimension have a strong sense of social responsibility, empathize with and feel emotionally tied to those in need, understand the problems the victim is experiencing, and have a heightened sense of moral obligation to be helpful. This factor has been shown to be highly correlated with the trait of agreeableness discussed previously. The second characteristic, helpfulness, is more behaviorally oriented. Those high on the helpfulness factor have been helpful in the past, and because they believe they can be effective with the help they give, they are more likely to be helpful in the future.
Why Help?
Finally, the question of why a person would help needs to be asked. What motivation is there for that behavior? Psychologists have suggested that 1) evolutionary forces may serve to predispose humans to help others, 2) egoistic concerns may determine if and when help will be given, and 3) selfless, altruistic motives may also promote helping in some cases.
Evolutionary roots for prosocial behavior
Our evolutionary past may provide keys about why we help (Buss, 2004). Our very survival was no doubt promoted by the prosocial relations with clan and family members, and, as a hereditary consequence, we may now be especially likely to help those closest to us—blood-related relatives with whom we share a genetic heritage. According to evolutionary psychology, we are helpful in ways that increase the chances that our DNA will be passed along to future generations (Burnstein, Crandall, & Kitayama, 1994)—the goal of the “selfish gene” (Dawkins, 1976). Our personal DNA may not always move on, but we can still be successful in getting some portion of our DNA transmitted if our daughters, sons, nephews, nieces, and cousins survive to produce offspring. The favoritism shown for helping our blood relatives is called kin selection (Hamilton, 1964).
But, we do not restrict our relationships just to our own family members. We live in groups that include individuals who are unrelated to us, and we often help them too. Why? Reciprocal altruism (Trivers, 1971) provides the answer. Because of reciprocal altruism, we are all better off in the long run if we help one another. If helping someone now increases the chances that you will be helped later, then your overall chances of survival are increased. There is the chance that someone will take advantage of your help and not return your favors. But people seem predisposed to identify those who fail to reciprocate, and punishments including social exclusion may result (Buss, 2004). Cheaters will not enjoy the benefit of help from others, reducing the likelihood of the survival of themselves and their kin.
Evolutionary forces may provide a general inclination for being helpful, but they may not be as good an explanation for why we help in the here and now. What factors serve as proximal influences for decisions to help?
Egoistic motivation for helping
Most people would like to think that they help others because they are concerned about the other person’s plight. In truth, the reasons why we help may be more about ourselves than others: Egoistic or selfish motivations may make us help. Implicitly, we may ask, “What’s in it for me?” There are two major theories that explain what types of reinforcement helpers may be seeking. The negative state relief model (e.g., Cialdini, Darby, & Vincent, 1973; Cialdini, Kenrick, & Baumann, 1982) suggests that people sometimes help in order to make themselves feel better. Whenever we are feeling sad, we can use helping someone else as a positive mood boost to feel happier. Through socialization, we have learned that helping can serve as a secondary reinforcement that will relieve negative moods (Cialdini & Kenrick, 1976).
The arousal: cost–reward model provides an additional way to understand why people help (e.g., Piliavin, Dovidio, Gaertner, & Clark, 1981). This model focuses on the aversive feelings aroused by seeing another in need. If you have ever heard an injured puppy yelping in pain, you know that feeling, and you know that the best way to relieve that feeling is to help and to comfort the puppy. Similarly, when we see someone who is suffering in some way (e.g., injured, homeless, hungry), we vicariously experience a sympathetic arousal that is unpleasant, and we are motivated to eliminate that aversive state. One way to do that is to help the person in need. By eliminating the victim’s pain, we eliminate our own aversive arousal. Helping is an effective way to alleviate our own discomfort.
As an egoistic model, the arousal: cost–reward model explicitly includes the cost/reward considerations that come into play. Potential helpers will find ways to cope with the aversive arousal that will minimize their costs—maybe by means other than direct involvement. For example, the costs of directly confronting a knife-wielding assailant might stop a bystander from getting involved, but the cost of some indirect help (e.g., calling the police) may be acceptable. In either case, the victim’s need is addressed. Unfortunately, if the costs of helping are too high, bystanders may reinterpret the situation to justify not helping at all. We now know that the attack of Kitty Genovese was a murderous assault, but it may have been misperceived as a lover’s spat by someone who just wanted to go back to sleep. For some, fleeing the situation causing their distress may do the trick (Piliavin et al., 1981).
The egoistically based negative state relief model and the arousal: cost–reward model see the primary motivation for helping as being the helper’s own outcome. Recognize that the victim’s outcome is of relatively little concern to the helper—benefits to the victim are incidental byproducts of the exchange (Dovidio et al., 2006). The victim may be helped, but the helper’s real motivation according to these two explanations is egoistic: Helpers help to the extent that it makes them feel better.
Altruistic help
Although many researchers believe that egoism is the only motivation for helping, others suggest that altruism—helping that has as its ultimate goal the improvement of another’s welfare—may also be a motivation for helping under the right circumstances. Batson (2011) has offered the empathy–altruism model to explain altruistically motivated helping for which the helper expects no benefits. According to this model, the key for altruism is empathizing with the victim, that is, putting oneself in the shoes of the victim and imagining how the victim must feel. When taking this perspective and having empathic concern, potential helpers become primarily interested in increasing the well-being of the victim, even if the helper must incur some costs that might otherwise be easily avoided. The empathy–altruism model does not dismiss egoistic motivations; helpers not empathizing with a victim may experience personal distress and have an egoistic motivation, not unlike the feelings and motivations explained by the arousal: cost–reward model. Because egoistically motivated individuals are primarily concerned with their own cost–benefit outcomes, they are less likely to help if they think they can escape the situation with no costs to themselves. In contrast, altruistically motivated helpers are willing to accept the cost of helping to benefit a person with whom they have empathized—this “self-sacrificial” approach to helping is the hallmark of altruism (Batson, 2011).
Although there is still some controversy about whether people can ever act for purely altruistic motives, it is important to recognize that, while helpers may derive some personal rewards by helping another, the help that has been given is also benefitting someone who was in need. The residents who offered food, blankets, and shelter to stranded runners who were unable to get back to their hotel rooms because of the Boston Marathon bombing undoubtedly received positive rewards because of the help they gave, but those stranded runners who were helped also got the help they badly needed. “In fact, it is quite remarkable how the fates of people who have never met can be so intertwined and complementary. Your benefit is mine; and mine is yours” (Dovidio et al., 2006, p. 143).
Conclusion
We started this module by asking the question, “Who helps when and why?” As we have shown, the question of when help will be given is not quite as simple as the viewers of “What Would You Do?” believe. The power of the situation that operates on potential helpers in real time is not fully considered. What might appear to be a split-second decision to help is actually the result of consideration of multiple situational factors (e.g., the helper’s interpretation of the situation, the presence and ability of others to provide the help, the results of a cost–benefit analysis) (Dovidio et al., 2006). We have found that men and women tend to help in different ways—men are more impulsive and physically active, while women are more nurturing and supportive. Personality characteristics such as agreeableness and the prosocial personality orientation also affect people’s likelihood of giving assistance to others. And, why would people help in the first place? In addition to evolutionary forces (e.g., kin selection, reciprocal altruism), there is extensive evidence to show that helping and prosocial acts may be motivated by selfish, egoistic desires; by selfless, altruistic goals; or by some combination of egoistic and altruistic motives. (For a fuller consideration of the field of prosocial behavior, we refer you to Dovidio et al. [2006].)
Outside Resources
Article: Alden, L. E., & Trew, J. L. (2013). If it makes you happy: Engaging in kind acts increases positive affect in socially anxious individuals. Emotion, 13, 64-75. doi:10.1037/a0027761 Review available at:
http://nymag.com/scienceofus/2015/07...y-be-nice.html
Book: Batson, C. D. (2011). Altruism in humans. New York, NY: Oxford University Press.
Book: Dovidio, J. F., Piliavin, J. A., Schroeder, D. A., & Penner, L. A. (2006). The social psychology of prosocial behavior. Mahwah, NJ: Erlbaum.
Book: Mikulincer, M., & Shaver, P. R. (2010). Prosocial motives, emotions, and behavior: The better angels of our nature. Washington, DC: American Psychological Association.
Book: Schroeder, D. A. & Graziano, W. G. (forthcoming). The Oxford handbook of prosocial behavior. New York, NY: Oxford University Press.
Institution: Center for Generosity, University of Notre Dame, 936 Flanner Hall, Notre Dame, IN 46556.
http://www.generosityresearch.nd.edu
Institution: The Greater Good Science Center, University of California, Berkeley.
www.greatergood.berkeley.edu
News Article: Bystanders Stop Suicide Attempt
http://jfmueller.faculty.noctrl.edu/crow/bystander.pdf
Social Psychology Network (SPN)
http://www.socialpsychology.org/social.htm#prosocial
Video: Episodes (individual) of “Primetime: What Would You Do?”
http://www.YouTube.com
Video: Episodes of “Primetime: What Would You Do?” that often include some commentary from experts in the field may be available at
http://www.abc.com
Video: From The Inquisitive Mind website, a great overview of different aspects of helping and prosocial behavior, including pluralistic ignorance, diffusion of responsibility, the bystander effect, and empathy.
Discussion Questions
1. Pluralistic ignorance suggests that inactions by other observers of an emergency will decrease the likelihood that help will be given. What do you think will happen if even one other observer begins to offer assistance to a victim?
2. In addition to those mentioned in the module, what other costs and rewards might affect a potential helper’s decision of whether to help? Receiving help to solve some problem is an obvious benefit for someone in need; are there any costs that a person might have to bear as a result of receiving help from someone?
3. What are the characteristics possessed by your friends who are most helpful? By your friends who are least helpful? What has made your helpful friends and your unhelpful friends so different? What kinds of help have they given to you, and what kind of help have you given to them? Are you a helpful person?
4. Do you think that sex and gender differences in the frequency of helping and the kinds of helping have changed over time? Why? Do you think that we might expect more changes in the future?
5. What do you think is the primary motive for helping behavior: egoism or altruism? Are there any professions in which people are being “pure” altruists, or are some egoistic motivations always playing a role?
6. There are other prosocial behaviors in addition to the kind of helping discussed here. People volunteer to serve many different causes and organizations. People come together to cooperate with one another to achieve goals that no one individual could reach alone. How do you think the factors that affect helping might affect prosocial actions such as volunteering and cooperating? Do you think that there might be other factors that make people more or less likely to volunteer their time and energy or to cooperate in a group?
Vocabulary
Agreeableness
A core personality trait that includes such dispositional characteristics as being sympathetic, generous, forgiving, and helpful, and behavioral tendencies toward harmonious social relations and likeability.
Altruism
A motivation for helping that has the improvement of another’s welfare as its ultimate goal, with no expectation of any benefits for the helper.
Arousal: cost–reward model
An egoistic theory proposed by Piliavin et al. (1981) that claims that seeing a person in need leads to the arousal of unpleasant feelings, and observers are motivated to eliminate that aversive state, often by helping the victim. A cost–reward analysis may lead observers to react in ways other than offering direct assistance, including indirect help, reinterpretation of the situation, or fleeing the scene.
Bystander intervention
The phenomenon whereby people intervene to help others in need even if the other is a complete stranger and the intervention puts the helper at risk.
Cost–benefit analysis
A decision-making process that compares the cost of an action or thing against the expected benefit to help determine the best course of action.
Diffusion of responsibility
When deciding whether to help a person in need, knowing that there are others who could also provide assistance relieves bystanders of some measure of personal responsibility, reducing the likelihood that bystanders will intervene.
Egoism
A motivation for helping that has the improvement of the helper’s own circumstances as its primary goal.
Empathic concern
According to Batson’s empathy–altruism hypothesis, observers who empathize with a person in need (that is, put themselves in the shoes of the victim and imagine how that person feels) will experience empathic concern and have an altruistic motivation for helping.
Empathy–altruism model
An altruistic theory proposed by Batson (2011) that claims that people who put themselves in the shoes of a victim and imagine how the victim feels will experience empathic concern that evokes an altruistic motivation for helping.
Helpfulness
A component of the prosocial personality orientation; describes individuals who have been helpful in the past and, because they believe they can be effective with the help they give, are more likely to be helpful in the future.
Helping
Prosocial acts that typically involve situations in which one person is in need and another provides the necessary assistance to eliminate the other’s need.
Kin selection
According to evolutionary psychology, the favoritism shown for helping our blood relatives, with the goal of increasing the likelihood that some portion of our DNA will be passed on to future generations.
Negative state relief model
An egoistic theory proposed by Cialdini et al. (1982) that claims that people have learned through socialization that helping can serve as a secondary reinforcement that will relieve negative moods such as sadness.
Other-oriented empathy
A component of the prosocial personality orientation; describes individuals who have a strong sense of social responsibility, empathize with and feel emotionally tied to those in need, understand the problems the victim is experiencing, and have a heightened sense of moral obligations to be helpful.
Personal distress
According to Batson’s empathy–altruism hypothesis, observers who take a detached view of a person in need will experience feelings of being “worried” and “upset” and will have an egoistic motivation for helping to relieve that distress.
Pluralistic ignorance
Relying on the actions of others to define an ambiguous need situation and to then erroneously conclude that no help or intervention is necessary.
Prosocial behavior
Social behavior that benefits another person.
Prosocial personality orientation
A measure of individual differences that identifies two sets of personality characteristics (other-oriented empathy, helpfulness) that are highly correlated with prosocial behavior.
Reciprocal altruism
According to evolutionary psychology, a genetic predisposition for people to help those who have previously helped them.
• 8.1: Personality Traits
Personality traits reflect people’s characteristic patterns of thoughts, feelings, and behaviors. Personality traits imply consistency and stability—someone who scores high on a specific trait like Extraversion is expected to be sociable in different situations and over time. Thus, trait psychology rests on the idea that people differ from one another in terms of where they stand on a set of basic trait dimensions that persist over time and across situations.
08: Personality
By Edward Diener and Richard E. Lucas
University of Utah, University of Virginia, Michigan State University
Personality traits reflect people’s characteristic patterns of thoughts, feelings, and behaviors. Personality traits imply consistency and stability—someone who scores high on a specific trait like Extraversion is expected to be sociable in different situations and over time. Thus, trait psychology rests on the idea that people differ from one another in terms of where they stand on a set of basic trait dimensions that persist over time and across situations. The most widely used system of traits is called the Five-Factor Model. This system includes five broad traits that can be remembered with the acronym OCEAN: Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. Each of the major traits from the Big Five can be divided into facets to give a more fine-grained analysis of someone's personality. In addition, some trait theorists argue that there are other traits that cannot be completely captured by the Five-Factor Model. Critics of the trait concept argue that people do not act consistently from one situation to the next and that people are very influenced by situational forces. Thus, one major debate in the field concerns the relative power of people’s traits versus the situations in which they find themselves as predictors of their behavior.
learning objectives
• List and describe the “Big Five” (“OCEAN”) personality traits that comprise the Five-Factor Model of personality.
• Describe how the facet approach extends broad personality traits.
• Explain a critique of the personality-trait concept.
• Describe in what ways personality traits may be manifested in everyday behavior.
• Describe each of the Big Five personality traits, and the low and high end of the dimension.
• Give examples of each of the Big Five personality traits, including both a low and high example.
• Describe how traits and social learning combine to predict your social activities.
• Describe your theory of how personality traits get refined by social learning.
Introduction
When we observe people around us, one of the first things that strikes us is how different people are from one another. Some people are very talkative while others are very quiet. Some are active whereas others are couch potatoes. Some worry a lot, others almost never seem anxious. Each time we use one of these words, words like “talkative,” “quiet,” “active,” or “anxious,” to describe those around us, we are talking about a person’s personality—the characteristic ways that people differ from one another. Personality psychologists try to describe and understand these differences.
Although there are many ways to think about the personalities that people have, Gordon Allport and other “personologists” claimed that we can best understand the differences between individuals by understanding their personality traits. Personality traits reflect basic dimensions on which people differ (Matthews, Deary, & Whiteman, 2003). According to trait psychologists, there are a limited number of these dimensions (dimensions like Extraversion, Conscientiousness, or Agreeableness), and each individual falls somewhere on each dimension, meaning that they could be low, medium, or high on any specific trait.
An important feature of personality traits is that they reflect continuous distributions rather than distinct personality types. This means that when personality psychologists talk about Introverts and Extraverts, they are not really talking about two distinct types of people who are completely and qualitatively different from one another. Instead, they are talking about people who score relatively low or relatively high along a continuous distribution. In fact, when personality psychologists measure traits like Extraversion, they typically find that most people score somewhere in the middle, with smaller numbers showing more extreme levels. The figure below shows the distribution of Extraversion scores from a survey of thousands of people. As you can see, most people report being moderately, but not extremely, extraverted, with fewer people reporting very high or very low scores.
Three criteria characterize personality traits: (1) consistency, (2) stability, and (3) individual differences.
1. To have a personality trait, individuals must be somewhat consistent across situations in their behaviors related to the trait. For example, if they are talkative at home, they tend also to be talkative at work.
2. Individuals with a trait are also somewhat stable over time in behaviors related to the trait. If they are talkative, for example, at age 30, they will also tend to be talkative at age 40.
3. People differ from one another on behaviors related to the trait. Using speech is not a personality trait and neither is walking on two feet—virtually all individuals do these activities, and there are almost no individual differences. But people differ on how frequently they talk and how active they are, and thus personality traits such as Talkativeness and Activity Level do exist.
A challenge of the trait approach was to discover the major traits on which all people differ. Scientists for many decades generated hundreds of new traits, so that it was soon difficult to keep track and make sense of them. For instance, one psychologist might focus on individual differences in “friendliness,” whereas another might focus on the highly related concept of “sociability.” Scientists began seeking ways to reduce the number of traits in some systematic way and to discover the basic traits that describe most of the differences between people.
The way that Gordon Allport and his colleague Henry Odbert approached this was to search the dictionary for all descriptors of personality (Allport & Odbert, 1936). Their approach was guided by the lexical hypothesis, which states that all important personality characteristics should be reflected in the language that we use to describe other people. Therefore, if we want to understand the fundamental ways in which people differ from one another, we can turn to the words that people use to describe one another. So if we want to know what words people use to describe one another, where should we look? Allport and Odbert looked in the most obvious place—the dictionary. Specifically, they took all the personality descriptors that they could find in the dictionary (they started with almost 18,000 words but quickly reduced that list to a more manageable number) and then used statistical techniques to determine which words “went together.” In other words, if everyone who said that they were “friendly” also said that they were “sociable,” then this might mean that personality psychologists would only need a single trait to capture individual differences in these characteristics. Statistical techniques were used to determine whether a small number of dimensions might underlie all of the thousands of words we use to describe people.
The Five-Factor Model of Personality
Research that used the lexical approach showed that many of the personality descriptors found in the dictionary do indeed overlap. In other words, many of the words that we use to describe people are synonyms. Thus, if we want to know what a person is like, we do not necessarily need to ask how sociable they are, how friendly they are, and how gregarious they are. Instead, because sociable people tend to be friendly and gregarious, we can summarize this personality dimension with a single term. Someone who is sociable, friendly, and gregarious would typically be described as an “Extravert.” Once we know she is an extravert, we can assume that she is sociable, friendly, and gregarious.
Statistical methods (specifically, a technique called factor analysis) helped to determine whether a small number of dimensions underlie the diversity of words that people like Allport and Odbert identified. The most widely accepted system to emerge from this approach was “The Big Five” or “Five-Factor Model” (Goldberg, 1990; McCrae & John, 1992; McCrae & Costa, 1987). The Big Five comprises the five major traits shown in Figure 3.2.2 below. A way to remember these five is with the acronym OCEAN (O is for Openness; C is for Conscientiousness; E is for Extraversion; A is for Agreeableness; N is for Neuroticism). Figure 3.2.3 provides descriptions of people who would score high and low on each of these traits.
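The logic behind this factor-analytic step can be illustrated with a minimal, hypothetical simulation: if synonymous descriptors are driven by the same underlying trait, their correlation matrix will have only a few large eigenvalues—roughly one per underlying dimension. The descriptor names, loadings, and sample size below are illustrative assumptions, not real data from the lexical studies.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Two hypothetical latent traits (e.g., Extraversion, Conscientiousness).
extraversion = rng.normal(size=n)
conscientiousness = rng.normal(size=n)

# Six synthetic "dictionary descriptor" ratings, each loading mostly on
# one latent trait plus noise (loadings here are illustrative).
descriptors = {
    "friendly":   extraversion      + 0.5 * rng.normal(size=n),
    "sociable":   extraversion      + 0.5 * rng.normal(size=n),
    "gregarious": extraversion      + 0.5 * rng.normal(size=n),
    "organized":  conscientiousness + 0.5 * rng.normal(size=n),
    "thorough":   conscientiousness + 0.5 * rng.normal(size=n),
    "careful":    conscientiousness + 0.5 * rng.normal(size=n),
}
X = np.column_stack(list(descriptors.values()))

# Correlation matrix: synonymous descriptors correlate highly.
R = np.corrcoef(X, rowvar=False)

# Eigenvalues of R: the number of large eigenvalues suggests how many
# underlying dimensions are needed to summarize the descriptors.
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
print(np.round(eigvals, 2))  # two eigenvalues well above 1, four small ones
```

Running the sketch shows two dominant eigenvalues and four small ones, mirroring in miniature how lexical researchers inferred that a handful of dimensions could summarize thousands of overlapping descriptors.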
Scores on the Big Five traits are mostly independent. That means that a person’s standing on one trait tells very little about their standing on the other traits of the Big Five. For example, a person can be extremely high in Extraversion and be either high or low on Neuroticism. Similarly, a person can be low in Agreeableness and be either high or low in Conscientiousness. Thus, in the Five-Factor Model, you need five scores to describe most of an individual’s personality.
In the Appendix to this module, we present a short scale to assess the Five-Factor Model of personality (Donnellan, Oswald, Baird, & Lucas, 2006). You can take this test to see where you stand in terms of your Big Five scores. John Johnson has also created a helpful website that has personality scales that can be used and taken by the general public:
http://www.personal.psu.edu/j5j/IPIP/ipipneo120.htm
After seeing your scores, you can judge for yourself whether you think such tests are valid.
Traits are important and interesting because they describe stable patterns of behavior that persist for long periods of time (Caspi, Roberts, & Shiner, 2005). Importantly, these stable patterns can have broad-ranging consequences for many areas of our life (Roberts, Kuncel, Shiner, Caspi, & Goldberg, 2007). For instance, think about the factors that determine success in college. If you were asked to guess what factors predict good grades in college, you might guess something like intelligence. This guess would be correct, but we know much more about who is likely to do well. Specifically, personality researchers have also found that personality traits like Conscientiousness play an important role in college and beyond, probably because highly conscientious individuals study hard, get their work done on time, and are less distracted by nonessential activities that take time away from school work. In addition, highly conscientious people are often healthier than people low in conscientiousness because they are more likely to maintain healthy diets, to exercise, and to follow basic safety procedures like wearing seat belts or bicycle helmets. Over the long term, this consistent pattern of behaviors can add up to meaningful differences in health and longevity. Thus, personality traits are not just a useful way to describe people you know; they actually help psychologists predict how good a worker someone will be, how long he or she will live, and the types of jobs and activities the person will enjoy. Thus, there is growing interest in personality psychology among psychologists who work in applied settings, such as health psychology or organizational psychology.
Facets of Traits (Subtraits)
So how does it feel to be told that your entire personality can be summarized with scores on just five personality traits? Do you think these five scores capture the complexity of your own and others’ characteristic patterns of thoughts, feelings, and behaviors? Most people would probably say no, pointing to some exception in their behavior that goes against the general pattern that others might see. For instance, you may know people who are warm and friendly and find it easy to talk with strangers at a party yet are terrified if they have to perform in front of others or speak to large groups of people. The fact that there are different ways of being extraverted or conscientious shows that there is value in considering lower-level units of personality that are more specific than the Big Five traits. These more specific, lower-level units of personality are often called facets.
To give you a sense of what these narrow units are like, Figure 3.2.4 shows facets for each of the Big Five traits. It is important to note that although personality researchers generally agree about the value of the Big Five traits as a way to summarize one’s personality, there is no widely accepted list of facets that should be studied. The list seen here, based on work by researchers Paul Costa and Jeff McCrae, thus reflects just one possible list among many. It should, however, give you an idea of some of the facets making up each trait of the Five-Factor Model.
Facets can be useful because they provide more specific descriptions of what a person is like. For instance, if we take our friend who loves parties but hates public speaking, we might say that this person scores high on the “gregariousness” and “warmth” facets of extraversion, while scoring lower on facets such as “assertiveness” or “excitement-seeking.” This precise profile of facet scores not only provides a better description, it might also allow us to better predict how this friend will do in a variety of different jobs (for example, jobs that require public speaking versus jobs that involve one-on-one interactions with customers; Paunonen & Ashton, 2001). Because different facets within a broad, global trait like extraversion tend to go together (those who are gregarious are often but not always assertive), the broad trait often provides a useful summary of what a person is like. But when we really want to know a person, facet scores add to our knowledge in important ways.
Other Traits Beyond the Five-Factor Model
Despite the popularity of the Five-Factor Model, it is certainly not the only model that exists. Some suggest that there are more than five major traits, or perhaps even fewer. For example, in one of the first comprehensive models to be proposed, Hans Eysenck suggested that Extraversion and Neuroticism are most important. Eysenck believed that by combining people’s standing on these two major traits, we could account for many of the differences in personality that we see in people (Eysenck, 1981). So for instance, a neurotic introvert would be shy and nervous, while a stable introvert might avoid social situations and prefer solitary activities, but he may do so with a calm, steady attitude and little anxiety or emotion. Interestingly, Eysenck attempted to link these two major dimensions to underlying differences in people’s biology. For instance, he suggested that introverts experienced too much sensory stimulation and arousal, which made them want to seek out quiet settings and less stimulating environments. More recently, Jeffrey Gray suggested that these two broad traits are related to fundamental reward and avoidance systems in the brain—extraverts might be motivated to seek reward and thus exhibit assertive, reward-seeking behavior, whereas people high in neuroticism might be motivated to avoid punishment and thus may experience anxiety as a result of their heightened awareness of the threats in the world around them (Gray, 1981; the model has since been updated; see Gray & McNaughton, 2000). These early theories have led to a burgeoning interest in identifying the physiological underpinnings of the individual differences that we observe.
Another revision of the Big Five is the HEXACO model of traits (Ashton & Lee, 2007). This model is similar to the Big Five, but it posits slightly different versions of some of the traits, and its proponents argue that one important class of individual differences was omitted from the Five-Factor Model. The HEXACO adds Honesty-Humility as a sixth dimension of personality. People high in this trait are sincere, fair, and modest, whereas those low in the trait are manipulative, narcissistic, and self-centered. Thus, trait theorists agree that personality traits are important for understanding behavior, but debates continue about the exact number and composition of the traits that are most important.
There are other important traits that are not included in comprehensive models like the Big Five. Although the five factors capture much that is important about personality, researchers have suggested other traits that capture interesting aspects of our behavior. In Figure 5 below we present just a few, out of hundreds, of the other traits that have been studied by personologists.
Not all of the above traits are currently popular with scientists, yet each of them has experienced popularity in the past. Although the Five-Factor Model has been the target of more rigorous research than some of the traits above, these additional personality characteristics give a good idea of the wide range of behaviors and attitudes that traits can cover.
The Person-Situation Debate and Alternatives to the Trait Perspective
The ideas described in this module should probably seem familiar, if not obvious to you. When asked to think about what our friends, enemies, family members, and colleagues are like, some of the first things that come to mind are their personality characteristics. We might think about how warm and helpful our first teacher was, how irresponsible and careless our brother is, or how demanding and insulting our first boss was. Each of these descriptors reflects a personality trait, and most of us generally think that the descriptions that we use for individuals accurately reflect their “characteristic pattern of thoughts, feelings, and behaviors,” or in other words, their personality.
But what if this idea were wrong? What if our belief in personality traits were an illusion and people are not consistent from one situation to the next? This was a possibility that shook the foundation of personality psychology in the late 1960s when Walter Mischel published a book called Personality and Assessment (1968). In this book, Mischel suggested that if one looks closely at people’s behavior across many different situations, the consistency is really not that impressive. In other words, children who cheat on tests at school may steadfastly follow all rules when playing games and may never tell a lie to their parents. In other words, he suggested, there may not be any general trait of honesty that links these seemingly related behaviors. Furthermore, Mischel suggested that observers may believe that broad personality traits like honesty exist, when in fact, this belief is an illusion. The debate that followed the publication of Mischel’s book was called the person-situation debate because it pitted the power of personality against the power of situational factors as determinants of the behavior that people exhibit.
Because of the findings that Mischel emphasized, many psychologists focused on an alternative to the trait perspective. Instead of studying broad, context-free descriptions, like the trait terms we’ve described so far, Mischel thought that psychologists should focus on people’s distinctive reactions to specific situations. For instance, although there may not be a broad and general trait of honesty, some children may be especially likely to cheat on a test when the risk of being caught is low and the rewards for cheating are high. Others might be motivated by the sense of risk involved in cheating and may do so even when the rewards are not very high. Thus, the behavior itself results from the child’s unique evaluation of the risks and rewards present at that moment, along with her evaluation of her abilities and values. Because of this, the same child might act very differently in different situations. Thus, Mischel thought that specific behaviors were driven by the interaction between very specific, psychologically meaningful features of the situation in which people found themselves, the person’s unique way of perceiving that situation, and his or her abilities for dealing with it. Mischel and others argued that it was these social-cognitive processes underlying people’s reactions to specific situations that provide some consistency when situational features are the same. If so, then studying these narrow, context-specific reactions might be more fruitful than cataloging and measuring broad, context-free traits like Extraversion or Neuroticism.
In the years after the publication of Mischel’s (1968) book, debates raged about whether personality truly exists, and if so, how it should be studied. And, as is often the case, it turns out that a middle ground more moderate than what the situationists proposed could be reached. It is certainly true, as Mischel pointed out, that a person’s behavior in one specific situation is not a good guide to how that person will behave in a very different specific situation. Someone who is extremely talkative at one specific party may sometimes be reticent to speak up during class and may even act like a wallflower at a different party. But this does not mean that personality does not exist, nor does it mean that people’s behavior is completely determined by situational factors. Indeed, research conducted after the person-situation debate shows that on average, the effect of the “situation” is about as large as that of personality traits. However, it is also true that if psychologists assess a broad range of behaviors across many different situations, there are general tendencies that emerge. Personality traits give an indication about how people will act on average, but frequently they are not so good at predicting how a person will act in a specific situation at a certain moment in time. Thus, to best capture broad traits, one must assess aggregate behaviors, averaged over time and across many different types of situations. Most modern personality researchers agree that there is a place for broad personality traits and for the narrower units such as those studied by Walter Mischel.
Appendix
The Mini-IPIP Scale
(Donnellan, Oswald, Baird, & Lucas, 2006)
Instructions: Below are phrases describing people’s behaviors. Please use the rating scale below to describe how accurately each statement describes you. Describe yourself as you generally are now, not as you wish to be in the future. Describe yourself as you honestly see yourself, in relation to other people you know of the same sex as you are, and roughly your same age. Please read each statement carefully, and put a number from 1 to 5 next to it to describe how accurately the statement describes you.
1 = Very inaccurate
2 = Moderately inaccurate
3 = Neither inaccurate nor accurate
4 = Moderately accurate
5 = Very accurate
1. _______ Am the life of the party (E)
2. _______ Sympathize with others’ feelings (A)
3. _______ Get chores done right away (C)
4. _______ Have frequent mood swings (N)
5. _______ Have a vivid imagination (O)
6. _______Don’t talk a lot (E)
7. _______ Am not interested in other people’s problems (A)
8. _______ Often forget to put things back in their proper place (C)
9. _______ Am relaxed most of the time (N)
10. ______ Am not interested in abstract ideas (O)
11. ______ Talk to a lot of different people at parties (E)
12. ______ Feel others’ emotions (A)
13. ______ Like order (C)
14. ______ Get upset easily (N)
15. ______ Have difficulty understanding abstract ideas (O)
16. ______ Keep in the background (E)
17. ______ Am not really interested in others (A)
18. ______ Make a mess of things (C)
19. ______ Seldom feel blue (N)
20. ______ Do not have a good imagination (O)
Scoring: The first thing you must do is to reverse the items that are worded in the opposite direction. To do this, subtract the number you put for that item from 6. So if you put a 4, for instance, it will become a 2. Cross out the score you put when you took the scale, and write in the new number (your original score subtracted from 6).
Items to be reversed in this way: 6, 7, 8, 9, 10, 15, 16, 17, 18, 19, 20
Next, you need to add up the scores for each of the five OCEAN scales (including the reversed numbers where relevant). Each OCEAN score will be the sum of four items. Place the sum next to each scale below.
__________ Openness: Add items 5, 10, 15, 20
__________ Conscientiousness: Add items 3, 8, 13, 18
__________ Extraversion: Add items 1, 6, 11, 16
__________ Agreeableness: Add items 2, 7, 12, 17
__________ Neuroticism: Add items 4, 9, 14, 19
Compare your scores to the norms below to see where you stand on each scale. If you are low on a trait, it means you are the opposite of the trait label. For example, low on Extraversion is Introversion, low on Openness is Conventional, and low on Agreeableness is Assertive.
19–20 Extremely High, 17–18 Very High, 14–16 High,
11–13 Neither high nor low; in the middle, 8–10 Low, 6–7 Very low, 4–5 Extremely low
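The scoring procedure above (reverse-score the listed items, then sum four items per scale) can be sketched in a short Python script. This is an illustrative sketch only; the function and variable names are our own, not part of any published scoring tool.

```python
# Items that must be reverse-scored (new score = 6 - original score),
# and the four items that make up each OCEAN scale, as listed above.
REVERSED = {6, 7, 8, 9, 10, 15, 16, 17, 18, 19, 20}

SCALES = {
    "Openness":          [5, 10, 15, 20],
    "Conscientiousness": [3, 8, 13, 18],
    "Extraversion":      [1, 6, 11, 16],
    "Agreeableness":     [2, 7, 12, 17],
    "Neuroticism":       [4, 9, 14, 19],
}

def score_mini_ipip(responses):
    """responses: dict mapping item number (1-20) to a rating from 1 to 5."""
    adjusted = {
        item: (6 - rating) if item in REVERSED else rating
        for item, rating in responses.items()
    }
    return {scale: sum(adjusted[i] for i in items)
            for scale, items in SCALES.items()}

# A respondent who answers 3 ("neither inaccurate nor accurate") to every
# item scores 12 on each scale, squarely in the middle band of the norms.
print(score_mini_ipip({i: 3 for i in range(1, 21)}))
```

Note that reverse-scoring matters: a respondent who marks 5 on every item does not get the maximum of 20 on each scale, because the reversed items each contribute only 1 point after adjustment.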
Outside Resources
Video 1: Gabriela Cintron’s – 5 Factors of Personality (OCEAN Song). This is a student-made video which cleverly describes, through song, common behavioral characteristics of the Big 5 personality traits. It was one of the winning entries in the 2016-17 Noba + Psi Chi Student Video Award.
Video 2: Michael Harris’ – Personality Traits: The Big 5 and More. This is a student-made video that looks at characteristics of the OCEAN traits through a series of funny vignettes. It also presents on the Person vs Situation Debate. It was one of the winning entries in the 2016-17 Noba + Psi Chi Student Video Award.
Video 3: David M. Cole’s – Grouchy with a Chance of Stomping. This is a student-made video that makes a very important point about the relationship between personality traits and behavior using a handy weather analogy. It was one of the winning entries in the 2016-17 Noba + Psi Chi Student Video Award.
Web: International Personality Item Pool
http://ipip.ori.org/
Web: John Johnson personality scales
http://www.personal.psu.edu/j5j/IPIP/ipipneo120.htm
Web: Personality trait systems compared
http://www.personalityresearch.org/bigfive/goldberg.html
Web: Sam Gosling website
homepage.psy.utexas.edu/homep...samgosling.htm
Discussion Questions
1. Consider different combinations of the Big Five, such as O (Low), C (High), E (Low), A (High), and N (Low). What would this person be like? Do you know anyone who is like this? Can you select politicians, movie stars, and other famous people and rate them on the Big Five?
2. How do you think learning and inherited personality traits get combined in adult personality?
3. Can you think of instances where people do not act consistently—where their personality traits are not good predictors of their behavior?
4. Has your personality changed over time, and in what ways?
5. Can you think of a personality trait not mentioned in this module that describes how people differ from one another?
6. When do extremes in personality traits become harmful, and when are they unusual but productive of good outcomes?
Vocabulary
Agreeableness
A personality trait that reflects a person’s tendency to be compassionate, cooperative, warm, and caring to others. People low in agreeableness tend to be rude, hostile, and to pursue their own interests over those of others.
Conscientiousness
A personality trait that reflects a person’s tendency to be careful, organized, hardworking, and to follow rules.
Continuous distributions
Characteristics can go from low to high, with all different intermediate values possible. One does not simply have the trait or not have it, but can possess varying amounts of it.
Extraversion
A personality trait that reflects a person’s tendency to be sociable, outgoing, active, and assertive.
Facets
Broad personality traits can be broken down into narrower facets or aspects of the trait. For example, extraversion has several facets, such as sociability, dominance, risk-taking and so forth.
Factor analysis
A statistical technique for grouping similar things together according to how highly they are associated.
Five-Factor Model
(also called the Big Five) The Five-Factor Model is a widely accepted model of personality traits. Advocates of the model believe that much of the variability in people’s thoughts, feelings, and behaviors can be summarized with five broad traits. These five traits are Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism.
HEXACO model
The HEXACO model is an alternative to the Five-Factor Model. The HEXACO model includes six traits, five of which are variants of the traits included in the Big Five (Emotionality [E], Extraversion [X], Agreeableness [A], Conscientiousness [C], and Openness [O]). The sixth factor, Honesty-Humility [H], is unique to this model.
Independent
Two characteristics or traits are separate from one another-- a person can be high on one and low on the other, or vice-versa. Some correlated traits are relatively independent in that although there is a tendency for a person high on one to also be high on the other, this is not always the case.
Lexical hypothesis
The lexical hypothesis is the idea that the most important differences between people will be encoded in the language that we use to describe people. Therefore, if we want to know which personality traits are most important, we can look to the language that people use to describe themselves and others.
Neuroticism
A personality trait that reflects the tendency to be interpersonally sensitive and the tendency to experience negative emotions like anxiety, fear, sadness, and anger.
Openness to Experience
A personality trait that reflects a person’s tendency to seek out and to appreciate new things, including thoughts, feelings, values, and experiences.
Personality
Enduring predispositions that characterize a person, such as styles of thought, feelings and behavior.
Personality traits
Enduring dispositions in behavior that show differences across individuals, and which tend to characterize the person across varying types of situations.
Person-situation debate
The person-situation debate is a historical debate about the relative power of personality traits as compared to situational influences on behavior. The situationist critique, which started the person-situation debate, suggested that people overestimate the extent to which personality traits are consistent across situations.
• 9.1: Affective Neuroscience
This module provides a brief overview of the neuroscience of emotion. It integrates findings from human and animal research to describe the brain networks and associated neurotransmitters involved in basic affective systems.
• 9.2: Functions of Emotions
Emotions play a crucial role in our lives because they have important functions. This module describes those functions, dividing the discussion into three areas: the intrapersonal, the interpersonal, and the social and cultural functions of emotions. All in all we will see that emotions are a crucially important aspect of our psychological composition, having meaning and function to each of us individually, to our relationships with others in groups, and to our societies as a whole.
• 9.3: Drive States
Our thoughts and behaviors are strongly influenced by affective experiences known as drive states. These drive states motivate us to fulfill goals that are beneficial to our survival and reproduction. This module provides an overview of key drive states, including information about their neurobiology and their psychological effects.
09: Emotions and Motivation
By Eddie Harmon-Jones and Cindy Harmon-Jones
University of New South Wales
This module provides a brief overview of the neuroscience of emotion. It integrates findings from human and animal research to describe the brain networks and associated neurotransmitters involved in basic affective systems.
learning objectives
• Define affective neuroscience.
• Describe neuroscience techniques used to study emotions in humans and animals.
• Name five emotional systems and their associated neural structures and neurotransmitters.
• Give examples of exogenous chemicals (e.g., drugs) that influence affective systems, and discuss their effects.
• Discuss multiple affective functions of the amygdala and the nucleus accumbens.
• Name several specific human emotions, and discuss their relationship to the affective systems of nonhuman animals.
Affective Neuroscience: What is it?
Affective neuroscience examines how the brain creates emotional responses. Emotions are psychological phenomena that involve changes to the body (e.g., facial expression), changes in autonomic nervous system activity, feeling states (subjective responses), and urges to act in specific ways (motivations; Izard, 2010). Affective neuroscience aims to understand how matter (brain structures and chemicals) creates one of the most fascinating aspects of mind, the emotions. Affective neuroscience uses unbiased, observable measures that provide credible evidence to other sciences and laypersons on the importance of emotions. It also leads to biologically based treatments for affective disorders (e.g., depression).
The human brain and its responses, including emotions, are complex and flexible. In comparison, nonhuman animals possess simpler nervous systems and more basic emotional responses. Invasive neuroscience techniques, such as electrode implantation, lesioning, and hormone administration, can be more easily used in animals than in humans. Human neuroscience must rely primarily on noninvasive techniques such as electroencephalography (EEG) and functional magnetic resonance imaging (fMRI), and on studies of individuals with brain lesions caused by accident or disease. Thus, animal research provides useful models for understanding affective processes in humans. Affective circuits found in other species, particularly social mammals such as rats, dogs, and monkeys, function similarly to human affective networks, although nonhuman animals’ brains are more basic.
In humans, emotions and their associated neural systems have additional layers of complexity and flexibility. Compared to animals, humans experience a vast variety of nuanced and sometimes conflicting emotions. Humans also respond to these emotions in complex ways, such that conscious goals, values, and other cognitions influence behavior in addition to emotional responses. However, in this module we focus on the similarities between organisms, rather than the differences. We often use the term “organism” to refer to the individual who is experiencing an emotion or showing evidence of particular neural activations. An organism could be a rat, a monkey, or a human.
Across species, emotional responses are organized around the organism’s survival and reproductive needs. Emotions influence perception, cognition, and behavior to help organisms survive and thrive (Farb, Chapman, & Anderson, 2013). Networks of structures in the brain respond to different needs, with some overlap between different emotions. Specific emotions are not located in a single structure of the brain. Instead, emotional responses involve networks of activation, with many parts of the brain activated during any emotional process. In fact, the brain circuits involved in emotional reactions include nearly the entire brain (Berridge & Kringelbach, 2013). Brain circuits located deep within the brain below the cerebral cortex are primarily responsible for generating basic emotions (Berridge & Kringelbach, 2013; Panksepp & Biven, 2012). In the past, research attention was focused on specific brain structures that will be reviewed here, but future research may find that additional areas of the brain are also important in these processes.
Basic Emotions
Desire: The neural systems of reward seeking
One of the most important affective neuronal systems relates to feelings of desire, or the appetite for rewards. Researchers refer to these appetitive processes using terms such as “wanting” (Berridge & Kringelbach, 2008), “seeking” (Panksepp & Biven, 2012), or “behavioural activation sensitivity” (Gray, 1987). When the appetitive system is aroused, the organism shows enthusiasm, interest, and curiosity. These neural circuits motivate the animal to move through its environment in search of rewards such as appetizing foods, attractive sex partners, and other pleasurable stimuli. When the appetitive system is underaroused, the organism appears depressed and helpless.
Much evidence for the structures involved in this system comes from animal research using direct brain stimulation. When an electrode is implanted in the lateral hypothalamus or in cortical or mesencephalic regions to which the hypothalamus is connected, animals will press a lever to deliver electrical stimulation, suggesting that they find the stimulation pleasurable. The regions in the desire system also include the amygdala, nucleus accumbens, and frontal cortex (Panksepp & Biven, 2012). The neurotransmitter dopamine, produced in the mesolimbic and mesocortical dopamine circuits, activates these regions. It creates a sense of excitement, meaningfulness, and anticipation. These structures are also sensitive to drugs such as cocaine and amphetamines, chemicals that have similar effects to dopamine (Panksepp & Biven, 2012).
Research in both humans and nonhuman animals shows that the left frontal cortex (compared to the right frontal cortex) is more active during appetitive emotions such as desire and interest. Researchers first noted that persons who had suffered damage to the left frontal cortex developed depression, whereas those with damage to the right frontal cortex developed mania (Goldstein, 1939). The relationship between left frontal activation and approach-related emotions has been confirmed in healthy individuals using EEG and fMRI (Berkman & Lieberman, 2010). For example, increased left frontal activation occurs in 2- to 3-day-old infants when sucrose is placed on their tongues (Fox & Davidson, 1986), and in hungry adults as they view pictures of desirable desserts (Gable & Harmon-Jones, 2008). In addition, greater left frontal activity in appetitive situations has been found to relate to dopamine (Wacker, Mueller, Pizzagalli, Hennig, & Stemmler, 2013).
“Liking”: The neural circuits of pleasure and enjoyment
Surprisingly, the amount of desire an individual feels toward a reward need not correspond to how much he or she likes that reward. This is because the neural structures involved in the enjoyment of rewards are different from the structures involved in the desire for the rewards. “Liking” (e.g., enjoyment of a sweet liquid) can be measured in babies and nonhuman animals by measuring licking speed, tongue protrusions, and happy facial expressions, whereas “wanting” (desire) is shown by the willingness to work hard to obtain a reward (Berridge & Kringelbach, 2008). Liking has been distinguished from wanting in research on topics such as drug abuse. For example, drug addicts often desire drugs even when they know that the ones available will not provide pleasure (Stewart, de Wit, & Eikelboom, 1984).
Research on liking has focused on a small area within the nucleus accumbens and on the posterior half of the ventral pallidum. These brain regions are sensitive to opioids and endocannabinoids. Stimulation of other regions of the reward system increases wanting, but does not increase liking, and in some cases even decreases liking. The research on the distinction between desire and enjoyment contributes to the understanding of human addiction, particularly why individuals often continue to frantically pursue rewards such as cocaine, opiates, gambling, or sex, even when they no longer experience pleasure from obtaining these rewards due to habituation.
The experience of pleasure also involves the orbitofrontal cortex. Neurons in this region fire when monkeys taste, or merely see pictures of, desirable foods. In humans, this region is activated by pleasant stimuli including money, pleasant smells, and attractive faces (Gottfried, O’Doherty & Dolan, 2002; O’Doherty, Deichmann, Critchley, & Dolan, 2002; O’Doherty, Kringelbach, Rolls, Hornak, & Andrews, 2001; O’Doherty, Winston, Critchley, Perrett, Burt, & Dolan, 2003).
Fear: The neural system of freezing and fleeing
Fear is an unpleasant emotion that motivates avoidance of potentially harmful situations. Slight stimulation of the fear-related areas in the brain causes animals to freeze, whereas intense stimulation causes them to flee. The fear circuit extends from the central amygdala to the periaqueductal gray in the midbrain. These structures are sensitive to glutamate, corticotrophin releasing factor, adreno-cortico-trophic hormone, cholecystokinin, and several different neuropeptides. Benzodiazepines and other tranquilizers inhibit activation in these areas (Panksepp & Biven, 2012).
The role of the amygdala in fear responses has been extensively studied. Perhaps because fear is so important to survival, two pathways send signals to the amygdala from the sensory organs. When an individual sees a snake, for example, the sensory information travels from the eye to the thalamus and then to the visual cortex. The visual cortex sends the information on to the amygdala, provoking a fear response. However, the thalamus also quickly sends the information straight to the amygdala, so that the organism can react before consciously perceiving the snake (LeDoux, Farb, & Ruggiero, 1990). The pathway from the thalamus to the amygdala is fast but less accurate than the slower pathway from the visual cortex. Damage to the amygdala or areas of the ventral hippocampus interferes with fear conditioning in both humans and nonhuman animals (LeDoux, 1996).
Rage: The circuits of anger and attack
Anger or rage is an arousing, unpleasant emotion that motivates organisms to approach and attack (Harmon-Jones, Harmon-Jones, & Price, 2013). Anger can be evoked through goal frustration, physical pain, or physical restraint. In territorial animals, anger is provoked by a stranger entering the organism’s home territory (Blanchard & Blanchard, 2003). The neural networks for anger and fear are near one another, but separate (Panksepp & Biven, 2012). They extend from the medial amygdala, through specific parts of the hypothalamus, and into the periaqueductal gray of the midbrain. The anger circuits are linked to the appetitive circuits, such that lack of an anticipated reward can provoke rage. In addition, when humans are angered, they show increased left frontal cortical activation, supporting the idea that anger is an approach-related emotion (Harmon-Jones et al., 2013). The neurotransmitters involved in rage are not yet well understood, but Substance P may play an important role (Panksepp & Biven, 2012). Other neurochemicals that may be involved in anger include testosterone (Peterson & Harmon-Jones, 2012) and arginine-vasopressin (Heinrichs, von Dawans, & Domes, 2009). Several chemicals inhibit the rage system, including opioids and high doses of antipsychotics, such as chlorpromazine (Panksepp & Biven, 2012).
Love: The neural systems of care and attachment
For social animals such as humans, attachment to other members of the same species produces the positive emotions of attachment: love, warm feelings, and affection. The emotions that motivate nurturing behavior (e.g., maternal care) are distinguishable from those that motivate staying close to an attachment figure in order to receive care and protection (e.g., infant attachment). Important regions for maternal nurturing include the dorsal preoptic area (Numan & Insel, 2003) and the bed nucleus of the stria terminalis (Panksepp, 1998). These regions overlap with the areas involved in sexual desire, and are sensitive to some of the same neurotransmitters, including oxytocin, arginine-vasopressin, and endogenous opioids (endorphins and enkephalins).
Grief: The neural networks of loneliness and panic
The neural networks involved in infant attachment are also sensitive to separation. These regions produce the painful emotions of grief, panic, and loneliness. When infant humans or other infant mammals are separated from their mothers, they produce distress vocalizations, or crying. The attachment circuits are those that cause organisms to produce distress vocalizations when electrically stimulated.
The attachment system begins in the midbrain periaqueductal gray, very close to the area that produces physical pain responses, suggesting that it may have originated from the pain circuits (Panksepp, 1998). Separation distress can also be evoked by stimulating the dorsomedial thalamus, ventral septum, dorsal preoptic region, and areas in the bed nucleus of stria terminalis (near sexual and maternal circuits; Panksepp, Normansell, Herman, Bishop, & Crepeau, 1988).
These regions are sensitive to endogenous opiates, oxytocin, and prolactin. All of these neurotransmitters prevent separation distress. Opiate drugs such as morphine and heroin, as well as nicotine, artificially produce feelings of pleasure and gratification, similar to those normally produced during positive social interactions. This may explain why these drugs are addictive. Panic attacks appear to be an intense form of separation distress triggered by the attachment system, and panic can be effectively relieved by opiates. Testosterone also reduces separation distress, perhaps by reducing attachment needs. Consistent with this, panic attacks are more common in women than in men.
Plasticity: Experiences can alter the brain
The responses of specific neural regions may be modified by experience. For example, the front shell of the nucleus accumbens is generally involved in appetitive behaviors, such as eating, and the back shell is generally involved in fearful defensive behaviors (Reynolds & Berridge, 2001, 2002). Research using human neuroimaging has also revealed this front–back distinction in the functions of the nucleus accumbens (Seymour, Daw, Dayan, Singer, & Dolan, 2007). However, when rats are exposed to stressful environments, their fear-generating regions expand toward the front, filling almost 90% of the nucleus accumbens shell. On the other hand, when rats are exposed to preferred home environments, their fear-generating regions shrink and the appetitive regions expand toward the back, filling approximately 90% of the shell (Reynolds & Berridge, 2008).
Brain structures have multiple functions
Although much affective neuroscience research has emphasized whole structures, such as the amygdala and nucleus accumbens, it is important to note that many of these structures are more accurately referred to as complexes. They include distinct groups of nuclei that perform different tasks. At present, human neuroimaging techniques such as fMRI are unable to examine the activity of individual nuclei in the way that invasive animal neuroscience can. For instance, the amygdala of the nonhuman primate can be divided into 13 nuclei and cortical areas (Freese & Amaral, 2009). These regions of the amygdala perform different functions. The central nucleus sends outputs involving brainstem areas that result in innate emotional expressions and associated physiological responses. The basal nucleus is connected with striatal areas that are involved with actions such as running toward safety. Furthermore, it is not possible to make one-to-one maps of emotions onto brain regions. For example, extensive research has examined the involvement of the amygdala in fear, but research has also shown that the amygdala is active during uncertainty (Whalen, 1998) as well as positive emotions (Anderson et al., 2003; Schulkin, 1990).
Conclusion
Research in affective neuroscience has contributed to knowledge regarding emotional, motivational, and behavioral processes. The study of the basic emotional systems of nonhuman animals provides information about the organization and development of more complex human emotions. Although much still remains to be discovered, current findings in affective neuroscience have already influenced our understanding of drug use and abuse, psychological disorders such as panic disorder, and complex human emotions such as desire and enjoyment, grief and love.
Outside Resources
Video: A 1-hour interview with Jaak Panksepp, the father of affective neuroscience
Video: A 15-minute interview with Kent Berridge on pleasure in the brain
Video: A 5-minute interview with Joseph LeDoux on the amygdala and fear
Web: Brain anatomy interactive 3D model
http://www.pbs.org/wnet/brain/3d/index.html
Discussion Questions
1. The neural circuits of “liking” are different from the circuits of “wanting.” How might this relate to the problems people encounter when they diet, fight addictions, or try to change other habits?
2. The structures and neurotransmitters that produce pleasure during social contact also produce panic and grief when organisms are deprived of social contact. How does this contribute to an understanding of love?
3. Research shows that stressful environments increase the area of the nucleus accumbens that is sensitive to fear, whereas preferred environments increase the area that is sensitive to rewards. How might these changes be adaptive?
Vocabulary
Affect
An emotional process; includes moods, subjective feelings, and discrete emotions.
Amygdala
Two almond-shaped structures located in the medial temporal lobes of the brain.
Hypothalamus
A brain structure located below the thalamus and above the brain stem.
Neuroscience
The study of the nervous system.
Nucleus accumbens
A region of the basal forebrain located in front of the preoptic region.
Orbital frontal cortex
A region of the frontal lobes of the brain above the eye sockets.
Periaqueductal gray
The gray matter in the midbrain near the cerebral aqueduct.
Preoptic region
A part of the anterior hypothalamus.
Stria terminalis
A band of fibers that runs along the top surface of the thalamus.
Thalamus
A structure in the midline of the brain located between the midbrain and the cerebral cortex.
Visual cortex
The part of the brain that processes visual information, located in the back of the brain. | textbooks/socialsci/Psychology/Introductory_Psychology/Psychology_as_a_Biological_Science_(Noba)/09%3A_Emotions_and_Motivation/9.01%3A_Affective_Neuroscience.txt |
By Hyisung Hwang and David Matsumoto
San Francisco State University
Emotions play a crucial role in our lives because they have important functions. This module describes those functions, dividing the discussion into three areas: the intrapersonal, the interpersonal, and the social and cultural functions of emotions. The section on the intrapersonal functions of emotion describes the roles that emotions play within each of us individually; the section on the interpersonal functions of emotion describes the meanings of emotions to our relationships with others; and the section on the social and cultural functions of emotion describes the roles and meanings that emotions have to the maintenance and effective functioning of our societies and cultures at large. All in all we will see that emotions are a crucially important aspect of our psychological composition, having meaning and function to each of us individually, to our relationships with others in groups, and to our societies as a whole.
learning objectives
• Gain an appreciation of the importance of emotion in human life.
• Understand the functions and meanings of emotion in three areas of life: the intrapersonal, interpersonal, and social–cultural.
• Give examples of the role and function of emotion in each of the three areas described
Introduction
It is impossible to imagine life without emotion. We treasure our feelings—the joy at a ball game, the pleasure of the touch of a loved one, or the fun with friends on a night out. Even negative emotions are important, such as the sadness when a loved one dies, the anger when violated, the fear that overcomes us in a scary or unknown situation, or the guilt or shame toward others when our sins are made public. Emotions color life experiences and give those experiences meaning and flavor.
In fact, emotions play many important roles in people’s lives and have been the topic of scientific inquiry in psychology for well over a century (Cannon, 1927; Darwin, 1872; James, 1890). This module explores why we have emotions and why they are important. Doing so requires us to understand the function of emotions, and this module does so below by dividing the discussion into three sections. The first concerns the intrapersonal functions of emotion, which refer to the role that emotions play within each of us individually. The second concerns the interpersonal functions of emotion, which refer to the role emotions play between individuals within a group. The third concerns the social and cultural functions of emotion, which refer to the role that emotions play in the maintenance of social order within a society. All in all, we will see that emotions inform us of who we are, what our relationships with others are like, and how to behave in social interactions. Emotions give meaning to events; without emotions, those events would be mere facts. Emotions help coordinate interpersonal relationships. And emotions play an important role in the cultural functioning of keeping human societies together.
Intrapersonal Functions of Emotion
Emotions Help Us Act Quickly with Minimal Conscious Awareness
Emotions are rapid information-processing systems that help us act with minimal thinking (Tooby & Cosmides, 2008). Problems associated with birth, battle, death, and seduction have occurred throughout evolutionary history and emotions evolved to aid humans in adapting to those problems rapidly and with minimal conscious cognitive intervention. If we did not have emotions, we could not make rapid decisions concerning whether to attack, defend, flee, care for others, reject food, or approach something useful, all of which were functionally adaptive in our evolutionary history and helped us to survive. For instance, drinking spoiled milk or eating rotten eggs has negative consequences for our welfare. The emotion of disgust, however, helps us immediately take action by not ingesting them in the first place or by vomiting them out. This response is adaptive because it aids, ultimately, in our survival and allows us to act immediately without much thinking. In some instances, taking the time to sit and rationally think about what to do, calculating cost–benefit ratios in one’s mind, is a luxury that might cost one one’s life. Emotions evolved so that we can act without that depth of thinking.
Emotions Prepare the Body for Immediate Action
Emotions prepare us for behavior. When triggered, emotions orchestrate systems such as perception, attention, inference, learning, memory, goal choice, motivational priorities, physiological reactions, motor behaviors, and behavioral decision making (Cosmides & Tooby, 2000; Tooby & Cosmides, 2008). Emotions simultaneously activate certain systems and deactivate others in order to prevent the chaos of competing systems operating at the same time, allowing for coordinated responses to environmental stimuli (Levenson, 1999). For instance, when we are afraid, our bodies shut down temporarily unneeded digestive processes, resulting in saliva reduction (a dry mouth); blood flows disproportionately to the lower half of the body; the visual field expands; and air is breathed in, all preparing the body to flee. Emotions initiate a system of components that includes subjective experience, expressive behaviors, physiological reactions, action tendencies, and cognition, all for the purposes of specific actions; the term “emotion” is, in reality, a metaphor for these reactions.
One common misunderstanding many people have when thinking about emotions, however, is the belief that emotions must always directly produce action. This is not true. Emotion certainly prepares the body for action; but whether people actually engage in action is dependent on many factors, such as the context within which the emotion has occurred, the target of the emotion, the perceived consequences of one’s actions, previous experiences, and so forth (Baumeister, Vohs, DeWall, & Zhang, 2007; Matsumoto & Wilson, 2008). Thus, emotions are just one of many determinants of behavior, albeit an important one.
Emotions Influence Thoughts
Emotions are also connected to thoughts and memories. Memories are not just facts encoded in our brains; they are colored by the emotions felt at the time the events occurred (Wang & Ross, 2007). Thus, emotions serve as the neural glue that connects those disparate facts in our minds. That is why it is easier to remember happy thoughts when happy, and angry times when angry. Emotions also serve as the affective basis of many of the attitudes, values, and beliefs that we hold about the world and the people around us; without emotions, those attitudes, values, and beliefs would be mere statements, and it is emotions that give them meaning. Finally, emotions influence our thinking processes, sometimes constructively and sometimes not. It is difficult to think critically and clearly when we feel intense emotions, and easier when we are not overwhelmed by them (Matsumoto, Hirayama, & LeRoux, 2006).
Emotions Motivate Future Behaviors
Because emotions prepare our bodies for immediate action, influence thoughts, and can be felt, they are important motivators of future behavior. Many of us strive to experience the feelings of satisfaction, joy, pride, or triumph in our accomplishments and achievements. At the same time, we also work very hard to avoid strong negative feelings; for example, once we have felt the emotion of disgust when drinking the spoiled milk, we generally work very hard to avoid having those feelings again (e.g., checking the expiration date on the label before buying the milk, smelling the milk before drinking it, watching if the milk curdles in one’s coffee before drinking it). Emotions, therefore, not only influence immediate actions but also serve as an important motivational basis for future behaviors.
Interpersonal Functions of Emotion
Emotions are expressed both verbally through words and nonverbally through facial expressions, voices, gestures, body postures, and movements. We are constantly expressing emotions when interacting with others, and others can reliably judge those emotional expressions (Elfenbein & Ambady, 2002; Matsumoto, 2001); thus, emotions have signal value to others and influence others and our social interactions. Emotions and their expressions communicate information to others about our feelings, intentions, relationship with the target of the emotions, and the environment. Because emotions have this communicative signal value, they help solve social problems by evoking responses from others, by signaling the nature of interpersonal relationships, and by providing incentives for desired social behavior (Keltner, 2003).
Emotional Expressions Facilitate Specific Behaviors in Perceivers
Because facial expressions of emotion are universal social signals, they contain meaning not only about the expressor’s psychological state but also about that person’s intent and subsequent behavior. This information affects what the perceiver is likely to do. People observing fearful faces, for instance, are more likely to produce approach-related behaviors, whereas people who observe angry faces are more likely to produce avoidance-related behaviors (Marsh, Ambady, & Kleck, 2005). Even subliminal presentation of smiles produces increases in how much beverage people pour and consume and how much they are willing to pay for it; presentation of angry faces decreases these behaviors (Winkielman, Berridge, & Wilbarger, 2005). Also, emotional displays evoke specific, complementary emotional responses from observers; for example, anger evokes fear in others (Dimberg & Ohman, 1996; Esteves, Dimberg, & Ohman, 1994), whereas distress evokes sympathy and aid (Eisenberg et al., 1989).
Emotional Expressions Signal the Nature of Interpersonal Relationships
Emotional expressions provide information about the nature of the relationships among interactants. Some of the most important and provocative findings in this area come from studies of married couples (Gottman & Levenson, 1992; Gottman, Levenson, & Woodin, 2001). In this research, married couples visited a laboratory after having not seen each other for 24 hours and then engaged in intimate conversations about daily events or issues of conflict. Discrete expressions of contempt, especially by the men, and disgust, especially by the women, predicted later marital dissatisfaction and even divorce.
Emotional Expressions Provide Incentives for Desired Social Behavior
Facial expressions of emotion are important regulators of social interaction. In the developmental literature, this influence has been investigated under the concept of social referencing (Klinnert, Campos, & Sorce, 1983): the process whereby infants seek out information from others to clarify a situation and then use that information to act. To date, the strongest demonstration of social referencing comes from work on the visual cliff. In the first study to investigate this concept, Campos and colleagues (Sorce, Emde, Campos, & Klinnert, 1985) placed mothers on the far end of the “cliff” from the infant. Mothers first smiled to the infants and placed a toy on top of the safety glass to attract them; infants invariably began crawling to their mothers. When the infants were in the center of the table, however, the mother then posed an expression of fear, sadness, anger, interest, or joy. The results were clearly different for the different faces: no infant crossed the table when the mother showed fear; only 6% did when the mother posed anger; 33% crossed when the mother posed sadness; and approximately 75% of the infants crossed when the mother posed joy or interest.
Other studies provide similar support for facial expressions as regulators of social interaction. In one study (Bradshaw, 1986), experimenters posed neutral, angry, or disgusted facial expressions toward babies as they moved toward an object and measured the amount of inhibition the babies showed in touching the object. The results for 10- and 15-month-olds were the same: anger produced the greatest inhibition, followed by disgust, with neutral the least. This study was later replicated (Hertenstein & Campos, 2004) using joy and disgust expressions, altering the method so that the infants were not allowed to touch the toy (compared with a distractor object) until one hour after exposure to the expression. At 14 months of age, significantly more infants touched the toy when they had seen joyful expressions, but fewer touched the toy when they had seen disgust.
Social and Cultural Functions of Emotion
If you stop to think about the many things we take for granted in our daily lives, you cannot help but conclude that modern human life is a colorful tapestry of many groups and individual lives woven together in a complex yet functional way. For example, when you’re hungry, you might go to the local grocery store and buy some food. Have you ever stopped to think about how you’re able to do that? You might buy a banana grown in Southeast Asia by farmers who planted the plant, cared for it, and picked the fruit. They handed that fruit to a distribution chain in which many people, using tools such as cranes, trucks, cargo bins, ships, and airplanes (themselves created by many other people), brought the banana to your store. The store had people to care for the banana until you came for it and to exchange it for your money. You may have traveled to the store in a vehicle produced elsewhere in the world by others, and you were probably wearing clothes produced by still other people somewhere else.
Thus, human social life is complex. Individuals are members of multiple groups, with multiple social roles, norms, and expectations, and people move rapidly in and out of the multiple groups of which they are members. Moreover, much of human social life is unique because it revolves around cities, where many people of disparate backgrounds come together. This creates the enormous potential for social chaos, which can easily occur if individuals are not coordinated well and relationships not organized systematically.
One of the important functions of culture is to provide this necessary coordination and organization. Doing so allows individuals and groups to negotiate the social complexity of human social life, thereby maintaining social order and preventing social chaos. Culture does this by providing a meaning and information system to its members, which is shared by a group and transmitted across generations, that allows the group to meet basic needs of survival, pursue happiness and well-being, and derive meaning from life (Matsumoto & Juang, 2013). Culture is what allowed the banana from southeast Asia to appear on your table.
Cultural transmission of the meaning and information system to its members is, therefore, a crucial aspect of culture. One of the ways this transmission occurs is through the development of worldviews (including attitudes, values, beliefs, and norms) related to emotions (Matsumoto & Hwang, 2013; Matsumoto et al., 2008). Worldviews related to emotions provide guidelines for desirable emotions that facilitate norms for regulating individual behaviors and interpersonal relationships. Our cultural backgrounds tell us which emotions are ideal to have, and which are not (Tsai, Knutson, & Fung, 2006). The cultural transmission of information related to emotions occurs in many ways, from childrearers to children, as well as from the cultural products available in our world, such as books, movies, ads, and the like (Schönpflug, 2009; Tsai, Louie, Chen, & Uchida, 2007).
Cultures also inform us about what to do with our emotions—that is, how to manage or modify them—when we experience them. One of the ways in which this is done is through the management of our emotional expressions through cultural display rules (Friesen, 1972). These are rules that are learned early in life that specify the management and modification of our emotional expressions according to social circumstances. Thus, we learn that “big boys don’t cry” or to laugh at the boss’s jokes even though they’re not funny. By affecting how individuals express their emotions, culture also influences how people experience them as well.
Because one of the major functions of culture is to maintain social order and thereby ensure group efficiency and survival, cultures create worldviews, rules, guidelines, and norms concerning emotions, for emotions have important intra- and interpersonal functions, as described above, and are important motivators of behavior. Norms concerning emotion and its regulation in all cultures serve the purpose of maintaining social order. Cultural worldviews and norms help us manage and modify our emotional reactions (and thus behaviors), both by shaping the kinds of emotional experiences we have in the first place and by guiding our reactions and subsequent behaviors once we have them. In this way, our culturally moderated emotions help us engage in socially appropriate behaviors, as defined by our cultures, reducing social complexity and increasing social order while avoiding social chaos. All of this allows us to live relatively harmonious and constructive lives in groups. If cultural worldviews and norms about emotions did not exist, people would simply have all kinds of emotional experiences and express and act on them in unpredictable and potentially harmful ways. In that case, it would be very difficult for groups and societies to function effectively, and perhaps even for humans to survive as a species. Thus, emotions play a critical role in the successful functioning of any society and culture.
Outside Resources
Alberta, G. M., Rieckmann, T. R., & Rush, J. D. (2000). Issues and recommendations for teaching an ethnic/culture-based course. Teaching of Psychology, 27, 102–107. doi:10.1207/S15328023TOP2702_05
http://top.sagepub.com/content/27/2/102.short
CrashCourse (2014, August 4). Feeling all the feels: Crash course psychology #25. [Video file]. Retrieved from:
Hughes, A. (2011). Exercises and demonstrations to promote student engagement in motivation and emotion courses. In R. Miller, E. Balcetis, S. Burns, D. Daniel, B. Saville, & W. Woody (Eds.), Promoting student engagement: Volume 2. Activities, exercises and demonstrations for psychology courses (pp. 79–82). Washington, DC: Society for the Teaching of Psychology, American Psychological Association.
http://teachpsych.org/ebooks/pse2011/vol2/index.php
Johnston, E., & Olson, L. (2015). The feeling brain: The biology and psychology of emotions. New York, NY: W.W. Norton & Company.
http://books.wwnorton.com/books/The-Feeling-Brain/
NPR News: Science Of Sadness And Joy: 'Inside Out' Gets Childhood Emotions Right
www.npr.org/sections/health-s...emotions-right
Online Psychology Laboratory: Motivation and Emotion resources
opl.apa.org/Resources.aspx#Motivation
Web: See how well you can read other people’s facial expressions of emotion
http://www.humintell.com/free-demos/
Discussion Questions
1. When emotions occur, why do they simultaneously activate certain physiological and psychological systems in the body and deactivate others?
2. Why is it difficult for people to act rationally and think happy thoughts when they are angry? Conversely, why is it difficult to remember sad memories or have sad thoughts when people are happy?
3. You’re walking down a deserted street when you come across a stranger who looks scared. What would you say? What would you do? Why?
4. You’re walking down a deserted street when you come across a stranger who looks angry. What would you say? What would you do? Why?
5. Think about the messages children receive from their environment (such as from parents, mass media, the Internet, Hollywood movies, billboards, and storybooks). In what ways do these messages influence the kinds of emotions that children should and should not feel?
Vocabulary
Cultural display rules
These are rules that are learned early in life that specify the management and modification of emotional expressions according to social circumstances. Cultural display rules can work in a number of different ways. For example, they can require individuals to express emotions “as is” (i.e., as they feel them), to exaggerate their expressions to show more than what is actually felt, to tone down their expressions to show less than what is actually felt, to conceal their feelings by expressing something else, or to show nothing at all.
Interpersonal
This refers to the relationship or interaction between two or more individuals in a group. Thus, the interpersonal functions of emotion refer to the effects of one’s emotion on others, or to the relationship between oneself and others.
Intrapersonal
This refers to what occurs within oneself. Thus, the intrapersonal functions of emotion refer to the effects of emotion on individuals that occur physically inside their bodies and psychologically inside their minds.
Social and cultural
Society refers to a system of relationships between individuals and groups of individuals; culture refers to the meaning and information afforded to that system that is transmitted across generations. Thus, the social and cultural functions of emotion refer to the effects that emotions have on the functioning and maintenance of societies and cultures.
Social referencing
This refers to the process whereby individuals look for information from others to clarify a situation, and then use that information to act. Thus, individuals will often use the emotional expressions of others as a source of information to make decisions about their own behavior.
By Sudeep Bhatia and George Loewenstein
Carnegie Mellon University
Our thoughts and behaviors are strongly influenced by affective experiences known as drive states. These drive states motivate us to fulfill goals that are beneficial to our survival and reproduction. This module provides an overview of key drive states, including information about their neurobiology and their psychological effects.
learning objectives
• Identify the key properties of drive states
• Describe biological goals accomplished by drive states
• Give examples of drive states
• Outline the neurobiological basis of drive states such as hunger and arousal
• Discuss the main moderators and determinants of drive states such as hunger and arousal
Introduction
What is the longest you’ve ever gone without eating? A couple of hours? An entire day? How did it feel? Humans rely critically on food for nutrition and energy, and the absence of food can create drastic changes, not only in physical appearance, but in thoughts and behaviors. If you’ve ever fasted for a day, you probably noticed how hunger can take over your mind, directing your attention to foods you could be eating (a cheesy slice of pizza, or perhaps some sweet, cold ice cream), and motivating you to obtain and consume these foods. And once you have eaten and your hunger has been satisfied, your thoughts and behaviors return to normal.
Hunger is a drive state, an affective experience (something you feel, like the sensation of being tired or hungry) that motivates organisms to fulfill goals that are generally beneficial to their survival and reproduction. Like other drive states, such as thirst or sexual arousal, hunger has a profound impact on the functioning of the mind. It affects psychological processes, such as perception, attention, emotion, and motivation, and influences the behaviors that these processes generate.
Key Properties of Drive States
Drive states differ from other affective or emotional states in terms of the biological functions they accomplish. Whereas all affective states possess valence (i.e., they are positive or negative) and serve to motivate approach or avoidance behaviors (Zajonc, 1998), drive states are unique in that they generate behaviors that result in specific benefits for the body. For example, hunger directs individuals to eat foods that increase blood sugar levels in the body, while thirst causes individuals to drink fluids that increase water levels in the body.
Different drive states have different triggers. Most drive states respond to both internal and external cues, but the combinations of internal and external cues, and the specific types of cues, differ between drives. Hunger, for example, depends on internal, visceral signals as well as sensory signals, such as the sight or smell of tasty food. Different drive states also result in different cognitive and emotional states, and are associated with different behaviors. Yet despite these differences, there are a number of properties common to all drive states.
Homeostasis
Humans, like all organisms, need to maintain a stable state in their various physiological systems. For example, the excessive loss of body water results in dehydration, a dangerous and potentially fatal state. However, too much water can be damaging as well. Thus, a moderate and stable level of body fluid is ideal. The tendency of an organism to maintain this stability across all the different physiological systems in the body is called homeostasis.
Homeostasis is maintained via two key factors. First, the state of the system being regulated must be monitored and compared to an ideal level, or a set point. Second, there need to be mechanisms for moving the system back to this set point—that is, to restore homeostasis when deviations from it are detected. To better understand this, think of the thermostat in your own home. It detects when the current temperature in the house is different than the temperature you have it set at (i.e., the set point). Once the thermostat recognizes the difference, the heating or air conditioning turns on to bring the overall temperature back to the designated level.
Many homeostatic mechanisms, such as blood circulation and immune responses, are automatic and nonconscious. Others, however, involve deliberate action. Most drive states motivate action to restore homeostasis using both “punishments” and “rewards.” Imagine that these homeostatic mechanisms are like molecular parents. When you behave poorly by departing from the set point (such as not eating or being somewhere too cold), they raise their voice at you. You experience this as the bad feelings, or “punishments,” of hunger, thirst, or feeling too cold or too hot. However, when you behave well (such as eating nutritious foods when hungry), these homeostatic parents reward you with the pleasure that comes from any activity that moves the system back toward the set point. For example, when body temperature declines below the set point, any activity that helps to restore homeostasis (such as putting one’s hand in warm water) feels pleasurable; and likewise, when body temperature rises above the set point, anything that cools it feels pleasurable.
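The set-point logic described above can be sketched as a simple negative-feedback loop. The following toy model is purely illustrative, not physiology: the variable names, the gain value, and the number of correction steps are all assumptions chosen for clarity. A regulated variable is compared to a set point, and the deviation drives a corrective action that shrinks the gap on each step.

```python
# Toy model of a homeostatic feedback loop (illustrative only; the
# constants below are arbitrary assumptions, not physiological values).

SET_POINT = 37.0  # ideal value of the regulated variable (e.g., body temp, °C)
GAIN = 0.5        # how strongly each corrective action reduces the deviation

def drive_signal(current: float) -> float:
    """Deviation from the set point; larger deviations mean a stronger
    drive (experienced as a stronger 'punishment')."""
    return SET_POINT - current

def regulate(current: float, steps: int = 10) -> float:
    """Repeatedly apply corrective action; each step moves the system
    partway back toward the set point (the 'rewarded' direction)."""
    for _ in range(steps):
        current += GAIN * drive_signal(current)
    return current

print(round(regulate(35.0), 3))  # → 36.998 (starts below set point, converges up)
```

Note that the same loop corrects deviations in either direction: starting above the set point (e.g., `regulate(39.0)`) converges downward, mirroring how cooling feels pleasurable when body temperature is too high and warming feels pleasurable when it is too low.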
The Narrowing of Attention
As drive states intensify, they direct attention toward elements, activities, and forms of consumption that satisfy the biological needs associated with the drive. Hunger, for example, draws attention toward food. Outcomes and objects that are not related to satisfying hunger lose their value (Easterbrook, 1959). For instance, has anyone ever invited you to do a fun activity while you were hungry? Likely your response was something like: “I’m not doing anything until I eat first.” Indeed, at a sufficient level of intensity, individuals will sacrifice almost any quantity of goods that do not address the needs signaled by the drive state. For example, cocaine addicts, according to Gawin (1991:1581), “report that virtually all thoughts are focused on cocaine during binges; nourishment, sleep, money, loved ones, responsibility, and survival lose all significance.”
Drive states also produce a second form of attention-narrowing: a collapsing of time-perspective toward the present. That is, they make us impatient. While this form of attention-narrowing is particularly pronounced for the outcomes and behaviors directly related to the biological function being served by the drive state at issue (e.g., “I need food now”), it applies to general concerns for the future as well. Ariely and Loewenstein (2006), for example, investigated the impact of sexual arousal on the thoughts and behaviors of a sample of male undergraduates. These undergraduates were lent laptop computers that they took to their private residences, where they answered a series of questions, both in normal states and in states of high sexual arousal. Ariely and Loewenstein found that being sexually aroused made people extremely impatient for both sexual outcomes and for outcomes in other domains, such as those involving money. In another study Giordano et al. (2002) found that heroin addicts were more impatient with respect to heroin when they were craving it than when they were not. More surprisingly, they were also more impatient toward money (they valued delayed money less) when they were actively craving heroin.
Yet a third form of attention-narrowing involves thoughts and outcomes related to the self versus others. Intense drive states tend to narrow one’s focus inwardly and to undermine altruism—or the desire to do good for others. People who are hungry, in pain, or craving drugs tend to be selfish. Indeed, popular interrogation methods involve depriving individuals of sleep, food, or water, so as to trigger intense drive states leading the subject of the interrogation to divulge information that may betray comrades, friends, and family (Biderman, 1960).
Two Illustrative Drive States
Thus far we have considered drive states abstractly. We have discussed the ways in which they relate to other affective and motivational mechanisms, as well as their main biological purpose and general effects on thought and behavior. Yet, despite serving the same broader goals, different drive states are often remarkably different in terms of their specific properties. To understand some of these specific properties, we will explore two different drive states that play very important roles in determining behavior, and in ensuring human survival: hunger and sexual arousal.
Hunger
Hunger is a classic example of a drive state, one that results in thoughts and behaviors related to the consumption of food. Hunger is generally triggered by low glucose levels in the blood (Rolls, 2000), and behaviors resulting from hunger aim to restore homeostasis regarding those glucose levels. Various other internal and external cues can also cause hunger. For example, when fats are broken down in the body for energy, this initiates a chemical cue that the body should search for food (Greenberg, Smith, & Gibbs, 1990). External cues include the time of day, estimated time until the next feeding (hunger increases immediately prior to food consumption), and the sight, smell, taste, and even touch of food and food-related stimuli. Note that while hunger is a generic feeling, it has nuances that can provoke the eating of specific foods that correct for nutritional imbalances we may not even be conscious of. For example, a couple who was lost adrift at sea found they inexplicably began to crave the eyes of fish. Only later, after they had been rescued, did they learn that fish eyes are rich in vitamin C—a very important nutrient that they had been depleted of while lost in the ocean (Walker, 2014).
The hypothalamus (located in the lower, central part of the brain) plays a very important role in eating behavior. It is responsible for synthesizing and secreting various hormones. The lateral hypothalamus (LH) is concerned largely with hunger and, in fact, lesions (i.e., damage) of the LH can eliminate the desire for eating entirely—to the point that animals starve themselves to death unless kept alive by force feeding (Anand & Brobeck, 1951). Additionally, artificially stimulating the LH, using electrical currents, can generate eating behavior if food is available (Andersson, 1951).
Activation of the LH can not only increase the desirability of food but can also reduce the desirability of nonfood-related items. For example, Brendl, Markman, and Messner (2003) found that participants who were given a handful of popcorn to trigger hunger not only had higher ratings of food products, but also had lower ratings of nonfood products—compared with participants whose appetites were not similarly primed. That is, because eating had become more important, other non-food products lost some of their value.
Hunger is only part of the story of when and why we eat. A related process, satiation, refers to the decline of hunger and the eventual termination of eating behavior. Whereas the feeling of hunger gets you to start eating, the feeling of satiation gets you to stop. Perhaps surprisingly, hunger and satiation are two distinct processes, controlled by different circuits in the brain and triggered by different cues. Distinct from the LH, which plays an important role in hunger, the ventromedial hypothalamus (VMH) plays an important role in satiety. Though lesions of the VMH can cause an animal to overeat to the point of obesity, the relationship between the LH and the VMH is quite complicated. Rats with VMH lesions can also be quite finicky about their food (Teitelbaum, 1955).
Other brain areas, besides the LH and VMH, also play important roles in eating behavior. The sensory cortices (visual, olfactory, and taste), for example, are important in identifying food items. These areas provide informational value, however, not hedonic evaluations. That is, these areas help tell a person what is good or safe to eat, but they don’t provide the pleasure (or hedonic) sensations that actually eating the food produces. While many sensory functions are roughly stable across different psychological states, other functions, such as the detection of food-related stimuli, are enhanced when the organism is in a hungry drive state.
After identifying a food item, the brain also needs to determine its reward value, which affects the organism’s motivation to consume the food. The reward value ascribed to a particular item is, not surprisingly, sensitive to the level of hunger experienced by the organism. The hungrier you are, the greater the reward value of the food. Neurons in the areas where reward values are processed, such as the orbitofrontal cortex, fire more rapidly at the sight or taste of food when the organism is hungry relative to if it is satiated.
Sexual Arousal
A second drive state, especially critical to reproduction, is sexual arousal. Sexual arousal results in thoughts and behaviors related to sexual activity. As with hunger, it is generated by a large range of internal and external mechanisms that are triggered either after the extended absence of sexual activity or by the immediate presence and possibility of sexual activity (or by cues commonly associated with such possibilities). Unlike hunger, however, these mechanisms can differ substantially between males and females, indicating important evolutionary differences in the biological functions that sexual arousal serves for different sexes.
Sexual arousal and pleasure in males, for example, are strongly related to the preoptic area, a region in the anterior hypothalamus (or the front of the hypothalamus). If the preoptic area is damaged, male sexual behavior is severely impaired. For example, rats that have had prior sexual experiences will still seek out sexual partners after their preoptic area is lesioned. However, once having secured a sexual partner, rats with lesioned preoptic areas will show no further inclination to actually initiate sex.
For females, though, the preoptic area fulfills different roles, such as functions involved with eating behaviors. Instead, there is a different region of the brain, the ventromedial hypothalamus (the lower, central part), that plays a similar role for females as the preoptic area does for males. Neurons in the ventromedial hypothalamus determine the secretion of estradiol, an estrogen hormone that regulates sexual receptivity (or the willingness to accept a sexual partner). In many mammals, these neurons send impulses to the periaqueductal gray (a region in the midbrain) which is responsible for defensive behaviors, such as freezing immobility, running, increases in blood pressure, and other motor responses. Typically, these defensive responses might keep the female rat from interacting with the male one. However, during sexual arousal, these defensive responses are weakened and lordosis behavior, a physical sexual posture that serves as an invitation to mate, is initiated (Kow & Pfaff, 1998). Thus, while the preoptic area encourages males to engage in sexual activity, the ventromedial hypothalamus fulfills that role for females.
Other differences between males and females involve overlapping functions of neural modules. These neural modules often provide clues about the biological roles played by sexual arousal and sexual activity in males and females. Areas of the brain that are important for male sexuality overlap to a great extent with areas that are also associated with aggression. In contrast, areas important for female sexuality overlap extensively with those that are also connected to nurturance (Panksepp, 2004).
One region of the brain that seems to play an important role in sexual pleasure for both males and females is the septal nucleus, an area that receives reciprocal connections from many other brain regions, including the hypothalamus and the amygdala (a region of the brain primarily involved with emotions). This region shows considerable activity, in terms of rhythmic spiking, during sexual orgasm. It is also one of the brain regions that rats will most reliably voluntarily self-stimulate (Olds & Milner, 1954). In humans, placing a small amount of acetylcholine into this region, or stimulating it electrically, has been reported to produce a feeling of imminent orgasm (Heath, 1964).
Conclusion
Drive states are evolved motivational mechanisms designed to ensure that organisms take self-beneficial actions. In this module, we have reviewed key properties of drive states, such as homeostasis and the narrowing of attention. We have also discussed, in some detail, two important drive states—hunger and sexual arousal—and explored their underlying neurobiology and the ways in which various environmental and biological factors affect their properties.
There are many drive states besides hunger and sexual arousal that affect humans on a daily basis. Fear, thirst, exhaustion, exploratory and maternal drives, and drug cravings are all drive states that have been studied by researchers (see e.g., Buck, 1999; Van Boven & Loewenstein, 2003). Although these drive states share some of the properties discussed in this module, each also has unique features that allow it to effectively fulfill its evolutionary function.
One key difference between drive states is the extent to which they are triggered by internal as opposed to external stimuli. Thirst, for example, is induced both by decreased fluid levels and an increased concentration of salt in the body. Fear, on the other hand, is induced by perceived threats in the external environment. Drug cravings are triggered both by internal homeostatic mechanisms and by external visual, olfactory, and contextual cues. Other drive states, such as those pertaining to maternity, are triggered by specific events in the organism’s life. Differences such as these make the study of drive states a scientifically interesting and important endeavor. Drive states are rich in their diversity, and many questions involving their neurocognitive underpinnings, environmental determinants, and behavioral effects, have yet to be answered.
One final thing to consider, not discussed in this module, relates to the real-world consequences of drive states. Hunger, sexual arousal, and other drive states are all psychological mechanisms that have evolved gradually over millions of years. We share these drive states not only with our human ancestors but with other animals, such as monkeys, dogs, and rats. It is not surprising then that these drive states, at times, lead us to behave in ways that are ill-suited to our modern lives. Consider, for example, the obesity epidemic that is affecting countries around the world. Like other diseases of affluence, obesity is a product of drive states that are too easily fulfilled: homeostatic mechanisms that once worked well when food was scarce now backfire when meals rich in fat and sugar are readily available. Unrestricted sexual arousal can have similarly perverse effects on our well-being. Countless politicians have sacrificed their entire life’s work (not to mention their marriages) by indulging adulterous sexual impulses toward colleagues, staffers, prostitutes, and others over whom they have social or financial power. It is not an overstatement to say that many problems of the 21st century, from school massacres to obesity to drug addiction, are influenced by the mismatch between our drive states and our uniquely modern ability to fulfill them at a moment’s notice.
Outside Resources
Web: An open textbook chapter on homeostasis
http://en.wikibooks.org/wiki/Human_P...gy/Homeostasis
Web: Motivation and emotion in psychology
http://allpsych.com/psychology101/mo...n_emotion.html
Web: The science of sexual arousal
http://www.apa.org/monitor/apr03/arousal.aspx
Discussion Questions
1. The ability to maintain homeostasis is important for an organism’s survival. What are the ways in which homeostasis ensures survival? Do different drive states accomplish homeostatic goals differently?
2. Drive states result in the narrowing of attention toward the present and toward the self. Which drive states lead to the most pronounced narrowing of attention toward the present? Which drive states lead to the most pronounced narrowing of attention toward the self?
3. What are important differences between hunger and sexual arousal, and in what ways do these differences reflect the biological needs that hunger and sexual arousal have been evolved to address?
4. Some of the properties of sexual arousal vary across males and females. What other drive states affect males and females differently? Are there drive states that vary with other differences in humans (e.g., age)?
Vocabulary
Drive state
Affective experiences that motivate organisms to fulfill goals that are generally beneficial to their survival and reproduction.
Homeostasis
The tendency of an organism to maintain a stable state across all the different physiological systems in the body.
Homeostatic set point
The ideal level against which the current state of a regulated system is monitored and compared.
Hypothalamus
A portion of the brain involved in a variety of functions, including the secretion of various hormones and the regulation of hunger and sexual arousal.
Lordosis
A physical sexual posture in females that serves as an invitation to mate.
Preoptic area
A region in the anterior hypothalamus involved in generating and regulating male sexual behavior.
Reward value
A neuropsychological measure of an outcome’s affective importance to an organism.
Satiation
The state of being full to satisfaction and no longer desiring to take on more.
• 10.1: History of Mental Illness
This module is divided into three parts. The first is a brief introduction to various criteria we use to define or distinguish between normality and abnormality. The second, largest part is a history of mental illness from the Stone Age to the 20th century, with a special emphasis on the recurrence of three causal explanations for mental illness: supernatural, somatogenic, and psychogenic factors. The third part concludes with a brief description of the issue of diagnosis.
• 10.2: Mood Disorders
Mood disorders are extended periods of depressed, euphoric, or irritable moods that in combination with other symptoms cause the person significant distress and interfere with his or her daily life, often resulting in social and occupational difficulties. In this module, we describe major mood disorders, including their symptom presentations, general prevalence rates, and how and why the rates of these disorders tend to vary by age, gender, and race.
• 10.3: Autism- Insights from the Study of the Social Brain
People with autism spectrum disorder (ASD) suffer from a profound social disability. Social neuroscience is the study of the parts of the brain that support social interactions or the “social brain.” This module provides an overview of ASD and focuses on understanding how social brain dysfunction leads to ASD.
• 10.4: Psychopharmacology
Psychopharmacology is the study of how drugs affect behavior. If a drug changes your perception, or the way you feel or think, the drug exerts effects on your brain and nervous system. In this module, we will provide an overview of some of these topics as well as discuss some current controversial areas in the field of psychopharmacology.
10: Psychological Disorders
By Ingrid G. Farreras
Hood College
This module is divided into three parts. The first is a brief introduction to various criteria we use to define or distinguish between normality and abnormality. The second, largest part is a history of mental illness from the Stone Age to the 20th century, with a special emphasis on the recurrence of three causal explanations for mental illness: supernatural, somatogenic, and psychogenic factors. This part briefly touches upon trephination, the Greek theory of hysteria within the context of the four bodily humors, witch hunts, asylums, moral treatment, mesmerism, catharsis, the mental hygiene movement, deinstitutionalization, community mental health services, and managed care. The third part concludes with a brief description of the issue of diagnosis.
learning objectives
• Identify the criteria used to distinguish normality from abnormality.
• Understand the differences among the three main etiological theories of mental illness.
• Describe specific beliefs or events in history that exemplify each of these etiological theories (e.g., hysteria, humorism, witch hunts, asylums, moral treatments).
• Explain the differences in treatment facilities for the mentally ill (e.g., mental hospitals, asylums, community mental health centers).
• Describe the features of the “moral treatment” approach used by Chiarughi, Pinel, and Tuke.
• Describe the reform efforts of Dix and Beers and the outcomes of their work.
• Describe Kraepelin’s classification of mental illness and the current DSM system.
History of Mental Illness
References to mental illness can be found throughout history. The evolution of mental illness, however, has not been linear or progressive but rather cyclical. Whether a behavior is considered normal or abnormal depends on the context surrounding the behavior and thus changes as a function of a particular time and culture. In the past, uncommon behavior or behavior that deviated from the sociocultural norms and expectations of a specific culture and period has been used as a way to silence or control certain individuals or groups. As a result, a less cultural relativist view of abnormal behavior has focused instead on whether behavior poses a threat to oneself or others or causes so much pain and suffering that it interferes with one’s work responsibilities or with one’s relationships with family and friends.
Throughout history there have been three general theories of the etiology of mental illness: supernatural, somatogenic, and psychogenic. Supernatural theories attribute mental illness to possession by evil or demonic spirits, displeasure of gods, eclipses, planetary gravitation, curses, and sin. Somatogenic theories identify disturbances in physical functioning resulting from either illness, genetic inheritance, or brain damage or imbalance. Psychogenic theories focus on traumatic or stressful experiences, maladaptive learned associations and cognitions, or distorted perceptions. Etiological theories of mental illness determine the care and treatment mentally ill individuals receive. As we will see below, an individual believed to be possessed by the devil will be viewed and treated differently from an individual believed to be suffering from an excess of yellow bile. Their treatments will also differ, from exorcism to blood-letting. The theories, however, remain the same. They coexist as well as recycle over time.
Trephination is an example of the earliest supernatural explanation for mental illness. Examination of prehistoric skulls and cave art from as early as 6500 BC has identified surgical drilling of holes in skulls to treat head injuries and epilepsy as well as to allow evil spirits trapped within the skull to be released (Restak, 2000). Around 2700 BC, Chinese medicine’s concept of complementary positive and negative bodily forces (“yin and yang”) attributed mental (and physical) illness to an imbalance between these forces. As such, a harmonious life that allowed for the proper balance of yin and yang and movement of vital air was essential (Tseng, 1973).
Mesopotamian and Egyptian papyri from 1900 BC describe women suffering from mental illness resulting from a wandering uterus (later named hysteria by the Greeks): The uterus could become dislodged and attached to parts of the body like the liver or chest cavity, preventing their proper functioning or producing varied and sometimes painful symptoms. As a result, the Egyptians, and later the Greeks, also employed a somatogenic treatment of strong smelling substances to guide the uterus back to its proper location (pleasant odors to lure and unpleasant ones to dispel).
Throughout classical antiquity we see a return to supernatural theories of demonic possession or godly displeasure to account for abnormal behavior that was beyond the person’s control. Temple attendance with religious healing ceremonies and incantations to the gods were employed to assist in the healing process. Hebrews saw madness as punishment from God, so treatment consisted of confessing sins and repenting. Physicians were also believed to be able to comfort and cure madness, however.
Greek physicians rejected supernatural explanations of mental disorders. It was around 400 BC that Hippocrates (460–370 BC) attempted to separate superstition and religion from medicine by systematizing the belief that a deficiency in or especially an excess of one of the four essential bodily fluids (i.e., humors)—blood, yellow bile, black bile, and phlegm—was responsible for physical and mental illness. For example, someone who was too temperamental suffered from too much blood and thus blood-letting would be the necessary treatment. Hippocrates classified mental illness into one of four categories—epilepsy, mania, melancholia, and brain fever—and like other prominent physicians and philosophers of his time, he did not believe mental illness was shameful or that mentally ill individuals should be held accountable for their behavior. Mentally ill individuals were cared for at home by family members and the state shared no responsibility for their care. Humorism remained a recurrent somatogenic theory up until the 19th century.
While Greek physician Galen (AD 130–201) rejected the notion of a uterus having an animistic soul, he agreed with the notion that an imbalance of the four bodily fluids could cause mental illness. He also opened the door for psychogenic explanations for mental illness, however, by allowing for the experience of psychological stress as a potential cause of abnormality. Galen’s psychogenic theories were ignored for centuries, however, as physicians attributed mental illness to physical causes throughout most of the millennium.
By the late Middle Ages, economic and political turmoil threatened the power of the Roman Catholic church. Between the 11th and 15th centuries, supernatural theories of mental disorders again dominated Europe, fueled by natural disasters like plagues and famines that lay people interpreted as brought about by the devil. Superstition, astrology, and alchemy took hold, and common treatments included prayer rites, relic touching, confessions, and atonement. Beginning in the 13th century the mentally ill, especially women, began to be persecuted as witches who were possessed. At the height of the witch hunts during the 15th through 17th centuries, with the Protestant Reformation having plunged Europe into religious strife, two Dominican monks wrote the Malleus Maleficarum (1486) as the ultimate manual to guide witch hunts. Johann Weyer and Reginald Scot tried to convince people in the mid- to late-16th century that accused witches were actually women with mental illnesses and that mental illness was not due to demonic possession but to faulty metabolism and disease, but the Church’s Inquisition banned both of their writings. Witch-hunting did not decline until the 17th and 18th centuries, after more than 100,000 presumed witches had been burned at the stake (Schoeneman, 1977; Zilboorg & Henry, 1941).
Modern treatments of mental illness are most associated with the establishment of hospitals and asylums beginning in the 16th century. Such institutions’ mission was to house and confine the mentally ill, the poor, the homeless, the unemployed, and the criminal. War and economic depression produced vast numbers of undesirables and these were separated from society and sent to these institutions. Two of the most well-known institutions, St. Mary of Bethlehem in London, known as Bedlam, and the Hôpital Général of Paris—which included La Salpêtrière, La Pitié, and La Bicêtre—began housing mentally ill patients in the mid-16th and 17th centuries. As confinement laws focused on protecting the public from the mentally ill, governments became responsible for housing and feeding undesirables in exchange for their personal liberty. Most inmates were institutionalized against their will, lived in filth and chained to walls, and were commonly exhibited to the public for a fee. Mental illness was nonetheless viewed somatogenically, so treatments were similar to those for physical illnesses: purges, bleedings, and emetics.
While inhumane by today’s standards, the view of insanity at the time likened the mentally ill to animals (i.e., animalism) who did not have the capacity to reason, could not control themselves, were capable of violence without provocation, did not have the same physical sensitivity to pain or temperature, and could live in miserable conditions without complaint. As such, instilling fear was believed to be the best way to restore a disordered mind to reason.
By the 18th century, protests rose over the conditions under which the mentally ill lived, and the 18th and 19th centuries saw the growth of a more humanitarian view of mental illness. In 1785 Italian physician Vincenzo Chiarughi (1759–1820) removed the chains of patients at his St. Boniface hospital in Florence, Italy, and encouraged good hygiene and recreational and occupational training. More well known, French physician Philippe Pinel (1745–1826) and former patient Jean-Baptise Pussin created a “traitement moral” at La Bicêtre and the Salpêtrière in 1793 and 1795 that also included unshackling patients, moving them to well-aired, well-lit rooms, and encouraging purposeful activity and freedom to move about the grounds (Micale, 1985).
In England, humanitarian reforms rose from religious concerns. William Tuke (1732–1822) urged the Yorkshire Society of (Quaker) Friends to establish the York Retreat in 1796, where patients were guests, not prisoners, and where the standard of care depended on dignity and courtesy as well as the therapeutic and moral value of physical work (Bell, 1980).
While America had asylums for the mentally ill—such as the Pennsylvania Hospital in Philadelphia and the Williamsburg Hospital, established in 1756 and 1773—the somatogenic theory of mental illness of the time—promoted especially by the father of American psychiatry, Benjamin Rush (1745–1813)—had led to treatments such as blood-letting, gyrators, and tranquilizer chairs. When Tuke’s York Retreat became the model for half of the new private asylums established in the United States, however, psychogenic treatments such as compassionate care and physical labor became the hallmarks of the new American asylums, such as the Friends Asylum in Frankford, Pennsylvania, and the Bloomingdale Asylum in New York City, established in 1817 and 1821 (Grob, 1994).
Moral treatment had to be abandoned in America in the second half of the 19th century, however, when these asylums became overcrowded and custodial in nature and could no longer provide the space nor attention necessary. When retired school teacher Dorothea Dix discovered the negligence that resulted from such conditions, she advocated for the establishment of state hospitals. Between 1840 and 1880, she helped establish over 30 mental institutions in the United States and Canada (Viney & Zorich, 1982). By the late 19th century, moral treatment had given way to the mental hygiene movement, founded by former patient Clifford Beers with the publication of his 1908 memoir A Mind That Found Itself. Riding on Pasteur’s breakthrough germ theory of the 1860s and 1870s and especially on the early 20th century discoveries of vaccines for cholera, syphilis, and typhus, the mental hygiene movement reverted to a somatogenic theory of mental illness.
European psychiatry in the late 18th century and throughout the 19th century, however, struggled between somatogenic and psychogenic explanations of mental illness, particularly hysteria, which caused physical symptoms such as blindness or paralysis with no apparent physiological explanation. Franz Anton Mesmer (1734–1815), influenced by contemporary discoveries in electricity, attributed hysterical symptoms to imbalances in a universal magnetic fluid found in individuals, rather than to a wandering uterus (Forrest, 1999). James Braid (1795–1860) shifted this belief in mesmerism to one in hypnosis, thereby proposing a psychogenic treatment for the removal of symptoms. At the time, famed Salpêtrière Hospital neurologist Jean-Martin Charcot (1825–1893), and Ambroise Auguste Liébault (1823–1904) and Hippolyte Bernheim (1840–1919) of the Nancy School in France, were engaged in a bitter etiological battle over hysteria, with Charcot maintaining that the hypnotic suggestibility underlying hysteria was a neurological condition while Liébault and Bernheim believed it to be a general trait that varied in the population. Josef Breuer (1842–1925) and Sigmund Freud (1856–1939) would resolve this dispute in favor of a psychogenic explanation for mental illness by treating hysteria through hypnosis, which eventually led to the cathartic method that became the precursor for psychoanalysis during the first half of the 20th century.
Psychoanalysis was the dominant psychogenic treatment for mental illness during the first half of the 20th century, providing the launching pad for the more than 400 different schools of psychotherapy found today (Magnavita, 2006). Most of these schools cluster around broader behavioral, cognitive, cognitive-behavioral, psychodynamic, and client-centered approaches to psychotherapy applied in individual, marital, family, or group formats. Negligible differences have been found among all these approaches, however; their efficacy in treating mental illness is due to factors shared among all of the approaches (not particular elements specific to each approach): the therapist-patient alliance, the therapist’s allegiance to the therapy, therapist competence, and placebo effects (Luborsky et al., 2002; Messer & Wampold, 2002).
In contrast, the leading somatogenic treatment for mental illness can be found in the establishment of the first psychotropic medications in the mid-20th century. Restraints, electro-convulsive shock therapy, and lobotomies continued to be employed in American state institutions until the 1970s, but they quickly made way for a burgeoning pharmaceutical industry that has viewed and treated mental illness as a chemical imbalance in the brain.
Both etiological theories coexist today in what the psychological discipline holds as the biopsychosocial model of explaining human behavior. While individuals may be born with a genetic predisposition for a certain psychological disorder, certain psychological stressors need to be present for them to develop the disorder. Sociocultural factors such as sociopolitical or economic unrest, poor living conditions, or problematic interpersonal relationships are also viewed as contributing factors. However much we want to believe that we are above the treatments described above, or that the present is always the most enlightened time, let us not forget that our thinking today continues to reflect the same underlying somatogenic and psychogenic theories of mental illness discussed throughout this cursory 9,000-year history.
Diagnosis of Mental Illness
Progress in the treatment of mental illness necessarily implies improvements in the diagnosis of mental illness. A standardized diagnostic classification system with agreed-upon definitions of psychological disorders creates a shared language among mental-health providers and aids in clinical research. While diagnoses were recognized as far back as the Greeks, it was not until 1883 that German psychiatrist Emil Kraepelin (1856–1926) published a comprehensive system of psychological disorders that centered around a pattern of symptoms (i.e., syndrome) suggestive of an underlying physiological cause. Other clinicians also suggested popular classification systems but the need for a single, shared system paved the way for the American Psychiatric Association’s 1952 publication of the first Diagnostic and Statistical Manual (DSM).
The DSM has undergone various revisions (in 1968, 1980, 1987, 1994, 2000, 2013), and it is the 1980 DSM-III version that introduced a multiaxial classification system that took into account the entire individual rather than just the specific problem behavior. Axes I and II contain the clinical diagnoses, including mental retardation and personality disorders. Axes III and IV list any relevant medical conditions or psychosocial or environmental stressors, respectively. Axis V provides a global assessment of the individual’s level of functioning. The most recent version—the DSM-5—has combined the first three axes and removed the last two. These revisions reflect an attempt to help clinicians streamline diagnosis and work better with other diagnostic systems such as health diagnoses outlined by the World Health Organization.
While the DSM has provided a necessary shared language for clinicians, aided in clinical research, and allowed clinicians to be reimbursed by insurance companies for their services, it is not without criticism. The DSM is based on clinical and research findings from Western culture, primarily the United States. It is also a medicalized categorical classification system that assumes disordered behavior does not differ in degree but in kind, as opposed to a dimensional classification system that would plot disordered behavior along a continuum. Finally, the number of diagnosable disorders has tripled since it was first published in 1952, so that almost half of Americans will have a diagnosable disorder in their lifetime, contributing to the continued concern of labeling and stigmatizing mentally ill individuals. These concerns appear to be relevant even in the DSM-5 version that came out in May of 2013.
Outside Resources
Video: An introduction to and overview of psychology, from its origins in the nineteenth century to current study of the brain's biochemistry.
www.learner.org/series/discoveringpsychology/01/e01expand.html
Video: The BBC provides an overview of ancient Greek approaches to health and medicine.
www.tes.com/teaching-resource/ancient-greek-approaches-to-health-and-medicine-6176019
Web: Images from the History of Medicine. Search "mental illness"
http://ihm.nlm.nih.gov/luna/servlet/view/all
Web: Science Museum Brought to Life
www.sciencemuseum.org.uk/brou...ndillness.aspx
Web: The Social Psychology Network provides a number of links and resources.
https://www.socialpsychology.org/history.htm
Web: The UCL Center for the History of Medicine
www.ucl.ac.uk/histmed/
Web: The Wellcome Library. Search "mental illness".
http://wellcomelibrary.org/
Web: US National Library of Medicine
http://vsearch.nlm.nih.gov/vivisimo/cgi-bin/query-meta?query=mental+illness&v:project=nlm-main-website
Discussion Questions
1. What does it mean to say that someone is mentally ill? What criteria are usually considered to determine whether someone is mentally ill?
2. Describe the difference between supernatural, somatogenic, and psychogenic theories of mental illness and how subscribing to a particular etiological theory determines the type of treatment used.
3. How did the Greeks describe hysteria and what treatment did they prescribe?
4. Describe humorism and how it explained mental illness.
5. Describe how the witch hunts came about and their relationship to mental illness.
6. Describe the development of treatment facilities for the mentally ill, from asylums to community mental health centers.
7. Describe the humane treatment of the mentally ill brought about by Chiarughi, Pinel, and Tuke in the late 18th and early 19th centuries and how it differed from the care provided in the centuries preceding it.
8. Describe William Tuke’s treatment of the mentally ill at the York Retreat within the context of the Quaker Society of Friends. What influence did Tuke’s treatment have in other parts of the world?
9. What are the 20th-century treatments resulting from the psychogenic and somatogenic theories of mental illness?
10. Describe why a classification system is important and how the leading classification system used in the United States works. Describe some concerns with regard to this system.
Vocabulary
Animism
The belief that everyone and everything had a “soul” and that mental illness was due to animistic causes, for example, evil spirits controlling an individual and his/her behavior.
Asylum
A place of refuge or safety established to confine and care for the mentally ill; forerunners of the mental hospital or psychiatric facility.
Biopsychosocial model
A model in which the interaction of biological, psychological, and sociocultural factors is seen as influencing the development of the individual.
Cathartic method
A therapeutic procedure introduced by Breuer and developed further by Freud in the late 19th century whereby a patient gains insight and emotional relief from recalling and reliving traumatic events.
Cultural relativism
The idea that cultural norms and values of a society can only be understood on their own terms or in their own context.
Etiology
The causal description of all of the factors that contribute to the development of a disorder or illness.
Humorism (or humoralism)
A belief held by ancient Greek and Roman physicians (and persisting until the 19th century) that an excess or deficiency in any of the four bodily fluids, or humors—blood, black bile, yellow bile, and phlegm—directly affected a person's health and temperament.
Hysteria
Term used by the ancient Greeks and Egyptians to describe a disorder believed to be caused by a woman’s uterus wandering throughout the body and interfering with other organs (today referred to as conversion disorder, in which psychological problems are expressed in physical form).
Maladaptive
Term referring to behaviors that cause people who have them physical or emotional harm, prevent them from functioning in daily life, and/or indicate that they have lost touch with reality and/or cannot control their thoughts and behavior (also called dysfunctional).
Mesmerism
Derived from Franz Anton Mesmer in the late 18th century, an early version of hypnotism in which Mesmer claimed that hysterical symptoms could be treated through animal magnetism emanating from Mesmer’s body and permeating the universe (and later through magnets); later explained in terms of high suggestibility in individuals.
Psychogenesis
Developing from psychological origins.
Somatogenesis
Developing from physical/bodily origins.
Supernatural
Developing from origins beyond the visible observable universe.
Syndrome
Involving a particular group of signs and symptoms.
“Traitement moral” (moral treatment)
A therapeutic regimen of improved nutrition, living conditions, and rewards for productive behavior that has been attributed to Philippe Pinel during the French Revolution, when he released mentally ill patients from their restraints and treated them with compassion and dignity rather than with contempt and denigration.
Trephination
The drilling of a hole in the skull, presumably as a way of treating psychological disorders.
By Anda Gershon and Renee Thompson
Stanford University, Washington University in St. Louis
Everyone feels down or euphoric from time to time, but this is different from having a mood disorder such as major depressive disorder or bipolar disorder. Mood disorders are extended periods of depressed, euphoric, or irritable moods that in combination with other symptoms cause the person significant distress and interfere with his or her daily life, often resulting in social and occupational difficulties. In this module, we describe major mood disorders, including their symptom presentations, general prevalence rates, and how and why the rates of these disorders tend to vary by age, gender, and race. In addition, biological and environmental risk factors that have been implicated in the development and course of mood disorders, such as heritability and stressful life events, are reviewed. Finally, we provide an overview of treatments for mood disorders, covering treatments with demonstrated effectiveness, as well as new treatment options showing promise.
learning objectives
• Describe the diagnostic criteria for mood disorders.
• Understand age, gender, and ethnic differences in prevalence rates of mood disorders.
• Identify common risk factors for mood disorders.
• Know effective treatments of mood disorders.
The actress Brooke Shields published a memoir titled Down Came the Rain: My Journey through Postpartum Depression in which she described her struggles with depression following the birth of her daughter. Despite the fact that about one in 20 women experience depression after the birth of a baby (American Psychiatric Association [APA], 2013), postpartum depression—recently renamed “perinatal depression”—continues to be veiled by stigma, owing in part to a widely held expectation that motherhood should be a time of great joy. In an opinion piece in the New York Times, Shields revealed that entering motherhood was a profoundly overwhelming experience for her. She vividly describes experiencing a sense of “doom” and “dread” in response to her newborn baby. Because motherhood is conventionally thought of as a joyous event and not associated with sadness and hopelessness, responding to a newborn baby in this way can be shocking to the new mother as well as those close to her. It may also involve a great deal of shame for the mother, making her reluctant to divulge her experience to others, including her doctors and family.
Feelings of shame are not unique to perinatal depression. Stigma applies to other types of depressive and bipolar disorders and contributes to people not always receiving the necessary support and treatment for these disorders. In fact, the World Health Organization ranks both major depressive disorder (MDD) and bipolar disorder (BD) among the top 10 leading causes of disability worldwide. Further, MDD and BD carry a high risk of suicide. It is estimated that 25%–50% of people diagnosed with BD will attempt suicide at least once in their lifetimes (Goodwin & Jamison, 2007).
What Are Mood Disorders?
Mood Episodes
Everyone experiences brief periods of sadness, irritability, or euphoria. This is different from having a mood disorder such as MDD or BD, which is characterized by a constellation of symptoms that causes significant distress or impairs everyday functioning.
Major Depressive Episode
A major depressive episode (MDE) refers to symptoms that co-occur for at least two weeks and cause significant distress or impairment in functioning, such as interfering with work, school, or relationships. Core symptoms include feeling down or depressed or experiencing anhedonia—loss of interest or pleasure in things that one typically enjoys. According to the fifth edition of the Diagnostic and Statistical Manual (DSM-5; APA, 2013), the criteria for an MDE require five or more of the following nine symptoms, including one or both of the first two symptoms, for most of the day, nearly every day:
1. depressed mood
2. diminished interest or pleasure in almost all activities
3. significant weight loss or gain or an increase or decrease in appetite
4. insomnia or hypersomnia
5. psychomotor agitation or retardation
6. fatigue or loss of energy
7. feeling worthless or excessive or inappropriate guilt
8. diminished ability to concentrate or indecisiveness
9. recurrent thoughts of death, suicidal ideation, or a suicide attempt
These symptoms cannot be caused by physiological effects of a substance or a general medical condition (e.g., hypothyroidism).
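The counting rule above can be expressed as a short sketch. This is an illustrative simplification for study purposes only, not a diagnostic instrument: the symptom labels and function name are hypothetical, and the sketch omits the distress/impairment requirement and the substance/medical-condition exclusions that a clinician must also evaluate.

```python
# Sketch of the DSM-5 counting rule for a major depressive episode (MDE):
# five or more of nine symptoms, for at least two weeks, including depressed
# mood and/or anhedonia. Labels are hypothetical; real diagnosis requires
# clinical judgment and the additional criteria noted in the text.

CORE = {"depressed_mood", "anhedonia"}
ALL_SYMPTOMS = CORE | {
    "weight_or_appetite_change", "sleep_disturbance",
    "psychomotor_change", "fatigue", "worthlessness_or_guilt",
    "poor_concentration", "thoughts_of_death",
}

def meets_mde_symptom_rule(symptoms, duration_days):
    """Return True if the symptom pattern satisfies the counting rule."""
    present = set(symptoms) & ALL_SYMPTOMS
    return (
        duration_days >= 14        # symptoms co-occur for at least two weeks
        and len(present) >= 5      # five or more of the nine symptoms
        and bool(present & CORE)   # at least one of the two core symptoms
    )

print(meets_mde_symptom_rule(
    {"depressed_mood", "fatigue", "sleep_disturbance",
     "poor_concentration", "worthlessness_or_guilt"}, 21))  # True
```

Note how the rule is conjunctive: a person with many symptoms but neither depressed mood nor anhedonia would not meet the symptom criterion.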
Manic or Hypomanic Episode
The core criterion for a manic or hypomanic episode is a distinct period of abnormally and persistently euphoric, expansive, or irritable mood and persistently increased goal-directed activity or energy. The mood disturbance must be present for one week or longer in mania (unless hospitalization is required) or four days or longer in hypomania. Concurrently, at least three of the following symptoms must be present in the context of euphoric mood (or at least four in the context of irritable mood):
1. inflated self-esteem or grandiosity
2. increased goal-directed activity or psychomotor agitation
3. reduced need for sleep
4. racing thoughts or flight of ideas
5. distractibility
6. increased talkativeness
7. excessive involvement in risky behaviors
Manic episodes are distinguished from hypomanic episodes by their duration and associated impairment; whereas manic episodes must last one week and are defined by a significant impairment in functioning, hypomanic episodes are shorter and not necessarily accompanied by impairment in functioning.
Mood Disorders
Unipolar Mood Disorders
Two major types of unipolar disorders described by the DSM-5 (APA, 2013) are major depressive disorder and persistent depressive disorder (PDD; dysthymia). MDD is defined by one or more MDEs, but no history of manic or hypomanic episodes. Criteria for PDD are feeling depressed most of the day for more days than not, for at least two years. At least two of the following symptoms are also required to meet criteria for PDD:
1. poor appetite or overeating
2. insomnia or hypersomnia
3. low energy or fatigue
4. low self-esteem
5. poor concentration or difficulty making decisions
6. feelings of hopelessness
As with MDD, these symptoms must cause significant distress or impairment and cannot be due to the effects of a substance or a general medical condition. To meet criteria for PDD, a person cannot be without symptoms for more than two months at a time. PDD has overlapping symptoms with MDD. If someone meets criteria for an MDE during a PDD episode, the person will receive diagnoses of both PDD and MDD.
Bipolar Mood Disorders
Three major types of BDs are described by the DSM-5 (APA, 2013). Bipolar I Disorder (BD I), which was previously known as manic-depression, is characterized by a single (or recurrent) manic episode. A depressive episode is not necessary but commonly present for the diagnosis of BD I. Bipolar II Disorder is characterized by single (or recurrent) hypomanic episodes and depressive episodes. Another type of BD is cyclothymic disorder, characterized by numerous and alternating periods of hypomania and depression, lasting at least two years. To qualify for cyclothymic disorder, the periods of depression cannot meet full diagnostic criteria for an MDE; the person must experience symptoms at least half the time with no more than two consecutive symptom-free months; and the symptoms must cause significant distress or impairment.
It is important to note that the DSM-5 was published in 2013, and findings based on the updated manual will be forthcoming. Consequently, the research presented below was largely based on a similar, but not identical, conceptualization of mood disorders drawn from the DSM-IV (APA, 2000).
How Common Are Mood Disorders? Who Develops Mood Disorders?
Depressive Disorders
In a nationally representative sample, the lifetime prevalence rate for MDD is 16.6% (Kessler, Berglund, Demler, Jin, Merikangas, & Walters, 2005). This means that roughly one in six Americans will meet the criteria for MDD during their lifetime. The 12-month prevalence—the proportion of people who meet criteria for a disorder during a 12-month period—for PDD is approximately 0.5% (APA, 2013).
Although the onset of MDD can occur at any time throughout the lifespan, the average age of onset is the mid-20s, with the age of onset decreasing among people born more recently (APA, 2000). Prevalence of MDD among older adults is much lower than it is for younger cohorts (Kessler, Birnbaum, Bromet, Hwang, Sampson, & Shahly, 2010). The duration of MDEs varies widely. Recovery begins within three months for 40% of people with MDD and within 12 months for 80% (APA, 2013). MDD tends to be a recurrent disorder, with about 40%–50% of those who experience one MDE experiencing a second (Monroe & Harkness, 2011). An earlier age of onset predicts a worse course. About 5%–10% of people who experience an MDE will later experience a manic episode (APA, 2000), thus no longer meeting criteria for MDD but instead meeting them for BD I. Diagnoses of other disorders across the lifetime are common for people with MDD: 59% experience an anxiety disorder, 32% experience an impulse control disorder, and 24% experience a substance use disorder (Kessler, Merikangas, & Wang, 2007).
Women experience two to three times higher rates of MDD than do men (Nolen-Hoeksema & Hilt, 2009). This gender difference emerges during puberty (Conley & Rudolph, 2009). Before puberty, boys exhibit similar or higher prevalence rates of MDD than do girls (Twenge & Nolen-Hoeksema, 2002). MDD is inversely correlated with socioeconomic status (SES), a person’s economic and social position based on income, education, and occupation. Higher prevalence rates of MDD are associated with lower SES (Lorant, Deliege, Eaton, Robert, Philippot, & Ansseau, 2003), particularly for adults over 65 years old (Kessler et al., 2010). Independent of SES, results from a nationally representative sample found that European Americans had a higher prevalence rate of MDD than did African Americans and Hispanic Americans, whose rates were similar (Breslau, Aguilar-Gaxiola, Kendler, Su, Williams, & Kessler, 2006). The course of MDD for African Americans is often more severe and less often treated than it is for European Americans, however (Williams et al., 2007). Native Americans have a higher prevalence rate than do European Americans, African Americans, or Hispanic Americans (Hasin, Goodwin, Stinson & Grant, 2005). Depression is not limited to industrialized or western cultures; it is found in all countries that have been examined, although the symptom presentation as well as prevalence rates vary across cultures (Chentsova-Dutton & Tsai, 2009).
Bipolar Disorders
The lifetime prevalence rate of bipolar spectrum disorders in the general U.S. population is estimated at approximately 4.4%, with BD I constituting about 1% of this rate (Merikangas et al., 2007). Prevalence estimates, however, are highly dependent on the diagnostic procedures used (e.g., interviews vs. self-report) and whether or not sub-threshold forms of the disorder are included in the estimate. BD often co-occurs with other psychiatric disorders. Approximately 65% of people with BD meet diagnostic criteria for at least one additional psychiatric disorder, most commonly anxiety disorders and substance use disorders (McElroy et al., 2001). The co-occurrence of BD with other psychiatric disorders is associated with poorer illness course, including higher rates of suicidality (Leverich et al., 2003). A recent cross-national study of more than 60,000 adults from 11 countries estimated the worldwide prevalence of BD at 2.4%, with BD I constituting 0.6% of this rate (Merikangas et al., 2011). In this study, the prevalence of BD varied somewhat by country. Whereas the United States had the highest lifetime prevalence (4.4%), India had the lowest (0.1%). Variation in prevalence rates was not necessarily related to SES, as in the case of Japan, a high-income country with a very low prevalence rate of BD (0.7%).
With regard to ethnicity, data from studies not confounded by SES or inaccuracies in diagnosis are limited, but available reports suggest rates of BD among European Americans are similar to those found among African Americans (Blazer et al., 1985) and Hispanic Americans (Breslau, Kendler, Su, Gaxiola-Aguilar, & Kessler, 2005). Another large community-based study found that although prevalence rates of mood disorders were similar across ethnic groups, Hispanic Americans and African Americans with a mood disorder were more likely to remain persistently ill than European Americans (Breslau et al., 2005). Compared with European Americans with BD, African Americans tend to be underdiagnosed for BD (and over-diagnosed for schizophrenia) (Kilbourne, Haas, Mulsant, Bauer, & Pincus, 2004; Minsky, Vega, Miskimen, Gara, & Escobar, 2003), and Hispanic Americans with BD have been shown to receive fewer psychiatric medication prescriptions and specialty treatment visits (Gonzalez et al., 2007). Misdiagnosis of BD can result in the underutilization of treatment or the utilization of inappropriate treatment, and thus profoundly impact the course of illness.
As with MDD, adolescence is known to be a significant risk period for BD; mood symptoms start by adolescence in roughly half of BD cases (Leverich et al., 2007; Perlis et al., 2004). Longitudinal studies show that those diagnosed with BD prior to adulthood experience a more pernicious course of illness relative to those with adult onset, including more episode recurrence, higher rates of suicidality, and profound social, occupational, and economic repercussions (e.g., Lewinsohn, Seeley, Buckley, & Klein, 2002). The prevalence of BD is substantially lower in older adults compared with younger adults (1% vs. 4%) (Merikangas et al., 2007).
What Are Some of the Factors Implicated in the Development and Course of Mood Disorders?
Mood disorders are complex disorders resulting from multiple factors. Causal explanations can be attempted at various levels, including the biological and psychosocial levels. Below, several of the key factors that contribute to the onset and course of mood disorders are highlighted.
Depressive Disorders
Research across family and twin studies has provided support that genetic factors are implicated in the development of MDD. Twin studies suggest that familial influence on MDD is mostly due to genetic effects and that individual-specific environmental effects (e.g., romantic relationships) play an important role, too. By contrast, the contribution of shared environmental effect by siblings is negligible (Sullivan, Neale & Kendler, 2000). The mode of inheritance is not fully understood although no single genetic variation has been found to increase the risk of MDD significantly. Instead, several genetic variants and environmental factors most likely contribute to the risk for MDD (Lohoff, 2010).
One environmental stressor that has received much support in relation to MDD is stressful life events. In particular, severe stressful life events—those that have long-term consequences and involve loss of a significant relationship (e.g., divorce) or of economic stability (e.g., unemployment)—are strongly related to depression (Brown & Harris, 1989; Monroe et al., 2009). Stressful life events are more likely to predict the first MDE than subsequent episodes (Lewinsohn, Allen, Seeley, & Gotlib, 1999). In contrast, minor events may play a larger role in subsequent episodes than in the initial episode (Monroe & Harkness, 2005).
Depression research has not been limited to examining reactivity to stressful life events. Much research, particularly brain imaging research using functional magnetic resonance imaging (fMRI), has centered on examining neural circuitry—the interconnections that allow multiple brain regions to perceive, generate, and encode information in concert. A meta-analysis of neuroimaging studies showed that when viewing negative stimuli (e.g., picture of an angry face, picture of a car accident), compared with healthy control participants, participants with MDD have greater activation in brain regions involved in stress response and reduced activation of brain regions involved in positively motivated behaviors (Hamilton, Etkin, Furman, Lemus, Johnson, & Gotlib, 2012).
Other environmental factors related to increased risk for MDD include experiencing early adversity (e.g., childhood abuse or neglect; Widom, DuMont, & Czaja, 2007), chronic stress (e.g., poverty) and interpersonal factors. For example, marital dissatisfaction predicts increases in depressive symptoms in both men and women. On the other hand, depressive symptoms also predict increases in marital dissatisfaction (Whisman & Uebelacker, 2009). Research has found that people with MDD generate some of their interpersonal stress (Hammen, 2005). People with MDD whose relatives or spouses can be described as critical and emotionally overinvolved have higher relapse rates than do those living with people who are less critical and emotionally overinvolved (Butzlaff & Hooley, 1998).
People’s attributional styles or their general ways of thinking, interpreting, and recalling information have also been examined in the etiology of MDD (Gotlib & Joormann, 2010). People with a pessimistic attributional style tend to make internal (versus external), global (versus specific), and stable (versus unstable) attributions to negative events, serving as a vulnerability to developing MDD. For example, someone who when he fails an exam thinks that it was his fault (internal), that he is stupid (global), and that he will always do poorly (stable) has a pessimistic attribution style. Several influential theories of depression incorporate attributional styles (Abramson, Metalsky, & Alloy, 1989; Abramson Seligman, & Teasdale, 1978).
Bipolar Disorders
Although there have been important advances in research on the etiology, course, and treatment of BD, there remains a need to understand the mechanisms that contribute to episode onset and relapse. There is compelling evidence for biological causes of BD, which is known to be highly heritable (McGuffin, Rijsdijk, Andrew, Sham, Katz, & Cardno, 2003). It may be argued that a high rate of heritability demonstrates that BD is fundamentally a biological phenomenon. However, there is much variability in the course of BD both within a person across time and across people (Johnson, 2005). The triggers that determine how and when this genetic vulnerability is expressed are not yet understood; however, there is evidence to suggest that psychosocial triggers may play an important role in BD risk (e.g., Johnson et al., 2008; Malkoff-Schwartz et al., 1998).
In addition to the genetic contribution, biological explanations of BD have also focused on brain function. Many of the studies using fMRI techniques to characterize BD have focused on the processing of emotional stimuli based on the idea that BD is fundamentally a disorder of emotion (APA, 2000). Findings show that regions of the brain thought to be involved in emotional processing and regulation are activated differently in people with BD relative to healthy controls (e.g., Altshuler et al., 2008; Hassel et al., 2008; Lennox, Jacob, Calder, Lupson, & Bullmore, 2004).
However, there is little consensus as to whether a particular brain region becomes more or less active in response to an emotional stimulus among people with BD compared with healthy controls. Mixed findings are in part due to samples consisting of participants who are at various phases of illness at the time of testing (manic, depressed, inter-episode). Sample sizes tend to be relatively small, making comparisons between subgroups difficult. Additionally, the use of a standardized stimulus (e.g., facial expression of anger) may not elicit a sufficiently strong response. Personally engaging stimuli, such as recalling a memory, may be more effective in inducing strong emotions (Isacowitz, Gershon, Allard, & Johnson, 2013).
Within the psychosocial level, research has focused on the environmental contributors to BD. A series of studies show that environmental stressors, particularly severe stressors (e.g., loss of a significant relationship), can adversely impact the course of BD. People with BD have substantially increased risk of relapse (Ellicott, Hammen, Gitlin, Brown, & Jamison, 1990) and suffer more depressive symptoms (Johnson, Winett, Meyer, Greenhouse, & Miller, 1999) following a severe life stressor. Interestingly, positive life events can also adversely impact the course of BD. People with BD suffer more manic symptoms after life events involving attainment of a desired goal (Johnson et al., 2008). Such findings suggest that people with BD may have a hypersensitivity to rewards.
Evidence from the life stress literature has also suggested that people with mood disorders may have a circadian vulnerability that renders them sensitive to stressors that disrupt their sleep or rhythms. According to social zeitgeber theory (Ehlers, Frank, & Kupfer, 1988; Frank et al., 1994), stressors that disrupt sleep, or that disrupt the daily routines that entrain the biological clock (e.g., meal times) can trigger episode relapse. Consistent with this theory, studies have shown that life events that involve a disruption in sleep and daily routines, such as overnight travel, can increase bipolar symptoms in people with BD (Malkoff-Schwartz et al., 1998).
What Are Some of the Well-Supported Treatments for Mood Disorders?
Depressive Disorders
There are many treatment options available for people with MDD. First, a number of antidepressant medications are available, all of which target one or more of the neurotransmitters implicated in depression. The earliest antidepressant medications were monoamine oxidase inhibitors (MAOIs). MAOIs inhibit monoamine oxidase, an enzyme involved in deactivating dopamine, norepinephrine, and serotonin. Although effective in treating depression, MAOIs can have serious side effects. Patients taking MAOIs may develop dangerously high blood pressure if they take certain drugs (e.g., antihistamines) or eat foods containing tyramine, an amino acid commonly found in foods such as aged cheeses, wine, and soy sauce. Tricyclics, the second-oldest class of antidepressant medications, block the reabsorption of norepinephrine, serotonin, or dopamine at synapses, resulting in their increased availability. Tricyclics are most effective for treating vegetative and somatic symptoms of depression. Like MAOIs, they have serious side effects, the most concerning of which is cardiotoxicity. Selective serotonin reuptake inhibitors (SSRIs; e.g., fluoxetine) and serotonin and norepinephrine reuptake inhibitors (SNRIs; e.g., duloxetine) are the most recently introduced antidepressant medications. SSRIs, the most commonly prescribed class of antidepressants, block the reabsorption of serotonin, whereas SNRIs block the reabsorption of both serotonin and norepinephrine. SSRIs and SNRIs have fewer serious side effects than do MAOIs and tricyclics. In particular, they are less cardiotoxic, less lethal in overdose, and produce fewer cognitive impairments. They are not, however, without their own side effects, which include but are not limited to difficulty having orgasms, gastrointestinal issues, and insomnia.
Other biological treatments for people with depression include electroconvulsive therapy (ECT), transcranial magnetic stimulation (TMS), and deep brain stimulation. ECT involves inducing a seizure after a patient takes muscle relaxants and is under general anesthesia. ECT is a viable treatment for patients with severe depression or who have shown resistance to antidepressants, although the mechanisms through which it works remain unknown. A common side effect is confusion and memory loss, usually short-term (Schulze-Rauschenbach, Harms, Schlaepfer, Maier, Falkai, & Wagner, 2005). Repetitive TMS is a noninvasive technique administered while a patient is awake. Brief pulsating magnetic fields are delivered to the cortex, inducing electrical activity. TMS has fewer side effects than ECT (Schulze-Rauschenbach et al., 2005), and while outcome studies are mixed, there is evidence that TMS is a promising treatment for patients with MDD who have shown resistance to other treatments (Rosa et al., 2006). Most recently, deep brain stimulation is being examined as a treatment option for patients who have not responded to more traditional treatments such as those already described. Deep brain stimulation involves implanting an electrode in the brain. The electrode is connected to an implanted neurostimulator, which electrically stimulates that particular brain region. Although there is some evidence of its effectiveness (Mayberg et al., 2005), additional research is needed.
Several psychosocial treatments have received strong empirical support, meaning that independent investigations have achieved similarly positive results—a high threshold for examining treatment outcomes. These treatments include but are not limited to behavior therapy, cognitive therapy, and interpersonal therapy. Behavior therapies focus on increasing the frequency and quality of experiences that are pleasant or help the patient achieve mastery. Cognitive therapies primarily focus on helping patients identify and change distorted automatic thoughts and assumptions (e.g., Beck, 1967). Cognitive-behavioral therapies are based on the rationale that thoughts, behaviors, and emotions affect and are affected by each other. Interpersonal Therapy for Depression focuses largely on improving interpersonal relationships by targeting problem areas, specifically unresolved grief, interpersonal role disputes, role transitions, and interpersonal deficits. Finally, there is also some support for the effectiveness of Short-Term Psychodynamic Therapy for Depression (Leichsenring, 2001). The short-term treatment focuses on a limited number of important issues, and the therapist tends to be more actively involved than in more traditional psychodynamic therapy.
Bipolar Disorders
Patients with BD are typically treated with pharmacotherapy. Whereas antidepressants such as SSRIs and SNRIs are the primary treatment choice for depression, lithium is the first-line treatment for BD, because SSRIs and SNRIs have the potential to induce mania or hypomania in patients with BD. Lithium acts on several neurotransmitter systems in the brain through complex mechanisms, including reduction of excitatory (dopamine and glutamate) neurotransmission and increasing of inhibitory (GABA) neurotransmission (Lenox & Hahn, 2000). Lithium has strong efficacy for the treatment of BD (Geddes, Burgess, Hawton, Jamison, & Goodwin, 2004). However, a number of side effects can make lithium treatment difficult for patients to tolerate. Side effects include impaired cognitive function (Wingo, Wingo, Harvey, & Baldessarini, 2009), as well as physical symptoms such as nausea, tremor, weight gain, and fatigue (Dunner, 2000). Some of these side effects can improve with continued use; however, medication noncompliance remains an ongoing concern in the treatment of patients with BD. Anticonvulsant medications (e.g., carbamazepine, valproate) are also commonly used to treat patients with BD, either alone or in conjunction with lithium.
There are several adjunctive treatment options for people with BD. Interpersonal and social rhythm therapy (IPSRT; Frank et al., 1994) is a psychosocial intervention focused on addressing the mechanism of action posited in social zeitgeber theory to predispose patients who have BD to relapse, namely sleep disruption. A growing body of literature provides support for the central role of sleep dysregulation in BD (Harvey, 2008). Consistent with this literature, IPSRT aims to increase rhythmicity of patients’ lives and encourage vigilance in maintaining a stable rhythm. The therapist and patient work to develop and maintain a healthy balance of activity and stimulation such that the patient does not become overly active (e.g., by taking on too many projects) or inactive (e.g., by avoiding social contact). The efficacy of IPSRT has been demonstrated in that patients who received this treatment show reduced risk of episode recurrence and are more likely to remain well (Frank et al., 2005).
Conclusion
Everyone feels down or euphoric from time to time. For some people, these feelings can last for long periods of time and can also co-occur with other symptoms that, in combination, interfere with their everyday lives. When people experience an MDE or a manic episode, they see the world differently. During an MDE, people often feel hopeless about the future, and may even experience suicidal thoughts. During a manic episode, people often behave in ways that are risky or place them in danger. They may spend money excessively or have unprotected sex, often expressing deep shame over these decisions after the episode. MDD and BD cause significant problems for people at school, at work, and in their relationships and affect people regardless of gender, age, nationality, race, religion, or sexual orientation. If you or someone you know is suffering from a mood disorder, it is important to seek help. Effective treatments are available and continually improving. If you have an interest in mood disorders, there are many ways to contribute to their understanding, prevention, and treatment, whether by engaging in research or clinical work.
Outside Resources
Books: Recommended memoirs include Darkness Visible: A Memoir of Madness by William Styron (MDD); The Noonday Demon: An Atlas of Depression by Andrew Solomon (MDD); and An Unquiet Mind: A Memoir of Moods and Madness by Kay Redfield Jamison (BD).
Web: Visit the Association for Behavioral and Cognitive Therapies to find a list of recommended therapists and evidence-based treatments.
http://www.abct.org
Web: Visit the Depression and Bipolar Support Alliance for educational information and social support options.
http://www.dbsalliance.org/
Discussion Questions
1. What factors might explain the large gender difference in the prevalence rates of MDD?
2. Why might American ethnic minority groups experience more persistent BD than European Americans?
3. Why might the age of onset for MDD be decreasing over time?
4. Why might overnight travel constitute a potential risk for a person with BD?
5. What are some reasons positive life events may precede the occurrence of a manic episode?
Vocabulary
Anhedonia
Loss of interest or pleasure in activities one previously found enjoyable or rewarding.
Attributional style
The tendency by which a person infers the cause or meaning of behaviors or events.
Chronic stress
Discrete or related problematic events and conditions which persist over time and result in prolonged activation of the biological and/or psychological stress response (e.g., unemployment, ongoing health difficulties, marital discord).
Early adversity
Single or multiple acute or chronic stressful events, which may be biological or psychological in nature (e.g., poverty, abuse, childhood illness or injury), occurring during childhood and resulting in a biological and/or psychological stress response.
Grandiosity
Inflated self-esteem or an exaggerated sense of self-importance and self-worth (e.g., believing one has special powers or superior abilities).
Hypersomnia
Excessive daytime sleepiness, including difficulty staying awake or napping, or prolonged sleep episodes.
Psychomotor agitation
Increased motor activity associated with restlessness, including physical actions (e.g., fidgeting, pacing, feet tapping, handwringing).
Psychomotor retardation
A slowing of physical activities in which routine activities (e.g., eating, brushing teeth) are performed in an unusually slow manner.
Social zeitgeber
Zeitgeber is German for “time giver.” Social zeitgebers are environmental cues, such as meal times and interactions with other people, that entrain biological rhythms and thus sleep-wake cycle regularity.
Socioeconomic status (SES)
A person’s economic and social position based on income, education, and occupation.
Suicidal ideation
Recurring thoughts about suicide, including considering or planning for suicide, or preoccupation with suicide.
By Kevin A. Pelphrey
Yale University
People with autism spectrum disorder (ASD) suffer from a profound social disability. Social neuroscience is the study of the parts of the brain that support social interactions or the “social brain.” This module provides an overview of ASD and focuses on understanding how social brain dysfunction leads to ASD. Our increasing understanding of the social brain and its dysfunction in ASD will allow us to better identify the genes that cause ASD and will help us to create and pick out treatments to better match individuals. Because social brain systems emerge in infancy, social neuroscience can help us to figure out how to diagnose ASD even before the symptoms of ASD are clearly present. This is a hopeful time because social brain systems remain malleable well into adulthood and thus open to creative new interventions that are informed by state-of-the-art science.
learning objectives
• Know the basic symptoms of ASD.
• Distinguish components of the social brain and understand their dysfunction in ASD.
• Appreciate how social neuroscience may facilitate the diagnosis and treatment of ASD.
Defining Autism Spectrum Disorder
Autism Spectrum Disorder (ASD) is a developmental disorder that usually emerges in the first three years and persists throughout the individual’s life. Though the key symptoms of ASD fall into three general categories (see below), each person with ASD exhibits symptoms in these domains in different ways and to varying degrees. This phenotypic heterogeneity reflects the high degree of variability in the genes underlying ASD (Geschwind & Levitt, 2007). Though we have identified genetic differences associated with individual cases of ASD, each accounts for only a small number of the actual cases, suggesting that no single genetic cause will apply in the majority of people with ASD. There is currently no biological test for ASD.
Autism is in the category of pervasive developmental disorders, which includes Asperger's disorder, childhood disintegrative disorder, autistic disorder, and pervasive developmental disorder - not otherwise specified. These disorders, together, are labeled autism spectrum disorder (ASD). ASD is defined by the presence of profound difficulties in social interactions and communication combined with the presence of repetitive or restricted interests, cognitions and behaviors. The diagnostic process involves a combination of parental report and clinical observation. Children with significant impairments across the social/communication domain who also exhibit repetitive behaviors can qualify for the ASD diagnosis. There is wide variability in the precise symptom profile an individual may exhibit.
Since Kanner first described ASD in 1943, important commonalities in symptom presentation have been used to compile criteria for the diagnosis of ASD. These diagnostic criteria have evolved during the past 70 years and continue to evolve (e.g., see the recent changes to the diagnostic criteria on the American Psychiatric Association’s website, http://www.dsm5.org/), yet impaired social functioning remains a required symptom for an ASD diagnosis. Deficits in social functioning are present in varying degrees, ranging from simple behaviors such as making eye contact to complex behaviors such as navigating the give and take of a group conversation, and they occur in individuals of all functioning levels (i.e., high or low IQ). Moreover, difficulties with social information processing occur in both visual (e.g., Pelphrey et al., 2002) and auditory (e.g., Dawson, Meltzoff, Osterling, Rinaldi, & Brown, 1998) sensory modalities.
Consider the results of an eye tracking study in which Pelphrey and colleagues (2002) observed that individuals with autism did not make use of the eyes when judging facial expressions of emotion (see right panels of Figure 1). While repetitive behaviors or language deficits are seen in other disorders (e.g., obsessive-compulsive disorder and specific language impairment, respectively), basic social deficits of this nature are unique to ASD. Onset of the social deficits appears to precede difficulties in other domains (Osterling, Dawson, & Munson, 2002) and may emerge as early as 6 months of age (Maestro et al., 2002).
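Analyses like the Pelphrey et al. (2002) eye-tracking study come down to asking what fraction of gaze samples fall inside an area of interest (AOI), such as the eye region of a face. Below is a toy version of that computation; the coordinates, gaze data, and function name are all invented for illustration and do not come from the study.

```python
def prop_in_aoi(samples, aoi):
    """Proportion of (x, y) gaze samples that fall inside a rectangular AOI."""
    x0, y0, x1, y1 = aoi
    hits = sum(1 for (x, y) in samples if x0 <= x <= x1 and y0 <= y <= y1)
    return hits / len(samples)

# Hypothetical pixel box around the eyes in a face image.
eyes_aoi = (40, 20, 80, 35)

# Invented gaze samples: one viewer fixates the eyes, the other mostly
# looks at the mouth region and elsewhere on the face.
typical_gaze = [(55, 28), (60, 30), (70, 25), (50, 90), (58, 27)]
asd_gaze     = [(55, 70), (60, 80), (30, 95), (50, 90), (58, 88)]

print(prop_in_aoi(typical_gaze, eyes_aoi))  # 0.8
print(prop_in_aoi(asd_gaze, eyes_aoi))      # 0.0
```

Real eye-tracking analyses add calibration, fixation detection, and statistical comparison across many participants, but the core dependent measure is often exactly this kind of proportion.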
Defining the Social Brain
Within the past few decades, research has elucidated specific brain circuits that support perception of humans and other species. This social perception refers to “the initial stages in the processing of information that culminates in the accurate analysis of the dispositions and intentions of other individuals” (Allison, Puce, & McCarthy, 2000). Basic social perception is a critical building block for more sophisticated social behaviors, such as thinking about the motives and emotions of others. Brothers (1990) first suggested the notion of a social brain, a set of interconnected neuroanatomical structures that process social information, enabling the recognition of other individuals and the evaluation of their mental states (e.g., intentions, dispositions, desires, and beliefs).
The social brain is hypothesized to consist of the amygdala, the orbital frontal cortex (OFC), fusiform gyrus (FG), and the posterior superior temporal sulcus (STS) region, among other structures. Though all areas work in coordination to support social processing, each appears to serve a distinct role. The amygdala helps us recognize the emotional states of others (e.g., Morris et al., 1996) and also to experience and regulate our own emotions (e.g., LeDoux, 1992). The OFC supports the "reward" feelings we have when we are around other people (e.g., Rolls, 2000). The FG, located at the bottom of the surface of the temporal lobes, detects faces and supports face recognition (e.g., Puce, Allison, Asgari, Gore, & McCarthy, 1996). The posterior STS region recognizes biological motion, including eye, hand, and other body movements, and helps to interpret and predict the actions and intentions of others (e.g., Pelphrey, Morris, Michelich, Allison, & McCarthy, 2005).
Current Understanding of Social Perception in ASD
The social brain is of great research interest because the social difficulties characteristic of ASD are thought to relate closely to the functioning of this brain network. Functional magnetic resonance imaging (fMRI) and event-related potentials (ERP) are complementary brain imaging methods used to study activity in the brain across the lifespan. Each method measures a distinct facet of brain activity and contributes unique information to our understanding of brain function.
FMRI uses powerful magnets to measure the levels of oxygen within the brain, which vary according to changes in neural activity. As the neurons in specific brain regions “work harder”, they require more oxygen. FMRI detects the brain regions that exhibit a relative increase in blood flow (and oxygen levels) while people listen to or view social stimuli in the MRI scanner. The areas of the brain most crucial for different social processes are thus identified, with spatial information being accurate to the millimeter.
In contrast, ERP provides direct measurements of the firing of groups of neurons in the cortex. Non-invasive sensors on the scalp record the small electrical currents created by this neuronal activity while the subject views stimuli or listens to specific kinds of information. While fMRI provides information about where brain activity occurs, ERP specifies when by detailing the timing of processing at the millisecond pace at which it unfolds.
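The millisecond precision of ERP comes with a practical requirement the text glosses over: a single trial's evoked response is tiny relative to ongoing electrical noise, so researchers average many time-locked trials, which cancels the noise and leaves the response. A minimal simulation of that averaging step (all numbers invented):

```python
import random

random.seed(0)  # fixed seed so the demonstration is reproducible

n_trials, n_samples = 200, 50
# A brief "component": a small deflection from sample 20 to 24,
# standing in for an evoked response such as the face-sensitive N170.
signal = [1.0 if 20 <= i < 25 else 0.0 for i in range(n_samples)]

def trial():
    # Evoked response plus zero-mean noise several times larger than it.
    return [s + random.gauss(0, 3.0) for s in signal]

# Average each time point across all trials: noise shrinks by ~1/sqrt(N).
avg = [sum(col) / n_trials for col in zip(*(trial() for _ in range(n_trials)))]

peak_index = max(range(n_samples), key=lambda i: avg[i])
print(20 <= peak_index < 25)  # True: averaging recovers the component's timing
```

This is why ERP studies of face processing in ASD present many face stimuli rather than one, and why the technique excels at "when" while saying little about "where."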
ERP and fMRI are complementary, with fMRI providing excellent spatial resolution and ERP offering outstanding temporal resolution. Together, this information is critical to understanding the nature of social perception in ASD. To date, the most thoroughly investigated areas of the social brain in ASD are the superior temporal sulcus (STS), which underlies the perception and interpretation of biological motion, and the fusiform gyrus (FG), which supports face perception. Heightened sensitivity to biological motion (for humans, motion such as walking) serves an essential role in the development of humans and other highly social species. Emerging in the first days of life, the ability to detect biological motion helps to orient vulnerable young to critical sources of sustenance, support, and learning, and develops independent of visual experience with biological motion (e.g., Simion, Regolin, & Bulf, 2008). This inborn “life detector” serves as a foundation for the subsequent development of more complex social behaviors (Johnson, 2006).
From very early in life, children with ASD display reduced sensitivity to biological motion (Klin, Lin, Gorrindo, Ramsay, & Jones, 2009). Individuals with ASD show reduced activity in the STS during biological motion perception. In contrast, people at increased genetic risk for ASD who do not develop symptoms of the disorder (i.e., unaffected siblings of individuals with ASD) show increased activity in this region, which is hypothesized to be a compensatory mechanism to offset genetic vulnerability (Kaiser et al., 2010).
In typical development, preferential attention to faces and the ability to recognize individual faces emerge in the first days of life (e.g., Goren, Sarty, & Wu, 1975). The special way in which the brain responds to faces usually emerges by three months of age (e.g., de Haan, Johnson, & Halit, 2003) and continues throughout the lifespan (e.g., Bentin et al., 1996). Children with ASD, however, tend to show decreased attention to human faces by six to 12 months (Osterling & Dawson, 1994). Children with ASD also show reduced activity in the FG when viewing faces (e.g., Schultz et al., 2000). Slowed processing of faces (McPartland, Dawson, Webb, Panagiotides, & Carver, 2004) is a characteristic of people with ASD that is shared by parents of children with ASD (Dawson, Webb, & McPartland, 2005) and infants at increased risk for developing ASD because of having a sibling with ASD (McCleery, Akshoomoff, Dobkins, & Carver, 2009). Behavioral and attentional differences in face perception and recognition are evident in children and adults with ASD as well (e.g., Hobson, 1986).
Exploring Diversity in ASD
Because of the limited quality of the behavioral methods used to diagnose ASD and current clinical diagnostic practice, which permits similar diagnoses despite distinct symptom profiles (McPartland, Webb, Keehn, & Dawson, 2011), it is possible that the group of children currently referred to as having ASD may actually represent different syndromes with distinct causes. Examination of the social brain may well reveal diagnostically meaningful subgroups of children with ASD. Measurements of the “where” and “when” of brain activity during social processing tasks provide reliable sources of the detailed information needed to profile children with ASD with greater accuracy. These profiles, in turn, may help to inform treatment of ASD by helping us to match specific treatments to specific profiles.
The integration of imaging methods is critical for this endeavor. Using face perception as an example, the combination of fMRI and ERP could identify who, of those individuals with ASD, shows anomalies in the FG and then determine the stage of information processing at which these impairments occur. Because different processing stages often reflect discrete cognitive processes, this level of understanding could encourage treatments that address specific processing deficits at the neural level.
For example, differences observed in the early processing stages might reflect problems with low-level visual perception, while later differences would indicate problems with higher-order processes, such as emotion recognition. These same principles can be applied to the broader network of social brain regions and, combined with measures of behavioral functioning, could offer a comprehensive profile of brain-behavior performance for a given individual. A fundamental goal for this kind of subgroup approach is to improve the ability to tailor treatments to the individual.
Another objective is to improve the power of other scientific tools. Most studies of individuals with ASD compare groups of individuals, for example, individuals with ASD compared to typically developing peers. However, studies have also attempted to compare children across the autism spectrum by grouping them according to differential diagnosis (e.g., Asperger’s disorder versus autistic disorder) or by other behavioral or cognitive characteristics (e.g., cognitively able versus intellectually disabled, or anxious versus non-anxious). Yet, the power of a scientific study to detect these kinds of significant, meaningful, individual differences is only as strong as the accuracy of the factor used to define the compared groups.
The identification of distinct subgroups within the autism spectrum according to information about the brain would allow for a more accurate and detailed exposition of the individual differences seen in those with ASD. This is especially critical for the success of investigations into the genetic basis of ASD. As mentioned before, the genes discovered thus far account for only a small portion of ASD cases. If meaningful, quantitative distinctions among individuals with ASD are identified, a more focused examination into the genetic causes specific to each subgroup could then be pursued. Moreover, distinct findings from neuroimaging, or biomarkers, can help guide genetic research. Endophenotypes, or characteristics that are not immediately available to observation but that reflect an underlying genetic liability for disease, expose the most basic components of a complex psychiatric disorder and are more stable across the lifespan than observable behavior (Gottesman & Shields, 1973). By describing the key characteristics of ASD in these objective ways, neuroimaging research will facilitate identification of genetic contributions to ASD.
Atypical Brain Development Before the Emergence of Atypical Behavior
Because autism is a developmental disorder, it is particularly important to diagnose and treat ASD early in life. Early deficits in attention to biological motion, for instance, derail subsequent experiences in attending to higher level social information, thereby driving development toward more severe dysfunction and stimulating deficits in additional domains of functioning, such as language development. The lack of reliable predictors of the condition during the first year of life has been a major impediment to the effective treatment of ASD. Without early predictors, and in the absence of a firm diagnosis until behavioral symptoms emerge, treatment is often delayed for two or more years, eclipsing a crucial period in which intervention may be particularly successful in ameliorating some of the social and communicative impairments seen in ASD.
In response to the great need for sensitive (able to identify subtle cases) and specific (able to distinguish autism from other disorders) early indicators of ASD, such as biomarkers, many research groups from around the world have been studying patterns of infant development using prospective longitudinal studies of infant siblings of children with ASD and a comparison group of infant siblings without familial risks. Such designs gather longitudinal information about developmental trajectories across the first three years of life for both groups followed by clinical diagnosis at approximately 36 months.
These studies are problematic in that many of the social features of autism do not emerge in typical development until after 12 months of age, and it is not certain that these symptoms will manifest during the limited periods of observation involved in clinical evaluations or in pediatricians’ offices. Moreover, across development, but especially during infancy, behavior is widely variable and often unreliable, and at present, behavioral observation is the only means to detect symptoms of ASD and to confirm a diagnosis. This is quite problematic because even highly sophisticated behavioral methods, such as eye tracking (see Figure 1), do not necessarily reveal reliable differences in infants with ASD (Ozonoff et al., 2010). However, measuring the brain activity associated with social perception can detect differences that do not appear in behavior until much later. The identification of biomarkers utilizing the imaging methods we have described offers promise for earlier detection of atypical social development.
ERP measures of brain response predict subsequent development of autism in infants as young as six months old who showed normal patterns of visual fixation (as measured by eye tracking) (Elsabbagh et al., 2012). This suggests the great promise of brain imaging for earlier recognition of ASD. With earlier detection, treatments could move from addressing existing symptoms to preventing their emergence by altering the course of abnormal brain development and steering it toward normality.
Hope for Improved Outcomes
The brain imaging research described above offers hope for the future of ASD treatment. Many of the functions of the social brain demonstrate significant plasticity, meaning that their functioning can be affected by experience over time. In contrast to theories that suggest difficulty processing complex information or communicating across large expanses of cortex (Minshew & Williams, 2007), this malleability of the social brain is a positive prognosticator for the development of treatment. The brains of people with ASD are not wired to process social information optimally. But this does not mean that these systems are irretrievably broken. Given the observed plasticity of the social brain, remediation of these difficulties may be possible with appropriate and timely intervention.
Outside Resources
Web: American Psychiatric Association’s website for the 5th edition of the Diagnostic and Statistical Manual of Mental Disorders
http://www.dsm5.org
Web: Autism Science Foundation - organization supporting autism research by providing funding and other assistance to scientists and organizations conducting, facilitating, publicizing and disseminating autism research. The organization also provides information about autism to the general public and serves to increase awareness of autism spectrum disorders and the needs of individuals and families affected by autism.
http://www.autismsciencefoundation.org/
Web: Autism Speaks - Autism science and advocacy organization
http://www.autismspeaks.org/
Discussion Questions
1. How can neuroimaging inform our understanding of the causes of autism?
2. What are the ways in which neuroimaging, including fMRI and ERP, may benefit efforts to diagnosis and treat autism?
3. How can an understanding of the social brain help us to understand ASD?
4. What are the core symptoms of ASD, and why is the social brain of particular interest?
5. What are some of the components of the social brain, and what functions do they serve?
Vocabulary
Endophenotypes
A characteristic that reflects a genetic liability for disease and a more basic component of a complex clinical presentation. Endophenotypes are less developmentally malleable than overt behavior.
Event-related potentials (ERP)
Measures the firing of groups of neurons in the cortex. As a person views or listens to specific types of information, neuronal activity creates small electrical currents that can be recorded from non-invasive sensors placed on the scalp. ERP provides excellent information about the timing of processing, clarifying brain activity at the millisecond pace at which it unfolds.
Functional magnetic resonance imaging (fMRI)
Entails the use of powerful magnets to measure the levels of oxygen within the brain that vary with changes in neural activity. That is, as the neurons in specific brain regions “work harder” when performing a specific task, they require more oxygen. By having people listen to or view social percepts in an MRI scanner, fMRI specifies the brain regions that evidence a relative increase in blood flow. In this way, fMRI provides excellent spatial information, pinpointing with millimeter accuracy, the brain regions most critical for different social processes.
Social brain
The set of neuroanatomical structures that allows us to understand the actions and intentions of other people.
By Susan Barron
University of Kentucky
Psychopharmacology is the study of how drugs affect behavior. If a drug changes your perception, or the way you feel or think, the drug exerts effects on your brain and nervous system. We call drugs that change the way you think or feel psychoactive or psychotropic drugs, and almost everyone has used a psychoactive drug at some point (yes, caffeine counts). Understanding some of the basics about psychopharmacology can help us better understand a wide range of things that interest psychologists and others. For example, the pharmacological treatment of certain neurodegenerative diseases such as Parkinson’s disease tells us something about the disease itself. The pharmacological treatments used to treat psychiatric conditions such as schizophrenia or depression have undergone amazing development since the 1950s, and the drugs used to treat these disorders tell us something about what is happening in the brain of individuals with these conditions. Finally, understanding something about the actions of drugs of abuse and their routes of administration can help us understand why some psychoactive drugs are so addictive. In this module, we will provide an overview of some of these topics as well as discuss some current controversial areas in the field of psychopharmacology.
learning objectives
• How do the majority of psychoactive drugs work in the brain?
• How does the route of administration affect how rewarding a drug might be?
• Why is grapefruit dangerous to consume with many psychotropic medications?
• Why might individualized drug doses based on genetic screening be helpful for treating conditions like depression?
• Why is there controversy regarding pharmacotherapy for children, adolescents, and the elderly?
Introduction
Psychopharmacology, the study of how drugs affect the brain and behavior, is a relatively new science, although people have probably been taking drugs to change how they feel from early in human history (consider the practice of eating fermented fruit, ancient beer recipes, and chewing the leaves of the coca plant for its stimulant properties, as just a few examples). The word psychopharmacology itself tells us that this is a field that bridges our understanding of behavior (and brain) and pharmacology, and the range of topics included within this field is extremely broad.
Virtually any drug that changes the way you feel does this by altering how neurons communicate with each other. Neurons (more than 100 billion in your nervous system) communicate with each other by releasing a chemical (neurotransmitter) across a tiny space between two neurons (the synapse). When the neurotransmitter crosses the synapse, it binds to a postsynaptic receptor (protein) on the receiving neuron and the message may then be transmitted onward. Obviously, neurotransmission is far more complicated than this – links at the end of this module can provide some useful background if you want more detail – but the first step is understanding that virtually all psychoactive drugs interfere with or alter how neurons communicate with each other.
There are many neurotransmitters. Some of the most important in terms of psychopharmacological treatment and drugs of abuse are outlined in Table 1. The neurons that release these neurotransmitters, for the most part, are localized within specific circuits of the brain that mediate these behaviors. Psychoactive drugs can either increase activity at the synapse (these are called agonists) or reduce activity at the synapse (antagonists). Different drugs do this by different mechanisms, and some examples of agonists and antagonists are presented in Table 2. For each example, the drug’s trade name, which is the name of the drug provided by the drug company, and generic name (in parentheses) are provided.
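A standard way to formalize agonist and antagonist action at a receptor is the law-of-mass-action occupancy model, under which a competitive antagonist does not activate the receptor itself but raises the agonist's apparent Kd (the Gaddum equation). The sketch below uses that textbook pharmacology; the function name and units are ours, chosen for illustration.

```python
def fractional_occupancy(agonist_conc, kd, antagonist_conc=0.0, kb=1.0):
    """Fraction of receptors bound by an agonist (simple Langmuir model).

    A competitive antagonist raises the agonist's *apparent* Kd by the
    factor (1 + [B]/Kb), so more agonist is needed to reach the same
    occupancy. Concentrations and Kd/Kb share arbitrary but consistent
    units.
    """
    apparent_kd = kd * (1.0 + antagonist_conc / kb)
    return agonist_conc / (agonist_conc + apparent_kd)

# An agonist at a concentration equal to its Kd occupies half the receptors...
print(fractional_occupancy(1.0, kd=1.0))                       # 0.5
# ...but adding an antagonist at its own Kb cuts that occupancy to a third.
print(fractional_occupancy(1.0, kd=1.0, antagonist_conc=1.0))  # ~0.33
```

Occupancy is only part of the story (agonists must also *activate* the receptor, and some drugs act as partial or inverse agonists), but this single equation captures why an antagonist "reduces activity at the synapse."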
A very useful link at the end of this module shows the various steps involved in neurotransmission and some ways drugs can alter this.
Table 2 provides examples of drugs and their primary mechanism of action, but it is very important to realize that drugs also have effects on other neurotransmitters. This contributes to the kinds of side effects that are observed when someone takes a particular drug. The reality is that no drugs currently available work only exactly where we would like in the brain or only on a specific neurotransmitter. In many cases, individuals are sometimes prescribed one psychotropic drug but then may also have to take additional drugs to reduce the side effects caused by the initial drug. Sometimes individuals stop taking medication because the side effects can be so profound.
Pharmacokinetics: What Is It – Why Is It Important?
While this section may sound more like pharmacology, it is important to realize how important pharmacokinetics can be when considering psychoactive drugs. Pharmacokinetics refers to how the body handles a drug that we take. As mentioned earlier, psychoactive drugs exert their effects on behavior by altering neuronal communication in the brain, and the majority of drugs reach the brain by traveling in the blood. The acronym ADME is often used, with A standing for Absorption (how the drug gets into the blood), D for Distribution (how the drug gets to the organ of interest – in this module, the brain), M for Metabolism (how the drug is broken down so it no longer exerts its psychoactive effects), and E for Excretion (how the drug leaves the body). We will talk about a couple of these to show their importance for considering psychoactive drugs.
Drug Administration
There are many ways to take drugs, and these routes of drug administration can have a significant impact on how quickly the drug reaches the brain. The most common route of administration is oral administration, which is relatively slow and – perhaps surprisingly – often the most variable and complex route of administration. Drugs enter the stomach and then get absorbed by the blood supply and capillaries that line the small intestine. The rate of absorption can be affected by a variety of factors including the quantity and the type of food in the stomach (e.g., fats vs. proteins). This is why the medicine label for some drugs (like antibiotics) may specifically state foods that you should or should NOT consume within an hour of taking the drug because they can affect the rate of absorption. Two of the most rapid routes of administration are inhalation (i.e., smoking or gaseous anesthesia) and intravenous (IV) injection, in which the drug is injected directly into a vein and hence the blood supply. Both of these routes of administration can get the drug to the brain in less than 10 seconds. IV administration also has the distinction of being the most dangerous because if there is an adverse drug reaction, there is very little time to administer an antidote, as in the case of an IV heroin overdose.
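The contrast between a slow and a fast route can be sketched with a standard one-compartment pharmacokinetic model: an IV bolus puts the full dose in the blood at time zero, while an oral dose is absorbed from the gut by a first-order process before it can be eliminated (the classic Bateman equation). All parameter values below are illustrative, not values for any real drug.

```python
import math

def conc_iv(t, dose, v, ke):
    """IV bolus: the full dose is in the blood at t=0, then declines
    by first-order elimination (rate constant ke)."""
    return (dose / v) * math.exp(-ke * t)

def conc_oral(t, dose, v, ke, ka, f=1.0):
    """Oral dose: first-order absorption (ka) feeding first-order
    elimination (ke) -- the Bateman equation. f is bioavailability."""
    return (f * dose * ka / (v * (ka - ke))) * (math.exp(-ke * t) - math.exp(-ka * t))

# Illustrative parameters (arbitrary units): dose, volume of
# distribution, elimination and absorption rate constants.
dose, v, ke, ka = 100.0, 50.0, 0.2, 1.0

# IV concentration is maximal immediately; the oral curve starts at
# zero and peaks only after absorption catches up.
print(round(conc_iv(0.0, dose, v, ke), 2))        # 2.0 (peak at t=0)
print(round(conc_oral(0.0, dose, v, ke, ka), 2))  # 0.0
# Time of the oral peak in this model: t_max = ln(ka/ke) / (ka - ke)
t_max = math.log(ka / ke) / (ka - ke)
print(round(t_max, 2))  # ~2.01 time units after ingestion
```

That gap between administration and peak brain concentration is one reason rapid routes (IV, inhalation) carry so much higher abuse liability than oral dosing.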
Why might how quickly a drug gets to the brain be important? If a drug activates the reward circuits in the brain AND it reaches the brain very quickly, the drug has a high risk for abuse and addiction. Psychostimulants like amphetamine or cocaine are examples of drugs that have high risk for abuse because they are agonists at DA neurons involved in reward AND because these drugs exist in forms that can be either smoked or injected intravenously. Some argue that cigarette smoking is one of the hardest addictions to quit, and although part of the reason for this may be that smoking gets the nicotine into the brain very quickly (and indirectly acts on DA neurons), it is a more complicated story. For drugs that reach the brain very quickly, not only is the drug very addictive, but so are the cues associated with the drug (see Rohsenow, Niaura, Childress, Abrams, & Monti, 1990). For a crack user, this could be the pipe used to smoke the drug. For a cigarette smoker, however, it could be something as routine as finishing dinner or waking up in the morning (if that is when the smoker usually has a cigarette). For both the crack user and the cigarette smoker, the cues associated with the drug may actually cause craving that is alleviated by (you guessed it) lighting a cigarette or using crack (i.e., relapse). This is one of the reasons individuals who enroll in drug treatment programs, especially out-of-town programs, are at significant risk of relapse if they later find themselves in proximity to old haunts, friends, etc. But this is much more difficult for a cigarette smoker. How can someone avoid eating? Or avoid waking up in the morning? These examples help you begin to understand how important the route of administration can be for psychoactive drugs.
Drug Metabolism
Metabolism involves the breakdown of psychoactive drugs, and this occurs primarily in the liver. The liver produces enzymes (proteins that speed up a chemical reaction), and these enzymes help catalyze a chemical reaction that breaks down psychoactive drugs. Enzymes exist in “families,” and many psychoactive drugs are broken down by the same family of enzymes, the cytochrome P450 superfamily. There is not a unique enzyme for each drug; rather, certain enzymes can break down a wide variety of drugs. Tolerance to the effects of many drugs can occur with repeated exposure; that is, the drug produces less of an effect over time, so more of the drug is needed to get the same effect. This is particularly true for sedative drugs like alcohol or opiate-based painkillers. Metabolic tolerance is one kind of tolerance and it takes place in the liver. Some drugs (like alcohol) cause enzyme induction – an increase in the enzymes produced by the liver. For example, chronic drinking results in alcohol being broken down more quickly, so the alcoholic needs to drink more to get the same effect – of course, until so much alcohol is consumed that it damages the liver (alcohol can cause fatty liver or cirrhosis).
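Metabolic tolerance from enzyme induction can be pictured with the first-order half-life formula, t½ = ln 2 / ke: more liver enzyme means a larger elimination rate constant and therefore a shorter half-life. The rate constants below are hypothetical, chosen only to illustrate the direction of the effect.

```python
import math

def half_life(ke):
    """Elimination half-life implied by a first-order elimination
    rate constant ke (here, per hour)."""
    return math.log(2) / ke

# Hypothetical elimination constants before and after chronic exposure
# induces additional liver enzymes.
ke_naive, ke_induced = 0.10, 0.25

print(round(half_life(ke_naive), 1))    # ~6.9 h before induction
print(round(half_life(ke_induced), 1))  # ~2.8 h after induction
# Faster breakdown means lower, briefer drug exposure from the same
# dose, so a larger dose is needed for the same effect: metabolic
# tolerance.
```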
Grapefruit Juice and Metabolism
Certain types of food in the stomach can alter the rate of drug absorption, and other foods can also alter the rate of drug metabolism. The most well-known example is grapefruit juice, which suppresses cytochrome P450 enzymes in the liver. These liver enzymes normally break down a large variety of drugs (including some of the psychotropic drugs), so if the enzymes are suppressed, drug levels can build up to potentially toxic levels, and these effects can persist for extended periods of time after the consumption of grapefruit juice. As of 2013, at least 85 drugs have been shown to adversely interact with grapefruit juice (Bailey, Dresser, & Arnold, 2013). Some psychotropic drugs likely to interact with grapefruit juice include carbamazepine (Tegretol), prescribed for bipolar disorder; diazepam (Valium), used to treat anxiety, alcohol withdrawal, and muscle spasms; and fluvoxamine (Luvox), used to treat obsessive compulsive disorder and depression. A link at the end of this module gives the latest list of drugs reported to have this unusual interaction.
Individualized Therapy, Metabolic Differences, and Potential Prescribing Approaches for the Future
Mental illnesses contribute to more disability in Western countries than all other illnesses, including cancer and heart disease. Depression alone is predicted to be the second largest contributor to disease burden by 2020 (World Health Organization, 2004). The number of people affected by mental health issues is striking: an estimated 25% of adults experience a mental health issue in any given year, and this affects not only the individual but also their friends and family. One in 17 adults experiences a serious mental illness (Kessler, Chiu, Demler, & Walters, 2005). Newer antidepressants are probably the most frequently prescribed drugs for treating mental health issues, although there is no “magic bullet” for treating depression or other conditions. Pharmacotherapy combined with psychological therapy may be the most beneficial treatment approach for many psychiatric conditions, but there are still many unanswered questions. For example, why does one antidepressant help one individual yet have no effect for another? Antidepressants can take 4 to 6 weeks to start improving depressive symptoms, and we do not really understand why. Many people do not respond to the first antidepressant prescribed and may have to try several drugs before finding one that works for them. Other people simply do not improve with antidepressants (Ioannidis, 2008). The better we understand why individuals differ, the more easily and rapidly we will be able to help people in distress.
One area that has received interest recently is an individualized treatment approach. We now know that there are genetic differences in some of the cytochrome P450 enzymes and their ability to break down drugs. The general population falls into the following 4 categories: 1) ultra-extensive metabolizers break down certain drugs (like some of the current antidepressants) very, very quickly, 2) extensive metabolizers are also able to break down drugs fairly quickly, 3) intermediate metabolizers break down drugs more slowly than either of the two above groups, and finally 4) poor metabolizers break down drugs much more slowly than all of the other groups. Now consider someone receiving a prescription for an antidepressant – what would the consequences be if they were either an ultra-extensive metabolizer or a poor metabolizer? The ultra-extensive metabolizer would be given an antidepressant and told it will probably take 4 to 6 weeks to begin working (this is true), but they metabolize the medication so quickly that it will never be effective for them. In contrast, the poor metabolizer given the same daily dose of the same antidepressant may build up such high levels of the drug in their blood (because they are not breaking it down) that they will experience a wide range of side effects and feel really bad – also not a positive outcome. What if, instead, prior to prescribing an antidepressant, the doctor could take a blood sample and determine which type of metabolizer a patient actually was? They could then make a much more informed decision about the best dose to prescribe. New genetic tests are now available to individualize treatment in just this way: a blood sample can determine (at least for some drugs) which category an individual fits into, but we need data to determine whether this actually is effective for treating depression or other mental illnesses (Zhou, 2009).
Currently, this genetic test is expensive and not many health insurance plans cover this screen, but this may be an important component in the future of psychopharmacology.
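Why the same daily dose produces such different outcomes across the four metabolizer categories can be illustrated with a toy simulation of repeated once-daily dosing under first-order elimination. This is a rough pedagogical sketch, not a clinical model: the elimination rate constants and the dose are made-up values chosen only to contrast the categories described in the text; real pharmacokinetic parameters vary by drug and individual.

```python
import math

def trough_levels(dose, k_elim, days):
    """Return the drug level just before each daily dose (arbitrary units),
    assuming instantaneous absorption and first-order elimination."""
    level = 0.0
    troughs = []
    for _ in range(days):
        level += dose                      # take the daily dose
        level *= math.exp(-k_elim * 24)    # first-order decay over 24 hours
        troughs.append(level)
    return troughs

# Hypothetical elimination rate constants (per hour) for each category —
# these numbers are illustrative, not drawn from any real drug.
categories = {
    "ultra-extensive": 0.20,
    "extensive": 0.10,
    "intermediate": 0.05,
    "poor": 0.01,
}

for name, k in categories.items():
    final = trough_levels(dose=100, k_elim=k, days=28)[-1]
    print(f"{name:>15}: trough level after 4 weeks ≈ {final:.1f}")
```

Under these assumed rates, the ultra-extensive metabolizer's trough level stays near zero (the drug is cleared almost completely between doses, so it may never reach an effective concentration), while the poor metabolizer accumulates a level hundreds of times higher on the identical regimen, consistent with the side-effect scenario described above.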
Other Controversial Issues
Juveniles and Psychopharmacology
A recent Centers for Disease Control and Prevention (CDC) report has suggested that as many as 1 in 5 children between the ages of 5 and 17 may have some type of mental disorder (e.g., ADHD, autism, anxiety, depression) (CDC, 2013). The incidence of bipolar disorder in children and adolescents has also increased 40 times in the past decade (Moreno, Laje, Blanco, Jiang, Schmidt, & Olfson, 2007), and it is now estimated that 1 in 88 children has been diagnosed with an autism spectrum disorder (CDC, 2011). Why has there been such an increase in these numbers? There is no single answer to this important question. Some believe that greater public awareness has contributed to increased teacher and parent referrals. Others argue that the increase stems from changes in the criteria currently used for diagnosis. Still others suggest that environmental factors, either prenatal or postnatal, have contributed to this upsurge.
We do not have an answer, but the question does bring up an additional controversy related to how we should treat this population of children and adolescents. Many psychotropic drugs used for treating psychiatric disorders have been tested in adults, but few have been tested for safety or efficacy with children or adolescents. The most well-established psychotropics prescribed for children and adolescents are the psychostimulant drugs used for treating attention deficit hyperactivity disorder (ADHD), and there are clinical data on how effective these drugs are. However, we know far less about the safety and efficacy in young populations of the drugs typically prescribed for treating anxiety, depression, or other psychiatric disorders. The young brain continues to mature until probably well after age 20, so some scientists are concerned that drugs that alter neuronal activity in the developing brain could have significant consequences. There is an obvious need for clinical trials in children and adolescents to test the safety and effectiveness of many of these drugs, which also brings up a variety of ethical questions about who decides what children and adolescents will participate in these clinical trials, who can give consent, who receives reimbursements, etc.
The Elderly and Psychopharmacology
Another population that has not typically been included in clinical trials to determine the safety or effectiveness of psychotropic drugs is the elderly. Currently, there is very little high-quality evidence to guide prescribing for older people – clinical trials often exclude people with multiple comorbidities (other diseases, conditions, etc.), which are typical for elderly populations (see Hilmer & Gnjidic, 2008; Pollock, Forsyth, & Bies, 2008). This is a serious issue because the elderly consume a disproportionate share of prescribed medications. The term polypharmacy refers to the use of multiple drugs, which is very common in elderly populations in the United States. As our population ages, some estimate that the proportion of people 65 or older will reach 20% of the U.S. population by 2030, with this group consuming 40% of the prescribed medications. As shown in Table 3 (from Schwartz & Abernethy, 2008), it is quite clear why the typical clinical trial that looks at the safety and effectiveness of psychotropic drugs can be problematic if we try to interpret its results for an elderly population.
Metabolism of drugs is often slowed considerably for elderly populations, so less drug can produce the same effect (or all too often, too much drug can result in a variety of side effects). One of the greatest risk factors for elderly populations is falling (and breaking bones), which can happen if the elderly person gets dizzy from too much of a drug. There is also evidence that psychotropic medications can reduce bone density (thus worsening the consequences if someone falls) (Brown & Mezuk, 2012). Although we are gaining an awareness about some of the issues facing pharmacotherapy in older populations, this is a very complex area with many medical and ethical questions.
This module provided an introduction to some of the important areas in the field of psychopharmacology, and it has only touched on a number of the topics included in this field. It should be apparent that understanding more about psychopharmacology is important to anyone interested in understanding behavior, and that our understanding of issues in this field has important implications for society.
Outside Resources
Video: Neurotransmission
Web: Description of how some drugs work and the brain areas involved - 1
www.drugabuse.gov/news-events...rotransmission
Web: Description of how some drugs work and the brain areas involved - 2
http://learn.genetics.utah.edu/content/addiction/mouse/
Web: Information about how neurons communicate and the reward pathways
http://learn.genetics.utah.edu/content/addiction/rewardbehavior/
Web: National Institute of Alcohol Abuse and Alcoholism
http://www.niaaa.nih.gov/
Web: National Institute of Drug Abuse
http://www.drugabuse.gov/
Web: National Institute of Mental Health
http://www.nimh.nih.gov/index.shtml
Web: Neurotransmission
science.education.nih.gov/su...nsmission.html
Web: Report of the Working Group on Psychotropic Medications for Children and Adolescents: Psychopharmacological, Psychosocial, and Combined Interventions for Childhood Disorders: Evidence Base, Contextual Factors, and Future Directions (2008):
http://www.apa.org/pi/families/resources/child-medications.pdf
Web: Ways drugs can alter neurotransmission
http://thebrain.mcgill.ca/flash/d/d_03/d_03_m/d_03_m_par/d_03_m_par.html
Discussion Questions
1. What are some of the issues surrounding prescribing medications for children and adolescents? How might this be improved?
2. What are some of the factors that can affect relapse to an addictive drug?
3. How might prescribing medications for depression be improved in the future to increase the likelihood that a drug would work and minimize side effects?
Vocabulary
Agonist
A drug that increases or enhances a neurotransmitter’s effect.
Antagonist
A drug that blocks a neurotransmitter’s effect.
Enzyme
A protein produced by a living organism that allows or helps a chemical reaction to occur.
Enzyme induction
Process through which a drug can enhance the production of an enzyme.
Metabolism
Breakdown of substances.
Neurotransmitter
A chemical substance produced by a neuron that is used for communication between neurons.
Pharmacokinetics
The movement of a drug through the body, including absorption, distribution, metabolism, and excretion.
Polypharmacy
The use of many medications.
Psychoactive drug
A drug that changes mood or the way someone feels.
Psychotropic drug
A drug that changes mood or emotion, usually used when talking about drugs prescribed for various mental conditions (depression, anxiety, schizophrenia, etc.).
Synapse
The tiny space separating neurons.
11: Well-Being
By Emily Hooker and Sarah Pressman
University of California, Irvine
Our emotions, thoughts, and behaviors play an important role in our health. Not only do they influence our day-to-day health practices, but they can also influence how our body functions. This module provides an overview of health psychology, which is a field devoted to understanding the connections between psychology and health. Discussed here are examples of topics a health psychologist might study, including stress, psychosocial factors related to health and disease, how to use psychology to improve health, and the role of psychology in medicine.
learning objectives
• Describe basic terminology used in the field of health psychology.
• Explain theoretical models of health, as well as the role of psychological stress in the development of disease.
• Describe psychological factors that contribute to resilience and improved health.
• Defend the relevance and importance of psychology to the field of medicine.
What Is Health Psychology?
Today, we face more chronic disease than ever before because we are living longer lives while also frequently behaving in unhealthy ways. One example of a chronic disease is coronary heart disease (CHD): It is the number one cause of death worldwide (World Health Organization, 2013). CHD develops slowly over time and typically appears midlife, but related heart problems can persist for years after the original diagnosis or cardiovascular event. In managing illnesses that persist over time (other examples might include cancer, diabetes, and long-term disability) many psychological factors will determine the progression of the ailment. For example, do patients seek help when appropriate? Do they follow doctor recommendations? Do they develop negative psychological symptoms due to lasting illness (e.g., depression)? Also important is that psychological factors can play a significant role in who develops these diseases, the prognosis, and the nature of the symptoms related to the illness. Health psychology is a relatively new, interdisciplinary field of study that focuses on these very issues, or more specifically, the role of psychology in maintaining health, as well as preventing and treating illness.
Consideration of how psychological and social factors influence health is especially important today because many of the leading causes of illness in developed countries are often attributed to psychological and behavioral factors. In the case of CHD, discussed above, psychosocial factors, such as excessive stress, smoking, unhealthy eating habits, and some personality traits can also lead to increased risk of disease and worse health outcomes. That being said, many of these factors can be adjusted using psychological techniques. For example, clinical health psychologists can improve health practices like poor dietary choices and smoking, they can teach important stress reduction techniques, and they can help treat psychological disorders tied to poor health. Health psychology considers how the choices we make, the behaviors we engage in, and even the emotions that we feel, can play an important role in our overall health (Cohen & Herbert, 1996; Taylor, 2012).
Health psychology relies on the Biopsychosocial Model of Health. This model posits that psychological and social factors are just as important in the development of disease as biological causes (e.g., germs, viruses), which is consistent with the World Health Organization (1946) definition of health. This model replaces the older Biomedical Model of Health, which primarily considers the physical, or pathogenic, factors contributing to illness. Thanks to advances in medical technology, there is a growing understanding of the physiology underlying the mind–body connection, and in particular, the role that different feelings can have on our body’s function. Health psychology researchers working in the fields of psychosomatic medicine and psychoneuroimmunology, for example, are interested in understanding how psychological factors can “get under the skin” and influence our physiology in order to better understand how factors like stress can make us sick.
Stress And Health
You probably know exactly what it’s like to feel stress, but what you may not know is that it can objectively influence your health. Answers to questions like, “How stressed do you feel?” or “How overwhelmed do you feel?” can predict your likelihood of developing both minor illnesses as well as serious problems like future heart attack (Cohen, Janicki-Deverts, & Miller, 2007). (Want to measure your own stress level? Check out the links at the end of the module.) To understand how health psychologists study these types of associations, we will describe one famous example of a stress and health study. Imagine that you are a research subject for a moment. After you check into a hotel room as part of the study, the researchers ask you to report your general levels of stress. Not too surprising; however, what happens next is that you receive droplets of cold virus into your nose! The researchers intentionally try to make you sick by exposing you to an infectious illness. After they expose you to the virus, the researchers will then evaluate you for several days by asking you questions about your symptoms, monitoring how much mucus you are producing by weighing your used tissues, and taking body fluid samples—all to see if you are objectively ill with a cold. Now, the interesting thing is that not everyone who has drops of cold virus put in their nose develops the illness. Studies like this one find that people who are less stressed and those who are more positive at the beginning of the study are at a decreased risk of developing a cold (Cohen, Tyrrell, & Smith, 1991; Cohen, Alper, Doyle, Treanor, & Turner, 2006) (see Figure 10.4.1 for an example).
Importantly, it is not just major life stressors (e.g., a family death, a natural disaster) that increase the likelihood of getting sick. Even small daily hassles like getting stuck in traffic or fighting with your girlfriend can raise your blood pressure, alter your stress hormones, and even suppress your immune system function (DeLongis, Folkman, & Lazarus, 1988; Twisk, Snel, Kemper, & van Mechelen, 1999).
It is clear that stress plays a major role in our mental and physical health, but what exactly is it? The term stress was originally derived from the field of mechanics, where it is used to describe materials under pressure. The word was first used in a psychological manner by researcher Hans Selye. He was examining the effect of an ovarian hormone that he thought caused sickness in a sample of rats. Surprisingly, he noticed that almost any injected hormone produced this same sickness. He smartly realized that it was not the hormone under investigation that was causing these problems, but instead, the aversive experience of being handled and injected by researchers that led to high physiological arousal and, eventually, to health problems like ulcers. Selye (1946) coined the term stressor to label a stimulus that had this effect on the body and developed a model of the stress response called the General Adaptation Syndrome. Since then, psychologists have studied stress in a myriad of ways, including stress as negative events (e.g., natural disasters or major life changes like dropping out of school), as chronically difficult situations (e.g., taking care of a loved one with Alzheimer’s), as short-term hassles, as a biological fight-or-flight response, and even as clinical illness like post-traumatic stress disorder (PTSD). It continues to be one of the most important and well-studied psychological correlates of illness, because excessive stress causes potentially damaging wear and tear on the body and can influence almost any imaginable disease process.
Protecting Our Health
An important question that health psychologists ask is: What keeps us protected from disease and alive longer? When considering this issue of resilience (Rutter, 1985), five factors are often studied in terms of their ability to protect (or sometimes harm) health. They are:
1. Coping
2. Control and Self-Efficacy
3. Social Relationships
4. Dispositions and Emotions
5. Stress Management
Coping Strategies
How individuals cope with the stressors they face can have a significant impact on health. Coping is often classified into two categories: problem-focused coping or emotion-focused coping (Carver, Scheier, & Weintraub, 1989). Problem-focused coping is thought of as actively addressing the event that is causing stress in an effort to solve the issue at hand. For example, say you have an important exam coming up next week. A problem-focused strategy might be to spend additional time over the weekend studying to make sure you understand all of the material. Emotion-focused coping, on the other hand, regulates the emotions that come with stress. In the above examination example, this might mean watching a funny movie to take your mind off the anxiety you are feeling. In the short term, emotion-focused coping might reduce feelings of stress, but problem-focused coping seems to have the greatest impact on mental wellness (Billings & Moos, 1981; Herman-Stabl, Stemmler, & Petersen, 1995). That being said, when events are uncontrollable (e.g., the death of a loved one), emotion-focused coping directed at managing your feelings, at first, might be the better strategy. Therefore, it is always important to consider the match of the stressor to the coping strategy when evaluating its plausible benefits.
Control and Self-Efficacy
Another factor tied to better health outcomes and an improved ability to cope with stress is having the belief that you have control over a situation. For example, in one study where participants were forced to listen to unpleasant (stressful) noise, those who were led to believe that they had control over the noise performed much better on proofreading tasks afterwards (Glass & Singer, 1972). In other words, even though participants did not have actual control over the noise, the control belief aided them in completing the task. In similar studies, perceived control benefited immune system functioning (Sieber et al., 1992). Outside of the laboratory, studies have shown that older residents in assisted living facilities, which are notorious for low control, lived longer and showed better health outcomes when given control over something as simple as watering a plant or choosing when student volunteers came to visit (Rodin & Langer, 1977; Schulz & Hanusa, 1978). In addition, feeling in control of a threatening situation can actually change stress hormone levels (Dickerson & Kemeny, 2004). Believing that you have control over your own behaviors can also have a positive influence on important outcomes like smoking cessation, contraception use, and weight management (Wallston & Wallston, 1978). When individuals do not believe they have control, they do not try to change. Self-efficacy is closely related to control, in that people with high levels of this trait believe they can complete tasks and reach their goals. Just as feeling in control can reduce stress and improve health, higher self-efficacy can reduce stress and negative health behaviors, and is associated with better health (O’Leary, 1985).
Social Relationships
Research has shown that the impact of social isolation on our risk for disease and death is similar in magnitude to the risk associated with smoking regularly (Holt-Lunstad, Smith, & Layton, 2010; House, Landis, & Umberson, 1988). In fact, the importance of social relationships for our health is so significant that some scientists believe our body has developed a physiological system that encourages us to seek out our relationships, especially in times of stress (Taylor et al., 2000). Social integration is the concept used to describe the number of social roles that you have (Cohen & Wills, 1985), as well as the lack of isolation. For example, you might be a daughter, a basketball team member, a Humane Society volunteer, a coworker, and a student. Maintaining these different roles can improve your health via encouragement from those around you to maintain a healthy lifestyle. Those in your social network might also provide you with social support (e.g., when you are under stress). This support might include emotional help (e.g., a hug when you need it), tangible help (e.g., lending you money), or advice. By helping to improve health behaviors and reduce stress, social relationships can have a powerful, protective impact on health, and in some cases, might even help people with serious illnesses stay alive longer (Spiegel, Kraemer, Bloom, & Gottheil, 1989).
Dispositions and Emotions: What’s Risky and What’s Protective?
Negative dispositions and personality traits have been strongly tied to an array of health risks. One of the earliest negative trait-to-health connections was discovered in the 1950s by two cardiologists. They made the interesting discovery that there were common behavioral and psychological patterns among their heart patients that were not present in other patient samples. This pattern included being competitive, impatient, hostile, and time urgent. They labeled it Type A Behavior. Importantly, it was found to be associated with double the risk of heart disease as compared with Type B Behavior (Friedman & Rosenman, 1959). Since the 1950s, researchers have discovered that it is the hostility and competitiveness components of Type A that are especially harmful to heart health (Iribarren et al., 2000; Matthews, Glass, Rosenman, & Bortner, 1977; Miller, Smith, Turner, Guijarro, & Hallet, 1996). Hostile individuals are quick to get upset, and this angry arousal can damage the arteries of the heart. In addition, given their negative personality style, hostile people often lack a health-protective supportive social network.
Positive traits and states, on the other hand, are often health protective. For example, characteristics like positive emotions (e.g., feeling happy or excited) have been tied to a wide range of benefits such as increased longevity, a reduced likelihood of developing some illnesses, and better outcomes once you are diagnosed with certain diseases (e.g., heart disease, HIV) (Pressman & Cohen, 2005). Across the world, even in the most poor and underdeveloped nations, positive emotions are consistently tied to better health (Pressman, Gallagher, & Lopez, 2013). Positive emotions can also serve as the “antidote” to stress, protecting us against some of its damaging effects (Fredrickson, 2001; Pressman & Cohen, 2005; see Figure 10.4.2). Similarly, looking on the bright side can also improve health. Optimism has been shown to improve coping, reduce stress, and predict better disease outcomes like recovering from a heart attack more rapidly (Kubzansky, Sparrow, Vokonas, & Kawachi, 2001; Nes & Segerstrom, 2006; Scheier & Carver, 1985; Segerstrom, Taylor, Kemeny, & Fahey, 1998).
Stress Management
About 20 percent of Americans report having stress, with 18–33 year-olds reporting the highest levels (American Psychological Association, 2012). Given that the sources of our stress are often difficult to change (e.g., personal finances, current job), a number of interventions have been designed to help reduce the aversive responses to duress. For example, relaxation activities and forms of meditation are techniques that allow individuals to reduce their stress via breathing exercises, muscle relaxation, and mental imagery. Physiological arousal from stress can also be reduced via biofeedback, a technique where the individual is shown bodily information that is not normally available to them (e.g., heart rate), and then taught strategies to alter this signal. This type of intervention has even shown promise in reducing heart disease and hypertension risk, as well as other serious conditions (e.g., Moravec, 2008; Patel, Marmot, & Terry, 1981). But reducing stress does not have to be complicated! For example, exercise is a great stress reduction activity (Salmon, 2001) that has a myriad of health benefits.
The Importance Of Good Health Practices
As a student, you probably strive to maintain good grades, to have an active social life, and to stay healthy (e.g., by getting enough sleep), but there is a popular joke about what it’s like to be in college: you can only pick two of these things (see Figure 10.4.3 for an example). The busy life of a college student doesn’t always allow you to maintain all three areas of your life, especially during test-taking periods. In one study, researchers found that students taking exams were more stressed and, thus, smoked more, drank more caffeine, had less physical activity, and had worse sleep habits (Oaten & Cheng, 2005), all of which could have detrimental effects on their health. Positive health practices are especially important in times of stress when your immune system is compromised due to high stress and the elevated frequency of exposure to the illnesses of your fellow students in lecture halls, cafeterias, and dorms.
Psychologists study both health behaviors and health habits. The former are behaviors that can improve or harm your health. Some examples include regular exercise, flossing, and wearing sunscreen, versus negative behaviors like drunk driving, pulling all-nighters, or smoking. These behaviors become habits when they are firmly established and performed automatically. For example, do you have to think about putting your seatbelt on or do you do it automatically? Habits are often developed early in life thanks to parental encouragement or the influence of our peer group.
While these behaviors sound minor, studies have shown that those who engaged in more of these protective habits (e.g., getting 7–8 hours of sleep regularly, not smoking or drinking excessively, exercising) had fewer illnesses, felt better, and were less likely to die over a 9–12-year follow-up period (Belloc & Breslow 1972; Breslow & Enstrom 1980). For college students, health behaviors can even influence academic performance. For example, poor sleep quality and quantity are related to weaker learning capacity and academic performance (Curcio, Ferrara, & De Gennaro, 2006). Due to the effects that health behaviors can have, much effort is put forward by psychologists to understand how to change unhealthy behaviors, and to understand why individuals fail to act in healthy ways. Health promotion involves enabling individuals to improve health by focusing on behaviors that pose a risk for future illness, as well as spreading knowledge on existing risk factors. These might be genetic risks you are born with, or something you developed over time like obesity, which puts you at risk for Type 2 diabetes and heart disease, among other illnesses.
Psychology And Medicine
There are many psychological factors that influence medical treatment outcomes. For example, older individuals (Meara, White, & Cutler, 2004), women (Briscoe, 1987), and those from higher socioeconomic backgrounds (Adamson, Ben-Shlomo, Chaturvedi, & Donovan, 2008) are all more likely to seek medical care. On the other hand, some individuals who need care might avoid it due to financial obstacles or preconceived notions about medical practitioners or the illness. Thanks to the growing amount of medical information online, many people now use the Internet for health information, and 38% report that this influences their decision to see a doctor (Fox & Jones, 2009). Unfortunately, this is not always a good thing because individuals tend to do a poor job assessing the credibility of health information. For example, college-student participants reading online articles about HIV and syphilis rated a physician’s article and a college student’s article as equally credible if the participants said they were familiar with the health topic (Eastin, 2001). Credibility of health information often means how accurate or trustworthy the information is, and it can be influenced by irrelevant factors, such as the website’s design, logos, or the organization’s contact information (Freeman & Spyridakis, 2004). Similarly, many people post health questions on online, unmoderated forums where anyone can respond, which allows for the possibility of inaccurate information being provided for serious medical conditions by unqualified individuals.
After individuals decide to seek care, there is also variability in the information they give their medical provider. Poor communication (e.g., due to embarrassment or feeling rushed) can influence the accuracy of the diagnosis and the effectiveness of the prescribed treatment. Similarly, there is variation in what happens after a visit to the doctor. While most individuals leave with a health recommendation (e.g., buying and using a medication appropriately, losing weight, seeing another specialist), not everyone adheres to medical recommendations (Dunbar-Jacob & Mortimer-Stephens, 2010). For example, many individuals take medications inappropriately (e.g., stopping early, not filling prescriptions) or fail to change their behaviors (e.g., quitting smoking). Unfortunately, getting patients to follow medical orders is not as easy as one would think. For example, in one study, over one third of diabetic patients failed to get proper medical care that would prevent or slow down diabetes-related blindness (Schoenfeld, Greene, Wu, & Leske, 2001)! Fortunately, as mobile technology improves, physicians now have the ability to monitor adherence and work to improve it (e.g., with pill bottles that monitor whether they are opened at the right time). Even text messages are useful for improving treatment adherence and outcomes in depression, smoking cessation, and weight loss (Cole-Lewis & Kershaw, 2010).
Being A Health Psychologist
Training as a clinical health psychologist provides a variety of possible career options. Clinical health psychologists often work on teams of physicians, social workers, allied health professionals, and religious leaders. These teams may be formed in locations like rehabilitation centers, hospitals, primary care offices, emergency care centers, or chronic illness clinics. Work in each of these settings will pose unique challenges in patient care, but the primary responsibility will be the same. Clinical health psychologists will evaluate physical, personal, and environmental factors contributing to illness and preventing improved health. In doing so, they will then help create a treatment strategy that takes into account all dimensions of a person’s life and health, maximizing its potential for success. Those who specialize in health psychology can also conduct research to discover new health predictors and risk factors, or develop interventions to prevent and treat illness. Researchers studying health psychology work in numerous locations, such as universities, public health departments, hospitals, and private organizations. In the related field of behavioral medicine, careers focus on the application of this type of research. Occupations in this area might include jobs in occupational therapy, rehabilitation, or preventative medicine. Training as a health psychologist provides a wide skill set applicable in a number of different professional settings and career paths.
The Future Of Health Psychology
Much of the past medical research literature provides an incomplete picture of human health. “Health care” is often “illness care.” That is, it focuses on the management of symptoms and illnesses as they arise. As a result, in many developed countries, we are faced with several health epidemics that are difficult and costly to treat. These include obesity, diabetes, and cardiovascular disease, to name a few. The National Institutes of Health have called for researchers to use the knowledge we have about risk factors to design effective interventions to reduce the prevalence of preventable illness. Additionally, there are a growing number of individuals across developed countries with multiple chronic illnesses and/or lasting disabilities, especially with older age. Addressing their needs and maintaining their quality of life will require skilled individuals who understand how to properly treat these populations. Health psychologists will be at the forefront of work in these areas.
With this focus on prevention, it is important that health psychologists move beyond studying risk (e.g., depression, stress, hostility, low socioeconomic status) in isolation, and move toward studying factors that confer resilience and protection from disease. There is, fortunately, a growing interest in studying the positive factors that protect our health (e.g., Diener & Chan, 2011; Pressman & Cohen, 2005; Richman, Kubzansky, Maselko, Kawachi, Choo, & Bauer, 2005) with evidence strongly indicating that people with higher positivity live longer, suffer fewer illnesses, and generally feel better. Seligman (2008) has even proposed a field of “Positive Health” to specifically study those who exhibit “above average” health—something we do not think about enough. By shifting some of the research focus to identifying and understanding these health-promoting factors, we may capitalize on this information to improve public health.
Innovative interventions to improve health are already in use and continue to be studied. With recent advances in technology, we are starting to see great strides made to improve health with the aid of computational tools. For example, there are hundreds of simple applications (apps) that use email and text messages to send reminders to take medication, as well as mobile apps that allow us to monitor our exercise levels and food intake (in the growing mobile-health, or m-health, field). These m-health applications can be used to raise health awareness, support treatment and compliance, and remotely collect data on a variety of outcomes. Also exciting are devices that allow us to monitor physiology in real time; for example, to better understand the stressful situations that raise blood pressure or heart rate. With advances like these, health psychologists will be able to serve the population better, learn more about health and health behavior, and develop excellent health-improving strategies that could be specifically targeted to certain populations or individuals. These leaps in equipment development, partnered with growing health psychology knowledge and exciting advances in neuroscience and genetic research, will lead health researchers and practitioners into an exciting new time where, hopefully, we will understand more and more about how to keep people healthy.
Outside Resources
App: 30 iPhone apps to monitor your health
http://www.hongkiat.com/blog/iphone-health-app/
Quiz: Hostility
http://www.mhhe.com/socscience/hhp/f...sheet_090.html
Self-assessment: Perceived Stress Scale
www.ncsu.edu/assessment/resou...ress_scale.pdf
Self-assessment: What’s your real age (based on your health practices and risk factors)?
http://www.realage.com
Video: Try out a guided meditation exercise to reduce your stress
Web: American Psychosomatic Society
http://www.psychosomatic.org/home/index.cfm
Web: APA Division 38, Health Psychology
http://www.health-psych.org
Web: Society of Behavioral Medicine
http://www.sbm.org
Discussion Questions
1. What psychological factors contribute to health?
2. Which psychosocial constructs and behaviors might help protect us from the damaging effects of stress?
3. What kinds of interventions might help to improve resilience? Who will these interventions help the most?
4. How should doctors use research in health psychology when meeting with patients?
5. Why do clinical health psychologists play a critical role in improving public health?
Vocabulary
Adherence
In health, it is the ability of a patient to maintain a health behavior prescribed by a physician. This might include taking medication as prescribed, exercising more, or eating less high-fat food.
Behavioral medicine
A field similar to health psychology that integrates psychological factors (e.g., emotion, behavior, cognition, and social factors) in the treatment of disease. This applied field includes clinical areas of study, such as occupational therapy, hypnosis, rehabilitation medicine, and preventative medicine.
Biofeedback
The process by which physiological signals, not normally available to human perception, are transformed into easy-to-understand graphs or numbers. Individuals can then use this information to try to change bodily functioning (e.g., lower blood pressure, reduce muscle tension).
Biomedical Model of Health
A reductionist model that posits that ill health is a result of a deviation from normal function, which is explained by the presence of pathogens, injury, or genetic abnormality.
Biopsychosocial Model of Health
An approach to studying health and human function that posits the importance of biological, psychological, and social (or environmental) processes.
Chronic disease
A health condition that persists over time, typically for periods longer than three months (e.g., HIV, asthma, diabetes).
Control
Feeling like you have the power to change your environment or behavior if you need or want to.
Daily hassles
Irritations in daily life that are not necessarily traumatic, but that cause difficulties and repeated stress.
Emotion-focused coping
Coping strategy aimed at reducing the negative emotions associated with a stressful event.
General Adaptation Syndrome
A three-phase model of stress, which includes a mobilization of physiological resources phase, a coping phase, and an exhaustion phase (i.e., when an organism fails to cope with the stress adequately and depletes its resources).
Health
According to the World Health Organization, it is a complete state of physical, mental, and social well-being and not merely the absence of disease or infirmity.
Health behavior
Any behavior that is related to health—either good or bad.
Hostility
An experience or trait with cognitive, behavioral, and emotional components. It often includes cynical thoughts, feelings of anger, and aggressive behavior.
Mind–body connection
The idea that our emotions and thoughts can affect how our body functions.
Problem-focused coping
A set of coping strategies aimed at improving or changing stressful situations.
Psychoneuroimmunology
A field of study examining the relationship among psychology, brain function, and immune function.
Psychosomatic medicine
An interdisciplinary field of study that focuses on how biological, psychological, and social processes contribute to physiological changes in the body and health over time.
Resilience
The ability to “bounce back” from negative situations (e.g., illness, stress) to normal functioning or to simply not show poor outcomes in the face of adversity. In some cases, resilience may lead to better functioning following the negative experience (e.g., post-traumatic growth).
Self-efficacy
The belief that one can perform adequately in a specific situation.
Social integration
The size of your social network, or number of social roles (e.g., son, sister, student, employee, team member).
Social support
The perception or actuality that we have a social network that can help us in times of need and provide us with a variety of useful resources (e.g., advice, love, money).
Stress
A pattern of physical and psychological responses in an organism after it perceives a threatening event that disturbs its homeostasis and taxes its abilities to cope with the event.
Stressor
An event or stimulus that induces feelings of stress.
Type A Behavior
Type A behavior is characterized by impatience, competitiveness, neuroticism, hostility, and anger.
Type B Behavior
Type B behavior reflects the absence of Type A characteristics and is represented by less competitive, aggressive, and hostile behavior patterns.
5α-reductase
An enzyme required to convert testosterone to 5α-dihydrotestosterone.
Abducens nucleus
A group of excitatory motor neurons in the medial brainstem that send projections through the VIth cranial nerve to control the ipsilateral lateral rectus muscle. In addition, abducens interneurons send an excitatory projection across the midline to a subdivision of cells in the contralateral oculomotor nucleus, which project through the IIIrd cranial nerve to innervate the ipsilateral medial rectus muscle.
Ablation
Surgical removal of brain tissue.
Acetylcholine
An organic compound neurotransmitter consisting of acetic acid and choline. Depending upon the receptor type, acetylcholine can have excitatory, inhibitory, or modulatory effects.
Adaptations
Evolved solutions to problems that historically contributed to reproductive success.
Adherence
In health, it is the ability of a patient to maintain a health behavior prescribed by a physician. This might include taking medication as prescribed, exercising more, or eating less high-fat food.
Adoption study
A behavior genetic research method that involves comparison of adopted children to their adoptive and biological parents.
Affect
An emotional process; includes moods, subjective feelings, and discrete emotions.
Afferent nerve fibers
Single neurons that innervate the receptor hair cells and carry vestibular signals to the brain as part of the vestibulocochlear nerve (cranial nerve VIII).
Afferent nerves
Nerves that carry messages to the brain or spinal cord.
A-fibers
Fast-conducting sensory nerves with myelinated axons. Larger diameter and thicker myelin sheaths increase conduction speed. Aβ-fibers conduct touch signals from low-threshold mechanoreceptors with a velocity of 80 m/s and a diameter of 10 μm; Aδ-fibers have a diameter of 2.5 μm and conduct cold, noxious, and thermal signals at 12 m/s. The third and fastest-conducting A-fiber is the Aα, which conducts proprioceptive information with a velocity of 120 m/s and a diameter of 20 μm.
Age identity
How old or young people feel compared to their chronological age; after early adulthood, most people feel younger than their chronological age.
Aggression
Any behavior intended to harm another person who does not want to be harmed.
Aggression
A form of social interaction that includes threat, attack, and fighting.
Agnosias
An inability to recognize objects, words, or faces, due to damage of Wernicke’s area.
Agonists
A drug that increases or enhances a neurotransmitter’s effect.
Agreeableness
A core personality trait that includes such dispositional characteristics as being sympathetic, generous, forgiving, and helpful, and behavioral tendencies toward harmonious social relations and likeability.
Agreeableness
A personality trait that reflects a person’s tendency to be compassionate, cooperative, warm, and caring to others. People low in agreeableness tend to be rude, hostile, and to pursue their own interests over those of others.
Allodynia
Pain due to a stimulus that does not normally provoke pain, e.g., when a light, stroking touch feels painful.
Altruism
A motivation for helping that has the improvement of another’s welfare as its ultimate goal, with no expectation of any benefits for the helper.
Ambulatory assessment
An overarching term to describe methodologies that assess the behavior, physiology, experience, and environments of humans in naturalistic settings.
Amygdala
A region located deep within the brain in the medial area (toward the center) of the temporal lobes (parallel to the ears). If you could draw a line through your eye sloping toward the back of your head and another line between your two ears, the amygdala would be located at the intersection of these lines. The amygdala is involved in detecting relevant stimuli in our environment and has been implicated in emotional responses.
Amygdala
Two almond-shaped structures located in the medial temporal lobes of the brain.
Analgesia
Pain relief.
Anchoring
The bias to be affected by an initial anchor, even if the anchor is arbitrary, and to insufficiently adjust our judgments away from that anchor.
Anhedonia
Loss of interest or pleasure in activities one previously found enjoyable or rewarding.
Animism
The belief that everyone and everything had a “soul” and that mental illness was due to animistic causes, for example, evil spirits controlling an individual and his/her behavior.
Antagonist
A drug that blocks a neurotransmitter’s effect.
Anterograde amnesia
Inability to form new memories for facts and events after the onset of amnesia.
Aphasia
An inability to produce or understand words, due to damage of Broca’s area.
Arcuate fasciculus
A fiber tract that connects Wernicke’s and Broca’s speech areas.
Aromatase
An enzyme that converts androgens into estrogens.
Arousal: cost–reward model
An egoistic theory proposed by Piliavin et al. (1981) that claims that seeing a person in need leads to the arousal of unpleasant feelings, and observers are motivated to eliminate that aversive state, often by helping the victim. A cost–reward analysis may lead observers to react in ways other than offering direct assistance, including indirect help, reinterpretation of the situation, or fleeing the scene.
Aspartate
An excitatory amino acid neurotransmitter that is widely used by vestibular receptors, afferents, and many neurons in the brain.
Asylum
A place of refuge or safety established to confine and care for the mentally ill; forerunners of the mental hospital or psychiatric facility.
Attributional style
The tendency by which a person infers the cause or meaning of behaviors or events.
Autobiographical memory
Memory for the events of one’s life.
Autobiographical narratives
A qualitative research method used to understand characteristics and life themes that an individual considers to uniquely distinguish him- or herself from others.
Automatic process
When a thought, feeling, or behavior occurs with little or no mental effort. Typically, automatic processes are described as involuntary or spontaneous, often resulting from a great deal of practice or repetition.
Autonomic nervous system
A part of the peripheral nervous system that connects to glands and smooth muscles. Consists of sympathetic and parasympathetic divisions.
Availability heuristic
The tendency to judge the frequency or likelihood of an event by the ease with which relevant instances come to mind.
Average life expectancy
Mean number of years that 50% of people in a specific birth cohort are expected to survive. This is typically calculated from birth but is also sometimes re-calculated for people who have already reached a particular age (e.g., 65).
Axial plane
See “horizontal plane.”
Basal ganglia
Subcortical structures of the cerebral hemispheres involved in voluntary movement.
Behavioral genetics
The empirical science of how genes and environments combine to generate behavior.
Behavioral medicine
A field similar to health psychology that integrates psychological factors (e.g., emotion, behavior, cognition, and social factors) in the treatment of disease. This applied field includes clinical areas of study, such as occupational therapy, hypnosis, rehabilitation medicine, and preventative medicine.
Behaviorism
The study of behavior.
Biases
The systematic and predictable mistakes that influence the judgment of even very talented human beings.
Binocular advantage
Benefits from having two eyes as opposed to a single eye.
Biofeedback
The process by which physiological signals, not normally available to human perception, are transformed into easy-to-understand graphs or numbers. Individuals can then use this information to try to change bodily functioning (e.g., lower blood pressure, reduce muscle tension).
Biomedical Model of Health
A reductionist model that posits that ill health is a result of a deviation from normal function, which is explained by the presence of pathogens, injury, or genetic abnormality.
Biopsychosocial model
A model in which the interaction of biological, psychological, and sociocultural factors is seen as influencing the development of the individual.
Biopsychosocial Model of Health
An approach to studying health and human function that posits the importance of biological, psychological, and social (or environmental) processes.
Blocking
In classical conditioning, the finding that no conditioning occurs to a stimulus if it is combined with a previously conditioned stimulus during conditioning trials. Suggests that information, surprise value, or prediction error is important in conditioning.
Blood-oxygen-level-dependent (BOLD)
The signal typically measured in fMRI that results from changes in the ratio of oxygenated hemoglobin to deoxygenated hemoglobin in the blood.
Bouncing balls illusion
The tendency to perceive two circles as bouncing off each other if the moment of their contact is accompanied by an auditory stimulus.
Bounded awareness
The systematic ways in which we fail to notice obvious and important information that is available to us.
Bounded ethicality
The systematic ways in which our ethics are limited in ways we are not even aware of ourselves.
Bounded rationality
Model of human behavior that suggests that humans try to make rational decisions but are bounded due to cognitive limitations.
Bounded self-interest
The systematic and predictable ways in which we care about the outcomes of others.
Bounded willpower
The tendency to place greater weight on present concerns rather than future concerns.
Brain stem
The “trunk” of the brain comprised of the medulla, pons, midbrain, and diencephalon.
Broca’s area
An area in the frontal lobe of the left hemisphere. Implicated in language production.
Bystander intervention
The phenomenon whereby people intervene to help others in need even if the other is a complete stranger and the intervention puts the helper at risk.
Callosotomy
Surgical procedure in which the corpus callosum is severed (used to control severe epilepsy).
Case study
A thorough study of a patient (or a few patients) with naturally occurring lesions.
Categorize
To sort or arrange different items into classes or categories.
Catharsis
Greek term that means to cleanse or purge. Applied to aggression, catharsis is the belief that acting aggressively or even viewing aggression purges angry feelings and aggressive impulses into harmless channels.
Cathartic method
A therapeutic procedure introduced by Breuer and developed further by Freud in the late 19th century whereby a patient gains insight and emotional relief from recalling and reliving traumatic events.
Central nervous system
The part of the nervous system that consists of the brain and spinal cord.
Central sulcus
The major fissure that divides the frontal and the parietal lobes.
Cerebellum
The distinctive structure at the back of the brain, Latin for “small brain.”
Cerebellum
A nervous system structure behind and below the cerebrum. Controls motor movement coordination, balance, equilibrium, and muscle tone.
Cerebral cortex
The outermost gray matter of the cerebrum; the distinctive convolutions characteristic of the mammalian brain.
Cerebral hemispheres
The cerebral cortex, underlying white matter, and subcortical structures.
Cerebrum
Consists of left and right hemispheres that sit at the top of the nervous system and engages in a variety of higher-order functions.
Cerebrum
Usually refers to the cerebral cortex and associated white matter, but in some texts includes the subcortical structures.
C-fibers
Slow-conducting, unmyelinated, thin sensory afferents with a diameter of 1 μm and a conduction velocity of approximately 1 m/s. C-pain fibers convey noxious, thermal, and heat signals; C-tactile fibers convey gentle touch and light stroking.
Chromosomal sex
The sex of an individual as determined by the sex chromosomes (typically XX or XY) received at the time of fertilization.
Chronic disease
A health condition that persists over time, typically for periods longer than three months (e.g., HIV, asthma, diabetes).
Chronic pain
Persistent or recurrent pain, beyond usual course of acute illness or injury; sometimes present without observable tissue damage or clear cause.
Chronic stress
Discrete or related problematic events and conditions which persist over time and result in prolonged activation of the biological and/or psychological stress response (e.g., unemployment, ongoing health difficulties, marital discord).
Chunk
The process of grouping information together using our knowledge.
Chutes and Ladders
A numerical board game that seems to be useful for building numerical knowledge.
Cingulate gyrus
A medial cortical portion of the nervous tissue that is a part of the limbic system.
Classical conditioning
The procedure in which an initially neutral stimulus (the conditioned stimulus, or CS) is paired with an unconditioned stimulus (or US). The result is that the conditioned stimulus begins to elicit a conditioned response (CR). Classical conditioning is nowadays considered important as both a behavioral phenomenon and as a method to study simple associative learning. Same as Pavlovian conditioning.
Classical conditioning
Describes stimulus-stimulus associative learning.
Cochlea
Snail-shell-shaped organ that transduces mechanical vibrations into neural signals.
Cognitive psychology
The study of mental processes.
Cohort
Group of people typically born in the same year or historical period, who share common experiences over time; sometimes called a generation (e.g., Baby Boom Generation).
Compensatory reflexes
A stabilizing motor reflex that occurs in response to a perceived movement, such as the vestibuloocular reflex, or the postural responses that occur during running or skiing.
Computerized axial tomography
A noninvasive brain-scanning procedure that uses X-ray absorption around the head.
Concrete operations stage
Piagetian stage between ages 7 and 12 when children can think logically about concrete situations but not engage in systematic scientific reasoning.
Conditioned aversions and preferences
Likes and dislikes developed through associations with pleasurable or unpleasurable sensations.
Conditioned compensatory response
In classical conditioning, a conditioned response that opposes, rather than is the same as, the unconditioned response. It functions to reduce the strength of the unconditioned response. Often seen in conditioning when drugs are used as unconditioned stimuli.
Conditioned response (CR)
The response that is elicited by the conditioned stimulus after classical conditioning has taken place.
Conditioned stimulus (CS)
An initially neutral stimulus (like a bell, light, or tone) that elicits a conditioned response after it has been associated with an unconditioned stimulus.
Cones
Photoreceptors that operate in lighted environments and can encode fine visual details. There are three different kinds (S or blue, M or green, and L or red) that are each sensitive to slightly different types of light. Combined, these three types of cones allow you to have color vision.
Conscientiousness
A personality trait that reflects a person’s tendency to be careful, organized, hardworking, and to follow rules.
Consciousness
Awareness of ourselves and our environment.
Conservation problems
Problems pioneered by Piaget in which physical transformation of an object or set of objects changes a perceptually salient dimension but not the quantity that is being asked about.
Consolidation
Process by which a memory trace is stabilized and transformed into a more durable form.
Consolidation
The process occurring after encoding that is believed to stabilize memory traces.
Context
Stimuli that are in the background whenever learning occurs. For instance, the Skinner box or room in which learning takes place is the classic example of a context. However, “context” can also be provided by internal stimuli, such as the sensory effects of drugs (e.g., being under the influence of alcohol has stimulus properties that provide a context) and mood states (e.g., being happy or sad). It can also be provided by a specific period in time—the passage of time is sometimes said to change the “temporal context.”
Continuous development
Ways in which development occurs in a gradual incremental manner, rather than through sudden jumps.
Continuous distributions
Characteristics can go from low to high, with all different intermediate values possible. One does not simply have the trait or not have it, but can possess varying amounts of it.
Contralateral
Literally “opposite side”; used to refer to the fact that the two hemispheres of the brain process sensory information and motor commands for the opposite side of the body (e.g., the left hemisphere controls the right side of the body).
Contrast
Relative difference in the amount and type of light coming from two nearby locations.
Contrast gain
Process where the sensitivity of your visual system can be tuned to be most sensitive to the levels of contrast that are most prevalent in the environment.
Control
Feeling like you have the power to change your environment or behavior if you need or want to.
Converging evidence
Similar findings reported from multiple studies using different methods.
Convoy Model of Social Relations
Theory that proposes that the frequency, types, and reciprocity of social exchanges change with age. These social exchanges impact the health and well-being of the givers and receivers in the convoy.
Coronal plane
A slice that runs from head to foot; brain slices in this plane are similar to slices of a loaf of bread, with the eyes being the front of the loaf.
Cortisol
A hormone made in the cortex of the adrenal glands. Cortisol helps the body maintain blood pressure and immune function. Cortisol increases when the body is under stress.
Cost–benefit analysis
A decision-making process that compares the cost of an action or thing against the expected benefit to help determine the best course of action.
C-pain or Aδ-fibers
C-pain fibers convey noxious, thermal, and heat signals.
Crossmodal phenomena
Effects that concern the influence of the perception of one sensory modality on the perception of another.
Crossmodal receptive field
A receptive field that can be stimulated by a stimulus from more than one sensory modality.
Crossmodal stimulus
A stimulus with components in multiple sensory modalities that interact with each other.
Cross-sectional studies
Research method that provides information about age group differences; age differences are confounded with cohort differences and effects related to history and time of study.
Crowds
Adolescent peer groups characterized by shared reputations or images.
Crystallized intelligence
Type of intellectual ability that relies on the application of knowledge, experience, and learned information.
C-tactile fibers
C-tactile fibers convey gentle touch and light stroking.
Cue overload principle
The principle stating that the more memories that are associated to a particular retrieval cue, the less effective the cue will be in prompting retrieval of any one memory.
Cultural display rules
These are rules that are learned early in life that specify the management and modification of emotional expressions according to social circumstances. Cultural display rules can work in a number of different ways. For example, they can require individuals to express emotions “as is” (i.e., as they feel them), to exaggerate their expressions to show more than what is actually felt, to tone down their expressions to show less than what is actually felt, to conceal their feelings by expressing something else, or to show nothing at all.
Cultural relativism
The idea that cultural norms and values of a society can only be understood on their own terms or in their own context.
Cutaneous senses
The senses of the skin: tactile, thermal, pruritic (itchy), painful, and pleasant.
Daily Diary method
A methodology where participants complete a questionnaire about their thoughts, feelings, and behavior of the day at the end of the day.
Daily hassles
Irritations in daily life that are not necessarily traumatic, but that cause difficulties and repeated stress.
Dark adaptation
Process that allows you to become sensitive to very small levels of light, so that you can actually see in the near-absence of light.
Day reconstruction method (DRM)
A methodology where participants describe their experiences and behavior of a given day retrospectively upon a systematic reconstruction on the following day.
Decay
The fading of memories with the passage of time.
Declarative memory
Conscious memories for facts and events.
Defeminization
The removal of the potential for female traits.
Demasculinization
The removal of the potential for male traits.
Deoxygenated hemoglobin
Hemoglobin not carrying oxygen.
Depolarization
A change in a cell’s membrane potential, making the inside of the cell more positive and increasing the chance of an action potential.
Depolarized
When a receptor hair cell's mechanically gated channels open, the cell increases its membrane voltage, producing a release of neurotransmitter that excites the innervating nerve fiber.
Depth perception
The ability to actively perceive the distance from oneself of objects in the environment.
Descending pain modulatory system
A top-down pain-modulating system able to inhibit or facilitate pain. The pathway produces analgesia by the release of endogenous opioids. Several brain structures and nuclei are part of this circuit, such as the frontal lobe areas of the anterior cingulate cortex, orbitofrontal cortex, and insular cortex; and nuclei in the amygdala and the hypothalamus, which all project to a structure in the midbrain called the periaqueductal grey (PAG). The PAG then controls ascending pain transmission from the afferent pain system indirectly through the rostral ventromedial medulla (RVM) in the brainstem, which uses ON- and OFF-cells to inhibit or facilitate nociceptive signals at the spinal dorsal horn.
Detection thresholds
The smallest amount of head motion that can be reliably reported by an observer.
Deviant peer contagion
The spread of problem behaviors within groups of adolescents.
Dichotic listening
An experimental task in which two messages are presented to different ears.
Differential susceptibility
Genetic factors that make individuals more or less responsive to environmental experiences.
Diffuse optical imaging (DOI)
A neuroimaging technique that infers brain activity by measuring changes in light as it is passed through the skull and surface of the brain.
Diffusion of responsibility
When deciding whether to help a person in need, knowing that there are others who could also provide assistance relieves bystanders of some measure of personal responsibility, reducing the likelihood that bystanders will intervene.
Dihydrotestosterone (DHT)
A primary androgen that is an androgenic steroid product of testosterone and binds strongly to androgen receptors.
Directional tuning
The preferred direction of motion that hair cells and afferents exhibit where a peak excitatory response occurs and the least preferred direction where no response occurs. Cells are said to be “tuned” for a best and worst direction of motion, with in-between motion directions eliciting a lesser but observable response.
Discontinuous development
A view of development as taking place in distinct, qualitatively different stages, rather than as a gradual, continuous process.
Discriminative stimulus
In operant conditioning, a stimulus that signals whether the response will be reinforced. It is said to “set the occasion” for the operant response.
Dissociative amnesia
Loss of autobiographical memories from a period in the past in the absence of brain injury or disease.
Distinctiveness
The principle that unusual events (in a context of similar events) will be recalled and recognized better than uniform (nondistinctive) events.
Divided attention
The ability to flexibly allocate attentional resources between two or more concurrent tasks.
DNA methylation
Covalent modifications of mammalian DNA occurring via the methylation of cytosine, typically in the context of the CpG dinucleotide.
DNA methyltransferases (DNMTs)
Enzymes that establish and maintain DNA methylation using methyl-group donor compounds or cofactors. The main mammalian DNMTs are DNMT1, which maintains methylation state across DNA replication, and DNMT3a and DNMT3b, which perform de novo methylation.
Double flash illusion
The false perception of two visual flashes when a single flash is accompanied by two auditory beeps.
Drive state
Affective experiences that motivate organisms to fulfill goals that are generally beneficial to their survival and reproduction.
Early adversity
Single or multiple acute or chronic stressful events, which may be biological or psychological in nature (e.g., poverty, abuse, childhood illness or injury), occurring during childhood and resulting in a biological and/or psychological stress response.
Ecological momentary assessment
An overarching term to describe methodologies that repeatedly sample participants’ real-world experiences, behavior, and physiology in real time.
Ecological validity
The degree to which a study finding has been obtained under conditions that are typical for what happens in everyday life.
Ectoderm
The outermost layer of a developing fetus.
Efferent nerves
Nerves that carry messages from the brain to glands and organs in the periphery.
Egoism
A motivation for helping that has the improvement of the helper’s own circumstances as its primary goal.
Electroencephalogram
A measure of electrical activity generated by the brain’s neurons.
Electroencephalography
A technique that is used to measure gross electrical activity of the brain by placing electrodes on the scalp.
Electroencephalography (EEG)
A neuroimaging technique that measures electrical brain activity via multiple electrodes on the scalp.
Electronically activated recorder, or EAR
A methodology where participants wear a small, portable audio recorder that intermittently records snippets of ambient sounds around them.
Emotion-focused coping
Coping strategy aimed at reducing the negative emotions associated with a stressful event.
Empathic concern
According to Batson’s empathy–altruism hypothesis, observers who empathize with a person in need (that is, put themselves in the shoes of the victim and imagine how that person feels) will experience empathic concern and have an altruistic motivation for helping.
Empathy–altruism model
An altruistic theory proposed by Batson (2011) that claims that people who put themselves in the shoes of a victim and imagine how the victim feels will experience empathic concern that evokes an altruistic motivation for helping.
Empirical methods
Approaches to inquiry that are tied to actual measurement and observation.
Empiricism
The belief that knowledge comes from experience.
Encoding
The initial experience of perceiving and learning events.
Encoding
Process by which information gets into memory.
Encoding
The act of putting information into memory.
Encoding specificity principle
The hypothesis that a retrieval cue will be effective to the extent that information encoded from the cue overlaps or matches information in the engram or memory trace.
Endocrine gland
A ductless gland from which hormones are released into the blood system in response to specific biological signals.
Endophenotypes
A characteristic that reflects a genetic liability for disease and a more basic component of a complex clinical presentation. Endophenotypes are less developmentally malleable than overt behavior.
Endorphin
An endogenous morphine-like peptide that binds to the opioid receptors in the brain and body; synthesized in the body’s nervous system.
Engrams
A term indicating the change in the nervous system representing an event; also, memory trace.
Enzyme
A protein produced by a living organism that allows or helps a chemical reaction to occur.
Enzyme induction
Process through which a drug can enhance the production of an enzyme.
Epigenetics
Heritable changes in gene activity that are not caused by changes in the DNA sequence.
Epigenetics
The study of heritable changes in gene expression or cellular phenotype caused by mechanisms other than changes in the underlying DNA sequence. Epigenetic marks include covalent DNA modifications and posttranslational histone modifications.
Epigenome
The genome-wide distribution of epigenetic marks.
Episodic memory
Memory for events in a particular time and place.
Error management theory (EMT)
A theory of selection under conditions of uncertainty in which recurrent cost asymmetries of judgment or inference favor the evolution of adaptive cognitive biases that function to minimize the more costly errors.
Estrogen
Any of the C18 class of steroid hormones, so named because of the estrus-generating properties in females. Biologically important estrogens include estradiol and estriol.
Ethics
Professional guidelines that offer researchers a template for making decisions that protect research participants from potential harm and that help steer scientists away from conflicts of interest or other situations that might compromise the integrity of their research.
Etiology
The causal description of all of the factors that contribute to the development of a disorder or illness.
Eugenics
The practice of selective breeding to promote desired traits.
Event-related potential (ERP)
A physiological measure of large electrical change in the brain produced by sensory stimulation or motor responses.
Event-related potential (ERP)
Measures the firing of groups of neurons in the cortex. As a person views or listens to specific types of information, neuronal activity creates small electrical currents that can be recorded from non-invasive sensors placed on the scalp. ERP provides excellent information about the timing of processing, clarifying brain activity at the millisecond pace at which it unfolds.
Evolution
Change over time.
Experience-sampling method
A methodology where participants report on their momentary thoughts, feelings, and behaviors at different points in time over the course of a day.
External validity
The degree to which a finding generalizes from the specific sample and context of a study to some larger population and broader settings.
Exteroception
The sense of the external world, of all stimulation originating from outside our own bodies.
Extinction
Decrease in the strength of a learned behavior that occurs when the conditioned stimulus is presented without the unconditioned stimulus (in classical conditioning) or when the behavior is no longer reinforced (in instrumental conditioning). The term describes both the procedure (the US or reinforcer is no longer presented) as well as the result of the procedure (the learned response declines). Behaviors that have been reduced in strength through extinction are said to be “extinguished.”
Extraversion
A personality trait that reflects a person’s tendency to be sociable, outgoing, active, and assertive.
Facets
Broad personality traits can be broken down into narrower facets or aspects of the trait. For example, extraversion has several facets, such as sociability, dominance, risk-taking and so forth.
Factor analysis
A statistical technique for grouping similar things together according to how highly they are associated.
False memories
Memory for an event that never actually occurred, implanted by experimental manipulation or other means.
Fear conditioning
A type of classical or Pavlovian conditioning in which the conditioned stimulus (CS) is associated with an aversive unconditioned stimulus (US), such as a foot shock. As a consequence of learning, the CS comes to evoke fear. The phenomenon is thought to be involved in the development of anxiety disorders in humans.
Feminization
The induction of female traits.
Fight or flight response
The physiological response that occurs in response to a perceived threat, preparing the body for actions needed to deal with the threat.
Five-Factor Model
(also called the Big Five) The Five-Factor Model is a widely accepted model of personality traits. Advocates of the model believe that much of the variability in people’s thoughts, feelings, and behaviors can be summarized with five broad traits. These five traits are Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism.
Flashbulb memory
A highly detailed and vivid memory of an emotionally significant event.
Flashbulb memory
Vivid personal memories of receiving the news of some momentous (and usually emotional) event.
Fluid intelligence
Type of intelligence that relies on the ability to use information processing resources to reason logically and solve novel problems.
Foils
Any member of a lineup (whether live or photographic) other than the suspect.
Forebrain
A part of the nervous system that contains the cerebral hemispheres, thalamus, and hypothalamus.
Foreclosure
Individuals commit to an identity without exploration of options.
Formal operations stage
Piagetian stage starting at age 12 years and continuing for the rest of life, in which adolescents may gain the reasoning powers of educated adults.
Fornix
(plural form, fornices) A nerve fiber tract that connects the hippocampus to mammillary bodies.
Framing
The bias to be systematically affected by the way in which information is presented, while holding the objective information constant.
Frontal lobe
The front most (anterior) part of the cerebrum; anterior to the central sulcus and responsible for motor output and planning, language, judgment, and decision-making.
Frontal lobe
The most forward region (close to forehead) of the cerebral hemispheres.
Full-cycle psychology
A scientific approach whereby researchers start with an observational field study to identify an effect in the real world, follow up with laboratory experimentation to verify the effect and isolate the causal mechanisms, and return to field research to corroborate their experimental findings.
Functional magnetic resonance imaging
(or fMRI) A noninvasive brain-imaging technique that registers changes in blood flow in the brain during a given task (also see magnetic resonance imaging).
Functional magnetic resonance imaging
A measure of changes in the oxygenation of blood flow as areas in the brain become active.
Functional magnetic resonance imaging (fMRI)
A neuroimaging technique that infers brain activity by measuring changes in oxygen levels in the blood.
Functional magnetic resonance imaging (fMRI)
Entails the use of powerful magnets to measure the levels of oxygen within the brain that vary with changes in neural activity. That is, as the neurons in specific brain regions “work harder” when performing a specific task, they require more oxygen. By having people listen to or view social percepts in an MRI scanner, fMRI specifies the brain regions that evidence a relative increase in blood flow. In this way, fMRI provides excellent spatial information, pinpointing, with millimeter accuracy, the brain regions most critical for different social processes.
Functional neuroanatomy
Classifying how regions within the nervous system relate to psychology and behavior.
Functionalism
A school of American psychology that focused on the utility of consciousness.
G
Short for “general factor”; often used as a synonym for intelligence itself.
Gamma-aminobutyric acid
A major inhibitory neurotransmitter in the vestibular commissural system.
Gaze stability
A combination of eye, neck, and head responses that are all coordinated to maintain visual fixation (fovea) upon a point of interest.
Gene
A specific deoxyribonucleic acid (DNA) sequence that codes for a specific polypeptide or protein or an observable inherited trait.
Gene Selection Theory
The modern theory of evolution by selection by which differential gene replication is the defining process of evolutionary change.
General Adaptation Syndrome
A three-phase model of stress, which includes a mobilization of physiological resources phase, a coping phase, and an exhaustion phase (i.e., when an organism fails to cope with the stress adequately and depletes its resources).
Generalize
Generalizing, in science, refers to the ability to arrive at broad conclusions based on a smaller sample of observations. For these conclusions to be true the sample should accurately represent the larger population from which it is drawn.
Genome-wide association study (GWAS)
A study that maps DNA polymorphisms in affected individuals and controls matched for age, sex, and ethnic background with the aim of identifying causal genetic variants.
Genotype
The DNA content of a cell’s nucleus, whether a trait is externally observable or not.
Gestalt psychology
An attempt to study the unity of experience.
Global subjective well-being
Individuals’ perceptions of and satisfaction with their lives as a whole.
Globus pallidus
A nucleus of the basal ganglia.
Glutamate
An excitatory amino acid neurotransmitter that is widely used by vestibular receptors, afferents, and many neurons in the brain.
Goal-directed behavior
Instrumental behavior that is influenced by the animal’s knowledge of the association between the behavior and its consequence and the current value of the consequence. Sensitive to the reinforcer devaluation effect.
Gonadal sex
The sex of an individual as determined by the possession of either ovaries or testes. Females have ovaries, whereas males have testes.
Grandiosity
Inflated self-esteem or an exaggerated sense of self-importance and self-worth (e.g., believing one has special powers or superior abilities).
Gray matter
The outer grayish regions of the brain comprised of the neurons’ cell bodies.
Gray matter
Composes the bark or the cortex of the cerebrum and consists of the cell bodies of the neurons (see also white matter).
Gustation
The action of tasting; the ability to taste.
Gyri
(plural) Folds between sulci in the cortex.
Gyrus
(plural form, gyri) A bulge that is raised between or among fissures of the convoluted brain.
Gyrus
A fold between sulci in the cortex.
Habit
Instrumental behavior that occurs automatically in the presence of a stimulus and is no longer influenced by the animal’s knowledge of the value of the reinforcer. Insensitive to the reinforcer devaluation effect.
Habituation
Occurs when the response to a stimulus decreases with exposure.
Hair cells
The receptor cells of the vestibular system. They are termed hair cells due to the many hairlike cilia that extend from the apical surface of the cell into the gelatin membrane. Mechanical gated ion channels in the tips of the cilia open and close as the cilia bend to cause membrane voltage changes in the hair cell that are proportional to the intensity and direction of motion.
Health
According to the World Health Organization, it is a complete state of physical, mental, and social well-being and not merely the absence of disease or infirmity.
Health behavior
Any behavior that is related to health—either good or bad.
Hedonic well-being
Component of well-being that refers to emotional experiences, often including measures of positive (e.g., happiness, contentment) and negative affect (e.g., stress, sadness).
Helpfulness
A component of the prosocial personality orientation; describes individuals who have been helpful in the past and, because they believe they can be effective with the help they give, are more likely to be helpful in the future.
Helping
Prosocial acts that typically involve situations in which one person is in need and another provides the necessary assistance to eliminate the other’s need.
Hemoglobin
The oxygen-carrying portion of a red blood cell.
Heritability coefficient
An easily misinterpreted statistical construct that purports to measure the role of genetics in the explanation of differences among individuals.
Heterogeneity
Inter-individual and subgroup differences in level and rate of change over time.
Heuristics
Cognitive (or thinking) strategies that simplify decision making by using mental shortcuts.
HEXACO model
The HEXACO model is an alternative to the Five-Factor Model. The HEXACO model includes six traits, five of which are variants of the traits included in the Big Five (Emotionality [E], Extraversion [X], Agreeableness [A], Conscientiousness [C], and Openness [O]). The sixth factor, Honesty-Humility [H], is unique to this model.
Hippocampus
(plural form, hippocampi) A nucleus inside (medial) the temporal lobe implicated in learning and memory.
Histone acetyltransferases (HATs) and histone deacetylases (HDACs)
HATs are enzymes that transfer acetyl groups to specific positions on histone tails, promoting an “open” chromatin state and transcriptional activation. HDACs remove these acetyl groups, resulting in a “closed” chromatin state and transcriptional repression.
Histone modifications
Posttranslational modifications of the N-terminal “tails” of histone proteins that serve as a major mode of epigenetic regulation. These modifications include acetylation, phosphorylation, methylation, sumoylation, ubiquitination, and ADP-ribosylation.
Homeostasis
The tendency of an organism to maintain a stable state across all the different physiological systems in the body.
Homeostatic set point
An ideal level against which the state of the system being regulated is monitored and compared.
Homo habilis
A human ancestor, known as “handy man,” that lived about two million years ago.
Homo sapiens
Modern man, the only surviving form of the genus Homo.
Homophily
Adolescents tend to associate with peers who are similar to themselves.
Horizontal plane
A slice that runs horizontally through a standing person (i.e., parallel to the floor); slices of brain in this plane divide the top and bottom parts of the brain; this plane is similar to slicing a hamburger bun.
Hormone
An organic chemical messenger released from endocrine cells that travels through the blood to interact with target cells at some distance to cause a biological response.
Hormones
Chemicals released by cells in the brain or body that affect cells in other parts of the brain or body.
Hostile attribution bias
The tendency to perceive ambiguous actions by others as aggressive.
Hostile expectation bias
The tendency to assume that people will react to potential conflicts with aggression.
Hostile perception bias
The tendency to perceive social interactions in general as being aggressive.
Hostility
An experience or trait with cognitive, behavioral, and emotional components. It often includes cynical thoughts, angry feelings, and aggressive behavior.
Humorism (or humoralism)
A belief held by ancient Greek and Roman physicians (and until the 19th century) that an excess or deficiency in any of the four bodily fluids, or humors—blood, black bile, yellow bile, and phlegm—directly affected their health and temperament.
Hyperpolarization
A change in a cell’s membrane potential, making the inside of the cell more negative and decreasing the chance of an action potential.
Hyperpolarizes
When a receptor hair cell's mechanically gated channels close, the cell decreases its membrane voltage, producing less neurotransmitter release and inhibiting the innervating nerve fiber.
Hypersomnia
Excessive daytime sleepiness, including difficulty staying awake or napping, or prolonged sleep episodes.
Hypothalamic-pituitary-adrenal (HPA) axis
A system that involves the hypothalamus (within the brain), the pituitary gland (within the brain), and the adrenal glands (at the top of the kidneys). This system helps maintain homeostasis (keeping the body’s systems within normal ranges) by regulating digestion, immune function, mood, temperature, and energy use. Through this, the HPA regulates the body’s response to stress and injury.
Hypothalamus
A portion of the brain involved in a variety of functions, including the secretion of various hormones and the regulation of hunger and sexual arousal.
Hypothalamus
A brain structure located below the thalamus and above the brain stem.
Hypothalamus
Part of the diencephalon. Regulates biological drives with pituitary gland.
Hypotheses
A logical idea that can be tested.
Hysteria
Term used by the ancient Greeks and Egyptians to describe a disorder believed to be caused by a woman’s uterus wandering throughout the body and interfering with other organs (today referred to as conversion disorder, in which psychological problems are expressed in physical form).
Identical twins
Two individual organisms that originated from the same zygote and therefore are genetically identical or very similar. The epigenetic profiling of identical twins discordant for disease is a unique experimental design as it eliminates the DNA sequence-, age-, and sex-differences from consideration.
Identity achievement
Individuals have explored different options and then made commitments.
Identity diffusion
Adolescents neither explore nor commit to any roles or ideologies.
Immunocytochemistry
A method of staining tissue including the brain, using antibodies.
Implicit learning
Occurs when we acquire information without intent that we cannot easily express.
Implicit memory
A type of long-term memory that does not require conscious thought to encode. It is the type of memory one makes without intent.
Inattentional blindness
The failure to notice a fully visible object when attention is devoted to something else.
Incidental learning
Any type of learning that happens without the intention to learn.
Independent
Two characteristics or traits are separate from one another; a person can be high on one and low on the other, or vice versa. Some correlated traits are relatively independent in that although there is a tendency for a person high on one to also be high on the other, this is not always the case.
Individual differences
Ways in which people differ in terms of their behavior, emotion, cognition, and development.
Information processing theories
Theories that focus on describing the cognitive processes that underlie thinking at any one age and cognitive growth over time.
Ingroup
A social group to which an individual identifies or belongs.
Inhibitory functioning
Ability to focus on a subset of information while suppressing attention to less relevant information.
Instrumental conditioning
Process in which animals learn about the relationship between their behaviors and their consequences. Also known as operant conditioning.
Integrated
The process by which the perceptual system combines information arising from more than one modality.
Intelligence
An individual’s cognitive capability. This includes the ability to acquire, process, recall and apply information.
Intentional learning
Any type of learning that happens when motivated by intention.
Interaural differences
Differences (usually in time or intensity) between the two ears.
Interference
Other memories get in the way of retrieving a desired memory.
Internal validity
The degree to which a cause-effect relationship between two variables has been unambiguously established.
Interoception
The sense of the physiological state of the body. Hunger, thirst, temperature, pain, and other sensations relevant to homeostasis. Visceral input such as heart rate, blood pressure, and digestive activity give rise to an experience of the body’s internal states and physiological reactions to external stimulation. This experience has been described as a representation of “the material me,” and it is hypothesized to be the foundation of subjective feelings, emotion, and self-awareness.
Interpersonal
This refers to the relationship or interaction between two or more individuals in a group. Thus, the interpersonal functions of emotion refer to the effects of one’s emotion on others, or to the relationship between oneself and others.
Intersexual selection
A process of sexual selection by which evolution (change) occurs as a consequence of the mate preferences of one sex exerting selection pressure on members of the opposite sex.
Intra- and inter-individual differences
Different patterns of development observed within an individual (intra-) or between individuals (inter-).
Intrapersonal
This refers to what occurs within oneself. Thus, the intrapersonal functions of emotion refer to the effects of emotion to individuals that occur physically inside their bodies and psychologically inside their minds.
Intrasexual competition
A process of sexual selection by which members of one sex compete with each other, and the victors gain preferential mating access to members of the opposite sex.
Introspection
A method of focusing on internal processes.
Invasive Procedure
A procedure that involves the skin being broken or an instrument or chemical being introduced into a body cavity.
IQ
Short for “intelligence quotient.” This is a score, typically obtained from a widely used measure of intelligence that is meant to rank a person’s intellectual ability against that of others.
Kin selection
According to evolutionary psychology, the favoritism shown for helping our blood relatives, with the goals of increasing the likelihood that some portion of our DNA will be passed on to future generations.
Lateral geniculate nucleus
(or LGN) A nucleus in the thalamus that is innervated by the optic nerves and sends signals to the visual cortex in the occipital lobe.
Lateral inhibition
A signal produced by a neuron aimed at suppressing the response of nearby neurons.
Lateral rectus muscle
An eye muscle that turns the eye outward in the horizontal plane.
Lateral sulcus
The major fissure that delineates the temporal lobe below the frontal and the parietal lobes.
Lateral vestibulo-spinal tract
Vestibular neurons that project to all levels of the spinal cord on the ipsilateral side to control posture and balance movements.
Lateralized
To the side; used to refer to the fact that specific functions may reside primarily in one hemisphere or the other (e.g., for the majority of individuals, the left hemisphere is most responsible for language).
Law of effect
The idea that instrumental or operant responses are influenced by their effects. Responses that are followed by a pleasant state of affairs will be strengthened and those that are followed by discomfort will be weakened. Nowadays, the term refers to the idea that operant or instrumental behaviors are lawfully controlled by their consequences.
Lesion
A region in the brain that suffered damage through injury, disease, or medical intervention.
Lesion studies
A surgical method in which a part of the animal brain is removed to study its effects on behavior or function.
Lesions
Abnormalities in the tissue of an organism usually caused by disease or trauma.
Lesions
Damage or tissue abnormality due, for example, to an injury, surgery, or a vascular problem.
Lexical hypothesis
The lexical hypothesis is the idea that the most important differences between people will be encoded in the language that we use to describe people. Therefore, if we want to know which personality traits are most important, we can look to the language that people use to describe themselves and others.
Life course theories
Theory of development that highlights the effects of social expectations of age-related life events and social roles; additionally considers the lifelong cumulative effects of membership in specific cohorts and sociocultural subgroups and exposure to historical events.
Life span theories
Theory of development that emphasizes the patterning of lifelong within- and between-person differences in the shape, level, and rate of change trajectories.
Limbic system
A loosely defined network of nuclei in the brain involved with learning and emotion.
Limbic system
Includes the subcortical structures of the amygdala and hippocampal formation as well as some cortical structures; responsible for aversion and gratification.
Limited capacity
The notion that humans have limited mental resources that can be used at a given time.
Linguistic Inquiry and Word Count
A quantitative text analysis methodology that automatically extracts grammatical and psychological information from a text by counting word frequencies.
Lived day analysis
A methodology where a research team follows an individual around with a video camera to objectively document a person’s daily life as it is lived.
Longitudinal studies
Research method that collects information from individuals at multiple time points over time, allowing researchers to track cohort differences in age-related change to determine cumulative effects of different life experiences.
Lordosis
A physical sexual posture in females that serves as an invitation to mate.
Magnetic resonance imaging
(or MRI) A noninvasive brain imaging technique that uses magnetic energy to generate brain images (also see fMRI).
Magnification factor
Cortical space projected by an area of sensory input (e.g., mm of cortex per degree of visual field).
Maladaptive
Term referring to behaviors that cause people who have them physical or emotional harm, prevent them from functioning in daily life, and/or indicate that they have lost touch with reality and/or cannot control their thoughts and behavior (also called dysfunctional).
Masculinization
The induction of male traits.
Maternal behavior
Parental behavior performed by the mother or other female.
McGurk effect
An effect in which conflicting visual and auditory components of a speech stimulus result in an illusory percept.
Mechanically gated ion channels
Ion channels located in the tips of the stereocilia on the receptor cells that open/close as the cilia bend toward the tallest/smallest cilia, respectively. These channels are permeable to potassium ions, which are abundant in the fluid bathing the top of the hair cells.
Medial prefrontal cortex
An area of the brain located in the middle of the frontal lobes (at the front of the head), active when people mentalize about the self and others.
Medial temporal lobes
Inner region of the temporal lobes that includes the hippocampus.
Medial vestibulo-spinal tract
Vestibular nucleus neurons project bilaterally to cervical spinal motor neurons for head and neck movement control. The tract principally functions in gaze direction and stability during motion.
Medulla oblongata
An area just above the spinal cord that processes breathing, digestion, heart and blood vessel function, swallowing, and sneezing.
Memory traces
A term indicating the change in the nervous system representing an event.
Mentalizing
The act of representing the mental states of oneself and others. Mentalizing allows humans to interpret the intentions, beliefs, and emotional states of others.
Mesmerism
Derived from Franz Anton Mesmer in the late 18th century, an early version of hypnotism in which Mesmer claimed that hysterical symptoms could be treated through animal magnetism emanating from Mesmer’s body and permeating the universe (and later through magnets); later explained in terms of high suggestibility in individuals.
Metabolism
Breakdown of substances.
Metabolite
A substance necessary for a living organism to maintain life.
Metacognition
Describes the knowledge and skills people have in monitoring and controlling their own learning and memory.
Mind–body connection
The idea that our emotions and thoughts can affect how our body functions.
Misinformation effect
A memory error caused by exposure to incorrect information between the original event (e.g., a crime) and later memory test (e.g., an interview, lineup, or day in court).
Misinformation effect
When erroneous information occurring after an event is remembered as having been part of the original event.
Mnemonic devices
A strategy for remembering large amounts of information, usually involving imagining events occurring on a journey or with some other set of memorized cues.
Mock witnesses
A research subject who plays the part of a witness in a study.
Moratorium
State in which adolescents are actively exploring options but have not yet made identity commitments.
Motor cortex
Region of the frontal lobe responsible for voluntary movement; the motor cortex has a contralateral representation of the human body.
Multimodal
Of or pertaining to multiple sensory modalities.
Multimodal perception
The effects that concurrent stimulation in more than one sensory modality has on the perception of events and objects in the world.
Multimodal phenomena
Effects that concern the binding of inputs from multiple sensory modalities.
Multisensory convergence zones
Regions in the brain that receive input from multiple unimodal areas processing different sensory modalities.
Multisensory enhancement
See “superadditive effect of multisensory integration.”
Myelin
Fatty tissue, produced by glial cells (see module, “Neurons”) that insulates the axons of the neurons; myelin is necessary for normal conduction of electrical impulses among neurons.
Natural selection
Differential reproductive success as a consequence of differences in heritable attributes.
Nature
The genes that children bring with them to life and that influence all aspects of their development.
Negative state relief model
An egoistic theory proposed by Cialdini et al. (1982) that claims that people have learned through socialization that helping can serve as a secondary reinforcement that will relieve negative moods such as sadness.
Neural crest
A set of primordial neurons that migrate outside the neural tube and give rise to sensory and autonomic neurons in the peripheral nervous system.
Neural impulse
An electro-chemical signal that enables neurons to communicate.
Neural induction
A process that causes the formation of the neural tube.
Neural plasticity
The ability of synapses and neural pathways to change over time and adapt to changes in neural process, behavior, or environment.
Neuroblasts
Brain progenitor cells that asymmetrically divide into other neuroblasts or nerve cells.
Neuroendocrinology
The study of how the brain and hormones act in concert to coordinate the physiology of the body.
Neuroepithelium
The lining of the neural tube.
Neuroscience
The study of the nervous system.
Neuroscience methods
A research method that deals with the structure or function of the nervous system and brain.
Neuroticism
A personality trait that reflects the tendency to be interpersonally sensitive and the tendency to experience negative emotions like anxiety, fear, sadness, and anger.
Neurotransmitter
A chemical messenger that travels between neurons to provide communication. Some neurotransmitters, such as norepinephrine, can leak into the blood system and act as hormones.
Neurotransmitter
A chemical substance produced by a neuron that is used for communication between neurons.
Neurotransmitters
A chemical compound used to send signals from a receptor cell to a neuron, or from one neuron to another. Neurotransmitters can be excitatory, inhibitory, or modulatory and are packaged in small vesicles that are released from the end terminals of cells.
Nociception
The neural process of encoding noxious stimuli, the sensory input from nociceptors. Not necessarily painful, and crucially not necessary for the experience of pain.
Nociceptors
High-threshold sensory receptors of the peripheral somatosensory nervous system that are capable of transducing and encoding noxious stimuli. Nociceptors send information about actual or impending tissue damage to the brain. These signals can often lead to pain, but nociception and pain are not the same.
Nomenclature
Naming conventions.
Nonassociative learning
Occurs when repeated exposure to a single stimulus leads to a change in behavior.
Noninvasive procedure
A procedure that does not require the insertion of an instrument or chemical through the skin or into a body cavity.
Norm
Assessments are given to a representative sample of a population to determine the range of scores for that population. These “norms” are then used to place an individual who takes that assessment on a range of scores in which he or she is compared to the population at large.
Noxious stimulus
A stimulus that is damaging or threatens damage to normal tissues.
Nucleus accumbens
A region of the basal forebrain located in front of the preoptic region.
Numerical magnitudes
The sizes of numbers.
Nurture
The environments, starting with the womb, that influence all aspects of children’s development.
Object permanence task
The Piagetian task in which infants below about 9 months of age fail to search for an object that is removed from their sight and, if not allowed to search immediately for the object, act as if they do not know that it continues to exist.
Observational learning
Learning by observing the behavior of others.
Occipital lobe
The back most (posterior) part of the cerebrum; involved in vision.
Occipital lobe
The back part of the cerebrum, which houses the visual areas.
Oculomotor nuclei
Includes three neuronal groups in the brainstem, the abducens nucleus, the oculomotor nucleus, and the trochlear nucleus, whose cells send motor commands to the six pairs of eye muscles.
Oculomotor nucleus
A group of cells in the middle brainstem that contain subgroups of neurons that project to the medial rectus, inferior oblique, inferior rectus, and superior rectus muscles of the eyes through the 3rd cranial nerve.
Olfaction
The sense of smell; the action of smelling; the ability to smell.
Omnivore
A person or animal that is able to survive by eating a wide range of foods from plant or animal origin.
Openness to Experience
A personality trait that reflects a person’s tendency to seek out and to appreciate new things, including thoughts, feelings, values, and experiences.
Operant
A behavior that is controlled by its consequences. The simplest example is the rat’s lever-pressing, which is controlled by the presentation of the reinforcer.
Operant conditioning
See instrumental conditioning.
Operant conditioning
A form of associative learning in which behavior is modified by its consequences.
Opponent Process Theory
Theory of color vision that assumes there are four different basic colors, organized into two pairs (red/green and blue/yellow) and proposes that colors in the world are encoded in terms of the opponency (or difference) between the colors in each pair. There is an additional black/white pair responsible for coding light contrast.
Orbital frontal cortex
A region of the frontal lobes of the brain above the eye sockets.
Orthonasal olfaction
Perceiving scents/smells introduced via the nostrils.
Other-oriented empathy
A component of the prosocial personality orientation; describes individuals who have a strong sense of social responsibility, empathize with and feel emotionally tied to those in need, understand the problems the victim is experiencing, and have a heightened sense of moral obligations to be helpful.
Otoconia
Small calcium carbonate particles that are packed in a layer on top of the gelatin membrane that covers the otolith receptor hair cell stereocilia.
Otolith receptors
Two inner ear vestibular receptors (utricle and saccule) that transduce linear accelerations and head tilt relative to gravity into neural signals that are then transferred to the brain.
Outgroup
A social group to which an individual does not identify or belong.
Overconfident
The bias to have greater confidence in your judgment than is warranted based on a rational assessment.
Oxygenated hemoglobin
Hemoglobin carrying oxygen.
Oxytocin
A peptide hormone secreted by the pituitary gland to trigger lactation, as well as social bonding.
Oxytocin
A nine amino acid mammalian neuropeptide. Oxytocin is synthesized primarily in the brain, but also in other tissues such as uterus, heart and thymus, with local effects. Oxytocin is best known as a hormone of female reproduction due to its capacity to cause uterine contractions and eject milk. Oxytocin has effects on brain tissue, but also acts throughout the body in some cases as an antioxidant or anti-inflammatory.
Pain
Defined as “an unpleasant sensory and emotional experience associated with actual or potential tissue damage, or described in terms of such damage,” according to the International Association for the Study of Pain.
Parasympathetic nervous system
A division of the autonomic nervous system that is slower than its counterpart—that is, the sympathetic nervous system—and works in opposition to it. Generally engaged in “rest and digest” functions.
Parasympathetic nervous system (PNS)
One of the two major divisions of the autonomic nervous system, responsible for stimulation of “rest and digest” activities.
Parental behavior
Behaviors performed in relation to one’s offspring that contribute directly to the survival of those offspring.
Parietal lobe
The part of the cerebrum between the frontal and occipital lobes; involved in bodily sensations, visual attention, and integrating the senses.
Parietal lobe
An area of the cerebrum just behind the central sulcus that is engaged with somatosensory and gustatory sensation.
Paternal behavior
Parental behavior performed by the father or other male.
Pavlovian conditioning
See classical conditioning.
Perceptual learning
Occurs when aspects of our perception change as a function of experience.
Periaqueductal gray
The gray matter in the midbrain near the cerebral aqueduct.
Peripheral nervous system
The part of the nervous system that is outside the brain and spinal cord.
Personal distress
According to Batson’s empathy–altruism hypothesis, observers who take a detached view of a person in need will experience feelings of being “worried” and “upset” and will have an egoistic motivation for helping to relieve that distress.
Personality
Enduring predispositions that characterize a person, such as styles of thought, feelings and behavior.
Personality traits
Enduring dispositions in behavior that show differences across individuals, and which tend to characterize the person across varying types of situations.
Person-situation debate
The person-situation debate is a historical debate about the relative power of personality traits as compared to situational influences on behavior. The situationist critique, which started the person-situation debate, suggested that people overestimate the extent to which personality traits are consistent across situations.
Phantom pain
Pain that appears to originate in an amputated limb.
Pharmacokinetics
The action of a drug through the body, including absorption, distribution, metabolism, and excretion.
Phenotype
The pattern of expression of the genotype or the magnitude or extent to which it is observably expressed—an observable characteristic or trait of an organism, such as its morphology, development, biochemical or physiological properties, or behavior.
Phonemic awareness
Awareness of the component sounds within words.
Photo spreads
A selection of normally small photographs of faces given to a witness for the purpose of identifying a perpetrator.
Photoactivation
A photochemical reaction that occurs when light hits photoreceptors, producing a neural signal.
Phrenology
A now-discredited field of brain study, popular in the first half of the 19th century that correlated bumps and indentations of the skull with specific functions of the brain.
Piaget’s theory
Theory that development occurs through a sequence of discontinuous stages: the sensorimotor, preoperational, concrete operational, and formal operational stages.
Pinna
Visible part of the outer ear.
Placebo effect
Effects from a treatment that are not caused by the physical properties of a treatment but by the meaning ascribed to it. These effects reflect the brain’s own activation of modulatory systems, which is triggered by positive expectation or desire for a successful treatment. Placebo analgesia is the most well-studied placebo effect and has been shown to depend, to a large degree, on opioid mechanisms. Placebo analgesia can be reversed by the pharmacological blocking of opioid receptors. The word “placebo” is Latin for “I shall please.”
Pluralistic ignorance
Relying on the actions of others to define an ambiguous need situation, and then erroneously concluding that no help or intervention is necessary.
Polypharmacy
The use of many medications.
Pons
A bridge that connects the cerebral cortex with the medulla, and reciprocally transfers information back and forth between the brain and the spinal cord.
Positron
A particle having the same mass as an electron and a numerically equal but positive charge.
Positron Emission Tomography
(or PET) An invasive procedure that captures brain images with positron emissions from the brain after the individual has been injected with radio-labeled isotopes.
Positron emission tomography (PET)
A neuroimaging technique that measures brain activity by detecting the presence of a radioactive substance in the brain that is initially injected into the bloodstream and then pulled in by active brain tissue.
Practitioner-Scholar Model
A model of training of professional psychologists that emphasizes clinical practice.
Prediction error
When the outcome of a conditioning trial is different from that which is predicted by the conditioned stimuli that are present on the trial (i.e., when the US is surprising). Prediction error is necessary to create Pavlovian conditioning (and associative learning generally). As learning occurs over repeated conditioning trials, the conditioned stimulus increasingly predicts the unconditioned stimulus, and prediction error declines. Conditioning works to correct or reduce prediction error.
Preoperational reasoning stage
Period within Piagetian theory from age 2 to 7 years, in which children can represent objects through drawing and language but cannot solve logical reasoning problems, such as the conservation problems.
Preoptic area
A region in the anterior hypothalamus involved in generating and regulating male sexual behavior.
Preoptic region
A part of the anterior hypothalamus.
Preparedness
The idea that an organism’s evolutionary history can make it easy to learn a particular association. Because of preparedness, you are more likely to associate the taste of tequila, and not the circumstances surrounding drinking it, with getting sick. Similarly, humans are more likely to associate images of spiders and snakes than flowers and mushrooms with aversive outcomes like shocks.
Primary auditory cortex
A region of the cortex devoted to the processing of simple auditory information.
Primary Motor Cortex
A strip of cortex just in front of the central sulcus that is involved with motor control.
Primary Somatosensory Cortex
A strip of cerebral tissue just behind the central sulcus engaged in sensory reception of bodily sensations.
Primary visual cortex
A region of the cortex devoted to the processing of simple visual information.
Primary visual cortex (V1)
Brain region located in the occipital cortex (toward the back of the head) responsible for processing basic visual information like the detection, thickness, and orientation of simple lines, color, and small-scale motion.
Principle of Inverse Effectiveness
The finding that, in general, for a multimodal stimulus, if the response to each unimodal component (on its own) is weak, then the opportunity for multisensory enhancement is very large. However, if one component—by itself—is sufficient to evoke a strong response, then the effect on the response gained by simultaneously processing the other components of the stimulus will be relatively small.
Problem-focused coping
A set of coping strategies aimed at improving or changing stressful situations.
Processing speed
The time it takes individuals to perform cognitive operations (e.g., process information, react to a signal, switch attention from one task to another, find a specific target object in a complex picture).
Progesterone
A primary progestin that is involved in pregnancy and mating behaviors.
Progestin
A class of C21 steroid hormones named for their progestational (pregnancy-supporting) effects. Progesterone is a common progestin.
Prohormone
A molecule that can act as a hormone itself or be converted into another hormone with different properties. For example, testosterone can serve as a hormone or as a prohormone for either dihydrotestosterone or estradiol.
Prolactin
A protein hormone that is highly conserved throughout the animal kingdom. It has many biological functions associated with reproduction and synergistic actions with steroid hormones.
Proprioceptive
Sensory information regarding muscle position and movement arising from receptors in the muscles, tendons, and joints.
Prosocial behavior
Social behavior that benefits another person.
Prosocial personality orientation
A measure of individual differences that identifies two sets of personality characteristics (other-oriented empathy, helpfulness) that are highly correlated with prosocial behavior.
Psychoactive drugs
A drug that changes mood or the way someone feels.
Psychogenesis
Developing from psychological origins.
Psychological adaptations
Mechanisms of the mind that evolved to solve specific problems of survival or reproduction; conceptualized as information processing devices.
Psychological control
Parents’ manipulation of and intrusion into adolescents’ emotional and cognitive world through invalidating adolescents’ feelings and pressuring them to think in particular ways.
Psychometric approach
Approach to studying intelligence that examines performance on tests of intellectual functioning.
Psychomotor agitation
Increased motor activity associated with restlessness, including physical actions (e.g., fidgeting, pacing, feet tapping, handwringing).
Psychomotor retardation
A slowing of physical activities in which routine activities (e.g., eating, brushing teeth) are performed in an unusually slow manner.
Psychoneuroimmunology
A field of study examining the relationship among psychology, brain function, and immune function.
Psychophysics
Study of the relationships between physical stimuli and the perception of those stimuli.
Psychophysiological methods
Any research method in which the dependent variable is a physiological measure and the independent variable is behavioral or mental (such as memory).
Psychosomatic medicine
An interdisciplinary field of study that focuses on how biological, psychological, and social processes contribute to physiological changes in the body and health over time.
Psychotropic drug
A drug that changes mood or emotion, usually used when talking about drugs prescribed for various mental conditions (depression, anxiety, schizophrenia, etc.).
Punisher
A stimulus that decreases the strength of an operant behavior when it is made a consequence of the behavior.
Punishment
Inflicting pain or removing pleasure for a misdeed. Punishment decreases the likelihood that a behavior will be repeated.
Qualitative changes
Large, fundamental change, as when a caterpillar changes into a butterfly; stage theories such as Piaget’s posit that each stage reflects qualitative change relative to previous stages.
Quantitative changes
Gradual, incremental change, as in the growth of a pine tree’s girth.
Quantitative genetics
Scientific and mathematical methods for inferring genetic and environmental processes based on the degree of genetic and environmental similarity among organisms.
Quantitative law of effect
A mathematical rule that states that the effectiveness of a reinforcer at strengthening an operant response depends on the amount of reinforcement earned for all alternative behaviors. A reinforcer is less effective if there is a lot of reinforcement in the environment for other behaviors.
Realism
A point of view that emphasizes the importance of the senses in providing knowledge of the external world.
Recall
Type of memory task where individuals are asked to remember previously learned information without the help of external cues.
Receptive field
The portion of the world to which a neuron will respond if an appropriate stimulus is present there.
Receptor
A chemical structure on the cell surface or inside of a cell that has an affinity for a specific chemical configuration of a hormone, neurotransmitter, or other compound.
Reciprocal altruism
According to evolutionary psychology, a genetic predisposition for people to help those who have previously helped them.
Recoding
The ubiquitous process during learning of taking information in one form and converting it to another form, usually one more easily remembered.
Recognition
Type of memory task where individuals are asked to remember previously learned information with the assistance of cues.
Reinforcer
Any consequence of a behavior that strengthens the behavior or increases the likelihood that it will be performed again.
Reinforcer devaluation effect
The finding that an animal will stop performing an instrumental response that once led to a reinforcer if the reinforcer is separately made aversive or undesirable.
Relational aggression
Intentionally harming another person’s social relationships, feelings of acceptance, or inclusion within a group.
Renewal effect
Recovery of an extinguished response that occurs when the context is changed after extinction. Especially strong when the change of context involves return to the context in which conditioning originally occurred. Can occur after extinction in either classical or instrumental conditioning.
Resilience
The ability to “bounce back” from negative situations (e.g., illness, stress) to normal functioning or to simply not show poor outcomes in the face of adversity. In some cases, resilience may lead to better functioning following the negative experience (e.g., post-traumatic growth).
Retrieval
Process by which information is accessed from memory and utilized.
Retrieval
The process of accessing stored information.
Retroactive interference
The phenomenon whereby events that occur after some particular event of interest will usually cause forgetting of the original event.
Retrograde amnesia
Inability to retrieve memories for facts and events acquired before the onset of amnesia.
Retronasal olfaction
Perceiving scents/smells introduced via the mouth/palate.
Reward value
A neuropsychological measure of an outcome’s affective importance to an organism.
Rods
Photoreceptors that are very sensitive to light and are mostly responsible for night vision.
Rostrocaudal
A front-back plane used to identify anatomical structures in the body and the brain.
Rubber hand illusion
The false perception of a fake hand as belonging to a perceiver, due to multimodal sensory information.
Sagittal plane
A slice that runs vertically from front to back; slices of brain in this plane divide the left and right side of the brain; this plane is similar to slicing a baked potato lengthwise.
Satiation
The state of being full to satisfaction and no longer desiring to take on more.
Schema (plural: schemata)
A memory template, created through repeated exposure to a particular class of objects or events.
Scientist-practitioner model
A model of training of professional psychologists that emphasizes the development of both research and clinical skills.
Selective attention
The ability to select certain stimuli in the environment to process, while ignoring distracting information.
Self-efficacy
The belief that one can perform adequately in a specific situation.
Self-perceptions of aging
An individual’s perceptions of their own aging process; positive perceptions of aging have been shown to be associated with greater longevity and health.
Semantic memory
The more or less permanent store of knowledge that people have.
Semicircular canals
A set of three inner ear vestibular receptors (horizontal, anterior, posterior) that transduce head rotational accelerations into head rotational velocity signals that are then transferred to the brain. There are three semicircular canals in each ear, with the major planes of each canal being orthogonal to each other.
Sensitization
Occurs when the response to a stimulus increases with exposure.
Sensitization
Increased responsiveness of nociceptive neurons to their normal input and/or recruitment of a response to normally subthreshold inputs. Clinically, sensitization may only be inferred indirectly from phenomena such as hyperalgesia or allodynia. Sensitization can occur in the central nervous system (central sensitization) or in the periphery (peripheral sensitization).
Sensorimotor stage
Period within Piagetian theory from birth to age 2 years, during which children come to represent the enduring reality of objects.
Sensory modalities
A type of sense; for example, vision or audition.
Sex determination
The point at which an individual begins to develop as either a male or a female. In animals that have sex chromosomes, this occurs at fertilization. Females are XX and males are XY. All eggs bear X chromosomes, whereas sperm can either bear X or Y chromosomes. Thus, it is the males that determine the sex of the offspring.
Sex differentiation
The process by which individuals develop the characteristics associated with being male or female. Differential exposure to gonadal steroids during early development causes sexual differentiation of several structures including the brain.
Sexual selection
The evolution of characteristics because of the mating advantage they give organisms.
Sexual strategies theory
A comprehensive evolutionary theory of human mating that defines the menu of mating strategies humans pursue (e.g., short-term casual sex, long-term committed mating), the adaptive problems women and men face when pursuing these strategies, and the evolved solutions to these mating problems.
Shadowing
A task in which the individual is asked to repeat an auditory message as it is presented.
Simulation
Imaginary or real imitation of other people’s behavior or feelings.
Social and cultural
Society refers to a system of relationships between individuals and groups of individuals; culture refers to the meaning and information afforded to that system that is transmitted across generations. Thus, the social and cultural functions of emotion refer to the effects that emotions have on the functioning and maintenance of societies and cultures.
Social brain
The set of neuroanatomical structures that allows us to understand the actions and intentions of other people.
Social categorization
The act of mentally classifying someone into a social group (e.g., as female, elderly, a librarian).
Social integration
The size of your social network, or number of social roles (e.g., son, sister, student, employee, team member).
Social Learning Theory
The theory that people can learn new responses and behaviors by observing the behavior of others.
Social models
Authorities that are the targets for observation and who model behaviors.
Social network
Network of people with whom an individual is closely connected; social networks provide emotional, informational, and material support and offer opportunities for social engagement.
Social referencing
This refers to the process whereby individuals look for information from others to clarify a situation, and then use that information to act. Thus, individuals will often use the emotional expressions of others as a source of information to make decisions about their own behavior.
Social support
A subjective feeling of psychological or physical comfort provided by family, friends, and others.
Social support
The perception or actuality that we have a social network that can help us in times of need and provide us with a variety of useful resources (e.g., advice, love, money).
Social touch hypothesis
Proposes that social touch is a distinct domain of touch. C-tactile afferents form a special pathway that distinguishes social touch from other types of touch by selectively firing in response to touch of social-affective relevance; thus sending affective information parallel to the discriminatory information from the Aβ-fibers. In this way, the socially relevant touch stands out from the rest as having special positive emotional value and is processed further in affect-related brain areas such as the insula.
Social zeitgeber
Zeitgeber is German for “time giver.” Social zeitgebers are environmental cues, such as meal times and interactions with other people, that entrain biological rhythms and thus sleep-wake cycle regularity.
Sociocultural theories
Theory founded in large part by Lev Vygotsky that emphasizes how other people and the attitudes, values, and beliefs of the surrounding culture influence children’s development.
Socioeconomic status (SES)
A person’s economic and social position based on income, education, and occupation.
Socioemotional Selectivity Theory
Theory proposed to explain the reduction of social partners in older adulthood; posits that older adults focus on meeting emotional over information-gathering goals, and adaptively select social partners who meet this need.
Somatic nervous system
A part of the peripheral nervous system that uses cranial and spinal nerves in volitional actions.
Somatogenesis
Developing from physical/bodily origins.
Somatosensory (body sensations) cortex
The region of the parietal lobe responsible for bodily sensations; the somatosensory cortex has a contralateral representation of the human body.
Somatosensory cortex
Consists of primary sensory cortex (S1) in the postcentral gyrus in the parietal lobes and secondary somatosensory cortex (S2), which is defined functionally and found in the upper bank of the lateral sulcus, called the parietal operculum. Somatosensory cortex also includes parts of the insular cortex.
Somatotopically organized
When the parts of the body that are represented in a particular brain region are organized topographically according to their physical location in the body (see Figure 2 illustration).
Spatial principle of multisensory integration
The finding that the superadditive effects of multisensory integration are observed when the sources of stimulation are spatially related to one another.
Spatial resolution
A term that refers to how small the elements of an image are; high spatial resolution means the device or technique can resolve very small elements; in neuroscience it describes how small of a structure in the brain can be imaged.
Spatial resolution
The degree to which one can separate a single object in space from another.
Spina bifida
A developmental disease of the spinal cord, where the neural tube does not close caudally.
Spinothalamic tract
Runs through the spinal cord’s lateral column up to the thalamus. C-fibers enter the dorsal horn of the spinal cord and form a synapse with a neuron that then crosses over to the lateral column and becomes part of the spinothalamic tract.
Split-brain patient
A patient who has had most or all of his or her corpus callosum severed.
Spontaneous recovery
Recovery of an extinguished response that occurs with the passage of time after extinction. Can occur after extinction in either classical or instrumental conditioning.
Standardize
Assessments that are given in the exact same manner to all people. With regard to intelligence tests, standardized scores are individual scores that are computed to be referenced against normative scores for a population (see “norm”).
Stereocilia
Hairlike projections from the top of the receptor hair cells. The stereocilia are arranged in ascending height and when displaced toward the tallest cilia, the mechanical gated channels open and the cell is excited (depolarized). When the stereocilia are displaced toward the smallest cilia, the channels close and the cell is inhibited (hyperpolarized).
Stereotype threat
The phenomenon in which people are concerned that they will conform to a stereotype or that their performance does conform to that stereotype, especially in instances in which the stereotype is brought to their conscious awareness.
Stereotypes
The beliefs or attributes we associate with a specific social group. Stereotyping refers to the act of assuming that because someone is a member of a particular group, he or she possesses the group’s attributes. For example, stereotyping occurs when we assume someone is unemotional just because he is a man, or particularly athletic just because she is African American.
Stimulus control
When an operant behavior is controlled by a stimulus that precedes it.
Storage
The stage in the learning/memory process that bridges encoding and retrieval; the persistence of memory over time.
Stress
A pattern of physical and psychological responses in an organism after it perceives a threatening event that disturbs its homeostasis and taxes its abilities to cope with the event.
Stress
A threat or challenge to our well-being. Stress can have both a psychological component, which consists of our subjective thoughts and feelings about being threatened or challenged, as well as a physiological component, which consists of our body’s response to the threat or challenge (see “fight or flight response”).
Stressor
An event or stimulus that induces feelings of stress.
Stria terminalis
A band of fibers that runs along the top surface of the thalamus.
Structuralism
A school of American psychology that sought to describe the elements of conscious experience.
Subcortical
Structures that lie beneath the cerebral cortex, but above the brain stem.
Subjective age
A multidimensional construct that indicates how old (or young) a person feels and into which age group a person categorizes him- or herself
Subliminal perception
The ability to process information for meaning when the individual is not consciously aware of that information.
Successful aging
Includes three components: avoiding disease, maintaining high levels of cognitive and physical functioning, and having an actively engaged lifestyle.
Suicidal ideation
Recurring thoughts about suicide, including considering or planning for suicide, or preoccupation with suicide.
Sulci
(plural) Grooves separating folds of the cortex.
Sulcus
A groove separating folds of the cortex.
Sulcus
(plural form, sulci) The crevices or fissures formed by convolutions in the brain.
Superadditive effect of multisensory integration
The finding that responses to multimodal stimuli are typically greater than the sum of the independent responses to each unimodal component if it were presented on its own.
Superior temporal sulcus
The sulcus (a fissure in the surface of the brain) that separates the superior temporal gyrus from the middle temporal gyrus. Located in the temporal lobes (parallel to the ears), it is involved in perception of biological motion or the movement of animate objects.
Supernatural
Developing from origins beyond the visible observable universe.
Sympathetic nervous system
A branch of the autonomic nervous system that controls many of the body’s internal organs. Activity of the SNS generally mobilizes the body’s fight or flight response.
Sympathetic nervous system
A division of the autonomic nervous system that is faster than its counterpart, the parasympathetic nervous system, and works in opposition to it. Generally engaged in “fight or flight” functions.
Sympathetic nervous system (SNS)
One of the two major divisions of the autonomic nervous system, responsible for stimulation of “fight or flight” activities.
Synapse
The tiny space separating neurons.
Syndrome
Involving a particular group of signs and symptoms.
Synesthesia
The blending of two or more sensory experiences, or the automatic activation of a secondary (indirect) sensory experience due to certain aspects of the primary (direct) sensory stimulation.
System 1
Our intuitive decision-making system, which is typically fast, automatic, effortless, implicit, and emotional.
System 2
Our more deliberative decision-making system, which is slower, conscious, effortful, explicit, and logical.
Systematic observation
The careful observation of the natural world with the aim of better understanding it. Observations provide the basic data that allow scientists to track, tally, or otherwise organize information about the natural world.
Target cell
A cell that has receptors for a specific chemical messenger (hormone or neurotransmitter).
Taste aversion learning
The phenomenon in which a taste is paired with sickness, and this causes the organism to reject—and dislike—that taste in the future.
Temporal lobe
The part of the cerebrum in front of (anterior to) the occipital lobe and below the lateral fissure; involved in vision, auditory processing, memory, and integrating vision and audition.
Temporal lobe
An area of the cerebrum that lies below the lateral sulcus; it contains auditory and olfactory (smell) projection regions.
Temporal parietal junction
The area where the temporal lobes (parallel to the ears) and parietal lobes (at the top of the head toward the back) meet. This area is important in mentalizing and distinguishing between the self and others.
Temporal resolution
A term that refers to how small a unit of time can be measured; high temporal resolution means capable of resolving very small units of time; in neuroscience it describes how precisely in time a process can be measured in the brain.
Temporal resolution
The degree to which one can separate a single point in time from another.
Temporally graded retrograde amnesia
Inability to retrieve memories from just prior to the onset of amnesia with intact memory for more remote events.
Testosterone
The primary androgen secreted by the testes of most vertebrate animals, including men.
Thalamus
A structure in the midline of the brain located between the midbrain and the cerebral cortex.
Thalamus
A part of the diencephalon that works as a gateway for incoming and outgoing information.
Theories
Groups of closely related phenomena or observations.
Tip-of-the-tongue phenomenon
The inability to pull a word from memory even though there is the sensation that that word is available.
Torsion
A rotational eye movement around the line of sight, in a clockwise or counterclockwise direction.
“Traitement moral” (moral treatment)
A therapeutic regimen of improved nutrition, living conditions, and rewards for productive behavior that has been attributed to Philippe Pinel during the French Revolution, when he released mentally ill patients from their restraints and treated them with compassion and dignity rather than with contempt and denigration.
Transcranial direct current stimulation (tDCS)
A neuroscience technique that passes mild electrical current directly through a brain area by placing small electrodes on the skull.
Transcranial magnetic stimulation (TMS)
A neuroscience technique whereby a brief magnetic pulse is applied to the head that temporarily induces a weak electrical current that interferes with ongoing activity.
Transduction
The mechanisms that convert stimuli into electrical signals that can be transmitted and processed by the nervous system. Physical or chemical stimulation creates action potentials in a receptor cell in the peripheral nervous system, which is then conducted along the axon to the central nervous system.
Transduction
A process in which physical energy converts into neural energy.
Transfer-appropriate processing
A principle that states that memory performance is superior when a test taps the same cognitive processes as the original encoding activity.
Transverse plane
See “horizontal plane.”
Trephination
The drilling of a hole in the skull, presumably as a way of treating psychological disorders.
Trichromacy theory
Theory that proposes that all of your color perception is fundamentally based on the combination of three (not two, not four) different color signals.
Twin studies
A behavior genetic research method that involves comparison of the similarity of identical (monozygotic; MZ) and fraternal (dizygotic; DZ) twins.
Tympanic membrane
Ear drum, which separates the outer ear from the middle ear.
Type A Behavior
Type A behavior is characterized by impatience, competitiveness, neuroticism, hostility, and anger.
Type B Behavior
Type B behavior reflects the absence of Type A characteristics and is represented by less competitive, aggressive, and hostile behavior patterns.
Unconditioned response (UR)
In classical conditioning, an innate response that is elicited by a stimulus before (or in the absence of) conditioning.
Unconditioned stimulus (US)
In classical conditioning, the stimulus that elicits the response before conditioning occurs.
Unimodal
Of or pertaining to a single sensory modality.
Unimodal components
The parts of a stimulus relevant to one sensory modality at a time.
Unimodal cortex
A region of the brain devoted to the processing of information from a single sensory modality.
Vagus nerve
The 10th cranial nerve. The mammalian vagus has an older unmyelinated branch which originates in the dorsal motor complex and a more recently evolved, myelinated branch with origins in the ventral vagal complex, including the nucleus ambiguus. The vagus is the primary source of autonomic-parasympathetic regulation for various internal organs, including the heart, lungs, and other parts of the viscera. The vagus nerve is primarily sensory (afferent), transmitting abundant visceral input to the central nervous system.
Vasopressin
A nine amino acid mammalian neuropeptide. Vasopressin is synthesized primarily in the brain, but also may be made in other tissues. Vasopressin is best known for its effects on the cardiovascular system (increasing blood pressure) and also the kidneys (causing water retention). Vasopressin has effects on brain tissue, but also acts throughout the body.
Vergence angle
The angle between the line of sight for the two eyes. Low vergence angles indicate far-viewing objects, whereas large angles indicate viewing of near objects.
Vestibular compensation
Following injury to one side of vestibular receptors or the vestibulocochlear nerve, the central vestibular nuclei neurons gradually recover much of their function through plasticity mechanisms. The recovery is never complete, however, and extreme motion environments can lead to dizziness, nausea, problems with balance, and spatial memory.
Vestibular efferents
Nerve fibers originating from a nucleus in the brainstem that project from the brain to innervate the vestibular receptor hair cells and afferent nerve terminals. Efferents have a modulatory role on their targets, which is not well understood.
Vestibular system
Consists of a set of motion and gravity detection receptors in the inner ear, a set of primary nuclei in the brainstem, and a network of pathways carrying motion and gravity signals to many regions of the brain.
Vestibulocochlear nerve
The VIIIth cranial nerve that carries fibers innervating the vestibular receptors and the cochlea.
Vestibulo-ocular reflex
Coordination of motion information with visual information that allows you to maintain your gaze on an object while you move.
Vestibuloocular reflex
Eye movements produced by the vestibular brainstem that are equal in magnitude and opposite in direction to head motion. The VOR functions to maintain visual stability on a point of interest and is nearly perfect for all natural head movements.
Vicarious reinforcement
Learning that occurs by observing the reinforcement or punishment of another person.
Violence
Aggression intended to cause extreme physical harm, such as injury or death.
Visual cortex
The part of the brain that processes visual information, located in the back of the brain.
Visual hemifield
The half of visual space (what we see) on one side of fixation (where we are looking); the left hemisphere is responsible for the right visual hemifield, and the right hemisphere is responsible for the left visual hemifield.
Voltage
The difference in electric charge between two points.
Wernicke’s area
A language area in the temporal lobe where linguistic information is comprehended (Also see Broca’s area).
What pathway
Pathway of neural processing in the brain that is responsible for your ability to recognize what is around you.
Where-and-How pathway
Pathway of neural processing in the brain that is responsible for you knowing where things are in the world and how to interact with them.
White coat hypertension
A phenomenon in which patients exhibit elevated blood pressure in the hospital or doctor’s office but not in their everyday lives.
White matter
The inner whitish regions of the cerebrum comprised of the myelinated axons of neurons in the cerebral cortex.
White matter
Regions of the nervous system that represent the axons of the nerve cells; whitish in color because of myelination of the nerve cells.
Working memory
The form of memory we use to hold onto information temporarily, usually for the purposes of manipulation.
Working memory
Short transitory memory processed in the hippocampus.
Working memory
Memory system that allows for information to be simultaneously stored and utilized or manipulated.
Welcome to Abnormal Psychology! As you’ll read more about in this chapter, abnormal psychology refers to the scientific study of people who are exhibiting behaviour that seems atypical or unusual, with the intent to be able to reliably predict, explain, diagnose, identify the causes of, and treat maladaptive behavior. Abnormal psychology is one of the largest sub-fields in psychology, representing a great deal of research and applied work trying to understand and cure mental disorders. As you will see from the first part of this chapter, and as you learn more in this book, the costs of mental illness are substantial.
This chapter will introduce you broadly to important concepts, definitions, and terminology in abnormal psychology that will frame the rest of your learning. It reviews how to define mental disorder as well as the strengths and limitations of our current diagnostic approaches. You’ll read, as well, about how culture and cultural expectations influence our views on abnormality. We cannot examine abnormality without taking cultural norms into account.
In this chapter you will also learn about how mental health professionals assess individuals who might be experiencing a mental disorder, some important concepts for measurement like validity and reliability, and read an overview of some of the many different tools these professionals use to conduct their assessments. Last, you’ll learn how these professionals diagnose and classify abnormal behaviour.
01: Defining and Classifying Abnormal Behaviour
Learning Objectives
• Know the cost of mental illness to society.
• Define abnormal psychology, psychopathology, and psychological disorders.
• Explain the concept of dysfunction as it relates to mental illness.
• Explain the concept of distress as it relates to mental illness.
• Explain the concept of deviance as it relates to mental illness.
• Explain the concept of dangerousness as it relates to mental illness.
What is the Cost of Mental Illness to Society?
Mental illness has significant social and economic costs in Canada. People with mental illness are more likely to experience social and economic marginalization, including social isolation, inability to work, and lower educational attainment and income, compared to Canadians who do not have a mental illness (Burczycka, 2018). People with mental illness also have a higher risk of being victimized. One in ten people with mental health-related disabilities in Canada report experiencing violence over the past year, a rate that is double that found in the general population (Burczycka, 2018). Moreover, mental illness can significantly impact people’s ability to work. It is estimated that 2 out of 9 workers suffer from a mental illness that affects their work performance, and this amounts to an annual wage loss of over \$6.3 billion (Smetanin et al., 2011).
Each year, the economic burden of mental illness in Canada is estimated at \$51 billion (Smetanin et al., 2011). Mental illness significantly impacts the health care system directly and indirectly. Directly, there are costs of about \$21.3 billion due to hospitalizations, medical visits, and support staff (Smetanin et al., 2011). There are also indirect costs to the justice system, social service and education systems, and other costs due to losses in quality of life. The personal and economic costs of mental illness will further increase due to greater numbers of Canadians expected to be impacted by mental health problems, combined with our aging population and growth of the Canadian population over the next 30 years (Smetanin et al., 2011). By 2041, annual costs of mental illness are expected to be \$307 billion (Mental Health Commission of Canada, 2010).
In terms of worldwide impact, the World Economic Forum used 2010 data to estimate \$2.5 trillion in global costs of mental illness in 2010 and projected costs of \$6 trillion by 2030. The costs for mental illness are greater than the combined costs of cancer, diabetes, and respiratory disorders (Whiteford et al., 2013).
Though there is no one behavior that we can use to classify people as abnormal, most clinical practitioners agree that any behavior that strays from what is considered the norm or is unexpected within the confines of one’s culture, that causes dysfunction in cognition, emotion, and/or behavior, and that causes distress and/or impairment in functioning, is abnormal behavior. Armed with this understanding, let’s discuss what mental disorders are.
Definition of Abnormal Psychology and Psychopathology
The term abnormal psychology refers to the scientific study of people who are atypical or unusual, with the intent to be able to reliably predict, explain, diagnose, identify the causes of, and treat maladaptive behavior. A more sensitive and less stigmatizing term used to refer to the scientific study of psychological disorders is psychopathology. These definitions raise the questions: what is considered abnormal, and what is a psychological or mental disorder?
Defining Psychological Disorders
It may be surprising to you, but the concept of mental or psychological disorders has proven very difficult to define. Even the American Psychiatric Association (APA), in its publication, the Diagnostic and Statistical Manual of Mental Disorders, 5th edition (DSM-5 for short), states that though “no definition can capture all aspects of all disorders in the range contained in the DSM-5,” certain aspects are required. While the concept of mental or psychological disorders is difficult to define, and no definition will ever be perfect, it is recognized as an extremely important concept. Therefore, psychological disorders (aka mental disorders) have been defined as a psychological dysfunction which causes distress or impaired functioning and deviates from typical or expected behavior according to societal or cultural standards. This definition includes three components (the 3 Ds). Let’s break these down now:
• Dysfunction – includes “clinically significant disturbance in an individual’s cognition, emotion regulation, or behavior that reflects a dysfunction in the psychological, biological, or developmental processes underlying mental functioning” (pg. 20). In other words, dysfunction refers to a breakdown in cognition, emotion, and/or behavior. For instance, an individual experiencing delusions that he is an omnipotent deity would have a breakdown in cognition because his thought processes are not consistent with reality. An individual who is unable to experience pleasure would have a breakdown in emotion. Finally, an individual who is unable to leave her home and attend work due to fear of having a panic attack would be exhibiting a breakdown in behavior. Abnormal behavior has the capacity to make our well-being difficult to obtain and can be assessed by looking at an individual’s current performance and comparing it to what is expected in general or how the person has performed in the past.
• Distress or Impairment – Distress can take the form of psychological or physical pain, or both concurrently. Simply put, distress refers to suffering. Alone though, distress is not sufficient to deem behavior abnormal. Why is that? The loss of a loved one would cause even the most “normally” functioning individual pain and suffering. An athlete who experiences a career-ending injury would display distress as well. Suffering is part of life and cannot be avoided. And some people who display abnormal behavior are generally positive while doing so. Typically, if distress is absent then impairment must be present to deem behavior abnormal. Impairment refers to when the person experiences a disabling condition “in social, occupational, or other important activities” (pg. 20). In other words, impairment refers to when a person loses the capacity to function normally in daily life (e.g., can no longer maintain minimum standards of hygiene, pay bills, attend social functions, or go to work). Once again, typically distress and/or impairment in functioning are required to consider behavior abnormal and to diagnose a psychological disorder.
• Deviance – A closer examination of the word abnormal shows that it indicates a move away from what is normal, typical, or average. Our culture – or the totality of socially transmitted behaviors, customs, values, technology, attitudes, beliefs, art, and other products that are particular to a group – determines what is normal and so a person is said to be deviant when he or she fails to follow the stated and unstated rules of society, called social norms. What is considered “normal” by society can change over time due to shifts in accepted values and expectations. For instance, just a few decades ago homosexuality was considered taboo in the U.S. and it was included as a mental disorder in the first edition of the DSM; but today, it is generally accepted. Likewise, PDAs, or public displays of affection, do not cause a second look by most people unlike the past when these outward expressions of love were restricted to the privacy of one’s own house or bedroom. In the U.S., crying is generally seen as a weakness for males but if the behavior occurs in the context of a tragedy such as the Vegas mass shooting on October 1, 2017, in which 58 people were killed and about 500 were wounded, then it is appropriate and understandable. Finally, consider that statistically deviant behavior is not necessarily negative. Genius is an example of behavior that is not the norm, but it is generally considered a positive attribute rather than a negative one.
Though not part of the DSM-5’s conceptualization of what abnormal behavior is, many clinicians add a 4th D – dangerousness – to this list. Dangerousness refers to when behavior represents a threat to the safety of the person or others. Individuals expressing suicidal intent, those experiencing acute paranoid ideation combined with aggressive impulses (e.g., wanting to harm people who are perceived as “being out to get them”), and many individuals with antisocial personality disorder may be considered dangerous. Mental health professionals (and many other professionals including researchers) have a duty to report to law enforcement when an individual expresses an intent to harm themselves or others. Nevertheless, individuals with depression, anxiety, and obsessive-compulsive disorder are typically no more a threat to others than individuals without these disorders. As such, it is important to note that having a mental disorder does not automatically deem one to be dangerous and most dangerous individuals are not mentally ill. Indeed, a review of the literature (Matthias & Angermeyer, 2002) found that only a small proportion of crimes are committed by individuals with severe mental disorders, that strangers are at a lower risk of being attacked by a person with a severe mental disorder than by someone who is mentally healthy, and that elevated risks to behave violently are limited to a small number of symptom constellations. Similarly, Hiday and Burns (2010) showed that dangerousness is more the exception than the rule.
Learning Objectives
• Understand the cultural problems inherent in defining the concept of psychological disorder
Cultural Expectations and Abnormality
Violating cultural expectations is not, in and of itself, a satisfactory means of identifying the presence of a psychological disorder. Since behavior varies from one culture to another, what may be expected and considered appropriate in one culture may not be viewed as such in other cultures. For example, returning a stranger’s smile is expected in the United States because a pervasive social norm dictates that we reciprocate friendly gestures. A person who refuses to acknowledge such gestures might be considered socially awkward—perhaps even disordered—for violating this expectation. However, such expectations are not universally shared. Cultural expectations in Japan involve showing reserve, restraint, and a concern for maintaining privacy around strangers. Japanese people are generally unresponsive to smiles from strangers (Patterson et al., 2007). Eye contact provides another example. In the United States and Europe, eye contact with others typically signifies honesty and attention. However, most Latin-American, Asian, and African cultures interpret direct eye contact as rude, confrontational, and aggressive (Pazain, 2010). Thus, someone who makes eye contact with you could be considered appropriate and respectful or brazen and offensive, depending on your culture.
Hallucinations (seeing or hearing things that are not physically present) in Western societies are a violation of cultural expectations, and a person who reports such inner experiences is readily labeled as psychologically disordered. In other cultures, visions that, for example, pertain to future events may be regarded as normal experiences that are positively valued (Bourguignon, 1970). Finally, it is important to recognize that cultural norms change over time: what might be considered typical in a society at one time may no longer be viewed this way later, similar to how fashion trends from one era may elicit quizzical looks decades later—imagine how a headband, legwarmers, and the big hair of the 1980s would go over on your campus today.
The Myth of Mental Illness
In the 1950s and 1960s, the concept of mental illness was widely criticized. One of the major criticisms focused on the notion that mental illness was a “myth that justifies psychiatric intervention in socially disapproved behavior” (Wakefield, 1992). Thomas Szasz (1960), a noted psychiatrist, was perhaps the biggest proponent of this view. Szasz argued that the notion of mental illness was invented by society (and the mental health establishment) to stigmatize and subjugate people whose behavior violates accepted social and legal norms. Indeed, Szasz suggested that what appear to be symptoms of mental illness are more appropriately characterized as “problems in living” (Szasz, 1960).
In his 1961 book, The Myth of Mental Illness: Foundations of a Theory of Personal Conduct, Szasz expressed his disdain for the concept of mental illness and for the field of psychiatry in general (Oliver, 2006). The basis for Szasz’s attack was his contention that detectable abnormalities in bodily structures and functions (e.g., infections and organ damage or dysfunction) represent the defining features of genuine illness or disease, and because symptoms of purported mental illness are not accompanied by such detectable abnormalities, so-called psychological disorders are not disorders at all. Szasz (1961/2010) proclaimed that “disease or illness can only affect the body; hence, there can be no mental illness” (p. 267).
Today, we recognize the extreme level of psychological suffering experienced by people with psychological disorders: the painful thoughts and feelings they experience, the disordered behavior they demonstrate, and the levels of distress and impairment they exhibit. This makes it very difficult to deny the reality of mental illness.
However controversial Szasz’s views and those of his supporters might have been, they have influenced the mental health community and society in several ways. First, lay people, politicians, and professionals now often refer to mental illness as mental health “problems,” implicitly acknowledging the “problems in living” perspective Szasz described (Buchanan-Barker & Barker, 2009). Also influential was Szasz’s view of homosexuality. Szasz was perhaps the first psychiatrist to openly challenge the idea that homosexuality represented a form of mental illness or disease (Szasz, 1965). By challenging the idea that homosexuality represented a form of mental illness, Szasz helped pave the way for the social and civil rights that gay and lesbian people now have (Barker, 2010). His work also inspired legal changes that protect the rights of people in psychiatric institutions and allow such individuals a greater degree of influence and responsibility over their lives (Buchanan-Barker & Barker, 2009).
Learning Objectives
• Define clinical assessment.
• Clarify why clinical assessment is an ongoing process.
• Define and exemplify reliability.
• Define and exemplify validity.
• Define standardization.
• List and describe six methods of assessment.
What is Clinical Assessment?
In order for a mental health professional to be able to effectively treat a client and know that the selected treatment actually worked (or is working), he/she first must engage in the clinical assessment of the client. Clinical assessment refers to collecting information and drawing conclusions through the use of observation, psychological tests, neurological tests, and interviews to determine what the person’s problem is and what symptoms he/she is presenting with. This collection of information involves learning about the client’s skills, abilities, personality characteristics, cognitive and emotional functioning, social context (e.g., environmental stressors), and cultural factors particular to them such as their language or ethnicity. Clinical assessment is not just conducted at the beginning of the process of seeking help but all throughout the process. Why is that?
Consider this. First, we need to determine if a treatment is even needed. By having a clear accounting of the person’s symptoms and how they affect daily functioning we can determine to what extent the individual is adversely affected. Assuming treatment is needed, our second reason to engage in clinical assessment is to determine what treatment will work best. As you will see later in this chapter, there are numerous approaches to treatment. These include Behavior Therapy, Cognitive Therapy, Cognitive-Behavioral Therapy (CBT), Humanistic-Experiential Therapies, Psychodynamic Therapies, Couples and Family Therapy, and biological treatments (e.g., psychopharmacology). Of course, for any mental disorder, some of the aforementioned therapies will have greater efficacy than others. Even if several can work well, it does not mean a particular therapy will work well for that specific client. Assessment can help the clinician figure this out. Finally, we need to know if the treatment worked. This will involve measuring symptoms and behavior before any treatment is used and then measuring symptoms and behavior while the treatment is in place. We will even want to measure symptoms and behavior after the treatment ends to make sure symptoms do not return. Knowing what the person’s baselines are for different aspects of psychological functioning will help us to see when improvement occurs. To recap: obtaining the baselines happens in the beginning, implementing the treatment plan happens more so in the middle, and then making sure the treatment produces the desired outcome occurs at the end. It should be clear from this discussion that clinical assessment is an ongoing process.
Key Concepts in Assessment
Important to the assessment process are three critical concepts – reliability, validity, and standardization. Actually, these three are important to science in general. First, we want assessment to be reliable, or consistent. Outside of clinical assessment, when our car has an issue and we take it to the mechanic, we want to make sure that what one mechanic says is wrong with our car matches what another mechanic, or even two others, would say. If not, the measurement tools they use to assess cars are flawed. The same is true of a patient who is experiencing a mental disorder. If one mental health professional says the person has major depressive disorder and another says the issue is borderline personality disorder, then there is an issue with the assessment tool being used. Ensuring that two different raters (e.g., mechanics, mental health professionals) are consistent in their assessments is called interrater reliability. Another type of reliability occurs when a person takes a test one day, and then the same test on another day. We would expect the person’s answers to be consistent with one another, which is called test-retest reliability. For example, if a person takes the Minnesota Multiphasic Personality Inventory (MMPI) on Tuesday and then the same test on Friday, then unless something miraculous or tragic happened over the two days in between tests, the scores on the MMPI should be nearly identical to one another. In other words, the two scores (test and retest) should be correlated with one another. If the test is reliable, the correlation should be very high (remember, a correlation ranges from -1.00 to +1.00, and a positive correlation means that as one score goes up, so does the other; for a reliable test, the correlation between the two administrations should be high and positive).
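As a rough illustration (the scores below are invented, not real MMPI data, and the function name is ours), test-retest reliability can be estimated as the Pearson correlation between two administrations of the same test:

```python
# Hypothetical illustration: test-retest reliability as the Pearson
# correlation between two administrations of the same test.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented scores for five clients on Tuesday and again on Friday.
tuesday = [52, 65, 48, 70, 60]
friday = [54, 63, 50, 71, 58]

r = pearson_r(tuesday, friday)
print(f"test-retest reliability: r = {r:.2f}")
```

With invented scores this close together, r comes out near +1.00, which is what we would hope to see for a reliable instrument; a value near 0 would signal a problem with the test.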
In addition to reliability, we want to make sure the test measures what it says it measures. This is called validity. Let’s say a new test is developed to measure symptoms of depression. It is compared against an existing, proven test, such as the Beck Depression Inventory (BDI). If the new test measures depression, then the scores on it should be highly correlated with the ones obtained by the BDI. This is called concurrent or descriptive validity. We might even ask whether an assessment tool looks valid. If we answer yes, then it has face validity, though it should be noted that this is not based on any statistical or evidence-based method of assessing validity. An example would be a personality test that asks about how people behave in certain situations; it therefore seems to measure personality, or we have an overall sense that it measures what we expect it to measure.
A tool should also be able to accurately predict what will happen in the future, called predictive validity. Let’s say we want to tell if a high school student will do well in college. We might create a national exam to test needed skills and call it something like the Scholastic Aptitude Test (SAT). We would have high school students take it by their senior year and then wait until they are in college for a few years and see how they are doing. If they did well on the SAT, we would expect that at that point, they should be doing well in college. If so, then the SAT accurately predicts college success. The same would be true of a test such as the Graduate Record Exam (GRE) and its ability to predict graduate school performance.
Finally, we want to make sure that the experience one patient has when taking a test or being assessed is the same as that of another patient taking the test the same day or on a different day, and with either the same tester or a different tester. This is accomplished with the use of clearly laid out rules, norms, and/or procedures, and is called standardization. Equally important is that mental health professionals interpret the results of the testing in the same way; otherwise, it will be unclear what a specific score means.
Methods of Assessment
So how do we assess patients in our care? We will discuss psychological tests, neurological tests, the clinical interview, behavioral assessment, and a few others in this section.
The Clinical Interview
A clinical interview is a face-to-face encounter between a mental health professional and a patient in which the former observes the latter and gathers data about the person’s behavior, attitudes, current situation, personality, and life history. The interview may be unstructured in which open-ended questions are asked, structured in which a specific set of questions according to an interview schedule are asked, or semi-structured, in which there is a pre-set list of questions but clinicians are able to follow up on specific issues that catch their attention.
A mental status examination is used to organize the information collected during the interview and to systematically evaluate the client through a series of observations and questions assessing appearance and behavior (e.g., grooming and body language), thought processes and content (e.g., disorganized speech or thought and false beliefs), mood and affect (e.g., hopelessness or elation), intellectual functioning (e.g., speech and memory), and awareness of surroundings (e.g., does the client know where he/she is, what day it is, and who he/she is?). The exam covers areas not normally part of the interview and allows the mental health professional to determine which areas need to be examined further. A limitation of the interview is that it can lack reliability, especially in the case of the unstructured interview.
Psychological Tests and Inventories
Psychological tests are used to assess the client’s personality, social skills, cognitive abilities, emotions, behavioral responses, or interests and can be administered either individually or to groups. Projective tests consist of simple ambiguous stimuli that can elicit an unlimited number of responses. They include the Rorschach (inkblot) test and the Thematic Apperception Test, which requires the individual to write a complete story about each of 20 cards shown to them and give details about what led up to the scene depicted, what the characters are thinking, what they are doing, and what the outcome will be. From these responses, the clinician gains perspective on the patient’s worries, needs, emotions, and conflicts. Another projective test is the sentence completion test, which asks individuals to finish an incomplete sentence such as “My mother …” or “I hope ….”
Personality inventories ask clients to state whether each item in a long list of statements applies to them, and may ask about feelings, behaviors, or beliefs. Examples include the MMPI, or Minnesota Multiphasic Personality Inventory, and the NEO-PI-R, a concise measure of the five major domains of personality – Neuroticism, Extraversion, Openness, Agreeableness, and Conscientiousness. Six facets define each of the five domains, and the measure assesses emotional, interpersonal, experiential, attitudinal, and motivational styles (Costa & McCrae, 1992). These inventories have the advantage of being easy to administer by either a professional or the individual taking them; they are standardized, objectively scored, and completed either on a computer or with paper and pencil. That said, personality cannot be directly assessed, so you can never completely know the individual on the basis of these inventories.
Neurological Tests
Neurological tests are also used to diagnose cognitive impairments caused by brain damage due to tumors, infections, or head injury, or by changes in brain activity. Positron Emission Tomography or PET is used to study the brain’s functioning and begins by injecting the patient with a radionuclide which collects in the brain. Patients then lie on a scanning table while a ring-shaped machine is positioned over their head. Images are produced that yield information about the functioning of the brain. Magnetic Resonance Imaging or MRI produces 3D images of the brain or other body structures using magnetic fields and computers. It is used to detect structural abnormalities such as brain and spinal cord tumors or nervous system disorders such as multiple sclerosis. Finally, computed tomography or the CT scan involves taking X-rays of the brain at different angles that are then combined. CT scans are used to detect structural abnormalities such as brain tumors and brain damage caused by head injuries.
Physical Examination
Many mental health professionals recommend the patient see their family physician for a physical examination which is much like a check-up. Why is that? Some organic conditions, such as hyperthyroidism or hormonal irregularities, manifest behavioral symptoms that are similar to mental disorders and so ruling such conditions out can save costly therapy or surgery.
Behavioral Assessment
Within the realm of behavior modification and applied behavior analysis is behavioral assessment, which is simply the measurement of a target behavior. The target behavior is whatever behavior we want to change, and it can be in excess (needing to be reduced) or in a deficit state (needing to be increased). During behavioral assessment we assess the ABCs of behavior:
• Antecedents are the environmental events or stimuli that trigger a behavior
• Behaviors are what the person does, says, thinks/feels; and
• Consequences are the outcome of a behavior that either encourages it to be made again in the future or discourages its future occurrence.
Though we might try to change another person’s behavior using behavior modification, we can also change our own behavior using self-monitoring which refers to measuring and recording one’s own ABCs. In the context of psychopathology, behavior modification can be useful in treating phobias, reducing habit disorders, and ridding the person of maladaptive cognitions.
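As a minimal sketch of what self-monitoring the ABCs might look like in practice, the record below uses hypothetical field names and entries (this is an illustration, not a standard clinical instrument):

```python
# Hypothetical sketch of a self-monitoring log for the ABCs of a target
# behavior (field names and entries are illustrative, not a clinical standard).
from dataclasses import dataclass

@dataclass
class ABCRecord:
    antecedent: str   # event or stimulus that triggered the behavior
    behavior: str     # what the person did, said, or thought/felt
    consequence: str  # outcome that encourages or discourages recurrence

log = [
    ABCRecord("saw a bowl of candy on the desk",
              "ate three pieces",
              "felt a brief sugar rush, then guilt"),
    ABCRecord("phone buzzed during study time",
              "checked social media for 20 minutes",
              "fell behind on reading"),
]

print(f"{len(log)} entries recorded")
for entry in log:
    print(f"A: {entry.antecedent} -> B: {entry.behavior}")
```

Reviewing a log like this helps identify which antecedents reliably trigger the target behavior and which consequences may be maintaining it.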
A limitation of this method is that the process of observing and/or recording a behavior can cause the behavior to change, called reactivity. Have you ever noticed someone staring at you while you sat and ate your lunch? If you have, what did you do? Did you change your behavior? Did you become self-conscious? Likely yes, and this is an example of reactivity. Another issue is that a behavior displayed in one situation may not be displayed in other situations, such as your significant other only acting out at their favorite team’s football game and not at home. Whether behavior generalizes across situations in this way is a question of cross-situational validity.
Intelligence Tests
Intelligence testing is occasionally used to determine a client’s level of cognitive functioning and consists of a series of tasks asking the patient to use both verbal and nonverbal skills. An example is the Stanford-Binet Intelligence test, which is used to assess fluid reasoning, knowledge, quantitative reasoning, visual-spatial processing, and working memory. These tests are rather time-consuming and require specialized training to administer. As such, they are typically only used in cases where there is a suspected cognitive disorder or intellectual disability. Intelligence tests have been criticized for not predicting future behaviors such as achievement and for reflecting social or cultural factors/biases rather than actual intelligence.
Learning Objectives
• Explain what it means to make a clinical diagnosis.
• Define syndrome.
• Clarify and exemplify what a classification system does.
• Identify the two most used classification systems.
• Outline the history of the DSM.
• Identify and explain the elements of a diagnosis.
• Outline the major disorder categories of the DSM-5.
• Describe the ICD-11.
• Clarify why the DSM-5 and ICD-11 need to be harmonized.
Clinical Diagnosis and Classification Systems
To begin any type of treatment, the client/patient must be clearly diagnosed with a mental disorder. Clinical diagnosis is the process of using assessment data to determine if the pattern of symptoms the person presents with is consistent with the diagnostic criteria for a specific mental disorder set forth in an established classification system such as the DSM-5 or ICD-11 (both will be described shortly). Any diagnosis should have clinical utility, meaning it aids the mental health professional in determining the prognosis, the treatment plan, and possible outcomes of treatment (APA, 2013). Receiving a diagnosis does not necessarily mean the person requires treatment. This decision is made based upon how severe the symptoms are, the level of distress caused by the symptoms, symptom salience such as expressing suicidal ideation, risks and benefits of treatment, disability, and other factors (APA, 2013). Likewise, a patient may not meet full criteria for a diagnosis but require treatment nonetheless.
Symptoms that cluster together on a regular basis are called a syndrome. If they also follow the same, predictable course, we say that they are characteristic of a specific disorder. Classification systems for mental disorders provide mental health professionals with an agreed upon list of disorders falling in distinct categories for which there are clear descriptions and criteria for making a diagnosis. Distinct is the key word here. People experiencing delusions, hallucinations, disorganized speech, catatonia, and/or negative symptoms are different from people presenting with a primary clinical deficit in cognitive functioning that is not developmental in nature but has been acquired (i.e. they have shown a decline in cognitive functioning over time). The former would likely be diagnosed with a schizophrenia spectrum disorder while the latter likely has a neurocognitive disorder (NCD). The latter can be further distinguished from neurodevelopmental disorders which manifest early in development and involve developmental deficits that cause impairments in social, academic, or occupational functioning (APA, 2013). These three disorder groups or categories can be clearly distinguished from one another. Classification systems also permit the gathering of statistics for the purpose of determining incidence and prevalence rates, they facilitate research on the etiology and treatment of disorders, and they conform to the requirements of insurance companies for the payment of claims.
The most widely used classification system in the United States and Canada is the Diagnostic and Statistical Manual of Mental Disorders, currently in its 5th edition and produced by the American Psychiatric Association (APA, 2013). Alternatively, the World Health Organization (WHO) produces the International Statistical Classification of Diseases and Related Health Problems (ICD), currently in its 11th edition (ICD-11), published in 2018. We will begin by discussing the DSM and then move to the ICD.
The DSM Classification System
A Brief History of the DSM
The DSM-5 was published in 2013 and took the place of the DSM-IV-TR (TR means Text Revision; published in 2000), but the history of the DSM goes back to 1844 when the American Psychiatric Association published a predecessor of the DSM which was a “statistical classification of institutionalized mental patients” and “…was designed to improve communication about the types of patients cared for in these hospitals” (APA, 2013, p. 6). However, the first official version of the DSM was not published until 1952. The DSM evolved through four subsequent editions after World War II into a diagnostic classification system to be used by psychiatrists and physicians, but also other mental health professionals. The Herculean task of revising the DSM-IV-TR began in 1999 when the APA embarked upon an evaluation of the strengths and weaknesses of the DSM in coordination with the World Health Organization (WHO) Division of Mental Health, the World Psychiatric Association, and the National Institute of Mental Health (NIMH). This resulted in the publication of a monograph in 2002 called A Research Agenda for DSM-V. From 2003 to 2008, the APA, WHO, NIMH, the National Institute on Drug Abuse (NIDA), and the National Institute on Alcoholism and Alcohol Abuse (NIAAA) convened 13 international DSM-5 research planning conferences, “to review the world literature in specific diagnostic areas to prepare for revisions in developing both DSM-5 and the International Classification of Disease, 11th Revision (ICD-11)” (APA, 2013).
After the naming of a DSM-5 Task Force Chair and Vice-Chair in 2006, task force members were selected and approved by 2007 and workgroup members were approved in 2008. What resulted from this was an intensive process of “conducting literature reviews and secondary analyses, publishing research reports in scientific journals, developing draft diagnostic criteria, posting preliminary drafts on the DSM-5 Web site for public comment, presenting preliminary findings at professional meetings, performing field trials, and revisiting criteria and text”(APA, 2013).
What resulted was a “common language for communication between clinicians about the diagnosis of disorders” along with a realization that the criteria and disorders contained within were based on current research and may undergo modification with new evidence gathered (APA, 2013). Additionally, some disorders were not included within the main body of the document because they did not have the scientific evidence to support their widespread clinical use, but were included in Section III under “Conditions for Further Study” to “highlight the evolution and direction of scientific advances in these areas to stimulate further research” (APA, 2013).
Elements of a Diagnosis
The DSM-5 states that the following make up the key elements of a diagnosis (APA, 2013):
• Diagnostic Criteria and Descriptors – Diagnostic criteria are the guidelines for making a diagnosis. When the full criteria are met, mental health professionals can add severity and course specifiers to indicate the patient’s current presentation. If the full criteria are not met, designators such as “other specified” or “unspecified” can be used. If applicable, an indication of severity (mild, moderate, severe, or extreme), descriptive features, and course (type of remission – partial or full – or recurrent) can be provided with the diagnosis. The final diagnosis is based on the clinical interview, text descriptions, criteria, and clinical judgment.
• Subtypes and Specifiers – Since the same disorder can be manifested in different ways in different individuals the DSM uses subtypes and specifiers to better characterize an individual’s disorder. Subtypes denote “mutually exclusive and jointly exhaustive phenomenological subgroupings within a diagnosis” (APA, 2013). For example, non-rapid eye movement sleep arousal disorders can have either a sleepwalking or sleep terror type. Enuresis is nocturnal only, diurnal only, or both. Specifiers are not mutually exclusive or jointly exhaustive and so more than one specifier can be given. For instance, binge eating disorder has remission and severity specifiers. Major depressive disorder has a wide range of specifiers that can be used to characterize the severity, course, or symptom clusters. Again the fundamental distinction between subtypes and specifiers is that there can be only one subtype but multiple specifiers.
• Principal Diagnosis – A principal diagnosis is used when more than one diagnosis is given for an individual (when an individual has comorbid disorders). The principal diagnosis is the reason for the admission in an inpatient setting or the reason for a visit resulting in ambulatory care medical services in outpatient settings. The principal diagnosis is generally the main focus of treatment.
• Provisional Diagnosis – If not enough information is available for a mental health professional to make a definitive diagnosis, but there is a strong presumption that the full criteria will be met with additional information or time, then the provisional specifier can be used.
DSM-5 Disorder Categories
The DSM-5 includes the following categories of disorders:
Table 1.1. DSM-5 Classification System of Mental Disorders
Disorder Category Short Description
Neurodevelopmental Disorders A group of conditions that arise in the developmental period and include intellectual disability, communication disorders, autism spectrum disorder, motor disorders, and ADHD
Schizophrenia Spectrum and Other Psychotic Disorders Disorders characterized by one or more of the following: delusions, hallucinations, disorganized thinking and speech, disorganized motor behavior, and negative symptoms
Bipolar and Related Disorders Characterized by mania or hypomania and possibly depressed mood; includes Bipolar I and II, cyclothymic disorder
Depressive Disorders Characterized by sad, empty, or irritable mood, as well as somatic and cognitive changes that affect functioning; includes major depressive and persistent depressive disorders
Anxiety Disorders Characterized by excessive fear and anxiety and related behavioral disturbances; Includes phobias, separation anxiety, panic attack, generalized anxiety disorder
Obsessive-Compulsive and Related Disorders Characterized by obsessions and compulsions and includes OCD, hoarding, and body dysmorphic disorders
Trauma- and Stressor-Related Disorders Characterized by exposure to a traumatic or stressful event; PTSD, acute stress disorder, and adjustment disorders
Dissociative Disorders Characterized by a disruption or disturbance in memory, identity, emotion, perception, or behavior; dissociative identity disorder, dissociative amnesia, and depersonalization/derealization disorder
Somatic Symptom and Related Disorders Characterized by prominent somatic symptoms, including illness anxiety disorder, somatic symptom disorder, and conversion disorder
Feeding and Eating Disorders Characterized by a persistent disturbance of eating or eating-related behavior to include bingeing and purging
Elimination Disorders Characterized by the inappropriate elimination of urine or feces; usually first diagnosed in childhood or adolescence
Sleep-Wake Disorders Characterized by sleep-wake complaints about the quality, timing, and amount of sleep; includes insomnia, sleep terrors, narcolepsy, and sleep apnea
Sexual Dysfunctions Characterized by sexual difficulties and include premature ejaculation, female orgasmic disorder, and erectile disorder
Gender Dysphoria Characterized by distress associated with the incongruity between one’s experienced or expressed gender and the gender assigned at birth
Disruptive, Impulse-Control, and Conduct Disorders Characterized by problems in self-control of emotions and behavior and involve the violation of the rights of others and cause the individual to be in violation of societal norms; Includes oppositional defiant disorder, antisocial personality disorder, kleptomania, etc.
Substance-Related and Addictive Disorders Characterized by the continued use of a substance despite significant problems related to its use
Neurocognitive Disorders Characterized by a decline in cognitive functioning over time and the NCD has not been present since birth or early in life
Personality Disorders Characterized by a pattern of stable traits which are inflexible, pervasive, and lead to distress or impairment
Paraphilic Disorders Characterized by recurrent and intense sexual fantasies that can cause harm to the individual or others; includes exhibitionism, voyeurism, and sexual sadism
The ICD-11
In 1893, the International Statistical Institute adopted the International List of Causes of Death which was the first edition of the ICD. The World Health Organization was entrusted with the development of the ICD in 1948 and published the 6th version (ICD-6), which was the first version to include mental disorders. The ICD-11 was published in June 2018 and adopted by member states of WHO in June 2019. The WHO states:
ICD is the foundation for the identification of health trends and statistics globally, and the international standard for reporting diseases and health conditions. It is the diagnostic classification standard for all clinical and research purposes. ICD defines the universe of diseases, disorders, injuries and other related health conditions, listed in a comprehensive, hierarchical fashion that allows for:
• easy storage, retrieval and analysis of health information for evidence-based decision-making;
• sharing and comparing health information between hospitals, regions, settings, and countries;
• and data comparisons in the same location across different time periods.
Source: http://www.who.int/classifications/icd/en/
The ICD lists many types of diseases and disorders. In the ICD-10, mental disorders were covered in Chapter V: Mental and Behavioral Disorders, broken down as follows:
• Organic, including symptomatic, mental disorders
• Mental and behavioral disorders due to psychoactive substance use
• Schizophrenia, schizotypal and delusional disorders
• Mood (affective) disorders
• Neurotic, stress-related and somatoform disorders
• Behavioral syndromes associated with physiological disturbances and physical factors
• Disorders of adult personality and behavior
• Mental retardation
• Disorders of psychological development
• Behavioral and emotional disorders with onset usually occurring in childhood and adolescence
• Unspecified mental disorder
Harmonization of DSM-5 and ICD-11
According to the DSM-5, there is an effort to harmonize the two classification systems in order to achieve a more accurate collection of national health statistics and design of clinical trials, to increase the ability to replicate scientific findings across national boundaries, and to rectify the lack of agreement between DSM-IV and ICD-10 diagnoses (APA, 2013). At the time of publication of this text, however, this had not yet occurred.
Summary
Mental illness has a significant social and economic cost on society, both directly and indirectly through costs like victimization, lost ability to work, burnout, hospitalizations, and medical visits. Abnormal psychology is a field of psychology that studies people who are atypical or unusual. The intention of this study is to predict, explain, diagnose, identify the causes of, and treat mental disorders.
Mental disorders are hard to define. Most definitions include the “3 Ds”: Dysfunction, distress (or impairment), and deviance. Meaning that disorders disturb an individual’s cognition, emotion regulation or behaviour, that this causes distress for the individual, and that this behaviour is a move away from what our culture determines is normal, typical, or average.
It is important to consider culture when evaluating abnormal behaviour. Violating cultural expectations is not, in and of itself, a satisfactory means of identifying the presence of a psychological disorder. Behaviour varies from culture to culture, so what may be expected and appropriate in one culture may not be viewed as such in others.
In order for a mental health professional to effectively treat a client (and know if the treatment is working), they must first know what a client’s presenting problem is. Clinical assessment refers to collecting information about this and drawing conclusions through the use of observation, psychological tests, neurological tests, and interviews to determine what symptoms the client is presenting with. The concepts of reliability, validity, and standardization are key to the assessment process.
After the assessment is complete, a professional can consider if the person meets criteria for a clinical diagnosis. Diagnosis is the process of using assessment data to determine if the pattern of symptoms the person presents with is consistent with the diagnostic criteria for a specific mental health disorder, set forth in an established classification system like the DSM-5 or ICD-11. Symptoms that cluster together on a regular basis are called a syndrome.
Classification systems for mental disorders provide health professionals with an agreed-upon list of disorders, with clear descriptions and criteria for making a diagnosis. The most widely used classification system in North America is the Diagnostic and Statistical Manual of Mental Disorders (DSM), currently in its 5th edition, published by the American Psychiatric Association. The first edition of the DSM was published in 1952, and the current edition was published in 2013 after almost 14 years of research. The World Health Organization (WHO) publishes an alternative classification system, the International Classification of Diseases (ICD).
The DSM also sets out the key elements of diagnosis. The first is diagnostic criteria and descriptors, which are guidelines for making a diagnosis. A second key element is subtypes and specifiers, used to better characterize an individual’s disorder, since the same disorder can manifest in different ways in different people. Last, a principal diagnosis is given when more than one diagnosis applies to one individual, and a provisional diagnosis is given when not enough information is available to make a definitive diagnosis.
Abnormal psychology is a broad and diverse topic of study that, as you will see in this chapter, has fascinated humans for centuries. There are many different lenses through which we can view abnormal behaviour. In this chapter we will discuss many of those lenses, illustrating and comparing how they conceptualize psychopathology. In the first part of this chapter, you’ll read about historical perspectives on abnormality, all the way from prehistoric beliefs through to the 18th and 19th centuries. As you will see, views on abnormality have always been influenced by the broader culture and historical context that they exist in.
You will then learn about the modern therapeutic orientations, beginning with the birth of abnormal psychology as a discipline: Sigmund Freud’s psychoanalysis. These orientations are used to explain abnormal behaviour and tell clinicians how to treat it. This section also covers humanistic and person-centred approaches, the behavioural and cognitive models as well as cognitive-behavioural therapy (which combines the two), acceptance and mindfulness-based approaches, and some emerging views on abnormality. Each of these orientations views abnormal behaviour slightly differently, although many of their ideas overlap and many orientations have informed one another. This chapter will also discuss the biological model of mental illness and psychopharmacology as a treatment option.
Another modern development, beginning in the 1990s, was the rise of evidence-based practice in clinical psychology and the development of criteria for evaluating the research evidence for various psychotherapies, in order to develop the distinction of “empirically supported treatment.” In the final part of this chapter, you will learn what evidence-based practice and empirically supported treatments involve, and also learn about characteristics for identifying the opposite: treatments that harm.
02: Perspectives on Abnormal Behaviour
Learning Objectives
• Describe prehistoric and ancient beliefs about mental illness.
• Describe Greco-Roman thought on mental illness.
• Describe thoughts on mental illness during the Middle Ages.
• Describe thoughts on mental illness during the Renaissance.
• Describe thoughts on mental illness during the 18th and 19th centuries.
As we have seen so far, what is considered abnormal behavior is often dictated by the culture/society a person lives in, and unfortunately, the past has not treated the afflicted very well. In this section, we will examine how past societies viewed and dealt with mental illness.
Prehistoric and Ancient Beliefs
Prehistoric cultures often held a supernatural view of abnormal behavior and saw it as the work of evil spirits, demons, gods, or witches who took control of the person. This form of demonic possession was believed to occur when the person engaged in behavior contrary to the religious teachings of the time. Treatment by cave dwellers included a technique called trephination, in which a stone instrument known as a trephine was used to remove part of the skull, creating an opening. They believed that evil spirits could escape through the hole in the skull, thereby ending the person’s mental affliction and returning them to normal behavior. Early Greek, Hebrew, Egyptian, and Chinese cultures used a treatment method called exorcism in which evil spirits were cast out through prayer, magic, flogging, starvation, noise-making, or having the person ingest horrible tasting drinks.
Greco-Roman Thought
Rejecting the idea of demonic possession, the Greek physician Hippocrates (460-377 B.C.) argued that mental disorders were akin to physical disorders and had natural causes. Specifically, he suggested that they arose from brain pathology (head trauma, brain dysfunction, or disease) and were also affected by heredity. Hippocrates classified mental disorders into three main categories – melancholia, mania, and phrenitis (brain fever) – and gave detailed clinical descriptions of each. He also described four main fluids, or humors, that directed normal functioning and personality: blood, which arose in the heart; black bile, arising in the spleen; yellow bile, or choler, from the liver; and phlegm, from the brain. Mental disorders occurred when the humors were out of balance, such as an excess of yellow bile causing frenzy/mania and too much black bile causing melancholia/depression. Hippocrates believed mental illnesses could be treated like any other disorder and focused on the underlying pathology.
Also important was the Greek philosopher Plato (429-347 B.C.), who argued that the mentally ill were not responsible for their actions and so should not be punished. He emphasized the role of the social environment and early learning in the development of mental disorders and believed it was the responsibility of the community and of families to care for the afflicted in a humane manner using rational discussion. The Greek physician Galen (A.D. 129-199) said mental disorders had either physical or mental causes, including fear, shock, alcoholism, head injuries, adolescence, and changes in menstruation.
In Rome, the physician Asclepiades (124-40 BC) and the philosopher Cicero (106-43 BC) rejected Hippocrates’ idea of the four humors and instead stated that melancholy arises from grief, fear, and rage, not from excess black bile. Roman physicians treated mental disorders with massage and warm baths, in the hope that their patients would be as comfortable as possible. They practiced the concept of “contrariis contrarius”, meaning opposite by opposite, and introduced contrasting stimuli to bring about balance in the physical and mental domains. An example would be consuming a cold drink while in a warm bath.
The Middle Ages – 500 AD to 1500 AD
The progress made during the time of the Greeks and Romans was quickly reversed during the Middle Ages with the increase in power of the Church and the fall of the Roman Empire. Mental illness was yet again explained as possession by the Devil, and methods such as exorcism, flogging, prayer, the touching of relics, chanting, visiting holy sites, and holy water were used to rid the person of the Devil’s influence. In extreme cases, the afflicted were confined, beaten, and even executed. Scientific and medical explanations, such as those proposed by Hippocrates, were discarded during this period.
Group hysteria, or mass madness, was also seen in which large numbers of people displayed similar symptoms and false beliefs. This included the belief that one was possessed by wolves or other animals and imitated their behavior, called lycanthropy, and a mania in which large numbers of people had an uncontrollable desire to dance and jump, called tarantism. The latter was believed to have been caused by the bite of the wolf spider, now called the tarantula, and spread quickly from Italy to Germany and other parts of Europe where it was called Saint Vitus’s dance.
Perhaps the return to supernatural explanations during the Middle Ages makes sense given events of the time. The Black Death or Bubonic Plague had killed up to a third, and according to other estimates almost half, of the population. Famine, war, social oppression, and pestilence were also factors. Death was ever present which led to an epidemic of depression and fear. Nevertheless, near the end of the Middle Ages, mystical explanations for mental illness began to lose favor and government officials regained some of their lost power over nonreligious activities. Science and medicine were once again called upon to explain mental disorders.
The Renaissance – 14th to 16th Centuries
The most noteworthy development in the realm of philosophy during the Renaissance was the rise of humanism, the worldview that emphasizes human welfare and the uniqueness of the individual. This helped continue the decline of supernatural views of mental illness. In the mid to late 1500s, Johann Weyer (1515-1588), a German physician, published his book, On the Deceits of the Demons, which rebutted the Church’s witch-hunting handbook, the Malleus Maleficarum, and argued that many of those accused of being witches and subsequently imprisoned, tortured, hanged, and/or burned at the stake were mentally disturbed, not possessed by demons or the Devil himself. He believed that, like the body, the mind was susceptible to illness. Not surprisingly, the book was met with vehement protest and was even banned by the Church. It should be noted that these types of acts occurred not only in Europe but also in the United States. The most famous example was the Salem witch trials of 1692, in which more than 200 people were accused of practicing witchcraft and 20 were killed.
The number of asylums, or places of refuge where the mentally ill could receive care, began to rise during the 16th century as governments realized there were far too many people afflicted with mental illness to be left in private homes. Hospitals and monasteries were converted into asylums. Though the intent was benign in the beginning, as the asylums began to overflow, patients came to be treated more like animals than people. In 1547, Bethlem Hospital in London was given over to the sole purpose of confining those with mental disorders. Patients were chained up, placed on public display, and often heard crying out in pain. The asylum became a tourist attraction, with sightseers paying a penny to view the more violent patients, and was soon called “Bedlam” by local people, a term that today means “a state of uproar and confusion” (https://www.merriam-webster.com/dictionary/bedlam).
Reform Movement – 18th to 19th Centuries
The rise of the moral treatment movement occurred in Europe in the late 18th century and then in the United States in the early 19th century. Its earliest proponent was Philippe Pinel (1745-1826), who was appointed superintendent of La Bicêtre, a hospital for mentally ill men in Paris. He emphasized the importance of affording the mentally ill respect, moral guidance, and humane treatment, all while considering their individual, social, and occupational needs. Arguing that the mentally ill were sick people, Pinel ordered that chains be removed, outside exercise be allowed, sunny and well-ventilated rooms replace dungeons, and patients be extended kindness and support. This approach led to considerable improvement for many of the patients, so much so that several were released.
Following Pinel’s lead in England, William Tuke (1732-1822), a Quaker tea merchant, established a pleasant rural estate called the York Retreat. The Quakers believed that all people should be accepted for who they were and treated kindly. At the retreat, patients could work, rest, talk out their problems, and pray (Raad & Makari, 2010). The work of Tuke and others led to the passage of the County Asylums Act of 1845 which required that every county in England and Wales provide asylum to the mentally ill. This was even extended to English colonies such as Canada, India, Australia, and the West Indies as word of the maltreatment of patients at a facility in Kingston, Jamaica spread, leading to an audit of colonial facilities and their policies.
Reform in the United States started with the figure largely considered to be the father of American psychiatry, Benjamin Rush (1745-1813). Rush advocated for the humane treatment of the mentally ill, showing them respect, and even giving them small gifts from time to time. Despite this, his practice included treatments such as bloodletting and purgatives, the invention of the “tranquilizing chair,” and a reliance on astrology, showing that even he could not escape from the beliefs of the time.
Due to the rise of the moral treatment movement in both Europe and the United States, asylums became habitable places where those afflicted with mental illness could recover. However, it is often said that the moral treatment movement was a victim of its own success. The number of mental hospitals greatly increased, leading to staffing shortages and a lack of funds to support them. Though treating patients humanely was a noble endeavor, it did not work for everyone, and other treatments were needed, though they had not yet been developed. It was also recognized that the approach worked best when a facility had 200 or fewer patients; however, waves of immigrants arriving in the U.S. after the Civil War overwhelmed the facilities, with patient counts soaring to 1,000 or more. Prejudice against the new arrivals led to discriminatory practices in which immigrants were not afforded the moral treatment provided to native-born citizens, even when resources were available to treat them.
Another leader in the moral treatment movement was Dorothea Dix (1802-1887), a New Englander who observed the deplorable conditions suffered by the mentally ill while teaching Sunday school to female prisoners. She instigated the mental hygiene movement, which focused on the physical well-being of patients. Over the span of 40 years, from 1841 to 1881, she motivated people and state legislators to do something about this injustice and raised millions of dollars to build over 30 more appropriate mental hospitals and improve others. Her efforts even extended beyond the U.S. to Canada and Scotland.
Finally, in 1908 Clifford Beers (1876-1943) published his book, A Mind That Found Itself, in which he described his personal struggle with bipolar disorder and the “cruel and inhumane treatment people with mental illnesses received. He witnessed and experienced horrific abuse at the hands of his caretakers. At one point during his institutionalization, he was placed in a straitjacket for 21 consecutive nights” (http://www.mentalhealthamerica.net/our-history). His story aroused public sympathy and led him to found the National Committee for Mental Hygiene, known today as Mental Health America, which provides education about mental illness and the need to treat those afflicted with dignity. Today, MHA has over 200 affiliates in 41 states and employs 6,500 affiliate staff and over 10,000 volunteers.
For more information on MHA, please visit: http://www.mentalhealthamerica.net/ | textbooks/socialsci/Psychology/Psychological_Disorders/Abnormal_Psychology_(Cummings)/02%3A_Perspectives_on_Abnormal_Behaviour/2.01%3A_Historical_Perspectives_on_Mental_Illness.txt |
Learning Objectives
• Describe the similarities and differences between theoretical orientations and therapeutic orientations
• Explain how the following orientations view abnormality and suggest we treat it: Psychodynamic/psychoanalytic, person-centered, behavioural, cognitive, cognitive behavioural, acceptance-based, and mindfulness-based
• Discuss emerging treatment strategies
What are Theoretical Orientations?
Although all psychologists share a common goal of seeking to understand human behaviour, psychology is a diverse discipline, in which many different ways of viewing human behaviour have developed. Those views are impacted by the cultural and historical context they exist within, as we have previously discussed in earlier sections of this book.
The purpose of a theoretical orientation is to present a framework through which to understand, organize, and predict human behaviour. Theoretical orientations explain, from that orientation’s perspective, why humans act the way they do. Applied to mental health, they are often referred to as therapeutic orientations and serve to also provide a framework for how to treat psychopathology. You can think of each orientation as a different pair of coloured glasses through which we view human behaviour. Each orientation will see human behaviour slightly differently, and thus have a different explanation for why people act the way they do. When applied to treatment, this means that each orientation might recommend different types of interventions for the same disorder.
Practicing mental health workers, like clinical psychologists, generally adopt a theoretical orientation that they feel best explains human behaviour and thus helps them treat their clients and patients. For example, as you will see below, a psychodynamic therapist will have a very different explanation for a client’s presenting problem than a behaviour therapist would, and as such each would choose different techniques to treat that presenting problem. In this section we’ll discuss some of the major theoretical/therapeutic orientations, how they view human behaviour and treatment, and what techniques they might use.
Psychoanalysis and Psychodynamic Therapy
The earliest organized therapy for mental disorders was psychoanalysis. Made famous in the early 20th century by one of the best-known clinicians of all time, Sigmund Freud, this approach sees mental health problems as rooted in unconscious conflicts and desires. In order to resolve the mental illness, then, these unconscious struggles must be identified and addressed. Psychoanalysis often does this through exploring one’s early childhood experiences that may have continuing repercussions on one’s mental health in the present and later in life. Psychoanalysis is an intensive, long-term approach in which patients and therapists may meet multiple times per week, often for many years.
Freud suggested more generally that psychiatric problems are the result of tension between different parts of the mind: the id, the superego, and the ego. In Freud’s structural model, the id represents pleasure-driven unconscious urges (e.g., our animalistic desires for sex and aggression), while the superego is the semi-conscious part of the mind where morals and societal judgment are internalized (e.g., the part of you that automatically knows how society expects you to behave). The ego—also partly conscious—mediates between the id and superego. Freud believed that bringing unconscious struggles like these (where the id demands one thing and the superego another) into conscious awareness would relieve the stress of the conflict (Freud, 1920/1955)—which became the goal of psychoanalytic therapy.
Although psychoanalysis is still practiced today, it has largely been replaced by the more broadly defined psychodynamic therapy. This latter approach has the same basic tenets as psychoanalysis, but is briefer, makes more of an effort to put clients in their social and interpersonal context, and focuses more on relieving psychological distress than on changing the person.
Techniques in Psychoanalysis
Psychoanalysts and psychodynamic therapists employ several techniques to explore patients’ unconscious mind. One common technique is called free association. Here, the patient shares any and all thoughts that come to mind, without attempting to organize or censor them in any way. For example, if you took a pen and paper and just wrote down whatever came into your head, letting one thought lead to the next without allowing conscious criticism to shape what you were writing, you would be doing free association. The analyst then uses his or her expertise to discern patterns or underlying meaning in the patient’s thoughts.
Sometimes, free association exercises are applied specifically to childhood recollections. That is, psychoanalysts believe a person’s childhood relationships with caregivers often determine the way that person relates to others, and predicts later psychiatric difficulties. Thus, exploring these childhood memories, through free association or otherwise, can provide therapists with insights into a patient’s psychological makeup.
Because we don’t always have the ability to consciously recall these deep memories, psychoanalysts also discuss their patients’ dreams. In Freudian theory, dreams contain not only manifest (or literal) content, but also latent (or symbolic) content (Freud, 1900; 1955). For example, someone may have a dream that his/her teeth are falling out—the manifest or actual content of the dream. However, dreaming that one’s teeth are falling out could be a reflection of the person’s unconscious concern about losing his or her physical attractiveness—the latent or metaphorical content of the dream. It is the therapist’s job to help discover the latent content underlying one’s manifest content through dream analysis.
In psychoanalytic and psychodynamic therapy, the therapist plays a receptive role—interpreting the patient’s thoughts and behavior based on clinical experience and psychoanalytic theory. For example, if during therapy a patient begins to express unjustified anger toward the therapist, the therapist may recognize this as an act of transference. That is, the patient may be displacing feelings for people in his or her life (e.g., anger toward a parent) onto the therapist. At the same time, though, the therapist has to be aware of his or her own thoughts and emotions, for, in a related process, called countertransference, the therapist may displace his/her own emotions onto the patient.
The key to psychoanalytic theory is to have patients uncover the buried, conflicting content of their mind, and therapists use various tactics—such as seating patients to face away from them—to promote a freer self-disclosure. And, as a therapist spends more time with a patient, the therapist can come to view his or her relationship with the patient as another reflection of the patient’s mind.
Advantages and Disadvantages of Psychoanalytic Therapy
Psychoanalysis was once the only type of psychotherapy available, but presently the number of therapists practicing this approach is decreasing around the world. Psychoanalysis is not appropriate for some types of patients, including those with severe psychopathology or intellectual disability. Further, psychoanalysis is often expensive because treatment usually lasts many years. Still, some patients and therapists find the prolonged and detailed analysis very rewarding.
Perhaps the greatest disadvantage of psychoanalysis and related approaches is the lack of empirical support for their effectiveness. The limited research that has been conducted on these treatments suggests that they do not reliably lead to better mental health outcomes (e.g., Driessen et al., 2010). And, although there are some reviews that seem to indicate that long-term psychodynamic therapies might be beneficial (e.g., Leichsenring & Rabung, 2008), other researchers have questioned the validity of these reviews. Nevertheless, psychoanalytic theory was history’s first attempt at formal treatment of mental illness, setting the stage for the more modern approaches used today.
Humanistic and Person-Centered Therapy
One of the next developments in therapy for mental illness, which arrived in the mid-20th century, is called humanistic or person-centered therapy (PCT). Here, the belief is that mental health problems result from an inconsistency between patients’ behavior and their true personal identity. Thus, the goal of PCT is to create conditions under which patients can discover their self-worth, feel comfortable exploring their own identity, and alter their behavior to better reflect this identity.
PCT was developed by a psychologist named Carl Rogers, during a time of significant growth in the movements of humanistic theory and human potential. These perspectives were based on the idea that humans have an inherent drive to realize and express their own capabilities and creativity. Rogers, in particular, believed that all people have the potential to change and improve, and that the role of therapists is to foster self-understanding in an environment where adaptive change is most likely to occur (Rogers, 1951). Rogers suggested that the therapist and patient must engage in a genuine, egalitarian relationship in which the therapist is nonjudgmental and empathetic. In PCT, the patient should experience both a vulnerability to anxiety, which motivates the desire to change, and an appreciation for the therapist’s support.
Techniques in Person-Centered Therapy
Humanistic and person-centered therapy, like psychoanalysis, involves a largely unstructured conversation between the therapist and the patient. Unlike psychoanalysis, though, a therapist using PCT takes a passive role, guiding the patient toward his or her own self-discovery. Rogers’s original name for PCT was non-directive therapy, and this notion is reflected in the flexibility found in PCT. Therapists do not try to change patients’ thoughts or behaviors directly. Rather, their role is to provide the therapeutic relationship as a platform for personal growth. In these kinds of sessions, the therapist tends only to ask questions and doesn’t provide any judgment or interpretation of what the patient says. Instead, the therapist is present to provide a safe and encouraging environment for the person to explore these issues for him- or herself.
An important aspect of the PCT relationship is the therapist’s unconditional positive regard for the patient’s feelings and behaviors. That is, the therapist is never to condemn or criticize the patient for what s/he has done or thought; the therapist is only to express warmth and empathy. This creates an environment free of approval or disapproval, where patients come to appreciate their value and to behave in ways that are congruent with their own identity.
Advantages and Disadvantages of Person-Centered Therapy
One key advantage of person-centered therapy is that it is highly acceptable to patients. In other words, people tend to find the supportive, flexible environment of this approach very rewarding. Furthermore, some of the themes of PCT translate well to other therapeutic approaches. For example, most therapists of any orientation find that clients respond well to being treated with nonjudgmental empathy. The main disadvantage to PCT, however, is that findings about its effectiveness are mixed. One possibility for this could be that the treatment is primarily based on nonspecific treatment factors. That is, rather than using therapeutic techniques that are specific to the patient and the mental problem (i.e., specific treatment factors), the therapy focuses on techniques that can be applied to anyone (e.g., establishing a good relationship with the patient) (Cuijpers et al., 2012; Friedli, King, Lloyd, & Horder, 1997). Similar to how “one-size-fits-all” doesn’t really fit every person, PCT uses the same practices for everyone, which may work for some people but not others. Further research is necessary to evaluate its utility as a therapeutic approach. It is important to note, however, that many practitioners incorporate Rogerian concepts, like unconditional positive regard, into their work even if they primarily practice from a different theoretical orientation.
The Behavioural Model
The behaviourists believed that how we act is learned: we continue to act in ways that are reinforced, and we avoid acting in ways that are punished. Likewise, they believed that abnormal behaviour results from learning and can be treated through new learning via new reinforcements and punishments. Early behaviourists identified several ways of learning. First, conditioning, a type of associative learning, occurs when two events are linked; it has two forms: classical conditioning, which links together two types of stimuli, and operant conditioning, which links a response with its consequence. Second, observational learning occurs when we learn by observing the world around us.
We should also note the existence of non-associative learning, which involves neither linking pieces of information together nor observing the actions of others. Types include habituation, when we simply stop responding to repetitive and harmless stimuli in our environment, such as the hum of a fan in your laptop as you work on a paper, and sensitization, when our reactions are heightened following a strong stimulus, such as an individual who experienced a mugging and now panics when someone walks up behind them on the street.
One of the most famous studies in psychology was conducted by Watson and Rayner (1920). They wanted to explore the possibility of conditioning emotional responses. The researchers ran a 9-month-old child, known as Little Albert, through a series of trials in which he was exposed to a white rat. At first, he showed no response except curiosity. Then the researchers began to make a loud sound (the unconditioned stimulus, UCS) whenever the rat was presented. Little Albert exhibited the normal fear response to this sound. After several conditioning trials like these, Albert responded with fear to the mere presence of the white rat, which had become a conditioned stimulus for fear.
As fears can be learned, so too can they be unlearned. Considered the follow-up to Watson and Rayner (1920), Jones (1924) wanted to see if a child (named Peter) who had learned to be afraid of white rabbits could be conditioned to become unafraid of them. She placed Peter at one end of a room and then brought in the rabbit, far enough away so as not to cause distress. Then, Jones gave Peter some pleasant food (i.e., something sweet such as cookies; remember, the response to the food is unlearned). She continued this procedure, bringing the rabbit a bit closer each time, until eventually Peter did not respond with distress to the rabbit. This process is called counterconditioning: the reversal of previous learning by pairing the feared stimulus with a pleasant one. It is related to, though distinct from, extinction, in which the conditioned stimulus is repeatedly presented without the unconditioned stimulus until the learned response fades.
Operant conditioning is more directly relevant to therapeutic work. It is a type of associative learning that focuses on the consequences that follow a response or behaviour (anything we do, say, or think/feel) and on whether those consequences make the behaviour more or less likely to occur. Skinner talked about contingencies, or when one thing occurs because of another. Think of it as an if-then statement: if I make a behaviour, then a specific consequence will follow. The two events (response and consequence) are linked in time.
What form do these consequences take? There are two main ways they can present themselves.
• Reinforcement – Due to the consequence, a behavior/response is more likely to occur in the future. It is strengthened.
• Punishment – Due to the consequence, a behavior/response is less likely to occur in the future. It is weakened.
Reinforcement and punishment each come in two types: positive and negative. These words carry no affective connotation here; they do not imply good or bad. Positive means that something is given or added, whether good or bad. Negative means that something is taken away, whether good or bad. Check out the figure below for how these contingencies are arranged.
Figure 2.1. Contingencies in Operant Conditioning
Let’s go through each:
• Positive Punishment (PP) – If something bad or aversive is given or added, then the behavior is less likely to occur in the future. If you talk back to your mother and she slaps your mouth, this is a PP. Your response of talking back led to the consequence of the aversive slap being delivered or given to your face. Ouch!!!
• Positive Reinforcement (PR) – If something good is given or added, then the behavior is more likely to occur in the future. If you study hard and earn an A on your exam, you will be more likely to study hard in the future. Similarly, your parents may give you money for your stellar performance. Cha Ching!!!
• Negative Reinforcement (NR) – This is a tough one for students to comprehend because the terms don’t seem to go together and are counterintuitive. But it is really simple and you experience NR all the time. This is when you are more likely to engage in a behavior that has resulted in the removal of something aversive in the past. For instance, what do you do if you have a headache? You likely answered take Tylenol. If you do this and the headache goes away, you will take Tylenol in the future when you have a headache. Another example is continually smoking marijuana because it temporarily decreases feelings of anxiety. The behavior of smoking marijuana is being reinforced because it reduces a negative state.
• Negative Punishment (NP) – This is when something good is taken away or subtracted making a behavior less likely in the future. If you are late to class and your professor deducts 5 points from your final grade (the points are something good and the loss is negative), you will hopefully be on time in all subsequent classes. Another example is taking away a child’s allowance when he misbehaves.
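The four contingencies above form a simple two-way table, crossing whether a stimulus is added or removed with whether it is pleasant or aversive. As a study aid, the sketch below (in Python; the function name and argument labels are hypothetical, not from the text) encodes that table as a lookup:

```python
# A minimal sketch encoding the 2x2 operant-conditioning contingency table:
# "positive" = a stimulus is given/added, "negative" = a stimulus is taken away;
# reinforcement strengthens a behavior, punishment weakens it.

def classify_contingency(stimulus_change: str, stimulus_valence: str) -> str:
    """Return the operant contingency for a consequence.

    stimulus_change:  "added" or "removed"
    stimulus_valence: "pleasant" or "aversive"
    """
    if stimulus_change == "added":
        # Positive = something is given, good or bad
        return ("positive reinforcement" if stimulus_valence == "pleasant"
                else "positive punishment")
    if stimulus_change == "removed":
        # Negative = something is taken away, good or bad
        return ("negative punishment" if stimulus_valence == "pleasant"
                else "negative reinforcement")
    raise ValueError("stimulus_change must be 'added' or 'removed'")

# Examples from the text:
# a slap after talking back = an aversive stimulus is added
assert classify_contingency("added", "aversive") == "positive punishment"
# Tylenol removing a headache = an aversive stimulus is removed
assert classify_contingency("removed", "aversive") == "negative reinforcement"
```

Note that the behavior's future likelihood depends only on these two dimensions of the consequence, which is why the same word ("negative") can describe both a reinforcer and a punisher.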
Techniques in Behaviour Therapy
Within the context of abnormal behavior or psychopathology, the behavioral perspective is useful because it suggests that maladaptive behavior occurs when learning goes awry. As you will see throughout this book, a large number of treatment techniques have been developed from the behavioural model and have proven effective over the years. For example, desensitization (Wolpe, 1997) teaches clients to respond calmly to fear-producing stimuli. It begins with the individual learning a relaxation technique such as diaphragmatic breathing. Next, a fear hierarchy, or list of feared objects and situations, is constructed in which the individual moves from least to most feared. Finally, the individual either imagines (systematic) or experiences in real life (in-vivo) each object or scenario from the hierarchy and uses the relaxation technique while doing so. Each step represents a pairing of a feared object or situation with relaxation, so if there are 10 objects/situations in the list, the client will experience ten such pairings and eventually be able to face each without fear. Outside of phobias, desensitization has been shown to be effective in the treatment of Obsessive Compulsive Disorder symptoms (Hakimian and D’Souza, 2016) and, to a more limited extent, in the treatment of depression that is co-morbid with OCD (Masoumeh and Lancy, 2016).
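The stepwise logic of desensitization described above can be sketched as pseudocode-style Python. This is purely illustrative (the names `desensitize`, `relax`, `expose`, and `fear_remains` are hypothetical stand-ins, not a clinical protocol):

```python
# Illustrative sketch of the desensitization procedure: pair each item in a
# fear hierarchy, ordered least to most feared, with a relaxation exercise,
# repeating until the item no longer evokes distress.

def desensitize(fear_hierarchy, relax, expose, fear_remains):
    """Walk the hierarchy from least to most feared; return total pairings.

    fear_hierarchy: list of feared objects/situations, least feared first
    relax:          callable performing a relaxation technique
    expose:         callable presenting one item (imagined or in vivo)
    fear_remains:   callable returning True while the item still evokes fear
    """
    pairings = 0
    for item in fear_hierarchy:
        while True:
            relax()                     # e.g., diaphragmatic breathing
            expose(item)                # systematic (imagined) or in-vivo
            pairings += 1
            if not fear_remains(item):  # move on once the item is faced calmly
                break
    return pairings

# If fear subsides after a single pairing per item, a 3-item hierarchy
# yields 3 pairings:
assert desensitize(["photo", "toy spider", "live spider"],
                   relax=lambda: None,
                   expose=lambda item: None,
                   fear_remains=lambda item: False) == 3
```

The inner loop captures the clinically important point that an item is repeated, not skipped, until it can be faced without distress.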
Critics of the behavioral perspective point out that it oversimplifies behavior and often ignores inner determinants of behavior. Behaviorism has also been accused of being mechanistic and seeing people as machines. Watson and Skinner defined behavior as what we do or say, but later, behaviorists added what we think or feel. In terms of the latter, cognitive behavior modification procedures arose after the 1960s along with the rise of cognitive psychology. This led to a cognitive-behavioral perspective which combines concepts from the behavioral and cognitive models. After reviewing the basics of the cognitive model, we’ll discuss the cognitive-behavioural model in more detail.
The Cognitive Model
Behaviorism held that psychology was to be the study of observable behavior; any reference to cognitive processes was dismissed because, according to Watson and later Skinner, such processes were covert rather than overt. Of course, removing cognition from the study of psychology ignored an important part of what makes us human and separates us from the rest of the animal kingdom. Fortunately, the work of George Miller, Albert Ellis, Aaron Beck, and Ulrich Neisser demonstrated the importance of cognitive abilities in understanding thoughts, behaviors, and emotions, and in the case of psychopathology, they helped to show that people can create their own problems by how they come to interpret events experienced in the world around them. How so? According to the cognitive model, irrational or dysfunctional thought patterns can be the basis of psychopathology. Throughout this book, we will discuss several treatment strategies that are used to change unwanted, maladaptive cognitions, whether they are present as an excess such as with paranoia, suicidal ideation, or feelings of worthlessness; or as a deficit such as with self-confidence and self-efficacy. More specifically, cognitive distortions/maladaptive cognitions can take the following forms:
• Overgeneralizing – You see a larger pattern of negatives based on one event.
• What if? – Asking yourself what if something happens without being satisfied by any of the answers.
• Blaming – Focusing on someone else as the source of your negative feelings and not taking any responsibility for changing yourself.
• Personalizing – Blaming yourself for negative events rather than seeing the role that others play.
• Inability to disconfirm – Ignoring any evidence that may contradict your maladaptive cognition.
• Regret orientation – Focusing on what you could have done better in the past rather than on making an improvement now.
• Dichotomous thinking – Viewing people or events in all-or-nothing terms.
For more on cognitive distortions, check out this website: http://www.goodtherapy.org/blog/20-cognitive-distortions-and-how-they-affect-your-life-0407154
Cognitive Behavioral Therapy
Although the behavioural and cognitive models have separate origins, cognitive-behavioral therapy (CBT), which combines elements of both, has gained more widespread support and practice. CBT refers to a family of therapeutic approaches whose goal is to alleviate psychological symptoms by changing their underlying cognitions and behaviors. The premise of CBT is that thoughts, behaviors, and emotions interact and contribute to various mental disorders. For example, let’s consider how a CBT therapist would view a patient who compulsively washes her hands for hours every day. First, the therapist would identify the patient’s maladaptive thought: “If I don’t wash my hands like this, I will get a disease and die.” The therapist then identifies how this maladaptive thought leads to a maladaptive emotion: the feeling of anxiety when her hands aren’t being washed. And finally, this maladaptive emotion leads to the maladaptive behavior: the patient washing her hands for hours every day.
CBT is a present-focused therapy (i.e., focused on the “now” rather than causes from the past, such as childhood relationships) that uses behavioral goals to improve one’s mental illness. Often, these behavioral goals involve between-session homework assignments. For example, the therapist may give the hand-washing patient a worksheet to take home; on this worksheet, the woman is to write down every time she feels the urge to wash her hands, how she deals with the urge, and what behavior she replaces that urge with. When the patient has her next therapy session, she and the therapist review her “homework” together. CBT is a relatively brief intervention of 12 to 16 weekly sessions, closely tailored to the nature of the psychopathology and treatment of the specific mental disorder. And, as the empirical data shows, CBT has proven to be highly efficacious for virtually all psychiatric illnesses (Hofmann, Asnaani, Vonk, Sawyer, & Fang, 2012).
History of Cognitive Behavioral Therapy
CBT developed from clinical work conducted in the mid-20th century by Dr. Aaron T. Beck, a psychiatrist, and Albert Ellis, a psychologist. Beck used the term automatic thoughts to refer to the thoughts depressed patients report experiencing spontaneously. He observed that these thoughts arise from three belief systems, or schemas: beliefs about the self, beliefs about the world, and beliefs about the future. In treatment, therapy initially focuses on identifying automatic thoughts (e.g., “If I don’t wash my hands constantly, I’ll get a disease”), testing their validity, and replacing maladaptive thoughts with more adaptive thoughts (e.g., “Washing my hands three times a day is sufficient to prevent a disease”). In later stages of treatment, the patient’s maladaptive schemas are examined and modified. Ellis (1957) took a comparable approach in what he called rational emotive behavior therapy (REBT), which also encourages patients to evaluate their own thoughts about situations.
Techniques in CBT
Beck and Ellis strove to help patients identify maladaptive appraisals, or the untrue judgments and evaluations of certain thoughts. For example, if it’s your first time meeting new people, you may have the automatic thought, “These people won’t like me because I have nothing interesting to share.” That thought itself is not what’s troublesome; the appraisal (or evaluation) that it might have merit is what’s troublesome. The goal of CBT is to help people make adaptive, instead of maladaptive, appraisals (e.g., “I do know interesting things!”). This technique of reappraisal, or cognitive restructuring, is a fundamental aspect of CBT. With cognitive restructuring, it is the therapist’s job to help point out when a person has an inaccurate or maladaptive thought, so that the patient can either eliminate it or modify it to be more adaptive.
In addition to thoughts, though, another important treatment target of CBT is maladaptive behavior. Every time a person engages in maladaptive behavior (e.g., never speaking to someone in new situations), he or she reinforces the validity of the maladaptive thought, thus maintaining or perpetuating the psychological illness. In treatment, the therapist and patient work together to develop healthy behavioral habits (often tracked with worksheet-like homework), so that the patient can break this cycle of maladaptive thoughts and behaviors.
For many mental health problems, especially anxiety disorders, CBT incorporates what is known as exposure therapy. During exposure therapy, a patient confronts a problematic situation and fully engages in the experience instead of avoiding it. For example, imagine a man who is terrified of spiders. Whenever he encounters one, he immediately screams and panics. In exposure therapy, the man would be forced to confront and interact with spiders, rather than simply avoiding them as he usually does. The goal is to reduce the fear associated with the situation through extinction learning, a neurobiological and cognitive process by which the patient “unlearns” the irrational fear. For example, exposure therapy for someone terrified of spiders might begin with him looking at a cartoon of a spider, followed by him looking at pictures of real spiders, and later, him handling a plastic spider. After weeks of this incremental exposure, the patient may even be able to hold a live spider. After repeated exposure (starting small and building one’s way up), the patient experiences less physiological fear and maladaptive thoughts about spiders, breaking his tendency for anxiety and subsequent avoidance.
Advantages and Disadvantages of CBT
CBT interventions tend to be relatively brief, making them cost-effective for the average consumer. In addition, CBT is an intuitive treatment that makes logical sense to patients. It can also be adapted to suit the needs of many different populations. One disadvantage, however, is that CBT does involve significant effort on the patient’s part, because the patient is an active participant in treatment. Therapists often assign “homework” (e.g., worksheets for recording one’s thoughts and behaviors) between sessions to maintain the cognitive and behavioral habits the patient is working on. The greatest strength of CBT is the abundance of empirical support for its effectiveness. Studies have consistently found CBT to be equally or more effective than other forms of treatment, including medication and other therapies (Butler, Chapman, Forman, & Beck, 2006; Hofmann et al., 2012). For this reason, CBT is considered a first-line treatment for many mental disorders.
Focus: Topic: Pioneers of CBT
The central notion of CBT is the idea that a person’s behavioral and emotional responses are causally influenced by one’s thinking. The Stoic Greek philosopher Epictetus is quoted as saying, “men are not moved by things, but by the view they take of them.” That is, it is not the event per se, but rather one’s assumptions (including interpretations and perceptions) of the event that are responsible for one’s emotional response to it. Beck calls these assumptions about events and situations automatic thoughts (Beck, 1979), whereas Ellis (1962) refers to these assumptions as self-statements. The cognitive model assumes that these cognitive processes cause the emotional and behavioral responses to events or stimuli. This causal chain is illustrated in Ellis’s ABC model, in which A stands for the antecedent event, B stands for belief, and C stands for consequence. During CBT, the person is encouraged to carefully observe the sequence of events and the response to them, and then explore the validity of the underlying beliefs through behavioral experiments and reasoning, much like a detective or scientist.
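The A-B-C causal chain above lends itself to a small data-structure sketch. The record type and the example beliefs below are hypothetical illustrations (not from the source), showing the model's key claim: the same antecedent with different beliefs yields different consequences.

```python
# Illustrative sketch of Ellis's ABC model: the belief (B), not the
# antecedent event (A) itself, produces the consequence (C).
from dataclasses import dataclass

@dataclass
class ABCRecord:
    antecedent: str   # A: the activating event
    belief: str       # B: the assumption/interpretation of the event
    consequence: str  # C: the resulting emotional/behavioral response

# Same event, different beliefs, different consequences:
event = "A friend doesn't return my call"
r1 = ABCRecord(event, "They must be angry with me", "anxiety, rumination")
r2 = ABCRecord(event, "They're probably just busy", "mild disappointment")

assert r1.antecedent == r2.antecedent      # identical A...
assert r1.consequence != r2.consequence    # ...but B drives a different C
```

A worksheet of such records is close in spirit to the observation step of CBT, in which the person logs events, beliefs, and responses before testing the beliefs' validity.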
Acceptance and Mindfulness-Based Approaches
Unlike the preceding therapies, which were developed in the 20th century, this next one was born out of age-old Buddhist and yoga practices. Mindfulness, a process that cultivates a nonjudgmental yet attentive mental state, is a therapeutic practice that focuses on one’s awareness of bodily sensations, thoughts, and the outside environment. Whereas other therapies work to modify or eliminate these sensations and thoughts, mindfulness focuses on nonjudgmentally accepting them (Kabat-Zinn, 2003; Baer, 2003). For example, whereas CBT may actively confront and work to change a maladaptive thought, mindfulness therapy works to acknowledge and accept the thought, understanding that the thought is spontaneous and not what the person truly believes. There are two important components of mindfulness: (1) self-regulation of attention, and (2) orientation toward the present moment (Bishop et al., 2004). Mindfulness is thought to improve mental health because it draws attention away from past and future stressors, encourages acceptance of troubling thoughts and feelings, and promotes physical relaxation.
Techniques in Mindfulness-Based Therapy
Psychologists have adapted the practice of mindfulness as a form of psychotherapy, generally called mindfulness-based therapy (MBT). Several types of MBT have become popular in recent years, including mindfulness-based stress reduction (MBSR) (e.g., Kabat-Zinn, 1982) and mindfulness-based cognitive therapy (MBCT) (e.g., Segal, Williams, & Teasdale, 2002).
MBSR uses meditation, yoga, and attention to physical experiences to reduce stress. The hope is that reducing a person’s overall stress will allow that person to more objectively evaluate his or her thoughts. In MBCT, rather than reducing one’s general stress to address a specific problem, attention is focused on one’s thoughts and their associated emotions. For example, MBCT helps prevent relapses in depression by encouraging patients to evaluate their own thoughts objectively and without value judgment (Baer, 2003). Although cognitive behavioral therapy (CBT) may seem similar to this, it focuses on “pushing out” the maladaptive thought, whereas mindfulness-based cognitive therapy focuses on “not getting caught up” in it. The treatments used in MBCT have been used to address a wide range of illnesses, including depression, anxiety, chronic pain, coronary artery disease, and fibromyalgia (Hofmann, Sawyer, Witt & Oh, 2010).
Mindfulness and acceptance—in addition to being therapies in their own right—have also been used as “tools” in other cognitive-behavioral therapies, particularly in dialectical behavior therapy (DBT) (e.g., Linehan, Armstrong, Suarez, Allmon, & Heard, 1991). DBT, often used in the treatment of borderline personality disorder, focuses on skills training. That is, it often employs mindfulness and cognitive behavioral therapy practices, but it also works to teach its patients “skills” they can use to correct maladaptive tendencies. For example, one skill DBT teaches patients is called distress tolerance—or, ways to cope with maladaptive thoughts and emotions in the moment. For instance, people who feel an urge to cut themselves may be taught to snap their arm with a rubber band instead. The primary difference between DBT and CBT is that DBT employs techniques that address the symptoms of the problem (e.g., cutting oneself) rather than the problem itself (e.g., understanding the psychological motivation to cut oneself). CBT does not teach such skills training because of the concern that the skills—even though they may help in the short-term—may be harmful in the long-term, by maintaining maladaptive thoughts and behaviors.
DBT is founded on the perspective of a dialectical worldview. That is, rather than thinking of the world as “black and white,” or “only good and only bad,” it focuses on accepting that some things can have characteristics of both “good” and “bad.” So, in a case involving maladaptive thoughts, instead of teaching that a thought is entirely bad, DBT tries to help patients be less judgmental of their thoughts (as with mindfulness-based therapy) and encourages change through therapeutic progress, using cognitive-behavioral techniques as well as mindfulness exercises.
Another form of treatment that also uses mindfulness techniques is acceptance and commitment therapy (ACT) (Hayes, Strosahl, & Wilson, 1999). In this treatment, patients are taught to observe their thoughts from a detached perspective (Hayes et al., 1999). ACT encourages patients not to attempt to change or avoid thoughts and emotions they observe in themselves, but to recognize which are beneficial and which are harmful. However, the differences among ACT, CBT, and other mindfulness-based treatments are a topic of controversy in the current literature.
Advantages and Disadvantages of Mindfulness-Based Therapy
Two key advantages of mindfulness-based therapies are their acceptability and accessibility to patients. Because yoga and meditation are already widely known in popular culture, consumers of mental healthcare are often interested in trying related psychological therapies. Currently, psychologists have not come to a consensus on the efficacy of MBT, though growing evidence supports its effectiveness for treating mood and anxiety disorders. For example, one review of MBT studies for anxiety and depression found that mindfulness-based interventions generally led to moderate symptom improvement (Hofmann et al., 2010).
Emerging Treatment Strategies
With growth in research and technology, psychologists have been able to develop new treatment strategies in recent years. Often, these approaches focus on enhancing existing treatments, such as cognitive-behavioral therapies, through the use of technological advances. For example, internet- and mobile-delivered therapies make psychological treatments more available, through smartphones and online access. Clinician-supervised online CBT modules allow patients to access treatment from home on their own schedule—an opportunity particularly important for patients with less geographic or socioeconomic access to traditional treatments. Furthermore, smartphones help extend therapy to patients’ daily lives, allowing for symptom tracking, homework reminders, and more frequent therapist contact.
Another benefit of technology is cognitive bias modification. Here, patients are given exercises, often through the use of video games, aimed at changing their problematic thought processes. For example, researchers might use a mobile app to train alcohol abusers to avoid stimuli related to alcohol. One version of this game flashes four pictures on the screen—three alcohol cues (e.g., a can of beer, the front of a bar) and one health-related image (e.g., someone drinking water). The goal is for the patient to tap the healthy picture as fast as s/he can. Games like these aim to target patients’ automatic, subconscious thoughts that may be difficult to direct through conscious effort. That is, by repeatedly tapping the healthy image, the patient learns to “ignore” the alcohol cues, so when those cues are encountered in the environment, they will be less likely to trigger the urge to drink. Approaches like these are promising because of their accessibility; however, they require further research to establish their effectiveness.
Yet another emerging treatment employs CBT-enhancing pharmaceutical agents. These are drugs used to improve the effects of therapeutic interventions. Based on research from animal experiments, researchers have found that certain drugs influence the biological processes known to be involved in learning. Thus, if people take these drugs while going through psychotherapy, they are better able to “learn” the techniques for improvement. For example, the antibiotic d-cycloserine improves treatment for anxiety disorders by facilitating the learning processes that occur during exposure therapy. Ongoing research in this exciting area may prove to be quite fruitful.
Conclusion
Throughout human history we have had to deal with mental illness in one form or another. Over time, several schools of thought have emerged for treating these problems. Although various therapies have been shown to work for specific individuals, cognitive behavioral therapy is currently the treatment most widely supported by empirical research. Still, practices like psychodynamic therapies, person-centered therapy, mindfulness-based treatments, and acceptance and commitment therapy have also shown success. And, with recent advances in research and technology, clinicians are able to enhance these and other therapies to treat more patients more effectively than ever before. However, what is important in the end is that people actually seek out mental health specialists to help them with their problems. One of the biggest deterrents to doing so is that people don’t understand what psychotherapy really entails. Through understanding how current practices work, not only can we better educate people about how to get the help they need, but we can continue to advance our treatments to be more effective in the future.
Outside Resources
Article: A personal account of the benefits of mindfulness-based therapy https://www.theguardian.com/lifeandstyle/2014/jan/11/julie-myerson-mindfulness-based-cognitive-therapy
Article: The Effect of Mindfulness-Based Therapy on Anxiety and Depression: A Meta-Analytic Review https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2848393/
Video: An example of a person-centered therapy session.
A YouTube element has been excluded from this version of the text. You can view it online here: https://openpress.usask.ca/abnormalpsychology/?p=87
Video: Carl Rogers, the founder of the humanistic, person-centered approach to psychology, discusses the position of the therapist in PCT.
A YouTube element has been excluded from this version of the text. You can view it online here: https://openpress.usask.ca/abnormalpsychology/?p=87
Video: CBT (cognitive behavioral therapy) is one of the most common treatments for a range of mental health problems, from anxiety, depression, bipolar, OCD or schizophrenia. This animation explains the basics and how you can decide whether it’s best for you or not.
A YouTube element has been excluded from this version of the text. You can view it online here: https://openpress.usask.ca/abnormalpsychology/?p=87
Web: An overview of the purpose and practice of cognitive behavioral therapy (CBT) http://psychcentral.com/lib/in-depth-cognitive-behavioral-therapy/
Web: The history and development of psychoanalysis http://www.freudfile.org/psychoanalysis/history.html
Discussion Questions
1. Psychoanalytic theory is no longer the dominant therapeutic approach, because it lacks empirical support. Yet many consumers continue to seek psychoanalytic or psychodynamic treatments. Do you think psychoanalysis still has a place in mental health treatment? If so, why?
2. What might be some advantages and disadvantages of technological advances in psychological treatment? What will psychotherapy look like 100 years from now?
3. Some people have argued that all therapies are about equally effective, and that they all affect change through common factors such as the involvement of a supportive therapist. Does this claim sound reasonable to you? Why or why not?
4. When choosing a psychological treatment for a specific patient, what factors besides the treatment’s demonstrated efficacy should be taken into account?
Learning Objectives
• Describe how communication in the nervous system occurs.
• List the parts of the nervous system.
• Describe the structure of the neuron and all key parts.
• Outline how neural transmission occurs.
• Identify and define important neurotransmitters.
• List the major structures of the brain.
• Clarify how specific areas of the brain are involved in mental illness.
• Describe the role of genes in mental illness.
• Describe the role of hormonal imbalances in mental illness.
• Describe commonly used treatments for mental illness.
• Evaluate the usefulness of the biological model.
Proponents of the biological model view mental illness as the result of a malfunction in the body, including issues with brain anatomy or chemistry. As such, we will need to establish a foundation for how communication in the nervous system occurs, what the parts of the nervous system are, what a neuron is and its structure, how neural transmission occurs, and what the parts of the brain are. While doing this, we will identify areas of concern for psychologists focused on the treatment of mental disorders.
Brain Structure and Chemistry
Communication in the Nervous System
To really understand brain structure and chemistry, it is a good idea to understand how communication occurs within the nervous system. Simply:
1. Receptor cells in each of the five sensory systems detect energy.
2. This information is passed to the nervous system due to the process of transduction and through sensory or afferent neurons, which are part of the peripheral nervous system.
3. The information is received by brain structures (central nervous system) and perception occurs.
4. Once the information has been interpreted, commands are sent out, telling the body how to respond, also via the peripheral nervous system.
Please note that we will not cover this process in full, but just the parts relevant to our topic of psychopathology.
The Nervous System
The nervous system consists of two main parts – the central and peripheral nervous systems. The central nervous system (CNS) is the control center for the nervous system, which receives, processes, interprets, and stores incoming sensory information. It consists of the brain and spinal cord. The peripheral nervous system consists of everything outside the brain and spinal cord. It handles the CNS’s input and output and divides into the somatic and autonomic nervous systems. The somatic nervous system allows for voluntary movement by controlling the skeletal muscles and it carries sensory information to the CNS. The autonomic nervous system regulates the functioning of blood vessels, glands, and internal organs such as the bladder, stomach, and heart. It consists of the sympathetic and parasympathetic nervous systems. The sympathetic nervous system is involved when a person is intensely aroused. It provides the strength to fight back or to flee (fight-or-flight response). Eventually, the response brought about by the sympathetic nervous system must end, so the parasympathetic nervous system kicks in to calm the body.
Figure 2.3. The Structure of the Nervous System
The Neuron
The fundamental unit of the nervous system is the neuron, or nerve cell (See Figure 2.3). It has several structures in common with all cells in the body. The nucleus is the control center of the cell and the soma is the cell body. In terms of structures that make it different, these focus on the ability of a neuron to send and receive information. The axon sends signals/information through the neuron while the dendrites receive information from neighboring neurons and look like little trees. Notice the s on the end of dendrite and that axon has no such letter. In other words, there are lots of dendrites but only one axon. Also of importance to the neuron is the myelin sheath or the white, fatty covering which: 1) provides insulation so that signals from adjacent neurons do not affect one another and, 2) increases the speed at which signals are transmitted. The axon terminals are the end of the axon where the electrical impulse becomes a chemical message and is released into the synaptic cleft which is the space between neurons.
Though not neurons, glial cells play an important part in helping the nervous system to be the efficient machine that it is. Glial cells are support cells in the nervous system that serve five main functions.
1. They act as a glue and hold the neuron in place.
2. They form the myelin sheath.
3. They provide nourishment for the cell.
4. They remove waste products.
5. They protect the neuron from harmful substances.
Finally, nerves are a group of axons bundled together like wires in an electrical cable.
Figure 2.4. The Structure of the Neuron
Neural Transmission
Transducers or receptor cells in the major organs of our five sensory systems – vision (the eyes), hearing (the ears), smell (the nose), touch (the skin), and taste (the tongue) – convert the physical energy that they detect or sense, and send it to the brain via the neural impulse. How so? We will cover this process in three parts.
Part 1. The Neural Impulse
• Step 1 – Neurons waiting to fire are said to be in resting potential and to be polarized (meaning they have a negative charge inside the neuron and a positive charge outside).
• Step 2 – If adequately stimulated, the neuron experiences an action potential and becomes depolarized. When this occurs, ion gated channels open allowing positively charged Sodium (Na) ions to enter. This shifts the polarity to positive on the inside and negative outside.
• Step 3 – Once the action potential passes from one segment of the axon to the next, the previous segment begins to repolarize. This occurs because the Na channels close and Potassium (K) channels open. K has a positive charge and so the neuron becomes negative again on the inside and positive on the outside.
• Step 4 – Immediately after the neuron fires, there is a brief period during which it will not fire again no matter how much stimulation it receives. This is called the absolute refractory period.
• Step 5 – After a short period of time, the neuron can fire again, but needs greater than normal levels of stimulation to do so. This is called the relative refractory period.
• Step 6 – Please note that the process is cyclical. Once the relative refractory period has passed the neuron returns to its resting potential.
Part 2. The Action Potential
Let’s look at the electrical portion of the process in another way and add some detail.
Figure 2.5. The Action Potential
• Recall that a neuron is normally at resting potential and polarized. The charge inside is -70 mV at rest.
• If the neuron receives sufficient stimulation, the voltage inside it rises from -70 mV to -55 mV, the threshold of excitation, and the neuron fires, sending an electrical impulse down the length of the axon (the action potential, or depolarization). Note that the neuron either reaches -55 mV and fires or it does not fire at all. This is the all-or-nothing principle: the threshold must be reached.
• Once the electrical impulse has passed from one segment of the axon to the next, the neuron begins the process of resetting called repolarization.
• During repolarization, the neuron will not fire no matter how much stimulation it receives. This is called the absolute refractory period.
• The neuron next moves into the relative refractory period, meaning it can fire but needs greater than normal levels of stimulation. Notice how the line has dropped below -70 mV. Hence, to reach -55 mV and fire, the neuron will need more than the normal gain of +15 mV (-70 to -55 mV).
• And then it returns to resting potential, as you saw in Figure 2.3
Ions are charged particles found both inside and outside the neuron. It is positively charged Sodium (Na) ions that cause the neuron to depolarize and fire and positively charged Potassium (K) ions that exit and return the neuron to a polarized state.
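The voltage bookkeeping above can be illustrated with a toy calculation. This is a sketch only – real membrane potentials change continuously rather than in single jumps – but the numbers simply restate the -70 mV resting potential, -55 mV threshold, and +15 mV gain from the text:

```python
# Toy model of the all-or-nothing principle (illustrative only; real
# neurons integrate continuous ionic currents, not one-shot mV inputs).

RESTING_POTENTIAL = -70  # mV, the polarized resting state
THRESHOLD = -55          # mV, the threshold of excitation

def fires(membrane_potential_mv, stimulation_mv):
    """Return True if stimulation pushes the neuron to threshold or beyond."""
    return membrane_potential_mv + stimulation_mv >= THRESHOLD

# At rest, a gain of exactly +15 mV reaches -55 mV and the neuron fires.
print(fires(RESTING_POTENTIAL, 15))  # True:  -70 + 15 = -55, fires
print(fires(RESTING_POTENTIAL, 14))  # False: -56 mV is below threshold

# During the relative refractory period the inside is below -70 mV,
# so greater-than-normal stimulation is needed to fire.
print(fires(-75, 15))                # False: only reaches -60 mV
print(fires(-75, 20))                # True:  reaches -55 mV and fires
```

The last two calls show why the hyperpolarized neuron in the relative refractory period needs more than the usual +15 mV gain.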
Part 3. The Synapse
The electrical portion of the neural impulse is just the start. The actual code passes from one neuron to another in a chemical form called a neurotransmitter. The point where this occurs is called the synapse. The synapse consists of three parts – the axon terminals of the sending neuron (presynaptic neuron); the space in between called the synaptic cleft, space, or gap; and the dendrite of the receiving neuron (postsynaptic neuron). Once the electrical impulse reaches the end of the axon, called the axon terminal, it stimulates synaptic vesicles or neurotransmitter sacs to release the neurotransmitter. Neurotransmitters will only bind to their specific receptor sites, much like a key will only fit into the lock it was designed for. You might say neurotransmitters are part of a lock-and-key system. What happens to the neurotransmitters that do not bind to a receptor site? They might go through reuptake which is a process in which the presynaptic neuron takes back excess neurotransmitters in the synaptic space for future use or enzymatic degradation when enzymes destroy excess neurotransmitters in the synaptic space.
Neurotransmitters
What exactly are some of the neurotransmitters which are so critical for neural transmission, and are important to our discussion of psychopathology?
• Dopamine – controls voluntary movements and is associated with the reward mechanism in the brain
• Serotonin – controls pain, the sleep cycle, and digestion; supports a stable mood, so low levels are associated with depression
• Norepinephrine – increases the heart rate and blood pressure and regulates mood
• GABA – an inhibitory neurotransmitter that blocks the signals of excitatory neurotransmitters involved in anxiety and panic
• Glutamate – an excitatory neurotransmitter associated with learning and memory
The critical thing to understand here is that there is a belief in the realm of mental health that chemical imbalances are responsible for many mental disorders. Chief among these are neurotransmitter imbalances. For instance, people with Seasonal Affective Disorder (SAD) have difficulty regulating serotonin. More on this throughout the book as we discuss each disorder.
The Brain
The central nervous system consists of the brain and spinal cord; we will discuss the former briefly, in terms of key structures, which include:
• Medulla – regulates breathing, heart rate, and blood pressure
• Pons – acts as a bridge connecting the cerebellum and medulla and helps to transfer messages between different parts of the brain and spinal cord.
• Reticular formation – responsible for alertness and attention
• Cerebellum – involved in our sense of balance and for coordinating the body’s muscles so that movement is smooth and precise. Involved in the learning of certain kinds of simple responses and acquired reflexes.
• Thalamus – major sensory relay center for all senses except smell.
• Hypothalamus – involved in drives associated with the survival of both the individual and the species. It regulates temperature by triggering sweating or shivering and controls the complex operations of the autonomic nervous system.
• Amygdala – responsible for evaluating sensory information and quickly determining its emotional importance
• Hippocampus – our “gateway” to memory. Allows us to form spatial memories so that we can accurately navigate through our environment and helps us to form new memories (involved in memory consolidation)
• The cerebrum has four distinct regions in each cerebral hemisphere. First, the frontal lobe contains the motor cortex which issues orders to the muscles of the body that produce voluntary movement. The frontal lobe is also involved in emotion and in the ability to make plans, think creatively, and take initiative. The parietal lobe contains the somatosensory cortex and receives information about pressure, pain, touch, and temperature from sense receptors in the skin, muscles, joints, internal organs, and taste buds. The occipital lobe contains the visual cortex and receives and processes visual information. Finally, the temporal lobe is involved in memory, perception, and emotion. It contains the auditory cortex which processes sound.
Figure 2.6. Anatomy of the Brain
Of course, this is not an exhaustive list of structures found in the brain but gives you a pretty good idea of function and which structures help to support those functions. What is important to mental health professionals is that for some disorders, specific areas of the brain are involved. For instance, individuals with borderline personality disorder have been shown to have structural and functional changes in brain areas associated with impulse control and emotional regulation while imaging studies reveal differences in the frontal cortex and subcortical structures of individuals with OCD.
Exercises
Check out the following from Harvard Health for more on depression and the brain as a cause: https://www.health.harvard.edu/mind-and-mood/what-causes-depression
Genes, Hormonal Imbalances, and Viral Infections
Genetic Issues and Explanations
DNA, or deoxyribonucleic acid, is our hereditary material and is found in the nucleus of each cell, packaged in threadlike structures known as chromosomes. Most of us have 23 pairs of chromosomes, or 46 total. Twenty-two of these pairs are the same in both sexes, but the 23rd pair, called the sex chromosomes, differs between males and females. Males have X and Y chromosomes while females have two Xs. According to the Genetics Home Reference website, part of NIH’s National Library of Medicine, a gene is “the basic physical and functional unit of heredity” (https://ghr.nlm.nih.gov/primer/basics/gene). Genes act as the instructions to make proteins, and it is estimated by the Human Genome Project that we have between 20,000 and 25,000 genes. We all have two copies of each gene, one inherited from our mother and one from our father.
Recent research has discovered that autism, ADHD, bipolar disorder, major depression, and schizophrenia all share genetic roots. People with these disorders “were more likely to have suspect genetic variation at the same four chromosomal sites. These included risk versions of two genes that regulate the flow of calcium into cells.” For more on this development, please check out the article at: https://www.nimh.nih.gov/news/science-news/2013/five-major-mental-disorders-share-genetic-roots.shtml. Likewise, twin and family studies have shown that people with first-degree relatives with OCD are at higher risk of developing the disorder themselves. The same is true of most mental disorders. Indeed, it is presently believed that genetic factors contribute to all mental disorders but typically account for less than half of the explanation. Moreover, most mental disorders are linked to abnormalities in many genes, rather than just one; that is, most are polygenic.
Moreover, there are important gene-environment interactions that are unique for every person (even twins) which help to explain why some people with a genetic predisposition toward a certain disorder develop that disorder and others do not (e.g., why one identical twin may develop schizophrenia but the other does not). The diathesis-stress model posits that people can inherit tendencies or vulnerabilities to express certain traits, behaviors, or disorders, which may then be activated under certain environmental conditions like stress (e.g., abuse, traumatic events). However, it is also important to note that certain protective factors (like being raised in a consistent, loving, supportive environment) may modify the response to stress and thereby help to protect individuals against mental disorders.
Exercises
For more on the role of genes in the development of mental illness, check out this article from Psychology Today:
https://www.psychologytoday.com/blog/saving-normal/201604/what-you-need-know-about-the-genetics-mental-disorders
Hormonal Imbalances
The body has two coordinating and integrating systems: the nervous system and the endocrine system. The main difference between them is the speed with which they act. The nervous system moves quickly, with nerve impulses traveling in a few hundredths of a second. The endocrine system moves slowly, with hormones, released by endocrine glands, taking seconds or even minutes to reach their target. Hormones are important to psychologists because they organize the nervous system and body tissues at certain stages of development and activate behaviors such as alertness or sleepiness, sexual behavior, concentration, aggressiveness, reaction to stress, and a desire for companionship.
The pituitary gland is the “master gland” which regulates other endocrine glands. It influences blood pressure, thirst, contractions of the uterus during childbirth, milk production, sexual behavior and interest, body growth, the amount of water in the body’s cells, and other functions as well. The pineal gland produces melatonin, which helps regulate the sleep-wake cycle and other circadian rhythms. Overproduction of the hormone melatonin can lead to Seasonal Affective Disorder (a specific type of Major Depressive Disorder). The thyroid gland produces thyroxin, which facilitates energy, metabolism, and growth. Hypothyroidism is a condition in which the thyroid gland becomes underactive, and this condition can produce symptoms of depression. In contrast, hyperthyroidism is a condition in which the thyroid gland becomes overactive, and this condition can produce symptoms of mania. Therefore, it is important for individuals experiencing these symptoms to have their thyroid checked, because conventional treatments for depression and mania will not correct the problem with the thyroid and will therefore not resolve the symptoms. Rather, individuals with these conditions need to be treated with thyroid medications. Also of key importance to mental health professionals are the adrenal glands, which are located on top of the kidneys and release cortisol, a hormone that helps the body deal with stress. However, chronically elevated levels of cortisol can lead to increased weight gain, interfere with learning and memory, decrease the immune response, reduce bone density, increase cholesterol, and increase the risk of depression.
Figure 2.7. Hormone Systems
The Hypothalamic-Pituitary-Adrenal-Cortical Axis (HPA Axis) is the connection between the hypothalamus, pituitary glands, and adrenal glands. Specifically, the hypothalamus releases corticotropin-releasing factor (CRF), which stimulates the anterior pituitary to release adrenocorticotrophic hormone (ACTH), which in turn stimulates the adrenal cortex to release cortisol (see Figure 2.8). Malfunctioning of this system is implicated in a wide range of mental disorders including depression, anxiety, and post-traumatic stress disorder. Exposure to chronic, unpredictable stress during early development can sensitize this system, making it over-responsive to stress (meaning it activates too readily and does not shut down appropriately). Sensitization of the HPA axis leads to an overproduction of cortisol, which once again can damage the body and brain when it remains at chronically high levels.
Figure 2.8. The HPA Axis
For more on the link between cortisol and depression, check out this article:
https://www.psychologytoday.com/blog/the-athletes-way/201301/cortisol-why-the-stress-hormone-is-public-enemy-no-1
Viral Infections
Infections can cause brain damage and lead to the development of mental illness or an exacerbation of symptoms. For example, evidence suggests that contracting strep infection can lead to the development of OCD, Tourette’s syndrome, and tic disorder in children (Mell, Davis, & Owens, 2005; Giedd et al., 2000; Allen et al., 1995; https://www.psychologytoday.com/blog/the-perfectionists-handbook/201202/can-infections-result-in-mental-illness). Influenza epidemics have also been linked to schizophrenia (Brown et al., 2004; McGrath and Castle, 1995; McGrath et al., 1994; O’Callaghan et al., 1991) though more recent research suggests this evidence is weak at best (Selten & Termorshuizen, 2017; Ebert & Kotler, 2005).
Treatments
Psychopharmacology and Psychotropic Drugs
One option to treat severe mental illness is psychotropic medications. These medications fall into five major categories. In this section we will broadly discuss these categories, and in the next we will cover them in more detail.
Antidepressants are used to treat depression, but also anxiety, insomnia, and pain. The most common types of antidepressants are selective serotonin reuptake inhibitors (SSRIs) and include Citalopram (Celexa), Paroxetine (Paxil), and Fluoxetine (Prozac). They can often take 2-6 weeks to take effect. Possible side effects include weight gain, sleepiness, nausea and vomiting, panic attacks, or thoughts about suicide or dying.
Anti-anxiety medications help with the symptoms of anxiety and include the benzodiazepines such as Diazepam (Valium), Alprazolam (Xanax), and Lorazepam (Ativan). These medications are effective in reducing anxiety in the short term and take less time to take effect than antidepressants, which are also commonly prescribed for anxiety. However, benzodiazepines are highly addictive: tolerance to these drugs can develop quickly, and individuals may experience withdrawal symptoms (e.g., anxiety, panic, insomnia) when they cease taking them. For this reason, benzodiazepines should not be used long-term. Side effects include drowsiness, dizziness, nausea, difficulty urinating, and irregular heartbeat, to name a few.
Stimulants increase one’s alertness and attention and are frequently used to treat ADHD. They include Lisdexamfetamine, the combination of dextroamphetamine and amphetamine, and Methylphenidate (Ritalin). Stimulants are generally effective and produce a calming effect. Possible side effects include loss of appetite, headache, motor tics or verbal tics, and personality changes such as appearing emotionless.
Antipsychotics are used to treat psychosis (i.e., hallucinations and delusions). They can also be used to treat eating disorders, severe depression, PTSD, OCD, ADHD, and Generalized Anxiety Disorder. Common antipsychotics include Chlorpromazine, Perphenazine, Quetiapine, and Lurasidone. Side effects include nausea, vomiting, blurred vision, weight gain, restlessness, tremors, and rigidity.
Mood stabilizers are used to treat bipolar disorder and at times depression, schizoaffective disorder, and disorders of impulse control. A common example is Lithium and side effects include loss of coordination, hallucinations, seizures, and frequent urination.
For more information on psychotropic medications, please visit:
https://www.nimh.nih.gov/health/topics/mental-health-medications/index.shtml
The use of these drugs has been generally beneficial to patients. Most report that their symptoms decline, leading them to feel better and improve their functioning. Also, long-term hospitalizations are less likely to occur as a result, though the medications do not benefit the individual in terms of improved living skills.
Electroconvulsive Therapy
According to Mental Health America, “Electroconvulsive therapy (ECT) is a procedure in which a brief application of electric stimulus is used to produce a generalized seizure.” Patients are placed on a padded bed and administered a muscle relaxant to avoid injury during the seizures. Annually, approximately 100,000 people are treated using ECT for conditions including severe depression, acute mania, and suicidality. The procedure is still the most controversial available to mental health professionals due to “its effectiveness vs. the side effects, the objectivity of ECT experts, and the recent increase in ECT as a quick and easy solution, instead of long-term psychotherapy or hospitalization” (http://www.mentalhealthamerica.net/ect). Its popularity has declined since the 1940s and 1950s.
Psychosurgery
Another option to treat mental disorders is to perform brain surgeries. In the past, we have conducted trephining and lobotomies, neither of which are used today. Today’s techniques are much more sophisticated and have been used to treat schizophrenia, depression, and obsessive-compulsive disorder, though critics cite obvious ethical issues with conducting such surgeries as well as scientific issues. Due to these issues, psychosurgery is only used as a radical last resort when all other treatment options have failed to resolve a serious mental illness.
For more on psychosurgery, check out this article from Psychology Today:
https://www.psychologytoday.com/articles/199203/psychosurgery
Evaluation of the Model
The biological model is generally well respected today but suffers from a few key issues. First, consider the list of side effects given for the psychotropic medications. You might make the case that some of the side effects are worse than the condition they are treating. Second, the viewpoint that all human behavior is explainable in biological terms, and that when issues arise they can therefore be treated using biological methods, overlooks factors that are not biological in nature. More on that over the next two sections.
Learning Objectives
• How do the majority of psychoactive drugs work in the brain?
• How does the route of administration affect how rewarding a drug might be?
• Why is grapefruit dangerous to consume with many psychotropic medications?
• Why might individualized drug doses based on genetic screening be helpful for treating conditions like depression?
• Why is there controversy regarding pharmacotherapy for children, adolescents, and the elderly?
Psychopharmacology is the study of how drugs affect behavior. If a drug changes your perception, or the way you feel or think, the drug exerts effects on your brain and nervous system. We call drugs that change the way you think or feel psychoactive or psychotropic drugs, and almost everyone has used a psychoactive drug at some point (yes, caffeine counts). Understanding some of the basics about psychopharmacology can help us better understand a wide range of things that interest psychologists and others. For example, the pharmacological treatment of certain neurodegenerative diseases such as Parkinson’s disease tells us something about the disease itself. The pharmacological treatments used to treat psychiatric conditions such as schizophrenia or depression have undergone amazing development since the 1950s, and the drugs used to treat these disorders tell us something about what is happening in the brain of individuals with these conditions. Finally, understanding something about the actions of drugs of abuse and their routes of administration can help us understand why some psychoactive drugs are so addictive. In this module, we will provide an overview of some of these topics as well as discuss some current controversial areas in the field of psychopharmacology.
Introduction
Psychopharmacology, the study of how drugs affect the brain and behavior, is a relatively new science, although people have probably been taking drugs to change how they feel from early in human history (consider eating fermented fruit, ancient beer recipes, and chewing the leaves of the coca plant for its stimulant properties as just some examples). The word psychopharmacology itself tells us that this is a field that bridges our understanding of behavior (and brain) and pharmacology, and the range of topics included within this field is extremely broad.
Virtually any drug that changes the way you feel does this by altering how neurons communicate with each other. Neurons (more than 100 billion in your nervous system) communicate with each other by releasing a chemical (neurotransmitter) across a tiny space between two neurons (the synapse). When the neurotransmitter crosses the synapse, it binds to a postsynaptic receptor (protein) on the receiving neuron and the message may then be transmitted onward. Obviously, neurotransmission is far more complicated than this – links at the end of this module can provide some useful background if you want more detail – but the first step is understanding that virtually all psychoactive drugs interfere with or alter how neurons communicate with each other.
There are many neurotransmitters. Some of the most important in terms of psychopharmacological treatment and drugs of abuse are outlined in Table 1. The neurons that release these neurotransmitters, for the most part, are localized within specific circuits of the brain that mediate these behaviors. Psychoactive drugs can either increase activity at the synapse (these are called agonists) or reduce activity at the synapse (antagonists). Different drugs do this by different mechanisms, and some examples of agonists and antagonists are presented in Table 2. For each example, the drug’s trade name, which is the name of the drug provided by the drug company, and generic name (in parentheses) are provided.
A very useful link at the end of this module shows the various steps involved in neurotransmission and some ways drugs can alter this.
Table 2 provides examples of drugs and their primary mechanism of action, but it is very important to realize that drugs also have effects on other neurotransmitters. This contributes to the kinds of side effects that are observed when someone takes a particular drug. The reality is that no drug currently available acts only where we would like in the brain or only on a specific neurotransmitter. In many cases, individuals prescribed one psychotropic drug may also have to take additional drugs to reduce the side effects caused by the initial drug. Sometimes individuals stop taking medication because the side effects can be so profound.
Pharmacokinetics: What Is It – Why Is It Important?
While this section may sound more like pharmacology, it is important to realize how important pharmacokinetics can be when considering psychoactive drugs. Pharmacokinetics refers to how the body handles a drug that we take. As mentioned earlier, psychoactive drugs exert their effects on behavior by altering neuronal communication in the brain, and the majority of drugs reach the brain by traveling in the blood. The acronym ADME is often used, with A standing for absorption (how the drug gets into the blood), D for distribution (how the drug gets to the organ of interest – in this module, the brain), M for metabolism (how the drug is broken down so it no longer exerts its psychoactive effects), and E for excretion (how the drug leaves the body). We will talk about a couple of these to show their importance for considering psychoactive drugs.
Drug Administration
There are many ways to take drugs, and these routes of drug administration can have a significant impact on how quickly the drug reaches the brain. The most common route of administration is oral administration, which is relatively slow and – perhaps surprisingly – often the most variable and complex route of administration. Drugs enter the stomach and then get absorbed by the blood supply and capillaries that line the small intestine. The rate of absorption can be affected by a variety of factors, including the quantity and the type of food in the stomach (e.g., fats vs. proteins). This is why the medicine label for some drugs (like antibiotics) may specifically state foods that you should or should NOT consume within an hour of taking the drug, because they can affect the rate of absorption. Two of the most rapid routes of administration are inhalation (i.e., smoking or gaseous anesthesia) and intravenous (IV) injection, in which the drug is injected directly into a vein and hence the blood supply. Both of these routes of administration can get the drug to the brain in less than 10 seconds. IV administration also has the distinction of being the most dangerous, because if there is an adverse drug reaction, there is very little time to administer an antidote, as in the case of an IV heroin overdose.
Why might how quickly a drug gets to the brain be important? If a drug activates the reward circuits in the brain AND it reaches the brain very quickly, the drug has a high risk for abuse and addiction. Psychostimulants like amphetamine or cocaine are examples of drugs that have high risk for abuse because they are agonists at DA neurons involved in reward AND because these drugs exist in forms that can be either smoked or injected intravenously. Some argue that cigarette smoking is one of the hardest addictions to quit, and although part of the reason for this may be that smoking gets the nicotine into the brain very quickly (and indirectly acts on DA neurons), it is a more complicated story. For drugs that reach the brain very quickly, not only is the drug very addictive, but so are the cues associated with the drug (see Rohsenow, Niaura, Childress, Abrams, & Monti, 1990). For a crack user, this could be the pipe that they use to smoke the drug. For a cigarette smoker, however, it could be something as normal as finishing dinner or waking up in the morning (if that is when the smoker usually has a cigarette). For both the crack user and the cigarette smoker, the cues associated with the drug may actually cause craving that is alleviated by (you guessed it) – lighting a cigarette or using crack (i.e., relapse). This is one of the reasons individuals that enroll in drug treatment programs, especially out-of-town programs, are at significant risk of relapse if they later find themselves in proximity to old haunts, friends, etc. But this is much more difficult for a cigarette smoker. How can someone avoid eating? Or avoid waking up in the morning, etc. These examples help you begin to understand how important the route of administration can be for psychoactive drugs.
Drug Metabolism
Metabolism involves the breakdown of psychoactive drugs, and this occurs primarily in the liver. The liver produces enzymes (proteins that speed up a chemical reaction), and these enzymes help catalyze a chemical reaction that breaks down psychoactive drugs. Enzymes exist in “families,” and many psychoactive drugs are broken down by the same family of enzymes, the cytochrome P450 superfamily. There is not a unique enzyme for each drug; rather, certain enzymes can break down a wide variety of drugs. Tolerance to the effects of many drugs can occur with repeated exposure; that is, the drug produces less of an effect over time, so more of the drug is needed to get the same effect. This is particularly true for sedative drugs like alcohol or opiate-based painkillers. Metabolic tolerance is one kind of tolerance and it takes place in the liver. Some drugs (like alcohol) cause enzyme induction – an increase in the enzymes produced by the liver. For example, chronic drinking results in alcohol being broken down more quickly, so the alcoholic needs to drink more to get the same effect – of course, until so much alcohol is consumed that it damages the liver (alcohol can cause fatty liver or cirrhosis).
Recent Issues Related to Psychotropic Drugs and Metabolism
Grapefruit Juice and Metabolism
Certain types of food in the stomach can alter the rate of drug absorption, and other foods can also alter the rate of drug metabolism. The most well known is grapefruit juice. Grapefruit juice suppresses cytochrome P450 enzymes in the liver, and these liver enzymes normally break down a large variety of drugs (including some of the psychotropic drugs). If the enzymes are suppressed, drug levels can build up to potentially toxic levels. In this case, the effects can persist for extended periods of time after the consumption of grapefruit juice. As of 2013, there are at least 85 drugs shown to adversely interact with grapefruit juice (Bailey, Dresser, & Arnold, 2013). Some psychotropic drugs that are likely to interact with grapefruit juice include carbamazepine (Tegretol), prescribed for bipolar disorder; diazepam (Valium), used to treat anxiety, alcohol withdrawal, and muscle spasms; and fluvoxamine (Luvox), used to treat obsessive compulsive disorder and depression. A link at the end of this module gives the latest list of drugs reported to have this unusual interaction.
Individualized Therapy, Metabolic Differences, and Potential Prescribing Approaches for the Future
Mental illnesses contribute to more disability in western countries than all other illnesses, including cancer and heart disease. Depression alone is predicted to be the second largest contributor to disease burden by 2020 (World Health Organization, 2004). The numbers of people affected by mental health issues are pretty astonishing, with estimates that 25% of adults experience a mental health issue in any given year, and this affects not only the individual but their friends and family. One in 17 adults experiences a serious mental illness (Kessler, Chiu, Demler, & Walters, 2005). Newer antidepressants are probably the most frequently prescribed drugs for treating mental health issues, although there is no “magic bullet” for treating depression or other conditions. Pharmacotherapy combined with psychological therapy may be the most beneficial treatment approach for many psychiatric conditions, but there are still many unanswered questions. For example, why does one antidepressant help one individual yet have no effect for another? Antidepressants can take 4 to 6 weeks to start improving depressive symptoms, and we don’t really understand why. Many people do not respond to the first antidepressant prescribed and may have to try different drugs before finding something that works for them. Other people just do not improve with antidepressants (Ioannidis, 2008). The better we understand why individuals differ, the more easily and rapidly we will be able to help people in distress.
One area that has received interest recently has to do with an individualized treatment approach. We now know that there are genetic differences in some of the cytochrome P450 enzymes and their ability to break down drugs. The general population falls into the following 4 categories: 1) ultra-extensive metabolizers break down certain drugs (like some of the current antidepressants) very, very quickly, 2) extensive metabolizers are also able to break down drugs fairly quickly, 3) intermediate metabolizers break down drugs more slowly than either of the two above groups, and finally 4) poor metabolizers break down drugs much more slowly than all of the other groups. Now consider someone receiving a prescription for an antidepressant – what would the consequences be if they were either an ultra-extensive metabolizer or a poor metabolizer? The ultra-extensive metabolizer would be given antidepressants and told it will probably take 4 to 6 weeks to begin working (this is true), but they metabolize the medication so quickly that it will never be effective for them. In contrast, the poor metabolizer given the same daily dose of the same antidepressant may build up such high levels in their blood (because they are not breaking the drug down) that they will have a wide range of side effects and feel really bad – also not a positive outcome. What if – instead – prior to prescribing an antidepressant, the doctor could take a blood sample and determine which type of metabolizer a patient actually was? They could then make a much more informed decision about the best dose to prescribe. There are new genetic tests now available to better individualize treatment in just this way. A blood sample can determine (at least for some drugs) which category an individual fits into, but we need data to determine if this actually is effective for treating depression or other mental illnesses (Zhou, 2009).
Currently, this genetic test is expensive and not many health insurance plans cover this screen, but this may be an important component in the future of psychopharmacology.
Other Controversial Issues
Juveniles and Psychopharmacology
A recent Centers for Disease Control (CDC) report has suggested that as many as 1 in 5 children between the ages of 5 and 17 may have some type of mental disorder (e.g., ADHD, autism, anxiety, depression) (CDC, 2013). The incidence of bipolar disorder in children and adolescents has also increased 40 times in the past decade (Moreno, Laje, Blanco, Jiang, Schmidt, & Olfson, 2007), and it is now estimated that 1 in 88 children have been diagnosed with an autism spectrum disorder (CDC, 2011). Why has there been such an increase in these numbers? There is no single answer to this important question. Some believe that greater public awareness has contributed to increased teacher and parent referrals. Others argue that the increase stems from changes in the criteria currently used for diagnosis. Still others suggest that environmental factors, either prenatal or postnatal, have contributed to this upsurge.
We do not have an answer, but the question does bring up an additional controversy related to how we should treat this population of children and adolescents. Many psychotropic drugs used for treating psychiatric disorders have been tested in adults, but few have been tested for safety or efficacy with children or adolescents. The most well-established psychotropics prescribed for children and adolescents are the psychostimulant drugs used for treating attention deficit hyperactivity disorder (ADHD), and there are clinical data on how effective these drugs are. However, we know far less about the safety and efficacy in young populations of the drugs typically prescribed for treating anxiety, depression, or other psychiatric disorders. The young brain continues to mature until probably well after age 20, so some scientists are concerned that drugs that alter neuronal activity in the developing brain could have significant consequences. There is an obvious need for clinical trials in children and adolescents to test the safety and effectiveness of many of these drugs, which also brings up a variety of ethical questions about who decides which children and adolescents will participate in these clinical trials, who can give consent, who receives reimbursements, etc.
The Elderly and Psychopharmacology
Another population that has not typically been included in clinical trials to determine the safety or effectiveness of psychotropic drugs is the elderly. Currently, there is very little high-quality evidence to guide prescribing for older people – clinical trials often exclude people with multiple comorbidities (other diseases, conditions, etc.), which are typical of elderly populations (see Hilmer & Gnjidic, 2008; Pollock, Forsyth, & Bies, 2008). This is a serious issue because the elderly consume a disproportionate share of prescription medications. The term polypharmacy refers to the use of multiple drugs, which is very common in elderly populations in the United States. As our population ages, some estimate that the proportion of people 65 or older will reach 20% of the U.S. population by 2030, with this group consuming 40% of prescribed medications. As shown in Table 3 (from Schwartz and Abernethy, 2008), it is quite clear why the typical clinical trial that looks at the safety and effectiveness of psychotropic drugs can be problematic if we try to interpret these results for an elderly population.
Metabolism of drugs is often slowed considerably for elderly populations, so less drug can produce the same effect (or all too often, too much drug can result in a variety of side effects). One of the greatest risk factors for elderly populations is falling (and breaking bones), which can happen if the elderly person gets dizzy from too much of a drug. There is also evidence that psychotropic medications can reduce bone density (thus worsening the consequences if someone falls) (Brown & Mezuk, 2012). Although we are gaining an awareness about some of the issues facing pharmacotherapy in older populations, this is a very complex area with many medical and ethical questions.
This module provided an introduction to some of the important areas in the field of psychopharmacology, and it has only touched on a subset of the topics the field includes. It should be apparent that understanding psychopharmacology is important to anyone interested in understanding behaviour, and that our understanding of issues in this field has important implications for society.
Outside Resources
Video: Neurotransmission – available online at: https://openpress.usask.ca/abnormalpsychology/?p=120
Web: Description of how some drugs work and the brain areas involved – 1 http://www.drugabuse.gov/news-events/nida-notes/2007/10/impacts-drugs-neurotransmission
Web: Description of how some drugs work and the brain areas involved – 2 http://learn.genetics.utah.edu/content/addiction/mouse/
Web: Information about how neurons communicate and the reward pathways http://learn.genetics.utah.edu/content/addiction/rewardbehavior/
Web: National Institute of Alcohol Abuse and Alcoholism http://www.niaaa.nih.gov/
Web: National Institute of Drug Abuse http://www.drugabuse.gov/
Web: National Institute of Mental Health http://www.nimh.nih.gov/index.shtml
Web: Report of the Working Group on Psychotropic Medications for Children and Adolescents: Psychopharmacological, Psychosocial, and Combined Interventions for Childhood Disorders: Evidence Base, Contextual Factors, and Future Directions (2008): http://www.apa.org/pi/families/resources/child-medications.pdf
Web: Ways drugs can alter neurotransmission http://thebrain.mcgill.ca/flash/d/d_03/d_03_m/d_03_m_par/d_03_m_par.html
Discussion Questions
1. What are some of the issues surrounding prescribing medications for children and adolescents? How might this be improved?
2. What are some of the factors that can affect relapse to an addictive drug?
3. How might prescribing medications for depression be improved in the future to increase the likelihood that a drug would work and minimize side effects?
Section Learning Objectives
• Describe the key components of evidence-based practice
• Explain how a treatment is deemed empirically-supported
• Define treatments that harm and explain why they should be of concern for mental health providers
Evidence-Based Practice
Evidence-based practice (EBP) is defined by the Canadian Psychological Association as the intentional and careful use of the best research evidence available at the time, in order to guide each clinical decision and delivered service. To practice in an evidence-based way, a clinician must make themselves aware of the best available research and utilize it while considering specific client preferences, personality traits, and cultural contexts. Selecting a treatment approach that has been shown to be effective for the specific problem is important, as well as tailoring it to fit the individual client (referred to as client specificity). Delivering treatment is therefore a more intentional process than simply learning one treatment modality and applying it indiscriminately to every client.
Given that research is constantly evolving and new studies are frequently added to the existing body of literature, evidence-based practice requires that a clinician maintain a commitment to being and staying informed. Clinicians must also not just consume empirical research, but thoughtfully evaluate it for validity. Every study has limitations, and understanding these limitations is integral to the critical consumption of research. Then, a clinician is charged with the difficult task of deciding how to translate the empirical research into every decision made in clinical practice. Lastly, there must always be open and honest communication between the clinician and client, in an environment where the client feels comfortable and safe expressing their needs.
Although EBP requires a great amount of work on the part of the service provider, it is necessary in order to protect the public from intentional or inadvertent harm. It also maximizes the chances for successful treatment. Evidence-based practice also encourages the view of Psychology as a legitimate, ethical and scientific field of study and practice.
Empirically-Supported Treatments
Born out of an increasing focus on accountability, cost effectiveness, and protecting Psychology’s reputation as a credible health service, task forces were mobilized in the 1990s to investigate the available treatments and services. By endorsing only those modalities that met certain criteria, the task forces created lists of empirically supported treatments. In order to be on the list, the therapy approach had to have been shown to be effective in controlled research settings. This means that the therapy was better than placebo in a statistically significant way, or was found to be at least as effective as an already empirically supported treatment. There was also a move towards standardized and manualized treatment. Treatments that could be easily described (and therefore taught) through a clear step-by-step set of rules were prioritized over those that could not. Clinicians were urged to utilize only those treatments that were found to be empirically supported, in an effort to be fully evidence based in practice.
The advantages of using empirically supported treatments are numerous. Subjecting each therapy to in-depth scrutiny helps to prevent ineffective or harmful approaches from being used. It therefore protects the public from adverse effects that range from paying for an ineffective treatment, to sustaining psychological damage. Focusing on empirically supported treatments serves as a quality control system for the field of Psychology, and protects it from becoming “watered down” by treatment approaches that lack efficacy. By using this system it also becomes less likely that one will make ethical missteps. When a clinician commits to evidence based practice using only empirically supported treatments, the public can be confident that they will receive therapy that is cost effective and has been shown to have a high likelihood of helping them.
However, any big change within a field is likely to have negative consequences no matter how beneficial it may be. There have been several arguments made against a system that strictly adheres to empirically supported treatments. Some took issue with the notion that “validity” is objective and can ever be achieved. They argued that validity is an ever-changing process and that judgments of validity are only as good as the studies that investigate each treatment approach (some of which are plagued with small sample sizes and subpar research conditions). Other critics suggested that many legitimate therapies do not lend themselves to manualized approaches and that strict adherence to a manual does not allow the flexibility required for client specificity. Yet another argument against the list of empirically supported treatments is that it is easily misinterpreted and used as a tool of elitism. Third-party payers may decide to fund only those approaches that are on the list and exclude all others, which is not how the list was intended to be used. Also, therapy approaches for use with certain psychological disorders (notably the personality disorders) are underrepresented on the list of empirically supported treatments, leaving a large subset of clients without appropriate services. As with most issues, the concept of empirically supported treatments is therefore likely best used as a flexible guideline rather than a rigid prescription for practice.
Treatments that Harm
In 2007 Scott Lilienfeld wrote an important article about psychological treatments that cause harm. He argued that the potential for psychology treatments to be harmful had been largely ignored. Despite an increased interest in the negative side effects of psychiatric medications, the field of psychology had been allowed to “fly under the radar.” Lilienfeld posited that this oversight carried with it serious risk to both the field of psychology and the public at large. He researched potentially harmful therapies (PHTs) and broke them down into two categories: Level I (probably harmful) and Level II (possibly harmful). It was noted that the distinction between these two categories likely requires further research, as the therapies listed under Level II may actually be moved to Level I with further information gathered.
According to Lilienfeld, there are two reasons why clinicians need to be concerned about potentially harmful therapies. First, clinicians are bound by an ethical duty to avoid harming their clients. Ignorance is not a valid defense for causing harm, no matter how unintentional. Second, investigating the sometimes negative effects of therapy can shed light on potential causes of client deterioration. Learning about situations in which clients do not get better is as important as the cases in which they do – failure presents an opportunity for growth and increased knowledge. In his article Lilienfeld describes potential harm as including several possibilities: a worsening of symptoms or emergence of new ones, increased distress about existing symptoms, unhealthy dependency on the therapist, reluctance to seek future treatment when needed, and in extreme cases physical harm. Harm can even be done to family and friends of the client, as in the case of false abuse accusations. A therapy is considered a PHT if (1) it causes harmful psychological or physical effects in clients or their relatives, (2) the harmful effects are enduring and are not simply a short-term worsening of symptoms during treatment (as in the case of some PTSD treatments), and (3) the harm has been replicated by independent study. Treatments that harm are concerning because they contribute to client attrition (i.e., clients prematurely leaving therapy), long-term deterioration (i.e., a worsening of client functioning), and a general degradation of psychology’s reputation as a discipline.
In Lilienfeld’s opinion, the topic of treatments that harm requires further investigation. His suggestions for future research include the extent to which harmful therapies are being administered, reasons for the continued popularity of harmful therapies, therapist or client variables that may increase or decrease the likelihood of harm, as well as any mediating variables. He also posits that the antidote to PHTs may include using standardized questionnaires at every session to track client outcomes.
Summary
What is considered abnormal behaviour is often dictated by the culture/society a person lives in, as well as the historical context of the time.
Prehistoric cultures often held supernatural views of abnormal behaviour, seeing abnormal behaviour as demonic possession that occurred when a person engaged in behaviour contrary to the religious teachings of the time. Treatment included trephination and exorcism.
Greco-Roman thought on abnormal behaviour rejected the idea of demonic possession. Hippocrates proposed that mental disorders are similar to physical disorders and had natural causes. He also proposed that mental disorders resulted when our humors were imbalanced. Plato further proposed that the mentally ill were not responsible for their actions and so should not be punished.
Progress made by the Greeks and Romans was reversed during the Middle Ages, when mental illness was yet again seen as the result of demonic possession. Exorcism, flogging, prayer, visiting holy sites, and holy water were all used as treatments. At the time, group hysteria was also seen in large numbers.
The Renaissance saw the rise in humanism, which emphasized human welfare and the uniqueness of the individual. The number of asylums began to rise as the government took more responsibility for people’s care.
The moral treatment movement began in the late 18th century in Europe and then rose in the United States in the early 19th century. This movement emphasized respect for the mentally ill, moral guidance, and humane treatment.
Theoretical orientations present a framework through which to understand, organize, and predict human behaviour. When used to treat people with mental illness they are referred to as therapeutic orientations.
The earliest orientation was psychoanalysis, developed by Freud. This model suggests that psychiatric problems are the result of tension between the id, superego, and ego. Although psychoanalysis is still practiced today it has largely been replaced by psychodynamic theory, which uses the same underlying principles of psychoanalysis but is briefer, more present-focused, and sometimes manualized.
Person-centered therapy is referred to as a humanistic therapy, and it is based on the belief that mental health problems arise when our innate human tendency for self-actualization gets blocked somehow. Person-centered therapy believes that providing clients with unconditional positive regard and a place of support will allow them to grow and change. In this sense, it is an unstructured therapy.
The behavioural model of psychopathology believes that how we act is learned, including dysfunctional, abnormal behaviour. It relies upon principles of operant conditioning. Behaviour therapies are popular choices for a wide range of mental illnesses, especially anxiety disorders. Overall, they focus on learning new behaviour.
The cognitive model arose in direct response to the behavioural model; cognitive theorists believe that by overlooking thoughts, behaviourism was missing an important component of mental illness. According to the cognitive model our thoughts, especially about how we interpret events, influence mental disorder.
Cognitive behavioural therapy (CBT) combines aspects of both behavioural therapy and cognitive therapy. It is one of the most popular therapies, internationally, and it works for a wide variety of diagnoses and presenting problems.
Newer forms of therapy include the acceptance- and mindfulness-based approaches. Mindfulness is a process that cultivates a non-judgmental state of attention. These types of therapies work by altering people’s relationships with their thoughts, behaviours, and emotions, whereas previously developed therapies try to change this content directly.
Emerging treatment strategies include the use of internet-delivered therapies, cognitive bias modification via gamification, and CBT-enhancing pharmaceutical agents.
The biological model explains the development of mental illness from a medical perspective. The neuron is the fundamental unit of communication of the nervous system. Neurotransmitters like dopamine and serotonin play a key role in our mental health.
Genetic issues, hormonal imbalances, and viral infections can also influence mental illness.
There are five major categories of psychotropic medication: Antidepressants, anti-anxiety medications, stimulants, antipsychotics, and mood stabilizers. Electroconvulsive therapy and psychosurgery are also sometimes used to treat cases of mental illness that do not respond well to medication.
Pharmacokinetics refers to how the body handles drugs that we take, including different drug administrations and drug metabolism.
Controversial issues in psychopharmacology include the use of medications by juveniles and the elderly.
Evidence-based practice is the intentional and careful use of the best available research evidence combined with clinical experience and specific client preferences. Empirically-supported treatments are those that meet certain research criteria in order to be labeled as scientifically supported. Last, treatments that harm are those that cause damage to either clients or their families.
Self-Test
The interactive self-test can be viewed online here: https://openpress.usask.ca/abnormalpsychology/?p=437
Sadness and euphoria are two very human experiences. We have all felt down, blue, sad, or maybe even deep grief before. Likewise, all of us have been at one time or another elated, joyful, thrilled and excited. But as intense as these experiences might feel, they are very different from clinical mood disorders. In this chapter you’ll learn about both depression and mania, and the variety of mood disorders that are marked by these experiences. Although “depressed,” in particular, is a phrase that gets used often, feeling down is not the same as being diagnosed with depression. In addition, the mood disorders consist of many more symptoms than just feeling down or elated.
In this chapter we’ll discuss all of the symptoms of depressed, manic, and hypomanic episodes as well as the diagnostic criteria for both the unipolar and bipolar mood disorders. We’ll also review the rates and vulnerabilities for both and the etiologies hypothesized to underlie them. Last, we’re going to discuss how mood disorders are treated with both biological and psychological interventions.
03: Mood Disorders
Learning Objectives
• Describe the diagnostic criteria for mood disorders.
• Understand age, gender, and ethnic differences in prevalence rates of mood disorders.
• Identify common risk factors for mood disorders.
• Know effective treatments of mood disorders.
Everyone feels down or euphoric from time to time, but this is different from having a mood disorder such as major depressive disorder or bipolar disorder. Mood disorders are extended periods of depressed, euphoric, or irritable moods that in combination with other symptoms cause the person significant distress and interfere with his or her daily life, often resulting in social and occupational difficulties. In this module, we describe major mood disorders, including their symptom presentations, general prevalence rates, and how and why the rates of these disorders tend to vary by age, gender, and race. In addition, biological and environmental risk factors that have been implicated in the development and course of mood disorders, such as heritability and stressful life events, are reviewed. Finally, we provide an overview of treatments for mood disorders, covering treatments with demonstrated effectiveness, as well as new treatment options showing promise.
The actress Brooke Shields published a memoir titled Down Came the Rain: My Journey through Postpartum Depression in which she described her struggles with depression following the birth of her daughter. Despite the fact that about one in 20 women experience depression after the birth of a baby (American Psychiatric Association [APA], 2013), postpartum depression—recently renamed “perinatal depression”—continues to be veiled by stigma, owing in part to a widely held expectation that motherhood should be a time of great joy. In an opinion piece in the New York Times, Shields revealed that entering motherhood was a profoundly overwhelming experience for her. She vividly describes experiencing a sense of “doom” and “dread” in response to her newborn baby. Because motherhood is conventionally thought of as a joyous event and not associated with sadness and hopelessness, responding to a newborn baby in this way can be shocking to the new mother as well as those close to her. It may also involve a great deal of shame for the mother, making her reluctant to divulge her experience to others, including her doctors and family.
Feelings of shame are not unique to perinatal depression. Stigma applies to other types of depressive and bipolar disorders and contributes to people not always receiving the necessary support and treatment for these disorders. In fact, the World Health Organization ranks both major depressive disorder (MDD) and bipolar disorder (BD) among the top 10 leading causes of disability worldwide. Further, MDD and BD carry a high risk of suicide. It is estimated that 25%–50% of people diagnosed with BD will attempt suicide at least once in their lifetimes (Goodwin & Jamison, 2007).
What Are Mood Disorders?
Mood Episodes
Everyone experiences brief periods of sadness, irritability, or euphoria. This is different than having a mood disorder, such as MDD or BD, which are characterized by a constellation of symptoms that causes people significant distress or impairs their everyday functioning.
Major Depressive Episode
A major depressive episode (MDE) refers to symptoms that co-occur for at least two weeks and cause significant distress or impairment in functioning, such as interfering with work, school, or relationships. Core symptoms include feeling down or depressed or experiencing anhedonia—loss of interest or pleasure in things that one typically enjoys. According to the fifth edition of the Diagnostic and Statistical Manual (DSM-5; APA, 2013), the criteria for an MDE require five or more of the following nine symptoms, including one or both of the first two symptoms, for most of the day, nearly every day:
1. depressed mood
2. diminished interest or pleasure in almost all activities
3. significant weight loss or gain or an increase or decrease in appetite
4. insomnia or hypersomnia
5. psychomotor agitation or retardation
6. fatigue or loss of energy
7. feeling worthless or excessive or inappropriate guilt
8. diminished ability to concentrate or indecisiveness
9. recurrent thoughts of death, suicidal ideation, or a suicide attempt
These symptoms cannot be caused by physiological effects of a substance or a general medical condition (e.g., hypothyroidism).
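The symptom-count rule above is essentially a decision procedure (five or more of the nine symptoms, of which at least one must be depressed mood or anhedonia). As a purely illustrative sketch, it can be expressed in code; the symptom names and function below are our own assumptions for teaching purposes, not a clinical instrument, and they omit the duration, distress/impairment, and rule-out criteria:

```python
# Hypothetical sketch of the DSM-5 "5 of 9, including at least one core
# symptom" rule for a major depressive episode. Symptom labels are
# illustrative assumptions; this is NOT a diagnostic tool.

CORE = {"depressed_mood", "anhedonia"}
ALL_SYMPTOMS = CORE | {
    "weight_or_appetite_change", "sleep_disturbance",
    "psychomotor_change", "fatigue", "worthlessness_or_guilt",
    "poor_concentration", "thoughts_of_death",
}

def meets_mde_symptom_criteria(symptoms: set[str]) -> bool:
    """True if five or more recognized symptoms are present,
    including at least one of the two core symptoms."""
    present = symptoms & ALL_SYMPTOMS
    return len(present) >= 5 and bool(present & CORE)

# Five symptoms, but neither core symptom -> rule not met
print(meets_mde_symptom_criteria(
    {"fatigue", "sleep_disturbance", "poor_concentration",
     "psychomotor_change", "worthlessness_or_guilt"}))  # False

# Five symptoms including depressed mood -> rule met
print(meets_mde_symptom_criteria(
    {"depressed_mood", "fatigue", "sleep_disturbance",
     "poor_concentration", "worthlessness_or_guilt"}))  # True
```

Note how the rule is conjunctive: a high symptom count alone is not enough without one of the two core symptoms, which is why the first example fails.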
Manic or Hypomanic Episode
The core criterion for a manic or hypomanic episode is a distinct period of abnormally and persistently euphoric, expansive, or irritable mood and persistently increased goal-directed activity or energy. The mood disturbance must be present for one week or longer in mania (unless hospitalization is required) or four days or longer in hypomania. Concurrently, at least three of the following symptoms must be present in the context of euphoric mood (or at least four in the context of irritable mood):
1. inflated self-esteem or grandiosity
2. increased goal-directed activity or psychomotor agitation
3. reduced need for sleep
4. racing thoughts or flight of ideas
5. distractibility
6. increased talkativeness
7. excessive involvement in risky behaviors
Manic episodes are distinguished from hypomanic episodes by their duration and associated impairment; whereas manic episodes must last one week and are defined by a significant impairment in functioning, hypomanic episodes are shorter and not necessarily accompanied by impairment in functioning.
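The duration-and-impairment distinction just described can be sketched as a simple decision rule. This is a deliberately simplified, hypothetical encoding (the function and labels are our own, and it assumes the symptom-count criterion has already been met); real diagnosis involves clinical judgment and additional DSM-5 criteria:

```python
# Hypothetical sketch of the manic vs. hypomanic distinction described
# above: manic episodes last at least one week and involve significant
# impairment (hospitalization also qualifies); hypomanic episodes last
# at least four days without marked impairment. NOT a diagnostic tool.

def classify_elevated_episode(days: int, marked_impairment: bool,
                              hospitalized: bool) -> str:
    if hospitalized or (days >= 7 and marked_impairment):
        return "manic episode"
    if days >= 4:
        return "hypomanic episode"
    return "does not meet duration criteria"

print(classify_elevated_episode(10, True, False))  # manic episode
print(classify_elevated_episode(5, False, False))  # hypomanic episode
print(classify_elevated_episode(2, False, False))  # does not meet duration criteria
```

This distinction matters diagnostically because, as described in the next section, a single manic episode is sufficient for a Bipolar I diagnosis, whereas hypomanic episodes figure in Bipolar II.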
Mood Disorders
Unipolar Mood Disorders
Two major types of unipolar disorders described by the DSM-5 (APA, 2013) are major depressive disorder and persistent depressive disorder (PDD; dysthymia). MDD is defined by one or more MDEs, but no history of manic or hypomanic episodes. Criteria for PDD are feeling depressed most of the day for more days than not, for at least two years. At least two of the following symptoms are also required to meet criteria for PDD:
1. poor appetite or overeating
2. insomnia or hypersomnia
3. low energy or fatigue
4. low self-esteem
5. poor concentration or difficulty making decisions
6. feelings of hopelessness
Like MDD, these symptoms need to cause significant distress or impairment and cannot be due to the effects of a substance or a general medical condition. To meet criteria for PDD, a person cannot be without symptoms for more than two months at a time. PDD has overlapping symptoms with MDD. If someone meets criteria for an MDE during a PDD episode, the person will receive diagnoses of PDD and MDD.
Bipolar Mood Disorders
Three major types of BDs are described by the DSM-5 (APA, 2013). Bipolar I Disorder (BD I), which was previously known as manic-depression, is characterized by a single (or recurrent) manic episode. A depressive episode is not necessary but commonly present for the diagnosis of BD I. Bipolar II Disorder is characterized by single (or recurrent) hypomanic episodes and depressive episodes. Another type of BD is cyclothymic disorder, characterized by numerous and alternating periods of hypomania and depression, lasting at least two years. To qualify for cyclothymic disorder, the periods of depression cannot meet full diagnostic criteria for an MDE; the person must experience symptoms at least half the time with no more than two consecutive symptom-free months; and the symptoms must cause significant distress or impairment.
It is important to note that the DSM-5 was published in 2013, and findings based on the updated manual will be forthcoming. Consequently, the research presented below was largely based on a similar, but not identical, conceptualization of mood disorders drawn from the DSM-IV (APA, 2000).
Box 1. Specifiers
Both MDEs and manic episodes can be further described using standardized tags based on the timing of, or other symptoms occurring during, the mood episode, to increase diagnostic specificity and inform treatment. Psychotic features is specified when the episodes are accompanied by delusions (rigidly held beliefs that are false) or hallucinations (perceptual disturbances that are not based in reality). Seasonal pattern is specified when a mood episode occurs at the same time of the year for two consecutive years — most commonly in the fall and winter. Peripartum onset is specified when a mood episode has an onset during pregnancy or within four weeks of the birth of a child. Approximately 3%–6% of women who have a child experience an MDE with peripartum onset (APA, 2013). This is less frequent than, and different from, the “baby blues,” in which most women experience transient mood symptoms, usually within 10 days of giving birth (Nolen-Hoeksema & Hilt, 2009).
How Common Are Mood Disorders? Who Develops Mood Disorders?
Depressive Disorders
In a nationally representative sample of Americans, the lifetime prevalence rate for MDD was 16.6% (Kessler, Berglund, Demler, Jin, Merikangas, & Walters, 2005). This means that about one in six Americans will meet the criteria for MDD during their lifetime. Lifetime prevalence rates in Canada have been estimated at 11.2% (Knoll & MacLennan, 2017). The 12-month prevalence—the proportion of people who meet criteria for a disorder during a 12-month period—of MDD in Canada is 4.7% (Knoll & MacLennan, 2017).
Although the onset of MDD can occur at any time throughout the lifespan, the average age of onset is mid-20s, with the age of onset decreasing with people born more recently (APA, 2000). Prevalence of MDD among older adults is much lower than it is for younger cohorts (Kessler, Birnbaum, Bromet, Hwang, Sampson, & Shahly, 2010). The duration of MDEs varies widely. Recovery begins within three months for 40% of people with MDD and within 12 months for 80% (APA, 2013). MDD tends to be a recurrent disorder with about 40%–50% of those who experience one MDE experiencing a second MDE (Monroe & Harkness, 2011). An earlier age of onset predicts a worse course. About 5%–10% of people who experience an MDE will later experience a manic episode (APA, 2000), thus no longer meeting criteria for MDD but instead meeting them for BD I. Diagnoses of other disorders across the lifetime are common for people with MDD: 59% experience an anxiety disorder; 32% experience an impulse control disorder, and 24% experience a substance use disorder (Kessler, Merikangas, & Wang, 2007).
Women experience two to three times higher rates of MDD than do men (Nolen-Hoeksema & Hilt, 2009). This gender difference emerges during puberty (Conley & Rudolph, 2009). Before puberty, boys exhibit similar or higher prevalence rates of MDD than do girls (Twenge & Nolen-Hoeksema, 2002). MDD is inversely correlated with socioeconomic status (SES), a person’s economic and social position based on income, education, and occupation. Higher prevalence rates of MDD are associated with lower SES (Lorant, Deliege, Eaton, Robert, Philippot, & Ansseau, 2003), particularly for adults over 65 years old (Kessler et al., 2010). Independent of SES, results from a nationally representative sample found that European Americans had a higher prevalence rate of MDD than did African Americans and Hispanic Americans, whose rates were similar (Breslau, Aguilar-Gaxiola, Kendler, Su, Williams, & Kessler, 2006). The course of MDD for African Americans is often more severe and less often treated than it is for European Americans, however (Williams et al., 2007). American research indicates that Native Americans (a designation still used in the United States) have a higher prevalence rate than do European Americans, African Americans, or Hispanic Americans (Hasin, Goodwin, Stinson & Grant, 2005). Depression is not limited to industrialized or western cultures; it is found in all countries that have been examined, although the symptom presentation as well as prevalence rates vary across cultures (Chentsova-Dutton & Tsai, 2009).
It is important to note that sexual and gender minorities, including non-binary individuals, tend to experience higher rates of depression than the general population. For example, a recent Canadian study estimated the lifetime prevalence rates of depression as 67.7% for sexual minorities and 72% for gender liminal individuals living in Ontario (Williams et al., 2017). In another study conducted in Ontario, 66.4% of transgender participants reported experiencing current depression (Rotondi, Bauer, Scanlon, Kaay, Travers, & Travers, 2011).
Bipolar Disorders
The lifetime prevalence rate of bipolar spectrum disorders in the general U.S. population is estimated at approximately 4.4%, with BD I constituting about 1% of this rate (Merikangas et al., 2007). In Canadian samples, the lifetime prevalence rate for bipolar disorder has been estimated at 2.6% (Statistics Canada, 2013) and the 12-month prevalence rate as 1.5% (Statistics Canada, 2013). More recent data shows the lifetime prevalence rate of Bipolar I and II in Canada at 0.87% and 0.57%, respectively (McDonald et al., 2015).
Prevalence estimates, however, are highly dependent on the diagnostic procedures used (e.g., interviews vs. self-report) and whether or not sub-threshold forms of the disorder are included in the estimate. BD often co-occurs with other psychiatric disorders. Approximately 65% of people with BD meet diagnostic criteria for at least one additional psychiatric disorder, most commonly anxiety disorders and substance use disorders (McElroy et al., 2001). The co-occurrence of BD with other psychiatric disorders is associated with a poorer illness course, including higher rates of suicidality (Leverich et al., 2003). A recent cross-national study of more than 60,000 adults from 11 countries estimated the worldwide prevalence of BD at 2.4%, with BD I constituting 0.6% of this rate (Merikangas et al., 2011). In this study, the prevalence of BD varied somewhat by country. Whereas the United States had the highest lifetime prevalence (4.4%), India had the lowest (0.1%). Variation in prevalence rates was not necessarily related to SES, as in the case of Japan, a high-income country with a very low prevalence rate of BD (0.7%).
With regard to ethnicity, data from studies not confounded by SES or inaccuracies in diagnosis are limited, but available reports suggest rates of BD among European Americans are similar to those found among African Americans (Blazer et al., 1985) and Hispanic Americans (Breslau, Kendler, Su, Gaxiola-Aguilar, & Kessler, 2005). Another large community-based study found that although prevalence rates of mood disorders were similar across ethnic groups, Hispanic Americans and African Americans with a mood disorder were more likely to remain persistently ill than European Americans (Breslau et al., 2005). Compared with European Americans with BD, African Americans tend to be underdiagnosed for BD (and over-diagnosed for schizophrenia) (Kilbourne, Haas, Mulsant, Bauer, & Pincus, 2004; Minsky, Vega, Miskimen, Gara, & Escobar, 2003), and Hispanic Americans with BD have been shown to receive fewer psychiatric medication prescriptions and specialty treatment visits (Gonzalez et al., 2007). Misdiagnosis of BD can result in the underutilization of treatment or the utilization of inappropriate treatment, and thus profoundly impact the course of illness.
As with MDD, adolescence is known to be a significant risk period for BD; mood symptoms start by adolescence in roughly half of BD cases (Leverich et al., 2007; Perlis et al., 2004). Longitudinal studies show that those diagnosed with BD prior to adulthood experience a more pernicious course of illness relative to those with adult onset, including more episode recurrence, higher rates of suicidality, and profound social, occupational, and economic repercussions (e.g., Lewinsohn, Seeley, Buckley, & Klein, 2002). The prevalence of BD is substantially lower in older adults compared with younger adults (1% vs. 4%) (Merikangas et al., 2007).
What Are Some of the Factors Implicated in the Development and Course of Mood Disorders?
Mood disorders are complex disorders resulting from multiple factors. Causal explanations can be attempted at various levels, including the biological and psychosocial levels. Below, several of the key factors that contribute to the onset and course of mood disorders are highlighted.
Depressive Disorders
Research across family and twin studies has provided support that genetic factors are implicated in the development of MDD. Twin studies suggest that familial influence on MDD is mostly due to genetic effects and that individual-specific environmental effects (e.g., romantic relationships) play an important role, too. By contrast, the contribution of shared environmental effect by siblings is negligible (Sullivan, Neale & Kendler, 2000). The mode of inheritance is not fully understood although no single genetic variation has been found to increase the risk of MDD significantly. Instead, several genetic variants and environmental factors most likely contribute to the risk for MDD (Lohoff, 2010).
One environmental factor that has received much support in relation to MDD is stressful life events. In particular, severe stressful life events, those that have long-term consequences and involve loss of a significant relationship (e.g., divorce) or economic stability (e.g., unemployment), are strongly related to depression (Brown & Harris, 1989; Monroe et al., 2009). Stressful life events are more likely to predict the first MDE than subsequent episodes (Lewinsohn, Allen, Seeley, & Gotlib, 1999). In contrast, minor events may play a larger role in subsequent episodes than in the initial episode (Monroe & Harkness, 2005).
Depression research has not been limited to examining reactivity to stressful life events. Much research, particularly brain imaging research using functional magnetic resonance imaging (fMRI), has centered on examining neural circuitry—the interconnections that allow multiple brain regions to perceive, generate, and encode information in concert. A meta-analysis of neuroimaging studies showed that, when viewing negative stimuli (e.g., a picture of an angry face, a picture of a car accident), participants with MDD show greater activation in brain regions involved in the stress response and reduced activation in brain regions involved in positively motivated behaviors, compared with healthy control participants (Hamilton, Etkin, Furman, Lemus, Johnson, & Gotlib, 2012).
Other environmental factors related to increased risk for MDD include experiencing early adversity (e.g., childhood abuse or neglect; Widom, DuMont, & Czaja, 2007), chronic stress (e.g., poverty) and interpersonal factors. For example, marital dissatisfaction predicts increases in depressive symptoms in both men and women. On the other hand, depressive symptoms also predict increases in marital dissatisfaction (Whisman & Uebelacker, 2009). Research has found that people with MDD generate some of their interpersonal stress (Hammen, 2005). People with MDD whose relatives or spouses can be described as critical and emotionally overinvolved have higher relapse rates than do those living with people who are less critical and emotionally overinvolved (Butzlaff & Hooley, 1998).
People’s attributional styles, or their general ways of thinking, interpreting, and recalling information, have also been examined in the etiology of MDD (Gotlib & Joormann, 2010). People with a pessimistic attributional style tend to make internal (versus external), global (versus specific), and stable (versus unstable) attributions for negative events, serving as a vulnerability to developing MDD. For example, someone who, after failing an exam, thinks that it was his fault (internal), that he is stupid (global), and that he will always do poorly (stable) has a pessimistic attributional style. Several influential theories of depression incorporate attributional styles (Abramson, Metalsky, & Alloy, 1989; Abramson, Seligman, & Teasdale, 1978).
Bipolar Disorders
Although there have been important advances in research on the etiology, course, and treatment of BD, there remains a need to understand the mechanisms that contribute to episode onset and relapse. There is compelling evidence for biological causes of BD, which is known to be highly heritable (McGuffin, Rijsdijk, Andrew, Sham, Katz, & Cardno, 2003). It may be argued that a high rate of heritability demonstrates that BD is fundamentally a biological phenomenon. However, there is much variability in the course of BD both within a person across time and across people (Johnson, 2005). The triggers that determine how and when this genetic vulnerability is expressed are not yet understood; however, there is evidence to suggest that psychosocial triggers may play an important role in BD risk (e.g., Johnson et al., 2008; Malkoff-Schwartz et al., 1998).
In addition to the genetic contribution, biological explanations of BD have also focused on brain function. Many of the studies using fMRI techniques to characterize BD have focused on the processing of emotional stimuli based on the idea that BD is fundamentally a disorder of emotion (APA, 2000). Findings show that regions of the brain thought to be involved in emotional processing and regulation are activated differently in people with BD relative to healthy controls (e.g., Altshuler et al., 2008; Hassel et al., 2008; Lennox, Jacob, Calder, Lupson, & Bullmore, 2004).
However, there is little consensus as to whether a particular brain region becomes more or less active in response to an emotional stimulus among people with BD compared with healthy controls. Mixed findings are in part due to samples consisting of participants who are at various phases of illness at the time of testing (manic, depressed, inter-episode). Sample sizes tend to be relatively small, making comparisons between subgroups difficult. Additionally, the use of a standardized stimulus (e.g., facial expression of anger) may not elicit a sufficiently strong response. Personally engaging stimuli, such as recalling a memory, may be more effective in inducing strong emotions (Isacowitz, Gershon, Allard, & Johnson, 2013).
Within the psychosocial level, research has focused on the environmental contributors to BD. A series of studies show that environmental stressors, particularly severe stressors (e.g., loss of a significant relationship), can adversely impact the course of BD. People with BD have substantially increased risk of relapse (Ellicott, Hammen, Gitlin, Brown, & Jamison, 1990) and suffer more depressive symptoms (Johnson, Winett, Meyer, Greenhouse, & Miller, 1999) following a severe life stressor. Interestingly, positive life events can also adversely impact the course of BD. People with BD suffer more manic symptoms after life events involving attainment of a desired goal (Johnson et al., 2008). Such findings suggest that people with BD may have a hypersensitivity to rewards.
Evidence from the life stress literature has also suggested that people with mood disorders may have a circadian vulnerability that renders them sensitive to stressors that disrupt their sleep or rhythms. According to social zeitgeber theory (Ehlers, Frank, & Kupfer, 1988; Frank et al., 1994), stressors that disrupt sleep, or that disrupt the daily routines that entrain the biological clock (e.g., meal times) can trigger episode relapse. Consistent with this theory, studies have shown that life events that involve a disruption in sleep and daily routines, such as overnight travel, can increase bipolar symptoms in people with BD (Malkoff-Schwartz et al., 1998).
What Are Some of the Well-Supported Treatments for Mood Disorders?
Depressive Disorders
There are many treatment options available for people with MDD. First, a number of antidepressant medications are available, all of which target one or more of the neurotransmitters implicated in depression.

The earliest antidepressant medications were monoamine oxidase inhibitors (MAOIs). MAOIs inhibit monoamine oxidase, an enzyme involved in deactivating dopamine, norepinephrine, and serotonin. Although effective in treating depression, MAOIs can have serious side effects. Patients taking MAOIs may develop dangerously high blood pressure if they take certain drugs (e.g., antihistamines) or eat foods containing tyramine, an amino acid commonly found in foods such as aged cheeses, wine, and soy sauce. Tricyclics, the second-oldest class of antidepressant medications, block the reabsorption of norepinephrine, serotonin, or dopamine at synapses, resulting in their increased availability. Tricyclics are most effective for treating the vegetative and somatic symptoms of depression. Like MAOIs, they have serious side effects, the most concerning of which is cardiotoxicity.

Selective serotonin reuptake inhibitors (SSRIs; e.g., fluoxetine) and serotonin and norepinephrine reuptake inhibitors (SNRIs; e.g., duloxetine) are the most recently introduced antidepressant medications. SSRIs, the most commonly prescribed class of antidepressants, block the reabsorption of serotonin, whereas SNRIs block the reabsorption of both serotonin and norepinephrine. SSRIs and SNRIs have fewer serious side effects than do MAOIs and tricyclics. In particular, they are less cardiotoxic, less lethal in overdose, and produce fewer cognitive impairments. They are not, however, without their own side effects, which include but are not limited to difficulty having orgasms, gastrointestinal issues, and insomnia. It should be noted that antidepressant medication may not work equally well for all people.
This approach to treatment often involves experimentation with several medications and dosages, and may be more effective when paired with physical exercise and psychotherapy.
Other biological treatments for people with depression include electroconvulsive therapy (ECT), transcranial magnetic stimulation (TMS), and deep brain stimulation. ECT involves inducing a seizure after a patient takes muscle relaxants and is under general anesthesia. ECT is a viable treatment for patients with severe depression or for those who show resistance to antidepressants, although the mechanisms through which it works remain unknown. Common side effects are confusion and memory loss, which are usually short-term (Schulze-Rauschenbach, Harms, Schlaepfer, Maier, Falkai, & Wagner, 2005). Repetitive TMS is a noninvasive technique administered while a patient is awake. Brief pulsating magnetic fields are delivered to the cortex, inducing electrical activity. TMS has fewer side effects than ECT (Schulze-Rauschenbach et al., 2005), and while outcome studies are mixed, there is evidence that TMS is a promising treatment for patients with MDD who have shown resistance to other treatments (Rosa et al., 2006). Most recently, deep brain stimulation is being examined as a treatment option for patients who have not responded to more traditional treatments like those already described. Deep brain stimulation involves implanting an electrode in the brain. The electrode is connected to an implanted neurostimulator, which electrically stimulates that particular brain region. Although there is some evidence of its effectiveness (Mayberg et al., 2005), additional research is needed.
Several psychosocial treatments have received strong empirical support, meaning that independent investigations have achieved similarly positive results—a high threshold for examining treatment outcomes. These treatments include but are not limited to behavior therapy, cognitive therapy, and interpersonal therapy. Behavior therapies focus on increasing the frequency and quality of experiences that are pleasant or help the patient achieve mastery. Cognitive therapies primarily focus on helping patients identify and change distorted automatic thoughts and assumptions (e.g., Beck, 1967). Cognitive-behavioral therapies are based on the rationale that thoughts, behaviors, and emotions affect and are affected by each other. Interpersonal Therapy for Depression focuses largely on improving interpersonal relationships by targeting problem areas, specifically unresolved grief, interpersonal role disputes, role transitions, and interpersonal deficits. The overall response rate for cognitive behavioral therapy for depression, based on international samples, has ranged from 34% to 71% (Beard, Stein, Hearon, Lee, Hsu, & Bjorgvinsson, 2016; Santoft, Axelsson, Ost, Hedman-Lagerlof, Fust, & Hedman-Lagerlof, 2019). Finally, there is also some support for the effectiveness of Short-Term Psychodynamic Therapy for Depression (Leichsenring, 2001). The short-term treatment focuses on a limited number of important issues, and the therapist tends to be more actively involved than in more traditional psychodynamic therapy.
Bipolar Disorders
Patients with BD are typically treated with pharmacotherapy. Antidepressants such as SSRIs and SNRIs are the primary choice of treatment for depression, whereas for BD, lithium is the first-line treatment. This is because SSRIs and SNRIs have the potential to induce mania or hypomania in patients with BD. Lithium acts on several neurotransmitter systems in the brain through complex mechanisms, including reducing excitatory (dopamine and glutamate) neurotransmission and increasing inhibitory (GABA) neurotransmission (Lenox & Hahn, 2000). Lithium has strong efficacy for the treatment of BD (Geddes, Burgess, Hawton, Jamison, & Goodwin, 2004). However, a number of side effects can make lithium treatment difficult for patients to tolerate. Side effects include impaired cognitive function (Wingo, Wingo, Harvey, & Baldessarini, 2009), as well as physical symptoms such as nausea, tremor, weight gain, and fatigue (Dunner, 2000). Some of these side effects can improve with continued use; however, medication noncompliance remains an ongoing concern in the treatment of patients with BD. Anticonvulsant medications (e.g., carbamazepine, valproate) are also commonly used to treat patients with BD, either alone or in conjunction with lithium.
There are several adjunctive treatment options for people with BD. Interpersonal and social rhythm therapy (IPSRT; Frank et al., 1994) is a psychosocial intervention focused on addressing the mechanism of action posited in social zeitgeber theory to predispose patients who have BD to relapse, namely sleep disruption. A growing body of literature provides support for the central role of sleep dysregulation in BD (Harvey, 2008). Consistent with this literature, IPSRT aims to increase rhythmicity of patients’ lives and encourage vigilance in maintaining a stable rhythm. The therapist and patient work to develop and maintain a healthy balance of activity and stimulation such that the patient does not become overly active (e.g., by taking on too many projects) or inactive (e.g., by avoiding social contact). The efficacy of IPSRT has been demonstrated in that patients who received this treatment show reduced risk of episode recurrence and are more likely to remain well (Frank et al., 2005).
Conclusion
Everyone feels down or euphoric from time to time. For some people, these feelings can last for long periods of time and can also co-occur with other symptoms that, in combination, interfere with their everyday lives. When people experience an MDE or a manic episode, they see the world differently. During an MDE, people often feel hopeless about the future, and may even experience suicidal thoughts. During a manic episode, people often behave in ways that are risky or place them in danger. They may spend money excessively or have unprotected sex, often expressing deep shame over these decisions after the episode. MDD and BD cause significant problems for people at school, at work, and in their relationships and affect people regardless of gender, age, nationality, race, religion, or sexual orientation. If you or someone you know is suffering from a mood disorder, it is important to seek help. Effective treatments are available and continually improving. If you have an interest in mood disorders, there are many ways to contribute to their understanding, prevention, and treatment, whether by engaging in research or clinical work.
Outside Resources
Books: Recommended memoirs include Darkness Visible: A Memoir of Madness by William Styron (MDD); The Noonday Demon: An Atlas of Depression by Andrew Solomon (MDD); and An Unquiet Mind: A Memoir of Moods and Madness by Kay Redfield Jamison (BD).
Web: Visit the Association for Behavioral and Cognitive Therapies to find a list of recommended therapists and evidence-based treatments. http://www.abct.org
Web: Visit the Depression and Bipolar Support Alliance for educational information and social support options. http://www.dbsalliance.org/
Discussion Questions
1. What factors might explain the large gender difference in the prevalence rates of MDD?
2. Why might American ethnic minority groups experience more persistent BD than European Americans?
3. Why might the age of onset for MDD be decreasing over time?
4. Why might overnight travel constitute a potential risk for a person with BD?
5. What are some reasons positive life events may precede the occurrence of manic episode?
3.02: Summary and Self-Test- Mood Disorders
Summary
Everyone feels down or euphoric from time to time, but this is different from having a mood disorder like major depressive disorder or bipolar disorder. Mood disorders are extended periods of depressed, euphoric, or irritable moods that in combination with other symptoms cause the person distress and interfere with their life.
Mood episodes are discrete periods of mood disruption. A major depressive episode refers to symptoms that last for at least two weeks and cause significant distress or impairment in functioning. Core symptoms include low mood and anhedonia.
Manic and hypomanic episodes are periods of abnormally and persistently euphoric, expansive, or irritable mood and persistently increased goal-directed activity or energy. For a manic episode, these symptoms must be present for at least one week; for a hypomanic episode, at least four days.
There are two major types of unipolar mood disorders: major depressive disorder, which is defined by one or more major depressive episodes, and persistent depressive disorder, which is feeling depressed most days for at least two years.
Bipolar I disorder is characterized by a single or recurrent manic episode, whereas Bipolar II is characterized by a single or recurrent hypomanic episode accompanied by a major depressive episode. Cyclothymic disorder is characterized by numerous and alternating periods of hypomania and depression, lasting at least two years.
The lifetime prevalence rate for major depression in Canada is 11.2%. The average age of onset for depression is in the mid-20s, and an earlier age of onset predicts a worse course. About 5-10% of people who experience a major depressive episode will later experience mania.
Women experience 2-3 times higher rates of major depression than men do, although before puberty rates of childhood depression are equal for boys and girls. Major depression is inversely related to socioeconomic status. Unfortunately, sexual minorities experience much higher rates of depression than the general population.
The lifetime prevalence rate for bipolar disorder is 2.6% in Canada. The majority of people with bipolar disorder also meet criteria for another disorder. Adolescence is a significant risk period for bipolar disorder.
Multiple variables are implicated in the development of depressive disorders including genetic factors, stressful life events, early adversity, chronic stress, and attributional styles.
Bipolar disorder is highly heritable and might fundamentally be a biological phenomenon. However, as each person experiences the course of their bipolar disorder differently, environmental variables still impact it including stress and social rhythms.
There are many treatment options for depression including antidepressant medication, electroconvulsive therapy, transcranial magnetic stimulation, deep brain stimulation, cognitive-behavioural therapy, interpersonal therapy, psychodynamic therapy, and mindfulness-based cognitive therapy.
Patients with bipolar disorder are typically treated with lithium. Interpersonal and social rhythm therapy is also effective for bipolar disorder.
An interactive or media element has been excluded from this version of the text. You can view it online here:
https://openpress.usask.ca/abnormalpsychology/?p=265
Anxiety is a natural part of life and, at normal levels, helps us to function at our best. However, for people with anxiety disorders, anxiety is overwhelming and hard to control. Anxiety disorders develop out of a blend of biological (genetic) and psychological factors that, when combined with stress, may lead to the development of a disorder. Primary anxiety-related diagnoses include generalized anxiety disorder, panic disorder, specific phobia, social anxiety disorder (social phobia), post-traumatic stress disorder (PTSD), and obsessive-compulsive disorder. In this module, we summarize the main clinical features of each of these disorders and discuss their similarities and differences with everyday experiences of anxiety. We will also briefly discuss how the anxiety disorders are treated. Note that we will not focus on PTSD in this chapter, as we will be discussing it in more detail in a later module.
04: Anxiety Disorders
Learning Objectives
• Understand the relationship between anxiety and anxiety disorders.
• Identify key vulnerabilities for developing anxiety and related disorders.
• Identify main diagnostic features of specific anxiety-related disorders.
• Differentiate between disordered and non-disordered functioning.
• Describe treatments for anxiety disorders.
What is Anxiety?
What is anxiety? Most of us feel some anxiety almost every day of our lives. Maybe you have an important test coming up for school. Or maybe there’s that big game next Saturday, or that first date with someone new you are hoping to impress. Anxiety can be defined as a negative mood state that is accompanied by bodily symptoms such as increased heart rate, muscle tension, a sense of unease, and apprehension about the future (APA, 2013; Barlow, 2002).
Anxiety is what motivates us to plan for the future, and in this sense, anxiety is actually a good thing. It’s that nagging feeling that motivates us to study for that test, practice harder for that game, or be at our very best on that date. But some people experience anxiety so intensely that it is no longer helpful or useful. They may become so overwhelmed and distracted by anxiety that they actually fail their test, fumble the ball, or spend the whole date fidgeting and avoiding eye contact. If anxiety begins to interfere in the person’s life in a significant way, it is considered a disorder.
Vulnerabilities to Anxiety
Anxiety and closely related disorders emerge from “triple vulnerabilities,” a combination of biological, psychological, and specific factors that increase our risk for developing a disorder (Barlow, 2002; Suárez, Bennett, Goldstein, & Barlow, 2009). Biological vulnerabilities refer to specific genetic and neurobiological factors that might predispose someone to develop anxiety disorders. No single gene directly causes anxiety or panic, but our genes may make us more susceptible to anxiety and influence how our brains react to stress (Drabant et al., 2012; Gelernter & Stein, 2009; Smoller, Block, & Young, 2009). Psychological vulnerabilities refer to the influences that our early experiences have on how we view the world. If we were confronted with unpredictable stressors or traumatic experiences at younger ages, we may come to view the world as unpredictable and uncontrollable, even dangerous (Chorpita & Barlow, 1998; Gunnar & Fisher, 2006). Specific vulnerabilities refer to how our experiences lead us to focus and channel our anxiety (Suárez et al., 2009). If we learned that physical illness is dangerous, maybe through witnessing our family’s reaction whenever anyone got sick, we may focus our anxiety on physical sensations. If we learned that disapproval from others has negative, even dangerous consequences, such as being yelled at or severely punished for even the slightest offense, we might focus our anxiety on social evaluation. If we learn that the “other shoe might drop” at any moment, we may focus our anxiety on worries about the future. None of these vulnerabilities directly causes anxiety disorders on its own—instead, when all of these vulnerabilities are present, and we experience some triggering life stress, an anxiety disorder may be the result (Barlow, 2002; Suárez et al., 2009).
In the next sections, we will briefly explore each of the major anxiety based disorders, found in the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) (APA, 2013).
Generalized Anxiety Disorder
Most of us worry some of the time, and this worry can actually be useful in helping us to plan for the future or make sure we remember to do something important. Most of us can set aside our worries when we need to focus on other things or stop worrying altogether whenever a problem has passed. However, for someone with generalized anxiety disorder (GAD), these worries become difficult, or even impossible, to turn off. They may find themselves worrying excessively about a number of different things, both minor and catastrophic. Their worries also come with a host of other symptoms such as muscle tension, fatigue, agitation or restlessness, irritability, difficulties with sleep (either falling asleep, staying asleep, or both), or difficulty concentrating. The DSM-5 criteria specify that at least six months of excessive anxiety and worry of this type must be ongoing, happening more days than not for a good proportion of the day, to receive a diagnosis of GAD.
About 5.7% of the population has met criteria for GAD at some point during their lifetime (Kessler, Berglund, et al., 2005), making it one of the most common anxiety disorders (see Table 1). Data from the 2012 Canadian Community Health Survey found that the 12-month and lifetime prevalence rate of GAD for Canadians aged 15 or older was 2.6% and 8.7%, respectively (Statistics Canada, 2016). GAD has been found more commonly among women and in urban geographical areas (Pelletier, O’Donnell, McRae, & Grenier, 2017).
What makes a person with GAD worry more than the average person? Research shows that individuals with GAD are more sensitive and vigilant toward possible threats than people who are not anxious (Aikins & Craske, 2001; Barlow, 2002; Bradley, Mogg, White, Groom, & de Bono, 1999). This may be related to early stressful experiences, which can lead to a view of the world as an unpredictable, uncontrollable, and even dangerous place. Some have suggested that people with GAD worry as a way to gain some control over these otherwise uncontrollable or unpredictable experiences and against uncertain outcomes (Dugas, Gagnon, Ladouceur, & Freeston, 1998). By repeatedly going through all of the possible “What if?” scenarios in their mind, the person might feel like they are less vulnerable to an unexpected outcome, giving them the sense that they have some control over the situation (Wells, 2002). Others have suggested people with GAD worry as a way to avoid feeling distressed (Borkovec, Alcaine, & Behar, 2004). For example, Borkovec and Hu (1990) found that those who worried when confronted with a stressful situation had less physiological arousal than those who didn’t worry, maybe because the worry “distracted” them in some way.
The problem is, all of this “what if?”-ing doesn’t get the person any closer to a solution or an answer and, in fact, might take them away from important things they should be paying attention to in the moment, such as finishing an important project. Many of the catastrophic outcomes people with GAD worry about are very unlikely to happen, so when the catastrophic event doesn’t materialize, the act of worrying gets reinforced (Borkovec, Hazlett-Stevens, & Diaz, 1999). For example, if a mother spends all night worrying about whether her teenage daughter will get home safe from a night out and the daughter returns home without incident, the mother could easily attribute her daughter’s safe return to her successful “vigil.” What the mother hasn’t learned is that her daughter would have returned home just as safe if she had been focusing on the movie she was watching with her husband, rather than being preoccupied with worries. In this way, the cycle of worry is perpetuated, and, subsequently, people with GAD often miss out on many otherwise enjoyable events in their lives.
Panic Disorder and Agoraphobia
Have you ever gotten into a near-accident or been taken by surprise in some way? You may have felt a flood of physical sensations, such as a racing heart, shortness of breath, or tingling sensations. This alarm reaction is called the “fight or flight” response (Cannon, 1929) and is your body’s natural reaction to fear, preparing you to either fight or escape in response to threat or danger. It’s likely you weren’t too concerned with these sensations, because you knew what was causing them. But imagine if this alarm reaction came “out of the blue,” for no apparent reason, or in a situation in which you didn’t expect to be anxious or fearful. This is called an “unexpected” panic attack or a false alarm. Because there is no apparent reason or cue for the alarm reaction, you might react to the sensations with intense fear, maybe thinking you are having a heart attack, or going crazy, or even dying. You might begin to associate the physical sensations you felt during this attack with this fear and may start to go out of your way to avoid having those sensations again.
Unexpected panic attacks such as these are at the heart of panic disorder (PD). However, to receive a diagnosis of PD, the person must not only have unexpected panic attacks but also must experience continued intense anxiety and avoidance related to the attack for at least one month, causing significant distress or interference in their lives. People with panic disorder tend to interpret even normal physical sensations in a catastrophic way, which triggers more anxiety and, ironically, more physical sensations, creating a vicious cycle of panic (Clark, 1986, 1996). The person may begin to avoid a number of situations or activities that produce the same physiological arousal that was present during the beginnings of a panic attack. For example, someone who experienced a racing heart during a panic attack might avoid exercise or caffeine. Someone who experienced choking sensations might avoid wearing high-necked sweaters or necklaces. Avoidance of these internal bodily or somatic cues for panic has been termed interoceptive avoidance (Barlow & Craske, 2007; Brown, White, & Barlow, 2005; Craske & Barlow, 2008; Shear et al., 1997).
The individual may also have experienced an overwhelming urge to escape during the unexpected panic attack. This can lead to a sense that certain places or situations—particularly situations where escape might not be possible—are not “safe.” These situations become external cues for panic. If the person begins to avoid several places or situations, or still endures these situations but does so with a significant amount of apprehension and anxiety, then the person also has agoraphobia (Barlow, 2002; Craske & Barlow, 1988; Craske & Barlow, 2008). Agoraphobia can cause significant disruption to a person’s life, causing them to go out of their way to avoid situations, such as adding hours to a commute to avoid taking the train or only ordering take-out to avoid having to enter a grocery store. In one tragic case seen by our clinic, a woman suffering from agoraphobia had not left her apartment for 20 years and had spent the past 10 years confined to one small area of her apartment, away from the view of the outside. In some cases, agoraphobia develops in the absence of panic attacks and therefore is a separate disorder in DSM-5. But agoraphobia often accompanies panic disorder.
One third of adults in Canada experience a panic attack each year; however, only 1-2% of Canadians that same year are diagnosed with panic disorder (Canadian Mental Health Association’s BC Division, 2013). About 4.7% of the population has met criteria for PD or agoraphobia over their lifetime, according to both American (Kessler, Chiu, Demler, Merikangas, & Walters, 2005; Kessler et al., 2006) (see Table 4.1) and Canadian data (Canadian Mental Health Association’s BC Division, 2013). In all of these cases of panic disorder, what was once an adaptive natural alarm reaction now becomes a learned, and much feared, false alarm. Data from the 2002 Canadian Community Health Survey found that the prevalence of agoraphobia was 0.78% for people aged 15-54 years and 0.61% for adults aged 55 years or older (McCabe, Cairney, Veldhuizen, Herrmann, & Streiner, 2006). In that paper, agoraphobia was reported to be more common in women, younger age groups, and people who were widowed or divorced (McCabe et al., 2006).
Specific Phobia
The majority of us might have certain things we fear, such as bees, or needles, or heights (Myers et al., 1984). But what if this fear is so consuming that you can’t go out on a summer’s day, or get vaccines needed to go on a special trip, or visit your doctor in her new office on the 26th floor? To meet criteria for a diagnosis of specific phobia, there must be an irrational fear of a specific object or situation that substantially interferes with the person’s ability to function. For example, a patient at our clinic turned down a prestigious and coveted artist residency because it required spending time near a wooded area, bound to have insects. Another patient purposely left her house two hours early each morning so she could walk past her neighbor’s fenced yard before they let their dog out in the morning. Specific phobias affect about 1 in every 10 Canadians (Canadian Psychological Association, 2015).
The list of possible phobias is staggering, but four major subtypes of specific phobia are recognized: blood-injury-injection (BII) type, situational type (such as planes, elevators, or enclosed places), natural environment type for events one may encounter in nature (for example, heights, storms, and water), and animal type.
A fifth category, “other,” includes phobias that do not fit any of the four major subtypes (for example, fears of choking, vomiting, or contracting an illness). Most phobic reactions cause a surge of activity in the sympathetic nervous system and increased heart rate and blood pressure, maybe even a panic attack. However, people with BII type phobias usually experience a marked drop in heart rate and blood pressure and may even faint. In this way, those with BII phobias almost always differ in their physiological reaction from people with other types of phobia (Barlow & Liebowitz, 1995; Craske, Antony, & Barlow, 2006; Hofmann, Alpers, & Pauli, 2009; Ost, 1992). BII phobia also runs in families more strongly than any other phobic disorder we know (Antony & Barlow, 2002; Page & Martin, 1998). Specific phobia is one of the most common psychological disorders in the United States, with 12.5% of the population reporting a lifetime history of fears significant enough to be considered a “phobia” (Arrindell et al., 2003; Kessler, Berglund, et al., 2005) (see Table 1). Most people who suffer from specific phobia tend to have multiple phobias of several types (Hofmann, Lehman, & Barlow, 1997).
Social Anxiety Disorder (Social Phobia)
Many people consider themselves shy, and most people find social evaluation uncomfortable at best, or giving a speech somewhat mortifying. Yet, only a small proportion of the population fear these types of situations significantly enough to merit a diagnosis of social anxiety disorder (SAD) (APA, 2013). SAD is more than exaggerated shyness (Bogels et al., 2010; Schneier et al., 1996). To receive a diagnosis of SAD, the fear and anxiety associated with social situations must be so strong that the person avoids them entirely, or if avoidance is not possible, the person endures them with a great deal of distress. Further, the fear and avoidance of social situations must get in the way of the person’s daily life, or seriously limit their academic or occupational functioning. For example, a patient at our clinic compromised her perfect 4.0 grade point average because she could not complete a required oral presentation in one of her classes, causing her to fail the course. Fears of negative evaluation might make someone repeatedly turn down invitations to social events or avoid having conversations with people, leading to greater and greater isolation.
The specific social situations that trigger anxiety and fear range from one-on-one interactions, such as starting or maintaining a conversation; to performance-based situations, such as giving a speech or performing on stage; to assertiveness, such as asking someone to change disruptive or undesirable behaviors. Fear of social evaluation might even extend to such things as using public restrooms, eating in a restaurant, filling out forms in a public place, or even reading on a train. Any type of situation that could potentially draw attention to the person can become a feared social situation. For example, one patient of ours went out of her way to avoid any situation in which she might have to use a public restroom for fear that someone would hear her in the bathroom stall and think she was disgusting. If the fear is limited to performance-based situations, such as public speaking, a diagnosis of SAD performance only is assigned.
What causes someone to fear social situations to such a large extent? The person may have learned growing up that social evaluation in particular can be dangerous, creating a specific psychological vulnerability to develop social anxiety (Bruch & Heimberg, 1994; Lieb et al., 2000; Rapee & Melville, 1997). For example, the person’s caregivers may have harshly criticized and punished them for even the smallest mistake, maybe even punishing them physically.
Or, someone might have experienced a social trauma that had lasting effects, such as being bullied or humiliated. Interestingly, one group of researchers found that 92% of adults in their study sample with social phobia experienced severe teasing and bullying in childhood, compared with only 35% to 50% among people with other anxiety disorders (McCabe, Antony, Summerfeldt, Liss, & Swinson, 2003). Someone else might react so strongly to the anxiety provoked by a social situation that they have an unexpected panic attack. This panic attack then becomes associated (conditioned response) with the social situation, causing the person to fear they will panic the next time they are in that situation. This is not considered PD, however, because the person’s fear is more focused on social evaluation than having unexpected panic attacks, and the fear of having an attack is limited to social situations. According to American studies, as many as 12.1% of the general population suffer from social phobia at some point in their lives (Kessler, Berglund, et al., 2005), making it one of the most common anxiety disorders, second only to specific phobia (see Table 1). In a survey of residents (aged 15-64) from Ontario, Canada, the 12-month and lifetime prevalence of social anxiety was 6.7% and 13%, respectively (Stein & Kean, 2000). Social anxiety disorder is more common among females and younger age groups (Stein & Kean, 2000).
Obsessive-Compulsive Disorder
Have you ever had a strange thought pop into your mind, such as picturing the stranger next to you naked? Or maybe you walked past a crooked picture on the wall and couldn’t resist straightening it. Most people have occasional strange thoughts and may even engage in some “compulsive” behaviors, especially when they are stressed (Boyer & Liénard, 2008; Fullana et al., 2009). But for most people, these thoughts are nothing more than a passing oddity, and the behaviors are done (or not done) without a second thought. For someone with obsessive-compulsive disorder (OCD), however, these thoughts and compulsive behaviors don’t just come and go. Instead, strange or unusual thoughts are taken to mean something much more important and real, maybe even something dangerous or frightening. The urge to engage in some behavior, such as straightening a picture, can become so intense that it is nearly impossible not to carry it out, or causes significant anxiety if it can’t be carried out. Further, someone with OCD might become preoccupied with the possibility that the behavior wasn’t carried out to completion and feel compelled to repeat the behavior again and again, maybe several times before they are “satisfied.”
To receive a diagnosis of OCD, a person must experience obsessive thoughts and/or compulsions that seem irrational or nonsensical, but that keep coming into their mind. Some examples of obsessions include doubting thoughts (such as doubting a door is locked or an appliance is turned off), thoughts of contamination (such as thinking that touching almost anything might give you cancer), or aggressive thoughts or images that are unprovoked or nonsensical. Compulsions may be carried out in an attempt to neutralize some of these thoughts, providing temporary relief from the anxiety the obsessions cause, or they may be nonsensical in and of themselves. Either way, compulsions are distinct in that they must be repetitive or excessive, the person feels “driven” to carry out the behavior, and the person feels a great deal of distress if they can’t engage in the behavior. Some examples of compulsive behaviors are repetitive washing (often in response to contamination obsessions), repetitive checking (locks, door handles, appliances often in response to doubting obsessions), ordering and arranging things to ensure symmetry, or doing things according to a specific ritual or sequence (such as getting dressed or ready for bed in a specific order). To meet diagnostic criteria for OCD, engaging in obsessions and/or compulsions must take up a significant amount of the person’s time, at least an hour per day, and must cause significant distress or impairment in functioning.
According to large American samples, 1.6% of the population has met criteria for OCD over the course of a lifetime (Kessler, Berglund, et al., 2005) (see Table 1). Data from the 2012 Canadian Community Health Survey estimated the prevalence of OCD at 0.93%, and found that it was more common among females, younger adults, and those with lower incomes (Osland, Arnold, & Pringsheim, 2018). Although people with OCD, compared to people without OCD, are significantly more likely to report needing help for mental health, they are also more likely not to actually receive help (Osland et al., 2018). This suggests a gap between the needs of people with OCD symptoms and existing services. Whereas OCD was previously categorized as an Anxiety Disorder, in the most recent version of the DSM (DSM-5; APA, 2013) it has been reclassified under the more specific category of Obsessive-Compulsive and Related Disorders.
People with OCD often confuse having an intrusive thought with their potential for carrying out the thought. Whereas most people when they have a strange or frightening thought are able to let it go, a person with OCD may become “stuck” on the thought and be intensely afraid that they might somehow lose control and act on it. Or worse, they believe that having the thought is just as bad as doing it. This is called thought-action fusion. For example, one patient of ours was plagued by thoughts that she would cause harm to her young daughter. She experienced intrusive images of throwing hot coffee in her daughter’s face or pushing her face underwater when she was giving her a bath. These images were so terrifying to the patient that she would no longer allow herself any physical contact with her daughter and would leave her daughter in the care of a babysitter if her husband or another family member was not available to “supervise” her. In reality, the last thing she wanted to do was harm her daughter; she had no intention or desire to act on the aggressive thoughts and images, and people with OCD do not act on such thoughts. But the thoughts were so horrifying to her that she made every attempt to prevent any possibility of carrying them out, even if it meant never holding, cradling, or cuddling her daughter. These are the types of struggles people with OCD face every day.
What is Eco-Anxiety?
An assessment conducted by the Government of Canada in 2019 analyzed climate data from 1948 to 2016, which confirmed what many expected but feared – a steady, linear progression of climate warming that is projected to amplify. Further, risks of extreme weather and climate-related natural disasters are increasingly widespread and are having serious consequences (Government of Canada, 2019). For Canadians, this includes increased exposure to wildfires (Jain et al., 2017), heatwaves (Hartmann et al., 2013), droughts (Girardin & Wotton, 2009), and floods (Burn & Whitfield, 2015), as well as changes in snow and ice cover durations, freshwater availability, and surrounding ocean activity in the coming years (as cited in Government of Canada, 2019). You may even recall some relatively recent disasters, such as the 2013 Southern Alberta flood and the 2016 Fort McMurray wildfire.
Historically, discussions and research on the impact of climate change primarily focused on physical health, such as increased risk of asthma and cardiovascular disease (Centers for Disease Control and Prevention & National Center for Environmental Health, 2014). However, as Canadians continue to be affected by widespread climate change, interest in the effects of climate change on mental health has significantly increased (APA, 2017). For example, with exposure to climate change comes increased emotional responding, which can be debilitating (APA, 2017). Heightened emotional responses can disrupt individuals’ information processing and decision making, interfering with daily functioning (APA, 2017). Further, extreme weather and climate-related natural disasters can be traumatic, regardless of whether individuals experience them directly, further impairing mental health (APA, 2017). Researchers have demonstrated positive correlations between climate change and several mental health issues, including depression and substance use (Neria & Shultz, 2012), as well as posttraumatic stress (Bryant et al., 2014). Concerningly, climate change also appears to relate to increased aggression and interpersonal violence (Anderson, 2001; Ranson, 2012).
Have you ever found yourself worrying about these issues and the potential consequences for not only our current living, but also future generations? If so, you are not alone. Individuals of all ages are becoming increasingly worried and fearful about environmental damage and potential future disaster – this is something we refer to as eco-anxiety (Albrecht, 2011; American Psychological Association [APA], 2017). However, you may have heard other variations of this term, including ‘climate anxiety,’ ‘climate grief,’ and ‘environmental doom.’ Eco-anxiety is largely founded on the current state of the environment and its uncertain future. Additionally, given current evidence demonstrating the direct impact of humans on climate change (see Government of Canada, 2019, for a detailed review), those struggling with eco-anxiety are often primarily concerned about the role of human activities (APA, 2017). While data indicating the exact prevalence of eco-anxiety is limited, research based in Australia suggests that ~96% of young people consider climate change to be a serious issue and ~89% report being concerned about the long-term consequences (Chiw & Ling, 2019). Moreover, researchers have begun investigating the presence of eco-anxiety among undergraduate students, with the vast majority appearing to have high levels of eco-anxiety and stress over the earth’s state (Kelly, 2017).
Unfortunately, it is proposed that common psychological responses to eco-related distress – such as perceived lack of control, feelings of helplessness, and avoidant behaviours – hinder individuals’ ability to contribute to climate-change solutions (APA, 2017). Conversely, when individuals personally relate the state of the climate to their own well-being, their motivation to engage in positive solutions increases (Sawitri, Hadiyanto, & Hadi, 2015, as cited in Government of Canada, 2019). If you thought about the effects of climate change on humans prior to reading this section, did mental health ever come to mind? What have your experiences with eco-anxiety been, if at all?
Treatments for Anxiety and Related Disorders
Many successful treatments for anxiety and related disorders have been developed over the years. Medications (anti-anxiety drugs and antidepressants) have been found to be beneficial for disorders other than specific phobia, but relapse rates are high once medications are stopped (Heimberg et al., 1998; Hollon et al., 2005), and some classes of medications (minor tranquilizers or benzodiazepines) can be habit forming.
Exposure-based cognitive behavioral therapies (CBT) are effective psychosocial treatments for anxiety disorders, and many show greater treatment effects than medication in the long term (Barlow, Allen, & Basden, 2007; Barlow, Gorman, Shear, & Woods, 2000). In CBT, patients are taught skills to help identify and change problematic thought processes, beliefs, and behaviors that tend to worsen symptoms of anxiety, and practice applying these skills to real-life situations through exposure exercises. Patients learn how the automatic “appraisals” or thoughts they have about a situation affect both how they feel and how they behave. Similarly, patients learn how engaging in certain behaviors, such as avoiding situations, tends to strengthen the belief that the situation is something to be feared. A key aspect of CBT is exposure exercises, in which the patient learns to gradually approach situations they find fearful or distressing, in order to challenge their beliefs and learn new, less fearful associations about these situations.
Typically 50% to 80% of patients receiving drugs or CBT will show a good initial response, with the effect of CBT more durable. Newer developments in the treatment of anxiety disorders are focusing on novel interventions, such as the use of certain medications to enhance learning during CBT (Otto et al., 2010), and transdiagnostic treatments targeting core, underlying vulnerabilities (Barlow et al., 2011). As we advance our understanding of anxiety and related disorders, so too will our treatments advance, with the hopes that for the many people suffering from these disorders, anxiety can once again become something useful and adaptive, rather than something debilitating.
Outside Resources
Web: American Psychological Association (APA) http://www.apa.org/topics/anxiety/index.aspx
Web: National Institute of Mental Health (NIMH) http://www.nimh.nih.gov/health/topics/anxiety-disorders/index.shtml
Web: Anxiety and Depression Association of America (ADAA) http://www.adaa.org/
Web: Center for Anxiety and Related Disorders (CARD) http://www.bu.edu/card/
Discussion Questions
1. Name and describe the three main vulnerabilities contributing to the development of anxiety and related disorders. Do you think these disorders could develop out of biological factors alone? Could these disorders develop out of learning experiences alone?
2. Many of the symptoms in anxiety and related disorders overlap with experiences most people have. What features differentiate someone with a disorder versus someone without?
3. What is an “alarm reaction?” If someone experiences an alarm reaction when they are about to give a speech in front of a room full of people, would you consider this a “true alarm” or a “false alarm?”
4. Many people are shy. What differentiates someone who is shy from someone with social anxiety disorder? Do you think shyness should be considered an anxiety disorder?
5. Is anxiety ever helpful? What about worry?
Learning Objectives
• Describe how body dysmorphic disorder presents itself.
• Describe the epidemiology of body dysmorphic disorder.
• Indicate which disorders are commonly comorbid with body dysmorphic disorder.
• Describe the theories for the etiology of body dysmorphic disorder.
• Describe the treatment for body dysmorphic disorder.
Clinical Description
Body dysmorphic disorder (BDD) is another obsessive-compulsive and related disorder; however, the focus of its obsessions is a perceived defect or flaw in physical appearance. A key feature of these obsessions is that the perceived defects or flaws are not observable to others. Neither an individual with a congenital facial defect nor a burn victim concerned about scarring would be an example of someone with BDD. The obsessions related to one’s appearance can run the spectrum from feeling “unattractive” to “looking hideous.” While any part of the body can be a concern for an individual with BDD, the most commonly reported areas are skin (e.g., acne, wrinkles, skin color), hair (e.g., thinning hair or excessive body hair), or nose (e.g., size, shape).
The distressing nature of the obsessions regarding one’s body often drives individuals with BDD to engage in compulsive behaviors that take up a considerable amount of time. For example, an individual may repeatedly compare her body to other people’s bodies in public, repeatedly check herself in the mirror, or engage in excessive grooming, including using make-up to modify her appearance. Some individuals with BDD will go as far as having numerous plastic surgeries in an attempt to obtain the “perfect” appearance. The problem is that plastic surgery does not usually resolve the issue; after all, the perceived defect or flaw is not observable to others. While most of us are guilty of engaging in some of these behaviors, to meet criteria for BDD one must spend a considerable amount of time preoccupied with his or her appearance (on average 3-8 hours a day), as well as display significant impairment in social, occupational, or other areas of functioning.
Muscle Dysmorphia
While muscle dysmorphia is not a formal diagnosis, it is a common form of BDD, particularly within the male population. Muscle dysmorphia refers to the belief that one’s body is too small or lacks an appropriate amount of muscle definition (Ahmed, Cook, Genen, & Schwartz, 2014). While the severity of BDD appears to be the same in individuals with and without muscle dysmorphia, some studies have found higher rates of substance abuse (e.g., steroid use), poorer quality of life, and increased reports of suicide attempts in those with muscle dysmorphia (Pope, Pope, Menard, Fay, Olivardia, & Phillips, 2005).
Epidemiology
The point prevalence rate for BDD among U.S. adults is 2.4% (APA, 2013). Internationally, this rate drops to 1.7%–1.8% (APA, 2013). Despite the difference between the national and international prevalence rates, the symptoms across races and cultures are similar.
Gender-based prevalence rates indicate a fairly balanced sex ratio (2.5% females; 2.2% males; APA, 2013). While the diagnosis rates may be different, general symptoms of BDD appear to be the same across genders with one exception: males tend to report genital preoccupations, while females are more likely to present with a comorbid eating disorder.
Comorbidity
While research on BDD is still in its infancy, initial studies suggest that major depressive disorder is the most common comorbid psychological disorder (APA, 2013). Major depressive disorder typically occurs after the onset of BDD. Additionally, there are some reports of social anxiety, OCD, and substance-related disorders (likely related to muscle enhancement; APA, 2013).
Etiology
Initial studies exploring genetic factors for BDD indicate a hereditary influence, as the prevalence of BDD is elevated in first-degree relatives of people with BDD. Interestingly, the prevalence of BDD is also heightened in first-degree relatives of individuals with OCD, suggesting a shared genetic influence between these disorders.
However, environmental factors appear to play a larger role in the development of BDD than of OCD (Ahmed et al., 2014; Iervolino et al., 2009). Specifically, it is believed that negative life experiences such as teasing in childhood, negative social evaluations about one’s body, and even childhood neglect and abuse may contribute to BDD. Cognitive research has further found that people with BDD tend to have an attentional bias toward beauty and attractiveness, selectively attending to words related to these concepts. Cognitive theories have also proposed that individuals with BDD hold dysfunctional beliefs that their worth is inherently tied to their attractiveness, treating attractiveness as one of their primary core values. These beliefs are further reinforced by a society that overly values and emphasizes beauty.
Treatment
Given the strong similarities between OCD and BDD, it should come as no surprise that the only two effective treatments for BDD are the same ones that are effective for OCD. Exposure and response prevention has been successful in treating symptoms of BDD, as clients are repeatedly exposed to their perceived body imperfections/obsessions and prevented from engaging in the compulsions they use to reduce their anxiety (Veale, Gournay, et al., 1996; Wilhelm, Otto, Lohr, & Deckersbach, 1999).
The other treatment option, psychopharmacology, has also been shown to reduce symptoms in individuals diagnosed with BDD. As in OCD, medications such as clomipramine (a tricyclic antidepressant) and SSRIs are generally prescribed. While these are effective in reducing BDD symptoms, symptoms resume nearly immediately once the medication is discontinued, suggesting that medication alone is not an effective long-term treatment option for those with BDD.
Treatment of BDD appears to be difficult, with one study finding that only 9% of clients had full remission at a 1-year follow-up and 21% reported partial remission (Phillips, Pagano, Menard, & Stout, 2006). A more recent study reported more promising results, with 76% of participants achieving full remission over an 8-year period (Bjornsson, Dyck, et al., 2011).
Plastic surgery and medical treatments
It should not come as a surprise that many individuals with BDD seek out plastic surgery to attempt to correct their perceived defects. Phillips and colleagues (2001) evaluated the treatment histories of clients with BDD and found that 76.4% reported some form of plastic surgery or medical treatment, with dermatology treatment the most commonly reported (45%), followed by plastic surgery (23%). The problem with this type of treatment is that the individual is rarely satisfied with the outcome of the procedure, leading them to seek out additional surgeries on the same perceived defect (Phillips et al., 2001). Therefore, it is important that medical professionals thoroughly screen patients for BDD before completing any type of medical treatment.
4.03: Summary and Self-Test- Anxiety Disorders
Summary
Anxiety is a negative mood state that is accompanied by bodily symptoms such as increased heart rate, muscle tension, a sense of unease, and apprehension about the future. Anxiety is a normal human experience, but when it becomes extreme and impairs someone’s functioning, it enters the realm of possible mental illness.
A combination of biological, psychological, and specific vulnerabilities increase a person’s likelihood of developing an anxiety disorder.
Generalized anxiety disorder (GAD) is marked by excessive worry that is difficult or even impossible to turn off. This worry is accompanied by muscle tension, fatigue, agitation or restlessness, irritability, difficulties with sleep, or difficulties concentrating.
Unexpected panic attacks are core to panic disorder. In addition to the panic attacks, the person must also experience continued intense anxiety and avoidance related to the attacks for at least one month, causing significant distress or interference in their lives. Some people with panic disorder also develop agoraphobia, in which they begin to avoid several places or situations, or endure those situations with a significant amount of anxiety.
Specific phobia occurs when someone has an irrational fear of a specific object or situation that substantially interferes with their ability to function. Four major subtypes of specific phobia are recognized: blood-injury-injection type, situational type, natural environment type, and animal type.
Social anxiety disorder involves severe anxiety in social situations where one can be evaluated. This anxiety must get in the way of the person’s daily life or otherwise severely impact their functioning. If the fear is limited to performance-based situations, a performance-only subtype of social anxiety disorder can be diagnosed.
Obsessive-compulsive disorder (OCD) occurs when obsessions (unusual, intrusive thoughts) and compulsions (activities that must be done) are present and interfere with someone’s functioning. Fewer than 1% of Canadians have OCD. People with OCD often suffer from thought-action fusion, the idea that merely having a thought is directly linked with their potential for carrying out the thought.
Anxiety disorders are sometimes treated with anti-anxiety medications or antidepressants. Exposure-based cognitive behavioural therapies are very effective ways of treating anxiety disorders psychotherapeutically.
Body dysmorphic disorder (BDD) is seen as a type of OCD focused on perceived defects or flaws in physical appearance. A key feature of these perceived defects or flaws is that they are not observable to others. It is common for individuals with BDD to also experience major depression.
BDD seems to be predicted by a combination of hereditary factors and environmental factors like teasing in childhood, negative social evaluations about one’s body, and childhood trauma.
Like OCD, BDD is treated with a specific type of CBT called exposure and response prevention.
An interactive or media element has been excluded from this version of the text. You can view it online here:
https://openpress.usask.ca/abnormalpsychology/?p=281
Schizophrenia and the other psychotic disorders are some of the most impairing forms of psychopathology, frequently associated with a profound negative effect on the individual’s educational, occupational, and social function. Up to 3% of Canadians will experience psychosis at some point in their lives (Canadian Mental Health Association, 2013; Schizophrenia Society of Canada, 2017-2018). Schizophrenia affects 1% of the Canadian population (about 1 in 100 persons), with those most affected being people aged 16 to 30 years (Hafner & an der Heiden, 1997).
Sadly, these disorders often manifest right at time of the transition from adolescence to adulthood, just as young people should be evolving into independent young adults. The spectrum of psychotic disorders includes schizophrenia, schizoaffective disorder, delusional disorder, schizotypal personality disorder, schizophreniform disorder, brief psychotic disorder, as well as psychosis associated with substance use or medical conditions. In this module, we summarize the primary clinical features of these disorders, describe the known cognitive and neurobiological changes associated with schizophrenia, describe potential risk factors and/or causes for the development of schizophrenia, and describe currently available treatments for schizophrenia.
05: Schizophrenia and Related Psychotic Disorders
Section Learning Objectives
• Describe the signs and symptoms of schizophrenia and related psychotic disorders.
• Describe the most well-replicated cognitive and neurobiological changes associated with schizophrenia.
• Describe the potential risk factors for the development of schizophrenia.
• Describe the controversies associated with “clinical high risk” approaches to identifying individuals at risk for the development of schizophrenia.
• Describe the treatments that work for some of the symptoms of schizophrenia.
The Phenomenology of Schizophrenia and Related Psychotic Disorders
Most of you have probably had the experience of walking down the street in a city and seeing a person you thought was acting oddly. They may have been dressed in an unusual way, perhaps disheveled or wearing an unusual collection of clothes, makeup, or jewelry that did not seem to fit any particular group or subculture. They may have been talking to themselves or yelling at someone you could not see. If you tried to speak to them, they may have been difficult to follow or understand, or they may have acted paranoid or started telling a bizarre story about the people who were plotting against them. If so, chances are that you have encountered an individual with schizophrenia or another type of psychotic disorder. If you have watched the movie A Beautiful Mind or The Fisher King, you have also seen a portrayal of someone thought to have schizophrenia. Sadly, a few of the individuals who have committed some of the recent highly publicized mass murders may have had schizophrenia, though most people who commit such crimes do not have schizophrenia and the vast majority of people with schizophrenia are not dangerous. It is also likely that you have met people with schizophrenia without ever knowing it, as they may suffer in silence or stay isolated to protect themselves from the horrors they see, hear, or believe are operating in the outside world. As these examples begin to illustrate, psychotic disorders involve many different types of symptoms, including delusions, hallucinations, disorganized speech and behavior, abnormal motor behavior (including catatonia), and negative symptoms such as anhedonia/amotivation and blunted affect/reduced speech.
Delusions are false beliefs that are often fixed, hard to change even when the person is presented with conflicting information, and are often culturally influenced in their content (e.g., delusions involving Jesus in Judeo-Christian cultures, delusions involving Allah in Muslim cultures). They can be terrifying for the person, who may remain convinced that they are true even when loved ones and friends present them with clear information that they cannot be true. There are many different types or themes to delusions.
Under Surveillance: Abstract groups like the police or the government are commonly the focus of persecutory delusions in schizophrenia. [Image: Thomas Hawk, https://goo.gl/qsrqiR, CC BY-NC 2.0, https://goo.gl/VnKlK8]
The most common delusions are persecutory and involve the belief that individuals or groups are trying to hurt, harm, or plot against the person in some way. These can be people that the person knows (people at work, the neighbors, family members), or more abstract groups (the FBI, the CIA, aliens, etc.). Other types of delusions include grandiose delusions, where the person believes that they have some special power or ability (e.g., I am the new Buddha, I am a rock star); referential delusions, where the person believes that events or objects in the environment have special meaning for them (e.g., that song on the radio is being played specifically for me); or other types of delusions where the person may believe that others are controlling their thoughts and actions, their thoughts are being broadcast aloud, or that others can read their mind (or they can read other people’s minds).
When you see a person on the street talking to themselves or shouting at other people, they may be experiencing hallucinations. These are perceptual experiences that occur even when there is no stimulus in the outside world generating the experiences. They can be auditory, visual, olfactory (smell), gustatory (taste), or somatic (touch). The most common hallucinations in psychosis (at least in adults) are auditory, and can involve one or more voices talking about the person, commenting on the person’s behavior, or giving them orders. The content of the hallucinations is frequently negative (“you are a loser,” “that drawing is stupid,” “you should go kill yourself”) and can be the voice of someone the person knows or a complete stranger. Sometimes the voices sound as if they are coming from outside the person’s head. Other times the voices seem to be coming from inside the person’s head, but are not experienced the same as the person’s inner thoughts or inner speech.
People who suffer from schizophrenia may see the world differently. This can include hallucinations, delusions, and disorganized thinking. [Image: Noba Project CCBYNCSA 4.0 https://tinyurl.com/y3k6qoz4]
Talking to someone with schizophrenia is sometimes difficult, as their speech may be difficult to follow, either because their answers do not clearly flow from your questions, or because one sentence does not logically follow from another. This is referred to as disorganized speech, and it can be present even when the person is writing. Disorganized behavior can include odd dress, odd makeup (e.g., lipstick outlining a mouth for 1 inch), or unusual rituals (e.g., repetitive hand gestures). Abnormal motor behavior can include catatonia, which refers to a variety of behaviors that seem to reflect a reduction in responsiveness to the external environment. This can include holding unusual postures for long periods of time, failing to respond to verbal or motor prompts from another person, or excessive and seemingly purposeless motor activity.
Some of the most debilitating symptoms of schizophrenia are difficult for others to see. These include what people refer to as “negative symptoms” or the absence of certain things we typically expect most people to have. For example, anhedonia or amotivation reflect a lack of apparent interest in or drive to engage in social or recreational activities. These symptoms can manifest as a great amount of time spent in physical immobility. Importantly, anhedonia and amotivation do not seem to reflect a lack of enjoyment in pleasurable activities or events (Cohen & Minor, 2010; Kring & Moran, 2008; Llerena, Strauss, & Cohen, 2012) but rather a reduced drive or ability to take the steps necessary to obtain the potentially positive outcomes (Barch & Dowd, 2010). Flat affect and reduced speech (alogia) reflect a lack of showing emotions through facial expressions, gestures, and speech intonation, as well as a reduced amount of speech and increased pause frequency and duration.
In many ways, the types of symptoms associated with psychosis are the most difficult for us to understand, as they may seem far outside the range of our normal experiences. Unlike depression or anxiety, many of us may not have had experiences that we think of as on the same continuum as psychosis. However, just like many of the other forms of psychopathology described in this book, the types of psychotic symptoms that characterize disorders like schizophrenia are on a continuum with “normal” mental experiences. For example, work by Jim van Os in the Netherlands has shown that a surprisingly large percentage of the general population (10%+) experience psychotic-like symptoms, though many fewer have multiple experiences and most will not continue to experience these symptoms in the long run (Verdoux & van Os, 2002). Similarly, work in a general population of adolescents and young adults in Kenya has also shown that a relatively high percentage of individuals experience one or more psychotic-like experiences (~19%) at some point in their lives (Mamah et al., 2012; Ndetei et al., 2012), though again most will not go on to develop a full-blown psychotic disorder.
Schizophrenia is the primary disorder that comes to mind when we discuss “psychotic” disorders (see Table 1 for diagnostic criteria), though there are a number of other disorders that share one or more features with schizophrenia. In the remainder of this module, we will use the terms “psychosis” and “schizophrenia” somewhat interchangeably, given that most of the research has focused on schizophrenia. In addition to schizophrenia (see Table 1), other psychotic disorders include schizophreniform disorder (a briefer version of schizophrenia), schizoaffective disorder (a mixture of psychosis and depression/mania symptoms), delusional disorder (the experience of only delusions), and brief psychotic disorder (psychotic symptoms that last only a few days or weeks).
Table 5.1: Types of Psychotic Disorders (simplified from the Diagnostic and Statistical Manual, 5th Edition (DSM-5); APA, 2013)
The Cognitive Neuroscience of Schizophrenia
As described above, when we think of the core symptoms of psychotic disorders such as schizophrenia, we think of people who hear voices, see visions, and have false beliefs about reality (i.e., delusions). However, problems in cognitive function are also a critical aspect of psychotic disorders and of schizophrenia in particular. This emphasis on cognition in schizophrenia is in part due to the growing body of research suggesting that cognitive problems in schizophrenia are a major source of disability and loss of functional capacity (Green, 2006; Nuechterlein et al., 2011). The cognitive deficits that are present in schizophrenia are widespread and can include problems with episodic memory (the ability to learn and retrieve new information or episodes in one’s life), working memory (the ability to maintain information over a short period of time, such as 30 seconds), and other tasks that require one to “control” or regulate one’s behavior (Barch & Ceaser, 2012; Bora, Yucel, & Pantelis, 2009a; Fioravanti, Carlone, Vitale, Cinti, & Clare, 2005; Forbes, Carrick, McIntosh, & Lawrie, 2009; Mesholam-Gately, Giuliano, Goff, Faraone, & Seidman, 2009). Individuals with schizophrenia also have difficulty with what is referred to as “processing speed” and are frequently slower than healthy individuals on almost all tasks. Importantly, these cognitive deficits are present prior to the onset of the illness (Fusar-Poli et al., 2007) and are also present, albeit in a milder form, in the first-degree relatives of people with schizophrenia (Snitz, Macdonald, & Carter, 2006). This suggests that cognitive impairments in schizophrenia reflect part of the risk for the development of psychosis, rather than being an outcome of developing psychosis. 
Further, people with schizophrenia who have more severe cognitive problems also tend to have more severe negative symptoms and more disorganized speech and behavior (Barch, Carter, & Cohen, 2003; Barch et al., 1999; Dominguez Mde, Viechtbauer, Simons, van Os, & Krabbendam, 2009; Ventura, Hellemann, Thames, Koellner, & Nuechterlein, 2009; Ventura, Thames, Wood, Guzik, & Hellemann, 2010). In addition, people with more cognitive problems have worse function in everyday life (Bowie et al., 2008; Bowie, Reichenberg, Patterson, Heaton, & Harvey, 2006; Fett et al., 2011).
Some with schizophrenia suffer from difficulty with social cognition. They may not be able to detect the meaning of facial expressions or other subtle cues that most other people rely on to navigate the social world. [Image: Ralph Buckley, https://goo.gl/KuBzsD, CC BY-SA 2.0, https://goo.gl/i4GXf5]
Some people with schizophrenia also show deficits in what is referred to as social cognition, though it is not clear whether such problems are separate from the cognitive problems described above or the result of them (Hoe, Nakagami, Green, & Brekke, 2012; Kerr & Neale, 1993; van Hooren et al., 2008). This includes problems with the recognition of emotional expressions on the faces of other individuals (Kohler, Walker, Martin, Healey, & Moberg, 2010) and problems inferring the intentions of other people (theory of mind) (Bora, Yucel, & Pantelis, 2009b). Individuals with schizophrenia who have more problems with social cognition also tend to have more negative and disorganized symptoms (Ventura, Wood, & Hellemann, 2011), as well as worse community function (Fett et al., 2011).
The advent of neuroimaging techniques such as structural and functional magnetic resonance imaging and positron emission tomography opened up the ability to try to understand the brain mechanisms of the symptoms of schizophrenia as well as the cognitive impairments found in psychosis. For example, a number of studies have suggested that delusions in psychosis may be associated with problems in “salience” detection mechanisms supported by the ventral striatum (Jensen & Kapur, 2009; Jensen et al., 2008; Kapur, 2003; Kapur, Mizrahi, & Li, 2005; Murray et al., 2008) and the anterior prefrontal cortex (Corlett et al., 2006; Corlett, Honey, & Fletcher, 2007; Corlett, Murray, et al., 2007a, 2007b). These are regions of the brain that normally increase their activity when something important (aka “salient”) happens in the environment. If these brain regions misfire, it may lead individuals with psychosis to mistakenly attribute importance to irrelevant or unconnected events. Further, there is good evidence that problems in working memory and cognitive control in schizophrenia are related to problems in the function of a region of the brain called the dorsolateral prefrontal cortex (DLPFC) (Minzenberg, Laird, Thelen, Carter, & Glahn, 2009; Ragland et al., 2009). These problems include changes in how the DLPFC works when people are doing working-memory or cognitive-control tasks, and problems with how this brain region is connected to other brain regions important for working memory and cognitive control, including the posterior parietal cortex (e.g., Karlsgodt et al., 2008; J. J. Kim et al., 2003; Schlosser et al., 2003), the anterior cingulate (Repovs & Barch, 2012), and temporal cortex (e.g., Fletcher et al., 1995; Meyer-Lindenberg et al., 2001). In terms of understanding episodic memory problems in schizophrenia, many researchers have focused on medial temporal lobe deficits, with a specific focus on the hippocampus (e.g., Heckers & Konradi, 2010). 
This is because there is much data from humans and animals showing that the hippocampus is important for the creation of new memories (Squire, 1992). However, it has become increasingly clear that problems with the DLPFC also make important contributions to episodic memory deficits in schizophrenia (Ragland et al., 2009), probably because this part of the brain is important for controlling our use of memory.
In addition to problems with regions such as the DLPFC and medial temporal lobes in schizophrenia described above, magnetic resonance neuroimaging studies have also identified changes in cellular architecture, white matter connectivity, and gray matter volume in a variety of regions that include the prefrontal and temporal cortices (Bora et al., 2011). People with schizophrenia also show reduced overall brain volume, and reductions in brain volume as people get older may be larger in those with schizophrenia than in healthy people (Olabi et al., 2011). Taking antipsychotic medications or taking drugs such as marijuana, alcohol, and tobacco may cause some of these structural changes. However, these structural changes are not completely explained by medications or substance use alone. Further, both functional and structural brain changes are seen, again to a milder degree, in the first-degree relatives of people with schizophrenia (Boos, Aleman, Cahn, Pol, & Kahn, 2007; Brans et al., 2008; Fusar-Poli et al., 2007; MacDonald, Thermenos, Barch, & Seidman, 2009). This again suggests that the neural changes associated with schizophrenia are related to a genetic risk for this illness.
Risk Factors for Developing Schizophrenia
It is clear that there are important genetic contributions to the likelihood that someone will develop schizophrenia, with consistent evidence from family, twin, and adoption studies (Sullivan, Kendler, & Neale, 2003). However, there is no “schizophrenia gene” and it is likely that the genetic risk for schizophrenia reflects the summation of many different genes that each contribute something to the likelihood of developing psychosis (Gottesman & Shields, 1967; Owen, Craddock, & O’Donovan, 2010). Further, schizophrenia is a very heterogeneous disorder, which means that two different people with “schizophrenia” may each have very different symptoms (e.g., one has hallucinations and delusions, the other has disorganized speech and negative symptoms). This makes it even more challenging to identify specific genes associated with risk for psychosis. Importantly, many studies also now suggest that at least some of the genes potentially associated with schizophrenia are also associated with other mental health conditions, including bipolar disorder, depression, and autism (Gejman, Sanders, & Kendler, 2011; Y. Kim, Zerwas, Trace, & Sullivan, 2011; Owen et al., 2010; Rutter, Kim-Cohen, & Maughan, 2006).
There are a number of genetic and environmental risk factors associated with higher likelihood of developing schizophrenia including older fathers, complications during pregnancy/delivery, family history of schizophrenia, and growing up in an urban environment. [Image: CC0 Public Domain]
There are also a number of environmental factors that are associated with an increased risk of developing schizophrenia. For example, problems during pregnancy such as increased stress, infection, malnutrition, and/or diabetes have been associated with increased risk of schizophrenia. In addition, complications that occur at the time of birth and which cause hypoxia (lack of oxygen) are also associated with an increased risk for developing schizophrenia (M. Cannon, Jones, & Murray, 2002; Miller et al., 2011). Children born to older fathers are also at a somewhat increased risk of developing schizophrenia. Further, using cannabis increases risk for developing psychosis, especially if you have other risk factors (Casadio, Fernandes, Murray, & Di Forti, 2011; Luzi, Morrison, Powell, di Forti, & Murray, 2008). The likelihood of developing schizophrenia is also higher for kids who grow up in urban settings (March et al., 2008) and for some minority ethnic groups (Bourque, van der Ven, & Malla, 2011). Both of these factors may reflect higher social and environmental stress in these settings. Unfortunately, none of these risk factors is specific enough to be particularly useful in a clinical setting, and most people with these “risk” factors do not develop schizophrenia. However, together they are beginning to give us clues as to the neurodevelopmental factors that may lead someone to be at an increased risk for developing this disease.
An important research area on risk for psychosis has been work with individuals who may be at “clinical high risk.” These are individuals who are showing attenuated (milder) symptoms of psychosis that have developed recently and who are experiencing some distress or disability associated with these symptoms. When people with these types of symptoms are followed over time, about 35% of them develop a psychotic disorder (T. D. Cannon et al., 2008), most frequently schizophrenia (Fusar-Poli, McGuire, & Borgwardt, 2012). In order to identify these individuals, a new category of diagnosis, called “Attenuated Psychotic Syndrome,” was added to Section III (the section for disorders in need of further study) of the DSM-5 (see Table 1 for symptoms) (APA, 2013). However, adding this diagnostic category to the DSM-5 created a good deal of controversy (Batstra & Frances, 2012; Fusar-Poli & Yung, 2012). Many scientists and clinicians have been worried that including “risk” states in the DSM-5 would create mental disorders where none exist, that these individuals are often already seeking treatment for other problems, and that it is not clear that we have good treatments to stop these individuals from progressing to psychosis. However, the counterarguments have been that there is evidence that individuals with high-risk symptoms develop psychosis at a much higher rate than individuals with other types of psychiatric symptoms, and that the inclusion of Attenuated Psychotic Syndrome in Section III will spur important research that might have clinical benefits. Further, there is some evidence that non-invasive treatments such as omega-3 fatty acids and intensive family intervention may help reduce the development of full-blown psychosis (Preti & Cella, 2010) in people who have high-risk symptoms.
Treatment of Schizophrenia
The currently available treatments for schizophrenia leave much to be desired, and the search for more effective treatments for both the psychotic symptoms of schizophrenia (e.g., hallucinations and delusions) as well as cognitive deficits and negative symptoms is a highly active area of research. The first line of treatment for schizophrenia and other psychotic disorders is the use of antipsychotic medications. There are two primary types of antipsychotic medications, referred to as “typical” and “atypical.” The fact that “typical” antipsychotics helped some symptoms of schizophrenia was discovered serendipitously more than 60 years ago (Carpenter & Davis, 2012; Lopez-Munoz et al., 2005). These drugs all share the common feature of being strong blockers of the D2-type dopamine receptor. Although these drugs can help reduce hallucinations, delusions, and disorganized speech, they do little to improve cognitive deficits or negative symptoms and can be associated with distressing motor side effects. The newer generation of antipsychotics is referred to as “atypical” antipsychotics. These drugs have more mixed mechanisms of action in terms of the receptor types that they influence, though most of them also influence D2 receptors. These newer antipsychotics are not necessarily more helpful for schizophrenia but have fewer motor side effects. However, many of the atypical antipsychotics are associated with side effects referred to as the “metabolic syndrome,” which includes weight gain and increased risk for cardiovascular illness, Type-2 diabetes, and mortality (Lieberman et al., 2005).
The evidence that cognitive deficits also contribute to functional impairment in schizophrenia has led to an increased search for treatments that might enhance cognitive function in schizophrenia. Unfortunately, as of yet, there are no pharmacological treatments that work consistently to improve cognition in schizophrenia, though many new types of drugs are currently under exploration. However, there is a type of psychological intervention, referred to as cognitive remediation, which has shown some evidence of helping cognition and function in schizophrenia. In particular, a version of this treatment called Cognitive Enhancement Therapy (CET) has been shown to improve cognition, functional outcome, social cognition, and to protect against gray matter loss (Eack et al., 2009; Eack, Greenwald, Hogarty, & Keshavan, 2010; Eack et al., 2010; Eack, Pogue-Geile, Greenwald, Hogarty, & Keshavan, 2010; Hogarty, Greenwald, & Eack, 2006) in young individuals with schizophrenia. The development of new treatments such as Cognitive Enhancement Therapy provides some hope that we will be able to develop new and better approaches to improving the lives of individuals with this serious mental health condition and potentially even prevent it some day.
Outside Resources
Book: Ben Behind His Voices: One family’s journey from the chaos of schizophrenia to hope (2011). Randye Kaye. Rowman and Littlefield.
Book: Conquering Schizophrenia: A father, his son, and a medical breakthrough (1997). Peter Wyden. Knopf.
Book: Henry’s Demons: Living with schizophrenia, a father and son’s story (2011). Henry and Patrick Cockburn. Scribner Macmillan.
Book: My Mother’s Keeper: A daughter’s memoir of growing up in the shadow of schizophrenia (1997). Tara Elgin Holley. William Morrow Co.
Book: Recovered, Not Cured: A journey through schizophrenia (2005). Richard McLean. Allen and Unwin.
Book: The Center Cannot Hold: My journey through madness (2008). Elyn R. Saks. Hyperion.
Book: The Quiet Room: A journey out of the torment of madness (1996). Lori Schiller. Grand Central Publishing.
Book: Welcome Silence: My triumph over schizophrenia (2003). Carol North. CSS Publishing.
Web: National Alliance for the Mentally Ill. This is an excellent site for learning more about advocacy for individuals with major mental illnesses such as schizophrenia. http://www.nami.org/
Web: National Institute of Mental Health. This website has information on NIMH-funded schizophrenia research. http://www.nimh.nih.gov/health/topics/schizophrenia/index.shtml
Web: Schizophrenia Research Forum. This is an excellent website that contains a broad array of information about current research on schizophrenia. http://www.schizophreniaforum.org/
Discussion Questions
1. Describe the major differences between the major psychotic disorders.
2. How would one be able to tell when an individual is “delusional” versus holding non-delusional beliefs that simply differ from societal norms? How should cultural and subcultural variation be taken into account when assessing psychotic symptoms?
3. Why are cognitive impairments important to understanding schizophrenia?
4. Why has the inclusion of a new diagnosis (Attenuated Psychotic Syndrome) in Section III of the DSM-5 created controversy?
5. What are some of the factors associated with increased risk for developing schizophrenia? If we know whether or not someone has these risk factors, how well can we tell whether they will develop schizophrenia?
6. What brain changes are most consistent in schizophrenia?
7. Do antipsychotic medications work well for all symptoms of schizophrenia? If not, which symptoms respond better to antipsychotic medications?
8. Are there any treatments besides antipsychotic medications that help any of the symptoms of schizophrenia? If so, what are they?
5.02: Summary and Self-Test- Schizophrenia and Related Psychotic Disorders
Summary
Schizophrenia and the related psychotic disorders are some of the most impairing forms of psychopathology. Psychotic disorders involve many different types of symptoms that involve altered cognition and perception.
Symptoms include delusions, which are false beliefs that are often fixed, and hallucinations, which are perceptual experiences that occur without stimulus from the outside world generating them. Other symptoms include disorganized speech and behaviour, flat affect, alogia, catatonia, and lack of motivation.
Problems in cognitive functioning are a critical aspect of psychotic disorders, and a major source of disability and loss of functional capacity. These include problems with episodic memory, working memory, and processing speed. Some people with schizophrenia also show deficits in social cognition.
There are important genetic contributions to the likelihood someone will develop schizophrenia, but it is important to know there is no “schizophrenia gene.” Like most forms of psychopathology, the genetic risk for schizophrenia reflects the summation of many different genes.
Environmental factors, such as stress, infection, malnutrition, and diabetes during pregnancy, can also increase the risk of developing schizophrenia. Birth complications that cause hypoxia (lack of oxygen) are also associated with an increased risk for schizophrenia.
Using cannabis increases risk for developing psychosis, especially if you have other risk factors. The likelihood is also higher for children who grow up in urban settings and for some minority ethnic groups.
Unfortunately, none of these risk factors are specific enough to be used in a clinical setting.
An important area of research is with individuals who are at “clinical high risk” for psychosis. These are individuals who show milder symptoms that have developed recently and who are experiencing some distress or disability. When followed over time, about 35% of these individuals develop a psychotic disorder.
Schizophrenia is treated with antipsychotic medication. Newer antipsychotics have fewer side effects. Schizophrenia is also treated with Cognitive Enhancement Therapy, which has been shown to improve cognition, functional outcome, and social cognition.
An interactive or media element has been excluded from this version of the text. You can view it online here:
https://openpress.usask.ca/abnormalpsychology/?p=408
Trauma- and stressor-related disorders occur in response to exposure to a traumatic or very stressful negative event, like sexual abuse, a natural disaster, a car accident, or violent assault. How we respond to trauma is variable, with some reactions and disorders clearly being based on anxiety and fear, but with other reactions being marked by anhedonia and dysphoria (APA, 2013). The DSM-5 trauma- and stressor-related disorders section includes two childhood disorders (Reactive Attachment Disorder and Disinhibited Social Engagement Disorder), as well as Posttraumatic Stress Disorder (PTSD) and Acute Stress Disorder. In this chapter, we will focus on PTSD. You will read about the symptoms of PTSD, some of the vulnerabilities that people might possess for developing PTSD, as well as how we treat PTSD.
06: Post-traumatic Stress Disorder
Learning Objectives
• Describe the diagnostic criteria for posttraumatic stress disorder (PTSD) in adults and in children
• Identify the predictors or potential risk factors for the development of PTSD
• Outline empirically supported treatments for PTSD
• Describe the difference between strongly recommended treatments and conditionally supported treatments
In a nationally representative sample of Canadians aged 18 years and over, at least 76% of participants reported being exposed to at least one traumatic event in their lifetime (Van Ameringen, Mancini, Patterson, & Boyle, 2008). Traumatic events are defined by the DSM-5 as “exposure to actual or threatened death, serious injury, or sexual violence” (p. 271). The most commonly reported events were the unexpected death of a loved one, sexual assault, and witnessing someone being seriously harmed or killed. However, only about 8% of Canadians who experience a traumatic event develop PTSD (Canadian Mental Health Association, 2013). Unfortunately, like adults, children are also exposed to high rates of trauma. At least 30% of Canadians self-report that they experienced physical and/or sexual abuse and/or exposure to intimate partner violence before the age of 15 (Afifi et al., 2014; Canadian Centre for Justice Statistics, 2017).
In Canada, the current and lifetime prevalence rates of PTSD are 2.4% and 9.2%, respectively (Van Ameringen et al., 2008). Rates of PTSD were higher for people living in rural areas, Western Canada, and Ontario, and the risk for developing PTSD was significantly lower among males (Van Ameringen, Mancini, Patterson, & Boyle, 2008). The rates of PTSD are especially high among Canadian veterans. According to Statistics Canada, the lifetime and 12-month prevalence of PTSD among Canadian Armed Forces members was 11.1% and 5.3%, respectively (Caryn, Zamorski, & Janz, 2014). Of concern, the 12-month prevalence of PTSD is twice as high among members who were deployed in Afghanistan compared to those who were not (Caryn et al., 2014).
Symptoms of PTSD
According to the DSM-5, for a person to receive a diagnosis of Post-Traumatic Stress Disorder (PTSD), they must meet the following 8 criteria (APA, 2013). First, as mentioned, the person must have been exposed to a traumatic or stressful event such as actual or threatened death, serious bodily harm, or sexual violence. The person may have experienced the event themselves, witnessed it happening to somebody else, or learned that a close family member or friend was exposed to a trauma (APA, 2013). Second, the person has intrusive symptoms such that they re-experience the trauma, for example through unwanted memories, nightmares, or flashbacks that are related to the traumatic event. These symptoms are not within the person’s control, which can be particularly distressing for those with PTSD.
Third, the person avoids trauma-related stimuli, both internal (e.g., thoughts, emotions, memories) and external (e.g., people, places, objects). They do so in order to avoid the overwhelming fear response that arises when they are around trauma-related stimuli. For some people with PTSD, exposure to trauma-related stimuli can lead to an increase in intrusive thoughts, nightmares, or flashbacks. Some examples of things that people might avoid include certain locations, people, conversations or memories, rooms in their homes, etc.
Fourth, the person experiences negative changes in mood or cognition related to the traumatic event (e.g., inability to remember important parts of the event, exaggerated negative beliefs, negative emotions and the inability to experience positive emotions). Fifth, the person experiences significant changes in arousal and behaviour (e.g., irritability, hypervigilance, sleep disturbance) (APA, 2013). For example, it is not uncommon for individuals with PTSD to experience insomnia or to be hypervigilant to concerns about safety. This overarousal sometimes results in feeling tense, “keyed up” or on edge. It is also common for individuals with PTSD to have exaggerated startle responses, compared to people without PTSD.
Sixth, the disturbances in mood, cognition, and behaviour must occur for at least 1 month, and seventh, they must cause clinically significant distress or impairment in important areas of functioning (e.g., social, occupational). Eighth, the disturbances should not be better explained by the effects of a substance or another medical condition. In addition to making a diagnosis of PTSD, a psychologist can specify if the person also has symptoms of dissociation and/or if they have delayed expression of symptoms (i.e., full diagnostic criteria are not met until at least 6 months after the traumatic event) (APA, 2013).
The DSM-5 has separate diagnostic criteria for children 6 years and younger. Some important differences are that in young children, intrusive memories may not look the same as they do in adults. In children, intrusive memories can be expressed through repetitive play. Children can also experience less interest in play, an exaggerated startle response, and they may have extreme temper tantrums (APA, 2013).
Predictors of PTSD
Why do some individuals, when exposed to trauma, develop PTSD but others do not? In this section we will discuss just a few of the variables that influence the development of PTSD.
Centrality of Events
In Canada, it is estimated that 75.9% of individuals will experience a traumatic event in their lifetime, but the lifetime rate of PTSD in Canada is only 9.2% (Van Ameringen et al., 2008). Therefore, not everyone who is exposed to a traumatic event will develop PTSD. The discrepancy between the rate of trauma exposure and the rate of PTSD has led researchers to try to identify factors that increase the likelihood of developing PTSD after exposure to a trauma. One such identified factor is event centrality (Berntsen & Rubin, 2006), or how central we come to see that event as being to our lives, memories, and identity. The centrality of events scale (CES) was introduced by Berntsen and Rubin (2006) to measure the extent to which a memory for a trauma becomes a reference point for one’s identity, life story, and the attribution of meaning to other experiences. The CES has a full 20-item version and a short-form 7-item version. Both have high reliability and validity (Berntsen & Rubin, 2006). The CES has three factors. It measures the extent to which the individual’s traumatic memory: 1) becomes a reference point for everyday inferences; 2) represents a turning point in the individual’s life story; and 3) becomes a reference point for their personal identity. Each of these factors is positively related to PTSD (Robinaugh & McNally, 2011).
Berntsen and Rubin (2006) discussed why each factor of the CES may contribute to symptoms of PTSD. Berntsen and Rubin (2006) proposed that the availability heuristic (Tversky & Kahneman, 1973) helps to explain the relationship between the first factor and PTSD. For example, if the trauma memories are highly accessible, then the individual will overestimate the frequency of such events in everyday life, leading to unnecessary worries, precautions, and other traumatization symptoms (Berntsen & Rubin, 2006). The second factor was developed from research on how trauma can profoundly change a person’s outlook (Janoff-Bulman, 1989). Berntsen and Rubin (2006) proposed that symptoms of PTSD may be exacerbated when the individual focuses on aspects of their life that can be explained by referencing this turning point in the life story, while discounting aspects that defy these references (Berntsen & Rubin, 2006). Lastly, the third factor was developed from research suggesting that an individual may perceive a trauma as causally related to a stable characteristic of the self (Abramson, Seligman, & Teasdale, 1978; Berntsen & Rubin, 2006). Therefore, this factor is proposed to be related to PTSD when individuals attribute the trauma to stable negative identity characteristics (Berntsen & Rubin, 2006). Overall, research on event centrality supports the autobiographical memory model of PTSD, which proposes that PTSD symptoms result from the over-integration of the trauma into one’s memory, identity, and understanding of the world (Berntsen & Rubin, 2006; Rubin, Berntsen, & Bohni, 2008; Rubin, Boals, & Berntsen, 2008).
Since the construction of the centrality of events scale (Berntsen & Rubin, 2006), research has demonstrated a robust positive relationship between event centrality and PTSD for a range of trauma types and participant populations (Gehrt, Berntsen, Hoyle, & Rubin, 2018). For example, the positive relationship between event centrality and PTSD has been found for individuals exposed to child sexual abuse (Robinaugh & McNally, 2011), military combat (Brown, Antonius, Kramer, Root, & Hirst, 2010), terrorist attacks/bombings (Blix, Solberg, & Heir, 2014), physical injury or assault/abuse, illness, exposure to death, sexual assault/abuse, accidents, and natural disasters (Teale Sapach et al., 2019; Barton, Boals, & Knowles, 2013). The positive relationship between event centrality and PTSD has also been found for a range of participant samples, including community members (Rubin, Dennis, & Beckham, 2011; Ogle et al., 2014), undergraduate students (Barton et al., 2013; Berntsen & Rubin, 2006; Broadbridge, 2018; Fitzgerald, Berntsen, & Broadbridge, 2016), treatment-seeking individuals (Boals & Murrel, 2016; Silva et al., 2016), and military veterans (Brown et al., 2010). This relationship between event centrality and PTSD is also evident for adults ranging in age from 18 to 93 years (Barton et al., 2013; Berntsen, Rubin, & Siegler, 2011; Wamser-Nanney, 2019; Ogle et al., 2013; Boals, Hayslip, Knowles, & Banks, 2012). However, there are nuances in the relationship between event centrality and PTSD for certain participant characteristics. For instance, younger adults (Boals et al., 2012) and women (Boals, 2010) are more likely to centralize a traumatic event and develop PTSD compared to older adults and men, respectively. Therefore, the difference in event centrality may help to explain the higher prevalence of PTSD in these populations (i.e., young adults and women; Van Ameringen et al., 2008).
Trauma Type & Social Support
There are certain types of trauma that have a greater impact on the development and maintenance of PTSD. Interpersonal traumatic events that are purposefully caused by other people contribute the most to PTSD risk and symptom severity. Events that occur by accident or by natural disaster have far less impact on the risk for PTSD compared to interpersonal traumas (Charuvastra & Cloitre, 2008). There are several reasons why interpersonal traumas are so powerful in increasing a person’s risk and severity of PTSD. In interpersonal traumas, the appraisal of threat tends to be higher, and people tend to experience a higher level of distress and a decreased sense of safety in the world. In addition, interpersonal traumas can affect people’s ability to effectively interact with others (Charuvastra & Cloitre, 2008).
Social support before and after an exposure to a traumatic event plays an important role in determining a person’s risk and severity of PTSD (Charuvastra & Cloitre, 2008). Social support helps people to effectively regulate their emotions, which is central for recovery from PTSD. If a person is not able to effectively manage intense emotions and memories, they are more likely to re-experience traumatic events and use avoidance as a way to cope with difficult emotional experiences. Social support plays an important role throughout life. In childhood, the bond between the caregiver and child helps to establish a sense of safety and emotion regulation. Abuse during childhood is a significant risk factor for PTSD later in life, and it plays an important role in dysregulating the stress response system (Charuvastra & Cloitre, 2008).
Positive social interactions act as a protective factor against stress (Charuvastra & Cloitre, 2008). The value of social support lies in the perceived helpfulness and sense of connectedness with others. It is not the quantity of social support that is protective against PTSD, but rather the match between what the person needs and the type of support that is offered. Social support can decrease feelings of distress and increase safety and a sense of belonging. If a person feels isolated, ostracized, blamed, or unsupported by their relationships, this can contribute to the onset and severity of PTSD symptoms (Charuvastra & Cloitre, 2008). Negative relationships can reinforce the belief that the world is a place that is unsafe and harmful.
Genetic & Biological Risk Factors
In a review on the biological risk factors for PTSD, Yahyavi, Zarghami, and Marwah (2014) found that the risk for PTSD can begin in utero. The HPA axis, which plays an important role in the stress response, is greatly affected by early development. Maternal exposure to trauma, for example, can lead to changes in the fetal brain that disrupt gene expression. An example of this is DNA methylation, which re-programs the activity of genes and impacts a person’s response to stress by activating the sympathetic nervous system and causing dysfunction in the HPA axis (Yahyavi et al., 2014). Changes in these biological systems disrupt emotion regulation and the ability to effectively manage stress. However, there is growing consensus that genetic markers do not act in isolation but interact with environmental factors to impact a person’s vulnerability to developing PTSD (Klengel & Binder, 2015). In addition, the genetic risk factors for PTSD are complex, and the biological pathways for this disorder are not fully understood (Sharma & Ressler, 2019).
Treatments for PTSD
The American Psychological Association (APA) has developed a list of empirically supported treatments (ESTs) that are indicated for the treatment of PTSD. Within this list, the APA differentiates between treatments that are conditionally recommended and strongly recommended. Treatments that are conditionally recommended all have evidence that indicates that they can lead to good treatment outcomes. However, the evidence may not be as strong, the balance of treatment benefits and possible harms may be less favorable, or the intervention may be less applicable across treatment settings or subgroups of individuals with PTSD (APA, 2017). Additional research on these conditionally recommended treatments might lead, with time, to a change in the strength of recommendations in future guidelines. Treatments that are strongly recommended all have strong evidence that they lead to good treatment outcomes, that the balance of treatment benefits and possible harms are favorable for the client, and have been found to be applicable across treatment settings and subgroups for individuals with PTSD (APA, 2017).
Strongly Recommended Treatments
At present, the APA strongly recommends four treatments for individuals with PTSD, all of which are variations of Cognitive Behavioural Therapy (CBT). These treatments include: Prolonged Exposure Therapy, Cognitive Processing Therapy, Cognitive Therapy, and traditional Cognitive Behavioural Therapy (APA, 2017). CBT is a form of therapy that focuses on how individuals’ thoughts, behaviours, and emotions are interrelated. The therapist works with the client to identify thoughts, behaviours, and emotions which might be having negative effects on the client’s wellbeing and uses various skills to alter these as needed. As applied to trauma, oftentimes this takes the form of helping clients learn how to modify and challenge unhelpful beliefs related to the trauma. Modifying and challenging these unhelpful beliefs is meant to change the client’s emotional and behavioural reactions into ones that are more positive. Oftentimes a technique called exposure is incorporated into the abovementioned treatments. Exposure is a process whereby the client gradually approaches trauma-related memories, feelings, and situations. It can be conducted in a number of ways, including describing the trauma narrative aloud, listening to an audio recording of the trauma narrative, writing out the trauma narrative and/or reading it aloud, and physically going to situations which are feared and/or reminders of the trauma. These different methods of exposure are often referred to as imaginal exposure (occurring within the imagination) and in-vivo exposure (occurring in real life). By facing what has been avoided, the client presumably will learn that the trauma-related memories and cues are not dangerous and do not need to be avoided. By extension, any associated distressing thoughts, feelings, and sensations will be diminished.
There have been various studies performed with the intention of understanding how well these treatments for PTSD work and, as mentioned, they all have strong evidence to support them.
Individuals randomly assigned to exposure therapy have significantly greater pre- to posttreatment reductions in PTSD symptoms compared to supportive counseling (Bryant et al., 2003; Schnurr et al., 2007), relaxation training (Marks et al., 1998; Taylor et al., 2003), and treatment as usual including pharmacotherapy (Asukai et al., 2010). A meta-analysis on the effectiveness of PE for PTSD showed that clients treated with PE fared better than 86% of patients in control conditions on PTSD symptoms at the end of treatment (Powers et al., 2010). Furthermore, among PE participants, 41% to 95% lost their PTSD diagnosis at the end of treatment (Jonas et al., 2016), and 66% more participants treated with exposure therapy achieved loss of PTSD diagnosis, compared to those in waitlist control groups (Jonas et al., 2016).
CPT has been found to influence a clinically significant reduction in PTSD, depression, and anxiety symptoms in sexual assault and Veteran samples, with results maintained at 5 and 10 year post treatment follow-up (Cusack et al., 2016; Resick et al., 2012; Watts et al., 2013). Furthermore, rates of participants who no longer met PTSD diagnosis criteria ranged from 30% to 97% and 51% more participants treated with CPT achieved loss of PTSD diagnosis, compared to waitlist, self-help booklet and usual care control groups (Jonas et al., 2016).
Traditional CBTs have also been shown to be more effective than a waitlist (Power et al., 2002), supportive therapy (Blanchard et al., 2003) and a self-help booklet (Ehlers et al., 2003). Researchers have also compared various components of CBT (i.e., imaginal exposure, in vivo exposure, cognitive restructuring) with some mixed results. Marks et al. (1998) compared exposure therapy (that included five sessions of imaginal exposure and five sessions of in vivo exposure), cognitive restructuring, combined exposure therapy and cognitive restructuring, and relaxation in an RCT. Exposure and cognitive restructuring were each effective in reducing PTSD symptoms and were superior to relaxation. Exposure and cognitive restructuring were not mutually enhancing when combined. Furthermore, research suggests that 61% to 82.4% of participants treated with traditional CBT lost their PTSD diagnosis and 26% more CBT participants than waitlist or supportive counseling achieved loss of PTSD diagnosis (Jonas et al., 2016).
Conditionally Recommended Treatments
There are also a number of treatments which the APA indicates are conditionally recommended for the treatment of PTSD. These include Eye Movement Desensitization and Reprocessing Therapy (EMDR), Narrative Exposure Therapy (NET), and medication (APA, 2017). When utilizing EMDR, the client is asked to focus on the trauma memory while simultaneously experiencing bilateral stimulation (typically tracking the therapist’s finger with their eyes; APA, 2017). This is thought to be associated with a reduction in the vividness and emotion associated with the trauma memories (APA, 2017). With NET, a client establishes a chronological narrative of their life. They are told to concentrate mainly on their traumatic experience(s), but also incorporate some positive events (APA, 2017). NET therapists posit that this process contextualizes the network of cognitive, affective, and sensory memories of a client’s trauma (APA, 2017). When the client expresses the narrative, they are able to fill in details of the trauma memories, which are often fragmented, and this helps them to develop a coherent autobiographical story (APA, 2017). In so doing, the memory of a traumatic episode is refined and understood, and symptoms are believed to be reduced (APA, 2017).
In addition to psychological treatments, four medications have received a conditional recommendation for use in the treatment of PTSD. These include the Selective Serotonin Reuptake Inhibitors (SSRIs) sertraline, paroxetine, and fluoxetine, and the selective serotonin and norepinephrine reuptake inhibitor (SNRI) venlafaxine (APA, 2017). Currently only sertraline (Zoloft) and paroxetine (Paxil) are approved by the Food and Drug Administration (FDA) for PTSD (APA, 2017). From the FDA perspective, all other medication uses are “off label,” though there are differing levels of evidence supporting their use. The SSRIs work by inhibiting the presynaptic reuptake of serotonin, and the SNRI inhibits the reuptake of both serotonin and norepinephrine, thereby increasing the availability of these neurotransmitters in the brain.
As noted above, the evidence for the efficacy of these three treatments is conditional. EMDR received a conditional recommendation because there is a low strength of evidence for the critical outcome of PTSD symptom reduction (APA, 2017). However, research suggests that EMDR is effective for loss of PTSD diagnosis and prevention/reduction of comorbid depression (APA, 2017). Thus, the APA (2017) recommends that clinicians offer EMDR rather than no intervention. NET has also received a conditional recommendation because, despite evidence of a large/medium magnitude of benefit for the critical outcome of PTSD symptom reduction, there was low or insufficient/very low strength of evidence for all other important benefit outcomes (e.g., remission or loss of PTSD diagnosis, or reduction/prevention of comorbid depression). However, research suggests that NET is effective at reducing PTSD symptoms (APA, 2017). Similarly, the APA (2017) suggests that clinicians offer NET, as opposed to no treatment.
Last, with regards to psychopharmacological treatments, the APA (2017) suggests that the medications noted above all be offered, compared to no intervention. Fluoxetine has been found to reduce PTSD symptoms and prevent/reduce comorbid depression and anxiety, with the benefits slightly outweighing the harms (APA, 2017). Paroxetine has been found to reduce PTSD symptoms, contribute to PTSD remission, and prevent/reduce comorbid depression and disability/functional impairment, with the benefits clearly outweighing the harms (APA, 2017). Sertraline has been found to assist with PTSD symptom reduction, with benefits slightly outweighing the harms (APA, 2017). Last, venlafaxine has been found to assist with PTSD symptom reduction and with remission, with the benefits slightly outweighing the harms (APA, 2017).
Overall, the APA (2017) posits that the panel’s recommendations would be unlikely to change if the meta-analyses reported in the systematic review were updated to include new trials. However, they note that EMDR and NET are exceptions, and that it is possible that their recommendations might change pending additional research on these two treatment modalities.
6.02: Summary and Self-Test- Post-traumatic Stress Disorder
Summary
Over 76% of Canadians report being exposed to a traumatic event at least once in their lifetime. Traumatic events are those that expose someone to actual or threatened death, serious injury, or sexual violence. About 8% of Canadians exposed to a traumatic event will develop post-traumatic stress disorder (PTSD).
Example traumatic events include sexual assault, witnessing domestic or community violence, or military combat.
People with PTSD experience intrusive symptoms such that they re-experience the traumatic event, for example via unwanted memories, nightmares, or flashbacks of the event. These symptoms are not within someone’s control, which can be especially distressing.
PTSD is also marked by avoidance of trauma-related stimuli that might remind the person of the traumatic event(s). People with PTSD engage in this avoidance in order to stay away from the overwhelming fear response that arises when they are around trauma-related stimuli.
Another type of symptom is the negative changes in mood or cognition related to the traumatic event such as an inability to remember important parts of the event, exaggerated negative beliefs about it, negative emotions, and the inability to experience positive emotions.
People with PTSD also experience significant changes in arousal and behaviour such as irritability, hypervigilance, and sleep disturbances. This overarousal sometimes results in them feeling tense, “keyed up,” or on edge. It is also common for people with PTSD to have exaggerated startle responses, for example to loud unexpected noises.
The DSM-5 has separate diagnostic criteria for children younger than 6. Some important differences are that in young children intrusive memories might be expressed through repetitive play.
Event centrality refers to how central we come to see a traumatic event as being to our lives, memories, and identities. The centrality of events scale measures the extent to which a memory for a traumatic event becomes a reference point for one’s identity, life story, and the attribution of meaning to other experiences.
According to the autobiographical memory model of PTSD, symptoms result from the over integration of the trauma into one’s memory, identity, and understanding of the world. Event centrality is positively associated with PTSD.
Interpersonal traumatic events that are purposefully caused by other people are most likely to lead to PTSD. Social support helps people to effectively manage their emotions, and it is central both for preventing the onset of PTSD and for helping with recovery.
The American Psychological Association (APA) has developed a list of empirically supported treatments for PTSD. They divide them into strongly recommended treatments and conditionally recommended treatments, based on how convincing the research is to support them.
Strongly recommended treatments for PTSD include several versions of cognitive behavioural therapy (CBT), including Prolonged Exposure (PE) and Cognitive Processing Therapy (CPT). Conditionally recommended treatments include Eye Movement Desensitization and Reprocessing Therapy (EMDR), Narrative Exposure Therapy, and medication.
An interactive or media element has been excluded from this version of the text. You can view it online here:
https://openpress.usask.ca/abnormalpsychology/?p=439
Attention-Deficit/Hyperactivity Disorder (ADHD) is a psychiatric disorder that is most often diagnosed in school-aged children. It is the most prevalent childhood psychiatric disorder in Canada (Center for ADHD Awareness, Canada [CADDAC], n.d.). ADHD occurs in 3-5% of elementary-school aged children and is more common in males than females (Johnson, 2009). In Canada, each classroom will include at least 1 to 3 students with ADHD (CADDAC, n.d.). Furthermore, at least 50% of children with ADHD will continue to have symptoms in adolescence and adulthood (Bélanger, Andrews, Gray, & Korczak, 2018). Approximately 4% of adults experience at least some symptoms of ADHD (CADDAC, n.d.).
Many children with ADHD find it difficult to focus on tasks and follow instructions, and these characteristics can lead to problems in school and at home. How children with ADHD are diagnosed and treated is a topic of controversy, and many people, including scientists and nonscientists alike, hold strong beliefs about what ADHD is and how people with the disorder should be treated. This module will familiarize the reader with the scientific literature on ADHD. First, we will review how ADHD is diagnosed in children, with a focus on how mental health professionals distinguish between ADHD and normal behavior problems in childhood. Second, we will describe what is known about the causes of ADHD. Third, we will describe the treatments that are used to help children with ADHD and their families. The module will conclude with a brief discussion of how we expect that the diagnosis and treatment of ADHD will change over the coming decades.
07: ADHD and Related Behaviour Disorders in Childhood
Learning Objectives
• Distinguish childhood behavior disorders from phases of typical child development.
• Describe the factors contributing to Attention-Deficit/Hyperactivity Disorder (ADHD).
• Understand the controversies surrounding the legitimacy and treatment of childhood behavior disorders.
• Describe the empirically supported treatments for Attention-Deficit/Hyperactivity Disorder (ADHD).
Childhood and ADHD
Childhood is a stage of life characterized by rapid and profound development. Starting at birth, children develop the skills necessary to function in the world around them at a rate that is faster than any other time in life. This is no small accomplishment! By the end of their first decade of life, most children have mastered the complex cognitive operations required to comply with rules, such as stopping themselves from acting impulsively, paying attention to parents and teachers in the face of distraction, and sitting still despite boredom. Indeed, acquiring self-control is an important developmental task for children (Mischel, Shoda, & Rodriguez, 1989), because they are expected to comply with directions from adults, stay on task at school, and play appropriately with peers. For children with Attention-Deficit/Hyperactivity Disorder (ADHD), however, exercising self-control is a unique challenge. These children, oftentimes despite their best intentions, struggle to comply with adults’ instructions, and they are often labeled as “problem children” and “rule breakers.” Historically, people viewed these children as willfully noncompliant due to moral or motivational defect (Still, 1902). However, scientists now know that the noncompliance observed in children with ADHD can be explained by a number of factors, including neurological dysfunction.
The goal of this module is to review the classification, causes, consequences, and treatment of ADHD. ADHD is somewhat unique among the psychiatric disorders in that most people hold strong opinions about the disorder, perhaps due to its more controversial qualities. When applicable, we will discuss some of the controversial beliefs held by social critics and laypeople, as well as scientists who study the disorder. Our hope is that a discussion of these controversies will allow you to reach your own conclusions about the legitimacy of the disorder.
Why Diagnose Children’s Behavior Problems?
When a family is referred to a mental health professional for help dealing with their child’s problematic behaviors, the clinician’s first goal is to identify the nature and cause of the child’s problems. Accurately diagnosing children’s behavior problems is an important step in the intervention process, because a child’s diagnosis can guide clinical decision making. Childhood behavior problems often arise from different causes, require different treatment methods, and follow different developmental courses. Arriving at a diagnosis will allow the clinician to make inferences about how each child will respond to different treatments and provide predictive information to the family about how the disorder will affect the child as he or she develops.
Despite the utility of the current diagnostic system, the practice of diagnosing children’s behavior problems is controversial. Many adults feel strongly that labeling children as “disordered” is stigmatizing and harmful to children’s self-concept. There is some truth in this concern. One study found that children have more negative attitudes toward a play partner if they are led to believe that their partner has ADHD, regardless of whether or not their partner actually has the disorder (Harris, Milich, Corbitt, Hoover, & Brady, 1992). Others have criticized the use of the diagnostic system because they believe it pathologizes normal behavior in children. Despite these criticisms, the diagnostic system has played a central role in research and treatment of child behavior disorders, and it is unlikely to change substantially in the near future. This section will describe ADHD as a diagnostic category and discuss controversies surrounding the legitimacy of this disorder.
ADHD is the most commonly diagnosed childhood behavior disorder. It affects 3% to 7% of children in the United States (American Psychiatric Association, 2000), and approximately 65% of children diagnosed with ADHD will continue to experience symptoms as adults (Faraone, Biederman, & Mick, 2006). The core symptoms of ADHD are organized into two clusters: hyperactivity/impulsivity and inattention. The hyperactive symptom cluster describes children who are perpetually in motion, even during times when they are expected to be still, such as during class or in the car. The impulsive symptom cluster describes difficulty in delaying responses and acting without considering the repercussions of behavior. Hyperactive and impulsive symptoms are closely related, and boys are more likely than girls to experience symptoms from this cluster (Hartung & Widiger, 1998). Inattentive symptoms describe difficulty with organization and task follow-through, as well as a tendency to be distracted by external stimuli. Two children diagnosed with ADHD can have very different symptom presentations. In fact, children can be diagnosed with different subtypes of the disorder (i.e., Combined Type, Predominantly Inattentive Type, or Predominantly Hyperactive-Impulsive Type) according to the number of symptoms they have in each cluster.
Are These Diagnoses Valid?
Many laypeople and social critics argue that ADHD is not a “real” disorder. These individuals claim that children with ADHD are only “disordered” because parents and school officials have trouble managing their behavior. These criticisms raise an interesting question about what constitutes a psychiatric disorder in children: How do scientists distinguish between clinically significant ADHD symptoms and normal instances of childhood impulsivity, hyperactivity, and inattention? After all, many 4-year-old boys are hyperactive and cannot focus on a task for very long. To address this issue, several criteria are used to distinguish between normal and disordered behavior:
1. The symptoms must significantly impair the child’s functioning in important life domains (e.g., school, home).
2. The symptoms must be inappropriate for the child’s developmental level.
One goal of this module will be to examine whether ADHD meets the criteria of a “true” disorder. The first criterion states that children with ADHD should show impairment in major functional domains. This is certainly true for children with ADHD. These children have lower academic achievement compared with their peers. They are more likely to repeat a grade or be suspended and less likely to graduate from high school (Loe & Feldman, 2007). Children with ADHD are often unpopular among their peers, and many of these children are actively disliked and socially rejected (Landau, Milich, & Diener, 1998). Children with ADHD are likely to experience comorbid psychological problems such as learning disorders, depression, anxiety, and oppositional defiant disorder. As they grow up, adolescents and adults with ADHD are at risk of abusing alcohol and other drugs (Molina & Pelham, 2003) and experiencing other adverse outcomes (see Focus Topic 1). In sum, there is sufficient evidence to conclude that children diagnosed with ADHD are significantly impaired by their symptoms.
Focus Topic 1: Adult outcomes of children with ADHD
Children with ADHD often continue to experience symptoms of the disorder as adults. Historically, this fact was not recognized by the medical community; instead, they believed that children “matured out” of their symptoms as they entered adulthood. Fortunately, opinions have changed over time, and it is now generally accepted that ADHD can be present among adults. A recent prevalence estimate suggests that 4.4% of adults in the United States meet criteria for ADHD (Kessler et al., 2006). This study also found that the majority of adults with ADHD are not receiving treatment for their disorder. Adult ADHD, if left untreated, can cause numerous negative outcomes, including:
• Depression and poor self-concept, personality disorder, and other psychiatric comorbidity (Kessler et al., 2006)
• Substance abuse (Molina & Pelham, 2003)
• Poor work performance, termination from jobs, chronic unemployment, and poor academic achievement (Barkley, Fischer, Smallish, & Fletcher, 2006)
• Divorce and problems with interpersonal relationships (Biederman et al., 2006)
• High-risk sexual behaviors and early parenthood (Barkley et al., 2006; Flory, Molina, Pelham, Gnagy, & Smith, 2006)
• Impairments in driving ability (Weafer, Fillmore, & Milich, 2009)
• Obesity (Cortese et al., 2008)
Despite the list of negative outcomes associated with adult ADHD, adults with the disorder are not doomed to live unfulfilling lives of limited accomplishment. Many adults with ADHD have benefited from treatment and are able to overcome their symptoms. For example, pharmacological treatment of adult ADHD has been shown to reduce risk of criminal behavior (Lichtenstein et al., 2012). Others have succeeded by avoiding careers in which their symptoms would be particularly problematic (e.g., those with heavy organizational demands). In any case, it is important that people with ADHD are identified and treated early, because early treatment predicts more positive outcomes in adulthood (Kessler et al., 2006).
It is also important to determine that a child’s symptoms are not caused by normal patterns of development. Many of the behaviors that are diagnostic of ADHD in some children would be considered developmentally appropriate for a younger child. This is true for many psychological and psychiatric disorders in childhood. For example, bedwetting is quite common in 3-year-old children; at this age, most children have not gained control over nighttime urination. For this reason, a 3-year-old child who wets the bed would not be diagnosed with enuresis (i.e., the clinical term for chronic bedwetting), because his or her behavior is developmentally appropriate. Bedwetting in an 8-year-old child, however, is developmentally inappropriate.
At this age, children are expected to remain dry overnight, and failure to master this skill would prevent children from sleeping over at friends’ houses or attending overnight camps. A similar example of developmentally appropriate versus inappropriate hyperactivity and noncompliance is provided in Focus Topic 2.
Focus Topic 2: Two children referred for problems with noncompliance and hyperactivity
Case 1 – Michael
Michael, a 4-year-old boy, was referred to a child psychologist to be evaluated for ADHD. His parents reported that Michael would not comply with their instructions. They also complained that Michael would not remain seated during “quality time” with his father. The evaluating psychologist interviewed the family, and by all accounts Michael was noncompliant and often left his seat. Specifically, when Michael’s mother asked him to prepare his preschool lunch, Michael would leave the kitchen and play with his toys soon after opening his lunch box. Further, the psychologist found that quality time involved Michael and his father sitting down for several hours to watch movies. In other settings, such as preschool, Michael was compliant with his teacher’s requests and no more active than his peers.
In this case, Michael’s parents held unrealistic expectations for a child at Michael’s developmental level. The psychologist would likely educate Michael’s parents about normative child development rather than diagnosing Michael with ADHD.
Case 2 – Jake
Jake, a 10-year-old boy, was referred to the same psychologist as Michael. Jake’s mother was concerned because Jake was not getting ready for school on time. Jake also had trouble remaining seated during dinner, which interrupted mealtime for the rest of the family. The psychologist found that in the morning, Jake would complete one or two steps of his routine before he became distracted and switched activities, despite his mother’s constant reminders. During dinnertime, Jake would leave his seat between 10 and 15 times over the course of the meal. Jake’s teachers were worried because Jake was only able to complete 50% of his homework. Further, his classmates would not pick Jake for team sports during recess because he often became distracted and wandered off during the game.
In this case, Jake’s symptoms would not be considered developmentally appropriate for a 10-year-old child. Further, his symptoms caused him to experience impairment at home and school. Unlike Michael, Jake probably would be diagnosed with ADHD.
Why Do Some Children Develop Behavior Disorders?
The reasons that some children develop ADHD are complex, and it is generally recognized that a single cause is insufficient to explain why an individual child does or does not have the disorder. Researchers have attempted to identify risk factors that predispose a child to develop ADHD. These risk factors range in scope from genetic (e.g., specific gene polymorphisms) to familial (e.g., poor parenting) to cultural (e.g., low socioeconomic status). This section will identify some of the risk factors that are thought to contribute to ADHD. It will conclude by reviewing some of the more controversial ideas about the causes of ADHD, such as poor parenting and children’s diets, and review some of the evidence pertaining to these causes.
Most experts believe that genetic and neurophysiological factors cause the majority of ADHD cases. Indeed, ADHD is primarily a genetic disorder—twin studies find that whether or not a child develops ADHD is due in large part (75%) to genetic variations (Faraone et al., 2005). Further, children with a family history of ADHD are more likely to develop ADHD themselves (Faraone & Biederman, 1994). Specific genes that have been associated with ADHD are linked to neurotransmitters such as dopamine and serotonin. In addition, neuroimaging studies have found that children with ADHD show reduced brain volume in some regions of the brain, such as the prefrontal cortex, the corpus callosum, the anterior cingulate cortex, the basal ganglia, and the cerebellum (Seidman, Valera, & Makris, 2005). Among their other functions, these regions of the brain are implicated in organization, impulse control, and motor activity, so the reduced volume of these structures in children with ADHD may cause some of their symptoms.
Although genetics appear to be a main cause of ADHD, recent studies have shown that environmental risk factors may cause a minority of ADHD cases. Many of these environmental risk factors increase the risk for ADHD by disrupting early development and compromising the integrity of the central nervous system. Environmental influences such as low birth weight, malnutrition, and maternal alcohol and nicotine use during pregnancy can increase the likelihood that a child will develop ADHD (Mick, Biederman, Faraone, Sayer, & Kleinman, 2002). Additionally, recent studies have shown that exposure to environmental toxins, such as lead and pesticides, early in a child’s life may also increase risk of developing ADHD (Nigg, 2006).
Controversies on Causes of ADHD
Controversial explanations for the development of ADHD have risen and fallen in popularity since the 1960s. Some of these ideas arise from cultural folklore; others can be traced to “specialists” trying to market an easy fix for ADHD based on their proposed cause. Still other ideas contain a kernel of truth but have been falsely cast as causing the majority of ADHD cases.
Some critics have proposed that poor parenting is a major cause of ADHD. This explanation is popular because it is intuitively appealing—one can imagine how a child who is not being disciplined at home may be noncompliant in other settings. Although it is true that parents of children with ADHD use discipline less consistently, and a lack of structure and discipline in the home can exacerbate symptoms in children with ADHD (Campbell, 2002), it is unlikely that poor parenting alone causes ADHD in the first place. To the contrary, research suggests that the noncompliance and impulsivity on the child’s part can cause caregivers to use discipline less effectively.
In a classic series of studies, Cunningham and Barkley (1979) showed that mothers of children with ADHD were less attentive to their children and imposed more structure on their playtime relative to mothers of typically developing children. However, these researchers also showed that when the children were given stimulant medication, their compliance increased and their mothers’ parenting behavior improved to the point where it was comparable to that of the mothers of children without ADHD (Barkley & Cunningham, 1979). This research suggests that instead of poor parenting causing children to develop ADHD, it is the stressful effects of managing an impulsive child that cause parenting problems in their caregivers. One can imagine how raising a child with ADHD could be stressful for parents. In fact, one study showed that a brief interaction with an impulsive and noncompliant child caused parents to increase their alcohol consumption—presumably these parents were drinking to cope with the stress of dealing with the impulsive child (Pelham et al., 1997). It is, therefore, important to consider the reciprocal effects of noncompliant children on parenting behavior, rather than assuming that parenting ability has a unidirectional effect on child behavior.
Other purported causes of ADHD are dietary. For example, it was long believed that excessive sugar intake causes children to become hyperactive. This myth has been largely disproven (Milich, Wolraich, & Lindgren, 1986). However, other diet-oriented explanations for ADHD, such as sensitivity to certain food additives, have been proposed (Feingold, 1976). These theories have received somewhat more support than the sugar hypothesis (Pelsser et al., 2011). In fact, the possibility that certain food additives may cause hyperactivity in children led to a ban on several artificial food colorings in the United Kingdom, although the Food and Drug Administration rejected similar measures in the United States. Even if artificial food dyes do cause hyperactivity in a subgroup of children, research does not support these food additives as a primary cause of ADHD. Further, research support for elimination diets as a treatment for ADHD has been inconsistent at best.
In sum, scientists are still working to determine what causes children to develop ADHD, and despite substantial progress over the past four decades, there are still many unanswered questions. In most cases, ADHD is probably caused by a combination of genetic and environmental factors. For example, a child with a genetic predisposition to ADHD may develop the disorder after his or her mother uses tobacco during her pregnancy, whereas a child without the genetic predisposition may not develop the disorder in the same environment. Fortunately, the causes of ADHD are relatively unimportant for the families of children with ADHD who wish to receive treatment, because what caused the disorder for an individual child generally does not influence how it is treated.
Methods of Treating ADHD in Children
There are several types of evidence-based treatment available to families of children with ADHD. The type of treatment that might be used depends on many factors, including the child’s diagnosis and treatment history, as well as parent preference. To treat children with less severe noncompliance problems, parents can be trained to systematically use contingency management (i.e., rewards and punishments) to manage their children’s behavior more effectively (Kazdin, 2005). For the children with ADHD, however, more intensive treatments often are necessary.
Medication
The most common method of treating ADHD is to prescribe stimulant medications such as Adderall™. These medications treat many of the core symptoms of ADHD—treated children will show improved impulse control, time-on-task, and compliance with adults, and decreased hyperactivity and disruptive behavior. However, there are also negative side effects to stimulant medication, such as growth and appetite suppression, increased blood pressure, insomnia, and changes in mood (Barkley, 2006). Although these side effects can be unpleasant for children, they can often be avoided with careful monitoring and dosage adjustments.
Opinions differ on whether stimulants should be used to treat children with ADHD. Proponents argue that stimulants are relatively safe and effective, and that untreated ADHD poses a much greater risk to children (Barkley, 2006). Critics argue that because many stimulant medications are similar to illicit drugs, such as cocaine and methamphetamine, long-term use may cause cardiovascular problems or predispose children to abuse illicit drugs. However, longitudinal studies have shown that people taking these medications are not more likely to experience cardiovascular problems or to abuse drugs (Biederman, Wilens, Mick, Spencer, & Faraone, 1999; Cooper et al., 2011). On the other hand, it is not entirely clear how long-term stimulant treatment can affect the brain, particularly in adults who have been medicated for ADHD since childhood.
Finally, critics of psychostimulant medication have proposed that stimulants are increasingly being used to manage energetic but otherwise healthy children. It is true that the percentage of children prescribed stimulant medication has increased since the 1980s. This increase in use is not unique to stimulant medication, however. Prescription rates have similarly increased for most types of psychiatric medication (Olfson, Marcus, Weissman, & Jensen, 2002). As parents and teachers become more aware of ADHD, one would expect that more children with ADHD will be identified and treated with stimulant medication. Further, the percentage of children in the United States being treated with stimulant medication is lower than the estimated prevalence of children with ADHD in the general population (Nigg, 2006).
Parent Management Training
Parenting children with ADHD can be challenging. Parents of these children are understandably frustrated by their children’s misbehavior. Standard discipline tactics, such as warnings and privilege removal, can feel ineffective for children with ADHD. This often leads to ineffective parenting, such as yelling at or ridiculing the child with ADHD. This cycle can leave parents feeling hopeless and children with ADHD feeling alienated from their family. Fortunately, parent management training can provide parents with a number of tools to cope with and effectively manage their child’s impulsive and oppositional behavior. Parent management training teaches parents to use immediate, consistent, and powerful consequences (i.e., rewards and punishment), because children with ADHD respond well to these types of behavioral contingencies (Luman, Oosterlaan, & Sergeant, 2005). Other, more intensive, psychosocial treatments use similar behavioral principles in summer camp–based settings (Pelham, Fabiano, Gnagy, Greiner, & Hoza, 2004), and school-based intervention programs are becoming more popular. A description of a school-based intervention program for ADHD is described in Focus Topic 3.
Focus Topic 3: Treating ADHD in Schools
Succeeding at school is one of the most difficult challenges faced by children with ADHD and their parents. Teachers expect students to attend to lessons, complete lengthy assignments, and comply with rules for approximately seven hours every day. One can imagine how a child with hyperactive and inattentive behaviors would struggle under these demands, and this mismatch can lead to frustration for the student and his or her teacher. Disruptions caused by the child with ADHD can also distract and frustrate peers. Succeeding at school is an important goal for children, so researchers have developed and validated intervention strategies based on behavioral principles of contingency management that can help children with ADHD adhere to rules in the classroom (described in DuPaul & Stoner, 2003). Illustrative characteristics of an effective school-based contingency management system are described below:
Token reinforcement program
This program allows a student to earn tokens (points, stars, etc.) by meeting behavioral goals and not breaking rules. These tokens act as secondary reinforcers because they can be redeemed for privileges or goods. Parents and teachers work with the students to identify problem behaviors and create concrete behavioral goals. For example, if a student is disruptive during silent reading time, then a goal might be for him or her to remain seated for at least 80% of reading time. Token reinforcement programs are most effective when tokens are provided for appropriate behavior and removed for inappropriate behavior.
Time out
Time out can be an effective punishment when used correctly. Teachers should place a student in time out only when the student fails to respond to token removal or engages in a severely disruptive behavior (e.g., physical aggression). When placed in time out, the student should not have access to any type of reinforcement (e.g., toys, social interaction), and the teacher should monitor the student’s behavior throughout the time out.
Daily report card
The teacher keeps track of whether or not the student meets his or her goals and records this information on a report card. This information is sent home with the student each day so parents can integrate the student’s performance at school into a home-based contingency management program.
Educational services and accommodations
Students with ADHD often show deficits in specific academic skills (e.g., reading skills, math skills), and these deficits can be improved through direct intervention. Students with ADHD may spend several hours each week working one-on-one with an educator to improve their academic skills. Environmental accommodations can also help a student with ADHD be successful. For example, a student who has difficulty focusing during a test can be allowed extra time in a low-distraction setting.
What Works Best? The Multimodal Treatment Study
A large-scale study, the Multimodal Treatment Study of Children with ADHD (MTA), compared pharmacological and behavioral treatment of ADHD (MTA Cooperative Group, 1999). This study compared the outcomes of children with ADHD in four different treatment conditions: standard community care, intensive behavioral treatment, stimulant medication management, and the combination of intensive behavioral treatment and stimulant medication. In terms of core symptom relief, stimulant medication was the most effective treatment, and combined treatment was no more effective than stimulant medication alone (MTA Cooperative Group, 1999). Behavioral treatment was advantageous in other ways, however. For example, children who received combined treatment were less disruptive at school than children receiving stimulant medication alone (Hinshaw et al., 2000). Other studies have found that children who receive behavioral treatment require lower doses of stimulant medication to achieve the desired outcomes (Pelham et al., 2005). This is important because children are better able to tolerate lower doses of stimulant medication. Further, parents report being more satisfied with treatment when behavioral management is included as a component in the program (Jensen et al., 2001). In sum, stimulant medication and behavioral treatment each have advantages and disadvantages that complement the other, and the best outcomes likely occur when both forms of treatment are used to improve children’s behavior.
The Future of ADHD
It is difficult to predict the future; however, based on trends in research and public discourse, we can predict how the field may change as time progresses. This section will discuss two areas of research and public policy that will shape how we understand and treat ADHD in the coming decades.
Controlling Access to Stimulant Medication
It is no secret that many of the drugs used to treat ADHD are popular drugs of abuse among high school and college students, and this problem seems to be getting worse. The rate of illicit stimulant use has steadily risen over the past several decades (Teter, McCabe, Cranford, Boyd, & Guthrie, 2005), and it is probably not a coincidence that prescription rates for stimulant medication have increased during the same time period (Setlik, Bond, & Ho, 2009). Students who abuse stimulants often report doing so because they act as an academic performance enhancer by boosting alertness and concentration. Although they may enhance performance in the short term, nonmedical use of these drugs can lead to dependence and other adverse health consequences, especially when taken in ways other than prescribed (e.g., crushed and snorted) (Volkow & Swanson, 2003). Stimulants can be particularly dangerous when they are taken without supervision from a physician, because this may lead to adverse drug interactions or side effects. Because this increase in prescription stimulant abuse represents a threat to public health, an important goal for policy makers will be to reduce the availability of prescription stimulants to those who would use them for nonmedical reasons.
One of the first steps for addressing prescription stimulant abuse will be understanding how illicit users gain access to medication. Probably the most common method of obtaining stimulants is through drug diversion. The majority of college students who abuse stimulants report obtaining them from peers with valid prescriptions (McCabe & Boyd, 2005). Another way that would-be abusers may gain access to medication is by malingering (i.e., faking) symptoms of ADHD (Quinn, 2003). These individuals will knowingly exaggerate their symptoms to a physician in order to obtain a prescription. Other sources of illicit prescription drugs have been identified (e.g., pharmacy websites) (Califano, 2004), but more research is needed to understand how much these sources contribute to the problem. As we gain an understanding of how people gain access to illicit medication, policy makers and researchers can make efforts to curtail the rate of stimulant misuse. For example, because drug diversion is a major source of illicit stimulants, policymakers have enacted prescription monitoring programs to keep track of patients’ prescription-seeking behavior (Office of Drug Control Policy, 2011), and, in some cases, patients are required to pass drug screens before receiving their prescriptions. To address malingering, researchers are working to develop psychological tests that can identify individuals who are faking symptoms (Jasinski et al., 2011). Finally, pharmacologists are working to develop stimulant medications that do not carry the same risk of abuse as the currently available drugs (e.g., lisdexamfetamine) (Biederman et al., 2007).
Although all of these measures will reduce illicit users’ access to stimulant medication, it is important to consider how the policies will affect access among people who need these medications to treat their ADHD symptoms. Prescription tracking programs may reduce physicians’ willingness to prescribe stimulants out of fear of being investigated by law enforcement. Patients with ADHD who have comorbid substance abuse problems may be denied access to stimulant medication because they are considered high risk for drug diversion. Similarly, lengthy psychological evaluations to assess for malingering and mandated drug screenings may be prohibitively expensive for less affluent individuals with ADHD. These measures to reduce illicit drug use are necessary from a public health perspective, but as we move forward and enact policies to reduce stimulant abuse, it will be equally important to consider the impact of such legislation on patients’ access to treatment.
The Role of Neuroscience and Behavioral Genetics in Understanding ADHD
Much of the research on ADHD has been conducted to answer several deceptively complex questions: What causes ADHD? How are people with ADHD different from their typically developing peers? How can ADHD be prevented or treated? Historically, our tools for answering these questions were limited to observing outward human behavior, and our ability to ask questions about the physiology of ADHD was severely limited by the technology of the time. In the past two decades, however, rapid advances in technology (e.g., functional magnetic resonance imaging, genetic analysis) have allowed us to probe the physiological bases of human behavior. An exciting application of this technology is that we are able to extend our understanding of ADHD beyond basic behavior; we are learning about the underlying neurophysiology and genetics of the disorder. As we gain a fuller understanding of ADHD, we may be able to apply this knowledge to improve prevention and treatment of the disorder. Knowledge of the underlying physiology of ADHD may guide efforts to develop new nonstimulant medications, which may not carry the side effects or abuse potential of traditional stimulants. Similarly, these advances may improve our ability to diagnose ADHD. Although it is extremely unlikely that a perfectly accurate genetic or neuroimaging test for ADHD will ever be developed (Thome et al., 2012), such procedures could be used in conjunction with behavioral evaluation and questionnaires to improve diagnostic accuracy. Finally, identifying genetic traits that predispose children to develop ADHD may allow physicians to use targeted prevention programs that could reduce the chances that children at risk for developing the disorder will experience symptoms.
Discussion Questions
1. Does ADHD meet the definition of a psychiatric disorder?
2. Explain the difference between developmentally appropriate and developmentally inappropriate behavior problems.
3. Do you believe that it is ethical to prescribe stimulant medication to children? Why or why not? What are the risks associated with withholding stimulant medication from children with ADHD?
4. How should society balance the need to treat individuals with ADHD using stimulants with public health concerns about the abuse of these same medications?
7.02: Summary and Self-Test- ADHD and Behaviour Disorders in Children
Summary
Attention-Deficit/Hyperactivity Disorder (ADHD) is the most prevalent childhood psychiatric disorder in Canada, occurring in 3-5% of elementary school children. At least half of these children will continue to experience symptoms in adolescence and adulthood.
Children with ADHD have difficulty exercising self-control, complying with adults’ instructions, and are often labeled as “problem children.”
The practice of diagnosing children’s behaviour problems, including ADHD, is controversial. Many feel that labeling children as disordered is stigmatizing and harmful to children’s self-concept. Some believe that the diagnostic system pathologizes normal childhood behaviour.
The core symptoms of ADHD are organized into two clusters: hyperactivity/impulsivity and inattention. The hyperactive symptoms describe being perpetually in motion even during times when children are expected to sit still. Impulsivity describes a difficulty in delaying response and acting without considering the repercussions of behaviour. Inattentive symptoms describe difficulty with organization and task follow-through, as well as a tendency to be distracted by external stimuli.
Many laypeople and critics argue that ADHD is not a “real” disorder, claiming that children with ADHD are only considered disordered because parents and school officials have trouble managing their behaviour. Several criteria are used to distinguish between normal and disordered behaviour, including the level of impairment the symptoms cause for the child’s functioning in important life domains, and that the symptoms are inappropriate for the child’s developmental level.
Most experts believe that genetic and neurophysiological factors cause the majority of ADHD cases. ADHD is, indeed, primarily a genetic disorder.
Environmental risk factors may cause a minority of ADHD cases. Many of these environmental risk factors increase the risk by disrupting early development and compromising the integrity of the central nervous system. Examples include low birth weight, malnutrition, and maternal smoking during pregnancy.
Controversy has surrounded the causes of ADHD, with several causes being proposed that have no grounding in research. These include poor parenting, as well as sugar and food additives. None of these have been shown to contribute to ADHD.
Stimulant medications and parent management training are used to treat ADHD; parents can be trained to use contingency management more effectively. The Multimodal Treatment Study of ADHD found that stimulant medication was the most effective treatment.
Ideas for future consideration within the study of ADHD include controlling access to stimulant medication, as well as the role of neuroscience and behavioral genetics in understanding ADHD.
Self-Test
An interactive or media element has been excluded from this version of the text. You can view it online here:
https://openpress.usask.ca/abnormalpsychology/?p=482
People with autism spectrum disorder (ASD) suffer from a profound social disability. Social neuroscience is the study of the parts of the brain that support social interactions or the “social brain.” This module provides an overview of ASD and focuses on understanding how social brain dysfunction leads to ASD. Our increasing understanding of the social brain and its dysfunction in ASD will allow us to better identify the genes that cause ASD and will help us to create and pick out treatments to better match individuals. Because social brain systems emerge in infancy, social neuroscience can help us to figure out how to diagnose ASD even before the symptoms of ASD are clearly present. This is a hopeful time because social brain systems remain malleable well into adulthood and thus open to creative new interventions that are informed by state-of-the-art science.
08: Autism Spectrum Disorder
Learning Objectives
• Know the basic symptoms of ASD.
• Distinguish components of the social brain and understand their dysfunction in ASD.
• Appreciate how social neuroscience may facilitate the diagnosis and treatment of ASD.
Defining Autism Spectrum Disorder
Autism Spectrum Disorder (ASD) is a developmental disorder that usually emerges in the first three years and persists throughout the individual’s life. Though the key symptoms of ASD fall into three general categories (see below), each person with ASD exhibits symptoms in these domains in different ways and to varying degrees. This phenotypic heterogeneity reflects the high degree of variability in the genes underlying ASD (Geschwind & Levitt, 2007). Though we have identified genetic differences associated with individual cases of ASD, each accounts for only a small number of the actual cases, suggesting that no single genetic cause will apply in the majority of people with ASD. There is currently no biological test for ASD.
Autism is in the category of pervasive developmental disorders, which includes Asperger’s disorder, childhood disintegrative disorder, autistic disorder, and pervasive developmental disorder – not otherwise specified. These disorders, together, are labeled autism spectrum disorder (ASD). ASD is defined by the presence of profound difficulties in social interactions and communication combined with the presence of repetitive or restricted interests, cognitions and behaviors. The diagnostic process involves a combination of parental report and clinical observation. Children with significant impairments across the social/communication domain who also exhibit repetitive behaviors can qualify for the ASD diagnosis. There is wide variability in the precise symptom profile an individual may exhibit.
Since Kanner first described ASD in 1943, important commonalities in symptom presentation have been used to compile criteria for the diagnosis of ASD. These diagnostic criteria have evolved during the past 70 years and continue to evolve (e.g., see the recent changes to the diagnostic criteria on the American Psychiatric Association’s website, http://www.dsm5.org), yet impaired social functioning remains a required symptom for an ASD diagnosis.
Previously, DSM-IV-TR included autism under a broader umbrella diagnostic category: Pervasive Developmental Disorders. Under this diagnostic category were four sub-diagnoses: autistic disorder, Asperger’s disorder, childhood disintegrative disorder, and pervasive developmental disorder not otherwise specified. Based on research and clinical experience that have been gained since DSM-IV was published in 1994 (Hyman, 2013), DSM-5 made substantial changes to the conceptualization and diagnostic criteria of these disorders.
DSM-5 collapses these four disorders into one diagnosis: Autism Spectrum Disorder (ASD). This diagnosis reflects scientific consensus that the four previously distinct diagnoses are actually a single diagnosis with different levels of symptom severity and level of impairment. This change to the conceptualization of ASD acknowledges the heterogeneity in the presentation and severity of ASD symptoms, and in the skills and level of functioning of people with ASD (American Psychological Association, n.d.). In addition, a diagnosis is made based on the severity of symptoms in two areas only: social communication impairments and repetitive/restricted behaviours.
Deficits in social functioning are present in varying degrees, from simple behaviors such as eye contact to complex behaviors like navigating the give and take of a group conversation, for individuals of all functioning levels (i.e., high or low IQ). Moreover, difficulties with social information processing occur in both visual (e.g., Pelphrey et al., 2002) and auditory (e.g., Dawson, Meltzoff, Osterling, Rinaldi, & Brown, 1998) sensory modalities.
Consider the results of an eye tracking study in which Pelphrey and colleagues (2002) observed that individuals with autism did not make use of the eyes when judging facial expressions of emotion (see right panels of Figure 8.1). While repetitive behaviors or language deficits are seen in other disorders (e.g., obsessive-compulsive disorder and specific language impairment, respectively), basic social deficits of this nature are unique to ASD. Onset of the social deficits appears to precede difficulties in other domains (Osterling, Dawson, & Munson, 2002) and may emerge as early as 6 months of age (Maestro et al., 2002).
ASD in Canada
In Canada, 1 in 66 children and youth (ages 5 to 17) are diagnosed with ASD, making it one of the most common developmental disabilities (Ofner et al., 2018). Approximately 1 to 2% of the population in Canada is affected by ASD (Anagnostou et al., 2014). Compared with females, males are four times more likely to receive a diagnosis of ASD (Ofner et al., 2018). More than half of children and youth with ASD are diagnosed by age six, and more than 90% receive a diagnosis by age 12 (Ofner et al., 2018). Unfortunately, the rates of ASD in Canada are increasing, and this puts significant strain on the education, healthcare, and social service systems (Autism Ontario, n.d.).
In February 2019, the government of Ontario announced changes to autism funding. About 23,000 children with ASD were on a therapy wait list at the time, and to ensure that these children could access services within 18 months, the government implemented drastic “childhood budgets” (Powers, 2019, March 11). This controversial autism funding model provided families a fixed amount of money that was determined by their child’s age and family income (CBC News, 2019, July 29). The province’s budget plan set significant limitations that would not meet the treatment needs of children with ASD, especially children with more severe ASD. In response, some families left Ontario to receive autism services for their children elsewhere in Canada, and protests and outrage occurred across the province (Monsebraaten & Rushowy, 2019, October 29).
To advocate for the needs of these children, a panel, called the Ontario Autism Panel, was created. This panel included parents, advocates, clinicians, academics, and adults with autism (Monsebraaten & Rushowy, 2019, October 29). Rather than a one-size-fits-all approach based on fixed factors like age and family income, the panel made recommendations for a new needs-based model of funding that ensures children receive the appropriate services based on their needs (Monsebraaten & Rushowy, 2019, October 29). Unfortunately, the new Ontario Autism Program (OAP) will not be fully implemented until 2021. In the interim, the government has taken steps to provide support to children and their families, including extra funding, programs, and workshops for parents (Payne, 2019, December 18). Although this program is believed to place children at the centre of care, the wait for it is concerning, as early childhood is a critical time in development, and the costs of treatment are a significant burden for many families across Ontario (Payne, 2019, December 18).
Defining the Social Brain
Within the past few decades, research has elucidated specific brain circuits that support perception of humans and other species. This social perception refers to “the initial stages in the processing of information that culminates in the accurate analysis of the dispositions and intentions of other individuals” (Allison, Puce, & McCarthy, 2000). Basic social perception is a critical building block for more sophisticated social behaviors, such as thinking about the motives and emotions of others. Brothers (1990) first suggested the notion of a social brain, a set of interconnected neuroanatomical structures that process social information, enabling the recognition of other individuals and the evaluation of their mental states (e.g., intentions, dispositions, desires, and beliefs).
The social brain is hypothesized to consist of the amygdala, the orbital frontal cortex (OFC), fusiform gyrus (FG), and the posterior superior temporal sulcus (STS) region, among other structures. Though all areas work in coordination to support social processing, each appears to serve a distinct role. The amygdala helps us recognize the emotional states of others (e.g., Morris et al., 1996) and also to experience and regulate our own emotions (e.g., LeDoux, 1992). The OFC supports the “reward” feelings we have when we are around other people (e.g., Rolls, 2000). The FG, located on the bottom surface of the temporal lobes, detects faces and supports face recognition (e.g., Puce, Allison, Asgari, Gore, & McCarthy, 1996). The posterior STS region recognizes biological motion, including eye, hand, and other body movements, and helps to interpret and predict the actions and intentions of others (e.g., Pelphrey, Morris, Michelich, Allison, & McCarthy, 2005).
Current Understanding of Social Perception in ASD
The social brain is of great research interest because the social difficulties characteristic of ASD are thought to relate closely to the functioning of this brain network. Functional magnetic resonance imaging (fMRI) and event-related potentials (ERP) are complementary brain imaging methods used to study activity in the brain across the lifespan. Each method measures a distinct facet of brain activity and contributes unique information to our understanding of brain function.
fMRI uses powerful magnets to measure the levels of oxygen within the brain, which vary according to changes in neural activity. As the neurons in specific brain regions “work harder”, they require more oxygen. fMRI detects the brain regions that exhibit a relative increase in blood flow (and oxygen levels) while people listen to or view social stimuli in the MRI scanner. The areas of the brain most crucial for different social processes are thus identified, with spatial information being accurate to the millimeter.
In contrast, ERP provides direct measurements of the firing of groups of neurons in the cortex. Non-invasive sensors on the scalp record the small electrical currents created by this neuronal activity while the subject views stimuli or listens to specific kinds of information. While fMRI provides information about where brain activity occurs, ERP specifies when by detailing the timing of processing at the millisecond pace at which it unfolds.
ERP and fMRI are complementary, with fMRI providing excellent spatial resolution and ERP offering outstanding temporal resolution. Together, this information is critical to understanding the nature of social perception in ASD. To date, the most thoroughly investigated areas of the social brain in ASD are the superior temporal sulcus (STS), which underlies the perception and interpretation of biological motion, and the fusiform gyrus (FG), which supports face perception. Heightened sensitivity to biological motion (for humans, motion such as walking) serves an essential role in the development of humans and other highly social species. Emerging in the first days of life, the ability to detect biological motion helps to orient vulnerable young to critical sources of sustenance, support, and learning, and develops independent of visual experience with biological motion (e.g., Simion, Regolin, & Bulf, 2008). This inborn “life detector” serves as a foundation for the subsequent development of more complex social behaviors (Johnson, 2006).
From very early in life, children with ASD display reduced sensitivity to biological motion (Klin, Lin, Gorrindo, Ramsay, & Jones, 2009). Individuals with ASD have reduced activity in the STS during biological motion perception. In contrast, people at increased genetic risk for ASD but who do not develop symptoms of the disorder (i.e., unaffected siblings of individuals with ASD) show increased activity in this region, which is hypothesized to be a compensatory mechanism to offset genetic vulnerability (Kaiser et al., 2010).
In typical development, preferential attention to faces and the ability to recognize individual faces emerge in the first days of life (e.g., Goren, Sarty, & Wu, 1975). The special way in which the brain responds to faces usually emerges by three months of age (e.g., de Haan, Johnson, & Halit, 2003) and continues throughout the lifespan (e.g., Bentin et al., 1996). Children with ASD, however, tend to show decreased attention to human faces by six to 12 months (Osterling & Dawson, 1994). Children with ASD also show reduced activity in the FG when viewing faces (e.g., Schultz et al., 2000). Slowed processing of faces (McPartland, Dawson, Webb, Panagiotides, & Carver, 2004) is a characteristic of people with ASD that is shared by parents of children with ASD (Dawson, Webb, & McPartland, 2005) and infants at increased risk for developing ASD because of having a sibling with ASD (McCleery, Akshoomoff, Dobkins, & Carver, 2009). Behavioral and attentional differences in face perception and recognition are evident in children and adults with ASD as well (e.g., Hobson, 1986).
Exploring Diversity in ASD
Because of the limited quality of the behavioral methods used to diagnose ASD and current clinical diagnostic practice, which permits similar diagnoses despite distinct symptom profiles (McPartland, Webb, Keehn, & Dawson, 2011), it is possible that the group of children currently referred to as having ASD may actually represent different syndromes with distinct causes. Examination of the social brain may well reveal diagnostically meaningful subgroups of children with ASD. Measurements of the “where” and “when” of brain activity during social processing tasks provide reliable sources of the detailed information needed to profile children with ASD with greater accuracy. These profiles, in turn, may help to inform treatment of ASD by helping us to match specific treatments to specific profiles.
The integration of imaging methods is critical for this endeavor. Using face perception as an example, the combination of fMRI and ERP could identify who, of those individuals with ASD, shows anomalies in the FG and then determine the stage of information processing at which these impairments occur. Because different processing stages often reflect discrete cognitive processes, this level of understanding could encourage treatments that address specific processing deficits at the neural level.
For example, differences observed in the early processing stages might reflect problems with low-level visual perception, while later differences would indicate problems with higher-order processes, such as emotion recognition. These same principles can be applied to the broader network of social brain regions and, combined with measures of behavioral functioning, could offer a comprehensive profile of brain-behavior performance for a given individual. A fundamental goal for this kind of subgroup approach is to improve the ability to tailor treatments to the individual.
Another objective is to improve the power of other scientific tools. Most studies of individuals with ASD compare groups of individuals, for example, individuals with ASD compared to typically developing peers. However, studies have also attempted to compare children across the autism spectrum by grouping them according to differential diagnosis (e.g., Asperger’s disorder versus autistic disorder), or by other behavioral or cognitive characteristics (e.g., cognitively able versus intellectually disabled or anxious versus non-anxious). Yet, the power of a scientific study to detect these kinds of significant, meaningful, individual differences is only as strong as the accuracy of the factor used to define the compared groups.
The identification of distinct subgroups within the autism spectrum according to information about the brain would allow for a more accurate and detailed exposition of the individual differences seen in those with ASD. This is especially critical for the success of investigations into the genetic basis of ASD. As mentioned before, the genes discovered thus far account for only a small portion of ASD cases. If meaningful, quantitative distinctions in individuals with ASD are identified, a more focused examination into the genetic causes specific to each subgroup could then be pursued. Moreover, distinct findings from neuroimaging, or biomarkers, can help guide genetic research. Endophenotypes, or characteristics that are not immediately available to observation but that reflect an underlying genetic liability for disease, expose the most basic components of a complex psychiatric disorder and are more stable across the lifespan than observable behavior (Gottesman & Shields, 1973). By describing the key characteristics of ASD in these objective ways, neuroimaging research will facilitate identification of genetic contributions to ASD.
Atypical Brain Development Before the Emergence of Atypical Behavior
Because autism is a developmental disorder, it is particularly important to diagnose and treat ASD early in life. Early deficits in attention to biological motion, for instance, derail subsequent experiences in attending to higher level social information, thereby driving development toward more severe dysfunction and stimulating deficits in additional domains of functioning, such as language development. The lack of reliable predictors of the condition during the first year of life has been a major impediment to the effective treatment of ASD. Without early predictors, and in the absence of a firm diagnosis until behavioral symptoms emerge, treatment is often delayed for two or more years, eclipsing a crucial period in which intervention may be particularly successful in ameliorating some of the social and communicative impairments seen in ASD.
In response to the great need for sensitive (able to identify subtle cases) and specific (able to distinguish autism from other disorders) early indicators of ASD, such as biomarkers, many research groups from around the world have been studying patterns of infant development using prospective longitudinal studies of infant siblings of children with ASD and a comparison group of infant siblings without familial risks. Such designs gather longitudinal information about developmental trajectories across the first three years of life for both groups followed by clinical diagnosis at approximately 36 months.
These studies are problematic in that many of the social features of autism do not emerge in typical development until after 12 months of age, and it is not certain that these symptoms will manifest during the limited periods of observation involved in clinical evaluations or in pediatricians’ offices. Moreover, across development, but especially during infancy, behavior is widely variable and often unreliable, and at present, behavioral observation is the only means to detect symptoms of ASD and to confirm a diagnosis. This is quite problematic because even highly sophisticated behavioral methods, such as eye tracking (see Figure 1), do not necessarily reveal reliable differences in infants with ASD (Ozonoff et al., 2010). However, measuring the brain activity associated with social perception can detect differences that do not appear in behavior until much later. The identification of biomarkers utilizing the imaging methods we have described offers promise for earlier detection of atypical social development.
ERP measures of brain response predict subsequent development of autism in infants as young as six months old who showed normal patterns of visual fixation (as measured by eye tracking) (Elsabbagh et al., 2012). This suggests the great promise of brain imaging for earlier recognition of ASD. With earlier detection, treatments could move from addressing existing symptoms to preventing their emergence by altering the course of abnormal brain development and steering it toward normality.
Hope for Improved Outcomes
The brain imaging research described above offers hope for the future of ASD treatment. Many of the functions of the social brain demonstrate significant plasticity, meaning that their functioning can be affected by experience over time. In contrast to theories that suggest difficulty processing complex information or communicating across large expanses of cortex (Minshew & Williams, 2007), this malleability of the social brain is a positive prognosticator for the development of treatment. The brains of people with ASD are not wired to process social information optimally. But this does not mean that these systems are irretrievably broken. Given the observed plasticity of the social brain, remediation of these difficulties may be possible with appropriate and timely intervention.
Outside Resources
Web: American Psychiatric Association’s website for the 5th edition of the Diagnostic and Statistical Manual of Mental Disorders: http://www.dsm5.org
Web: Autism Science Foundation – organization supporting autism research by providing funding and other assistance to scientists and organizations conducting, facilitating, publicizing and disseminating autism research. The organization also provides information about autism to the general public and serves to increase awareness of autism spectrum disorders and the needs of individuals and families affected by autism. http://www.autismsciencefoundation.org/
Web: Autism Speaks – Autism science and advocacy organization http://www.autismspeaks.org/
Discussion Questions
1. How can neuroimaging inform our understanding of the causes of autism?
2. What are the ways in which neuroimaging, including fMRI and ERP, may benefit efforts to diagnosis and treat autism?
3. How can an understanding of the social brain help us to understand ASD?
4. What are the core symptoms of ASD, and why is the social brain of particular interest?
5. What are some of the components of the social brain, and what functions do they serve?
8.02: Summary and Self-Test- Autism
Summary
Autism spectrum disorder (ASD) is a developmental disorder that usually emerges within the first three years and persists throughout the individual’s life. There are three general categories of symptoms of ASD: presence of profound difficulties in social interactions and communication, combined with the presence of repetitive or restricted interests, cognitions, and behaviours. There is a wide variety of symptom combinations that may be present for people with ASD.
Previously, DSM-IV-TR included autism under a broader diagnostic category called Pervasive Developmental Disorders. Based on research and clinical experience, however, the separate disorders were collapsed into one diagnosis for DSM-5 (ASD).
In Canada, 1 in 66 children and youth (aged 5-17) are diagnosed with ASD, making it one of the most common developmental disorders. Males are four times more likely to receive a diagnosis of ASD.
Basic social perception is an important building block for more sophisticated social behaviours, like thinking about the emotions and motivations of others. Because of the social difficulties characterizing ASD, the functioning of the social brain is of great interest to autism researchers.
To date, the most investigated areas of the social brain in ASD are the superior temporal sulcus (STS), which underlies the perception and interpretation of biological motion, and the fusiform gyrus (FG), which supports face perception. Very early in life, children with ASD display reduced sensitivity to biological motion and lack the attention to human faces that non-ASD infants possess.
Because of the many potential symptom combinations of ASD, identification of distinct (neurological) subgroups within the autism spectrum would allow for a more accurate and detailed exploration of individual differences between types of ASD.
It is particularly important to diagnose and treat ASD early in life. The lack of reliable predictors during the first year of life is an impediment to this early intervention. Treatment is often delayed for 2 or more years. | textbooks/socialsci/Psychology/Psychological_Disorders/Abnormal_Psychology_(Cummings)/08%3A_Autism_Spectrum_Disorder/8.01%3A_Autism-_Insights_from_the_study_of_the_social_brain.txt |
Every one of us has our own personality that describes who we generally are. This is often how we organize our sense of self and our impressions of the people around us. Personality disorders, however, describe a form of psychopathology marked by extreme, rigid personality difficulties that can cause impairment (in multiple domains) for the individual. Moreover, they can cause a multitude of interpersonal difficulties for those around them. In this chapter we review one of the more popular theories of personality – the Five Factor Model – and discuss the differences between personality and personality disorders.
The DSM-5 organizes personality disorders into three clusters, based on their common characteristics. Cluster A personality disorders involve odd and eccentric thinking or behaviour, and include paranoid, schizoid, and schizotypal personality disorder. Cluster B personality disorders involve dramatic, overly emotional or unpredictable thinking or behaviour, and include antisocial, borderline, histrionic, and narcissistic personality disorder. Cluster C personality disorders are marked by anxious, fearful thinking or behaviour and include avoidant, dependent, and obsessive-compulsive personality disorder.
Personality disorders are, unfortunately, some of the most challenging disorders to treat. This is because they are so entrenched, chronic, and pervasive for the people who experience them. Very few people with personality disorders present for treatment, and if they do it is often because they are experiencing social/occupational impairment or because someone else has pushed them to go. The exception is borderline personality disorder, which is quite distressing for those who experience it. Our treatment development for personality disorders lags behind that of other disorders, although there is one empirically supported treatment for borderline personality disorder: Dialectical Behaviour Therapy.
09: Personality Disorders
Learning Objectives
• Define what is meant by a personality disorder.
• Identify the five domains of general personality.
• Identify the six personality disorders proposed for retention in DSM-5.
• Summarize the etiology for antisocial and borderline personality disorder.
• Identify the treatment for borderline personality disorder.
Personality & the Five-Factor Model
Everybody has their own unique personality; that is, their characteristic manner of thinking, feeling, behaving, and relating to others (John, Robins, & Pervin, 2008). Some people are typically introverted, quiet, and withdrawn; whereas others are more extraverted, active, and outgoing. Some individuals are invariably conscientious, dutiful, and efficient; whereas others might be characteristically undependable and negligent. Some individuals are consistently anxious, self-conscious, and apprehensive; whereas others are routinely relaxed, self-assured, and unconcerned. Personality traits refer to these characteristic, routine ways of thinking, feeling, and relating to others. There are signs or indicators of these traits in childhood, but they become particularly evident when the person is an adult. Personality traits are integral to each person’s sense of self, as they involve what people value, how they think and feel about things, what they like to do, and, basically, what they are like most every day throughout much of their lives.
There are literally hundreds of different personality traits. All of these traits can be organized into the broad dimensions referred to as the Five-Factor Model (John, Naumann, & Soto, 2008). These five broad domains are inclusive; there do not appear to be any personality traits that lie outside of the Five-Factor Model. This even applies to traits that you may use to describe yourself. Table 9.1 provides illustrative traits for both poles of the five domains of this model of personality. A number of the traits that you see in this table may describe you. If you can think of some other traits that describe yourself, you should be able to place them somewhere in this table.
DSM-5 Personality Disorders
When personality traits result in significant distress, social impairment, and/or occupational impairment, they are considered to be a personality disorder (American Psychiatric Association, 2013). The authoritative manual for what constitutes a personality disorder is provided by the American Psychiatric Association’s (APA) Diagnostic and Statistical Manual of Mental Disorders (DSM), the current version of which is DSM-5 (APA, 2013). The DSM provides a common language and standard criteria for the classification and diagnosis of mental disorders. This manual is used by clinicians, researchers, health insurance companies, and policymakers.
According to the DSM-5, a personality disorder is characterized by a pervasive, consistent, and enduring pattern of behaviour and internal experience that differs significantly from that which is usually expected in the individual’s culture. Personality disorders typically have an onset in adolescence or early adulthood, persist over time, and cause distress or impairment. The pattern must be present in two or more of the four areas of cognition, emotion, interpersonal functioning, and impulse control. It must also not be better explained by another mental disorder or medical condition, or by the effects of a substance. There was much discussion in writing DSM-5 about changing the way in which personality disorders are diagnosed, but for now the system remains unchanged from the previous version of the DSM (the DSM-IV-TR). DSM-5 includes 10 personality disorders, grouped into three clusters: Cluster A (paranoid, schizoid, and schizotypal personality disorders), Cluster B (antisocial, borderline, histrionic, and narcissistic personality disorders), and Cluster C (avoidant, dependent, and obsessive-compulsive personality disorders).
This list of 10, though, does not fully cover all of the different ways in which a personality can be maladaptive. DSM-5 therefore also includes the residual diagnoses of other specified personality disorder (OSPD) and unspecified personality disorder (UPD), sometimes described as “wastebasket” diagnoses. These are used when a clinician believes that a patient has a personality disorder but the traits that constitute it are not well covered by one of the 10 existing diagnoses. OSPD and UPD, referred to in previous editions as PDNOS (personality disorder not otherwise specified), are among the most frequently used diagnoses in clinical practice, suggesting that the current list of 10 is not adequately comprehensive (Widiger & Trull, 2007).
Each of the 10 DSM-5 (and DSM-IV-TR) personality disorders is a constellation of maladaptive personality traits, rather than just one particular personality trait (Lynam & Widiger, 2001). In this regard, personality disorders are “syndromes.” For example, avoidant personality disorder is a pervasive pattern of social inhibition, feelings of inadequacy, and hypersensitivity to negative evaluation (APA, 2013), which is a combination of traits from introversion (e.g., socially withdrawn, passive, and cautious) and neuroticism (e.g., self-conscious, apprehensive, anxious, and worried). Dependent personality disorder includes submissiveness, clinging behavior, and fears of separation (APA, 2013), for the most part a combination of traits of neuroticism (anxious, uncertain, pessimistic, and helpless) and maladaptive agreeableness (e.g., gullible, guileless, meek, subservient, and self-effacing). Antisocial personality disorder is, for the most part, a combination of traits from antagonism (e.g., dishonest, manipulative, exploitative, callous, and merciless) and low conscientiousness (e.g., irresponsible, immoral, lax, hedonistic, and rash). See the 1967 movie, Bonnie and Clyde, starring Warren Beatty, for a nice portrayal of someone with antisocial personality disorder.
Some of the DSM-5 personality disorders are confined largely to traits within one of the basic domains of personality. For example, obsessive-compulsive personality disorder is largely a disorder of maladaptive conscientiousness, including such traits as workaholism, perfectionism, punctiliousness, rumination, and doggedness; schizoid personality disorder is confined largely to traits of introversion (e.g., withdrawn, cold, isolated, placid, and anhedonic); borderline personality disorder is largely a disorder of neuroticism, including such traits as emotionally unstable, vulnerable, overwhelmed, rageful, depressive, and self-destructive (watch the 1987 movie, Fatal Attraction, starring Glenn Close, for a nice portrayal of this personality disorder); and histrionic personality disorder is largely a disorder of maladaptive extraversion, including such traits as attention-seeking, seductiveness, melodramatic emotionality, and strong attachment needs (see the 1951 film adaptation of Tennessee Williams’s play, A Streetcar Named Desire, starring Vivien Leigh, for a nice portrayal of this personality disorder).
Due to the severity of symptoms (e.g., suicide), Canadian researchers have examined the rates of Cluster B personality disorders specifically (Cailhol et al., 2017). In Quebec, the 2011-2012 prevalence rates were 2.6% (lifetime) and 3.6% (12-month). Compared with the general provincial population, the mean years of lost life expectancy for men and women were 13 and 9 years, respectively (Cailhol et al., 2017).
It should be noted though that a complete description of each DSM-5 personality disorder would typically include at least some traits from other domains. For example, antisocial personality disorder (or psychopathy) also includes some traits from low neuroticism (e.g., fearlessness and glib charm) and extraversion (e.g., excitement-seeking and assertiveness); borderline includes some traits from antagonism (e.g., manipulative and oppositional) and low conscientiousness (e.g., rash); and histrionic includes some traits from antagonism (e.g., vanity) and low conscientiousness (e.g., impressionistic). Narcissistic personality disorder includes traits from neuroticism (e.g., reactive anger, reactive shame, and need for admiration), extraversion (e.g., exhibitionism and authoritativeness), antagonism (e.g., arrogance, entitlement, and lack of empathy), and conscientiousness (e.g., acclaim-seeking). Schizotypal personality disorder includes traits from neuroticism (e.g., social anxiousness and social discomfort), introversion (e.g., social withdrawal), unconventionality (e.g., odd, eccentric, peculiar, and aberrant ideas), and antagonism (e.g., suspiciousness).
The APA currently conceptualizes personality disorders as qualitatively distinct conditions; distinct from each other and from normal personality functioning. However, included within an appendix to DSM-5 is an alternative view that personality disorders are simply extreme and/or maladaptive variants of normal personality traits, as suggested herein. Nevertheless, many leading personality disorder researchers do not hold this view (e.g., Gunderson, 2010; Hopwood, 2011; Shedler et al., 2010). They suggest that there is something qualitatively unique about persons suffering from a personality disorder, usually understood as a form of pathology in the sense of self and interpersonal relatedness that is considered to be distinct from personality traits (APA, 2012; Skodol, 2012). For example, it has been suggested that antisocial personality disorder includes impairments in identity (e.g., egocentrism), self-direction, empathy, and capacity for intimacy, which are said to be different from such traits as arrogance, impulsivity, and callousness (APA, 2012).
DSM-5 Description of Each Disorder
As mentioned, the DSM organizes personality disorders into three clusters.
Cluster A personality disorders involve odd and eccentric thinking or behaviour and include paranoid, schizoid, and schizotypal personality disorder. The Cluster B personality disorders involve dramatic, overly emotional, or unpredictable thinking or behaviour and include antisocial, borderline, histrionic, and narcissistic personality disorder. Cluster C personality disorders involve anxious, fearful thinking or behaviour and include avoidant, dependent, and obsessive-compulsive personality disorder.
Paranoid Personality Disorder
Paranoid personality disorder is characterized by a pattern of mistrust or suspiciousness of others. The motives of others are generally interpreted as malicious. Even when no evidence supports this conclusion, individuals with this personality disorder tend to assume that others mean them harm. They may be suspicious of their close friends or family, and as a result tend to avoid confiding in others. There may also be a tendency to misinterpret harmless events or comments as threats. Individuals with paranoid personality disorder can carry persistent grudges or generally present as unforgiving of even minor slights. When feeling attacked or plotted against they are quick to react with anger and often lash out or plan to seek revenge. This personality disorder often involves an inability to trust one’s romantic partner, and even in the absence of any evidence to the contrary an individual may become convinced that their partner has been unfaithful. Interpersonally they often appear hostile, stubborn, sarcastic, rigid, controlling, and critical of others. However, it is important to note that members of minority groups may appear guarded or defensive in response to discrimination or neglect by the majority society. As with any personality disorder, cultural factors must not contribute to a diagnosis of paranoid personality disorder.
Schizoid Personality Disorder
If an individual generally remains detached from interpersonal relationships and has only a narrow range of emotional expression, they may be diagnosed with schizoid personality disorder. Someone with this disorder may derive no enjoyment from nor show any interest in close relationships including family, close friendships, or sexual relationships. They may choose solitary activities over interpersonal ones, find very few activities pleasurable or enjoyable, and may also seem indifferent when either praised or criticized by others. They may present as emotionally cold or distant and detached, with flattened affect. They may seem superficial or self-absorbed due to their disinterest in interpersonal relationships, and are generally not aware of (or do not respond to) social norms or cues. Individuals with schizoid personality disorder often find mechanical or abstract tasks (such as computer or mathematics) more attractive than social activities.
Schizotypal Personality Disorder
Schizotypal personality disorder is diagnosed when an individual is unable or unwilling to form close relationships and has cognitive or perceptual distortions or eccentric behaviour. These individuals may experience ideas of reference and strange beliefs or “magical thinking” that influence how they behave and are inconsistent with cultural/societal norms. It is important to note that many cultural contexts or religious settings include beliefs in things that would otherwise be symptoms of schizotypal personality disorder, and this must be ruled out before a diagnosis can be made. People with this disorder may have unusual perceptions that include somatic illusions, and their speech and thinking may be “odd” (i.e., vague, metaphorical, overly detailed). Suspiciousness and paranoia are often present, as is inappropriate/constricted affect (i.e., appearing emotionally “stiff”), eccentric behaviour and appearance, and lack of close connections other than immediate family. Social anxiety is also common, but differs from that seen in the anxiety disorders in that it does not decrease as one becomes more familiar with someone, and it is based in paranoia rather than fears of negative judgment.
Antisocial Personality Disorder
The diagnostic criteria for antisocial personality disorder specify that there must be a consistent pattern of disregarding or violating the rights of others since the age of 15. Specifically, this can involve unlawful behaviour or lying to or conning others for personal gain or pleasure. These individuals may be impulsive, irritable, aggressive, or reckless. As a result of these characteristics they may get into frequent physical fights or display a disregard for their own safety or that of others. They are frequently irresponsible and may fail to hold down a job or take care of financial obligations. Individuals with antisocial personality disorder often lack remorse, and as such they frequently present as indifferent to the suffering of others even when they have caused it. This personality disorder can only be diagnosed in someone 18 years or older, but conduct disorder must have been present prior to 15 years of age. There has been discussion about whether this diagnosis is disproportionately given to those from lower socioeconomic circumstances, and care should be taken to tease apart survival strategies and traits from diagnosable symptoms of the disorder.
Borderline Personality Disorder
The hallmark of borderline personality disorder is a pervasive pattern of unstable interpersonal relationships, self-image, and emotions, with significant impulsivity. These individuals may respond to real or imagined abandonment by frantically trying to avoid it, and their relationships may be intense and unstable, and characterized by alternating between viewing someone as “all good” or “all bad.” They may have an extremely unstable sense of self which translates into frequently changing interests and goals, and their impulsivity may occur in areas such as finances, sexual behaviour, substance abuse, dangerous driving, or binge eating. Suicidal behaviour is common and can include gestures, threats, attempts, and self-mutilation. Their emotions are frequently labile (unstable and reactive), and their moods may last only a few hours or a few days. Many individuals with this disorder report feeling chronically “empty,” and they may struggle with intense and inappropriate anger that may be difficult for them to control. Borderline personality disorder may also cause paranoia or dissociation that comes and goes depending on stress levels. One must note that adolescents and younger adults who are undergoing identity issues may appear to have some of the symptoms of BPD. Also, BPD is disproportionately diagnosed in women (whereas antisocial personality disorder is disproportionately diagnosed in men), and an argument has been made in the literature that perhaps the diagnosis unfairly pathologizes stereotypically female experiences or responses to trauma. It has also been noted that, in case studies, the exact same symptoms are diagnosed by mental health professionals as borderline personality disorder in women but as antisocial personality disorder in men.
Histrionic Personality Disorder
A diagnosis of histrionic personality disorder describes someone who may need to be the centre of attention in order to find a situation comfortable. They may interact with others in overly and inappropriately sexually seductive or provocative ways, and their emotions change quickly and tend to be quite shallow in expression. Their physical appearance is often used as a way of drawing attention to themselves, and their speech tends towards being overly vague and dramatic (for instance, making bold statements but having no details to back up their opinions). When these individuals express emotion it is often exaggerated and theatrical. They may also be easily influenced by others or circumstances and often consider their relationships to be more intimate and close than they actually are. Above all, individuals with histrionic personality disorder are known to show excessive emotion and seek attention to an extreme degree. Given that many of these traits are largely influenced by cultural context, the extent to which they cause significant impairment or distress must be evaluated before diagnosis can be made.
Narcissistic Personality Disorder
An individual with narcissistic personality disorder may have a grandiose sense of their own importance, which means that they may exaggerate their positive traits or successes and expect recognition. They may fantasize about success, power, beauty, brilliance, or love, and may see themselves as special and unique. This view of themselves may lead to a belief that they should only associate with other exceptional people. Someone with this disorder requires an excessive amount of admiration from others and feels entitled to special treatment. They may view others as needing to fulfill their needs and desires in a way that caters to their every whim. As such, these individuals sometimes take advantage of others in order to achieve their own goals and they may lack empathy or be unwilling or unable to recognize that others have valid thoughts, feelings, and needs. Although this disorder sometimes includes arrogant or haughty behaviour and attitudes, the individual may actually be envious of others. As a whole, this disorder involves extreme self-centred or self-absorbed behaviours and beliefs. Although the ambition and confidence associated with this disorder may lead to significant vocational achievement, it may also cause impairment in functioning if an individual is unwilling to engage in tasks unless sure of success. They may also have difficulty working within a power structure that requires answering to someone with more power than themselves.
Avoidant Personality Disorder
Avoidant personality disorder generally involves an unwillingness to interact with people unless sure of being liked. This includes avoiding work that involves significant interaction or being restrained within relationships because of fearing criticism, rejection, disapproval, or shame. In fact, the individual is usually preoccupied with the idea of being criticized or rejected by others, and thus presents as inhibited when faced with new interpersonal relationships because of feeling inadequate. They may hold a view of themselves as socially inept, inferior, or unappealing. These individuals also tend to be quite reluctant to take any risks or try new activities because of an extreme fear of being embarrassed. What defines this personality disorder is the pattern of social inhibition, feelings of inadequacy or inferiority, and being hypersensitive to criticism. Unfortunately this disorder tends to create a vicious cycle, in which their fearful or tense presentation elicits negative responses from others, which in turn leads to more fear and avoidance. However, one must note that acculturation issues following immigration should not be confused with a diagnosis of avoidant personality disorder.
Dependent Personality Disorder
If someone shows a pattern of excessive neediness, clingy behaviour, submission, and fear of separation, they may be diagnosed with dependent personality disorder. This disorder may also include having difficulty making everyday decisions without seeking the input of others to an extreme degree. They may need others to take responsibility for large parts of their life, and may not be able to express dissenting opinions because of fearing disapproval or loss of support. Individuals with dependent personality disorder may have trouble starting projects or completing tasks on their own because they lack confidence in their abilities, and they may excessively try to secure nurturing support from others, even if it means they have to do things that they find unpleasant. This disorder also tends to involve feeling uncomfortable or helpless when left alone, due to feeling intense fear over having to take care of oneself. They may go from one relationship to another in order to avoid being left alone, as a result of being preoccupied with this fear. As with most other disorders, traits of dependent personality disorder can be heavily influenced by cultural factors. Being polite, deferential, and passive is highly regarded in some cultures, and in order to be diagnosed with this disorder the individual’s behaviour must differ significantly from cultural norms.
Obsessive-Compulsive Personality Disorder
An individual with obsessive-compulsive personality disorder presents as preoccupied with details, rules, lists, order, organization, and schedules. This preoccupation is so intense that the main point of the activity being planned gets lost. Their perfectionism interferes with accomplishing goals, but they may also be so devoted to work and productivity that leisure time and friendships are sacrificed. These individuals may be extremely inflexible and scrupulous when it comes to issues of morals, ethics, or values (although this criterion must not be accounted for by religion or culture). They may find throwing out old or worthless items too difficult, even in the absence of sentimental value. This disorder also may make one hesitant to delegate or work cooperatively unless the workmate is willing to completely submit to how the individual feels the work should be done. In terms of finances, they may be extremely reluctant to spend money, choosing instead to hoard resources to prepare for an anticipated disaster in the future. These individuals also tend to present as extremely rigid and stubborn. Even normally “fun” activities may turn into structured tasks for someone with obsessive-compulsive personality disorder.
Validity
It is quite possible that in future revisions of the DSM some of the personality disorders included in DSM-5 will no longer be included. In fact, for DSM-5 it was originally proposed that four be deleted. The personality disorders that were slated for deletion were histrionic, schizoid, paranoid, and dependent (APA, 2012). The rationale for the proposed deletions was in large part because they are said to have less empirical support than the diagnoses that were at the time being retained (Skodol, 2012). There is agreement within the field with regard to the empirical support for the borderline, antisocial, and schizotypal personality disorders (Mullins-Sweat, Bernstein, & Widiger, 2012; Skodol, 2012). However, there is a difference of opinion with respect to the empirical support for the dependent personality disorder (Bornstein, 2012; Livesley, 2011; Miller, Widiger, & Campbell, 2010; Mullins-Sweat et al., 2012).
Little is known about the specific etiology for most of the DSM-5 personality disorders. Because each personality disorder represents a constellation of personality traits, the etiology for the syndrome will involve a complex interaction of an array of different neurobiological vulnerabilities and dispositions with a variety of environmental, psychosocial events. Antisocial personality disorder, for instance, is generally considered to be the result of an interaction of genetic dispositions for low anxiousness, aggressiveness, impulsivity, and/or callousness, with a tough, urban environment, inconsistent parenting, poor parental role modeling, and/or peer support (Hare, Neumann, & Widiger, 2012). Borderline personality disorder is generally considered to be the result of an interaction of a genetic disposition to negative affectivity interacting with a malevolent, abusive, and/or invalidating family environment (Hooley, Cole, & Gironde, 2012).
To the extent that one considers the DSM-5 personality disorders to be maladaptive variants of general personality structure, as described, for instance, within the Five-Factor Model, there would be a considerable body of research to support the validity for all of the personality disorders, including even the histrionic, schizoid, and paranoid. There is compelling multivariate behavior genetic support with respect to the precise structure of the Five-Factor Model (e.g., Yamagata et al., 2006), childhood antecedents (Caspi, Roberts, & Shiner, 2005), universality (Allik, 2005), temporal stability across the lifespan (Roberts & DelVecchio, 2000), ties with brain structure (DeYoung, Hirsh, Shane, Papademetris, Rajeevan, & Gray, 2010), and even molecular genetic support for neuroticism (Widiger, 2009).
Treatment
Personality disorders are relatively unique because they are often “ego-syntonic”; that is, most people are largely comfortable with themselves, with their characteristic manner of behaving, feeling, and relating to others. As a result, people rarely seek treatment for their antisocial, narcissistic, histrionic, paranoid, and/or schizoid personality disorder. People typically lack insight into the maladaptivity of their personality.
One clear exception, though, is borderline personality disorder (and perhaps avoidant personality disorder as well). Neuroticism is the domain of general personality structure that concerns inherent feelings of emotional pain and suffering, including feelings of distress, anxiety, depression, self-consciousness, helplessness, and vulnerability. Persons who have very high elevations on neuroticism (i.e., persons with borderline personality disorder) experience life as one of pain and suffering, and they will seek treatment to alleviate this severe emotional distress. People with avoidant personality disorder may also seek treatment for their high levels of neuroticism (anxiousness and self-consciousness) and introversion (social isolation). In contrast, narcissistic individuals will rarely seek treatment to reduce their arrogance; paranoid persons rarely seek treatment to reduce their feelings of suspiciousness; and antisocial people rarely (or at least not willingly) seek treatment to reduce their disposition for criminality, aggression, and irresponsibility.
Nevertheless, maladaptive personality traits will be evident in many individuals seeking treatment for other mental disorders, such as anxiety, mood, or substance use disorders. Many of the people with a substance use disorder will have antisocial personality traits; many of the people with a mood disorder will have borderline personality traits. The prevalence of personality disorders within clinical settings is estimated to be well above 50% (Torgersen, 2012). As many as 60% of inpatients within some clinical settings are diagnosed with borderline personality disorder (APA, 2000). Antisocial personality disorder may be diagnosed in as many as 50% of inmates within a correctional setting (Hare et al., 2012). It is estimated that 10% to 15% of the general population meets criteria for at least one of the 10 DSM-IV-TR personality disorders (Torgersen, 2012), and quite a few more individuals are likely to have maladaptive personality traits not covered by one of the 10 DSM-5 diagnoses.
The presence of a personality disorder will often have an impact on the treatment of other mental disorders, typically inhibiting or impairing responsivity. Antisocial persons will tend to be irresponsible and negligent; borderline persons can form intensely manipulative attachments to their therapists; paranoid patients will be unduly suspicious and accusatory; narcissistic patients can be dismissive and denigrating; and dependent patients can become overly attached to and feel helpless without their therapists.
It is a misconception, though, that personality disorders cannot themselves be treated. Personality disorders are among the most difficult of disorders to treat because they involve well-established behaviors that can be integral to a client’s self-image (Millon, 2011). Nevertheless, much has been written on the treatment of personality disorder (e.g., Beck, Freeman, Davis, & Associates, 1990; Gunderson & Gabbard, 2000), and there is empirical support for clinically and socially meaningful changes in response to psychosocial and pharmacologic treatments (Perry & Bond, 2000). The development of an ideal or fully healthy personality structure is unlikely to occur through the course of treatment, but given the considerable social, public health, and personal costs associated with some of the personality disorders, such as the antisocial and borderline, even just moderate adjustments in personality functioning can represent quite significant and meaningful change.
Nevertheless, manualized and/or empirically validated treatment protocols have been developed for only one specific personality disorder, borderline (APA, 2001).
Focus Topic: Treatment of Borderline Personality Disorder
Dialectical behavior therapy (Lynch & Cuyper, 2012) and mentalization therapy (Bateman & Fonagy, 2012): Dialectical behavior therapy is a form of cognitive-behavior therapy that draws on principles from Zen Buddhism, dialectical philosophy, and behavioral science. The treatment has four components: individual therapy, group skills training, telephone coaching, and a therapist consultation team, and it will typically last a full year. As such, it is a relatively expensive form of treatment, but research has indicated that its benefits far outweigh its costs, both financially and socially.
It is unclear why specific and explicit treatment manuals have not been developed for the other personality disorders. This may reflect a regrettable assumption that personality disorders are unresponsive to treatment. It may also reflect the complexity of their treatment. As noted earlier, each DSM-5 disorder is a heterogeneous constellation of maladaptive personality traits. In fact, because each diagnosis requires only a subset of its criteria, two people who meet diagnostic criteria for the same personality disorder (e.g., antisocial, borderline, schizoid, schizotypal, narcissistic, or avoidant) may have only one diagnostic criterion in common. For example, only five of nine features are necessary for the diagnosis of borderline personality disorder; therefore, two persons can meet criteria for this disorder and yet share only a single feature (since 5 + 5 - 9 = 1). In addition, patients meeting diagnostic criteria for one personality disorder will often meet diagnostic criteria for another. This degree of diagnostic overlap and heterogeneity of membership hinders tremendously any effort to identify a specific etiology, pathology, or treatment for a respective personality disorder, as there is so much variation within any particular group of patients sharing the same diagnosis (Smith & Zapolski, 2009).
Of course, this diagnostic overlap and complexity did not prevent researchers and clinicians from developing dialectical behavior therapy and mentalization therapy. A further reason for the weak progress in treatment development is that, as noted earlier, persons rarely seek treatment for their personality disorder. It would be difficult to obtain a sufficiently large group of people with, for instance, narcissistic or obsessive–compulsive personality disorder to participate in a treatment outcome study, with one group receiving the manualized treatment protocol and the other receiving treatment as usual.
Conclusions
It is evident that all individuals have a personality, as indicated by their characteristic way of thinking, feeling, behaving, and relating to others. For some people, these traits result in a considerable degree of distress and/or impairment, constituting a personality disorder. A considerable body of research has accumulated to help understand the etiology, pathology, and/or treatment for some personality disorders (i.e., antisocial, schizotypal, borderline, dependent, and narcissistic), but not so much for others (e.g., histrionic, schizoid, and paranoid). However, researchers and clinicians are now shifting toward a more dimensional understanding of personality disorders, wherein each is understood as a maladaptive variant of general personality structure, thereby bringing to bear all that is known about general personality functioning to an understanding of these maladaptive variants.
Outside Resources
Structured Clinical Interview for DSM-5 (SCID-5) https://www.appi.org/products/structured-clinical-interview-for-dsm-5-scid-5
Web: DSM-5 website discussion of personality disorders http://www.dsm5.org/ProposedRevision/Pages/PersonalityDisorders.aspx
Discussion Questions
1. Do you think that any of the personality disorders, or some of their specific traits, are ever good or useful to have?
2. If someone with a personality disorder commits a crime, what is the right way for society to respond? For example, does or should meeting diagnostic criteria for antisocial personality disorder mitigate (lower) a person’s responsibility for committing a crime?
3. Given what you know about personality disorders and the traits that comprise each one, would you say there is any personality disorder that is likely to be diagnosed in one gender more than the other? Why or why not?
4. Do you believe that personality disorders can be best understood as a constellation of maladaptive personality traits, or do you think that there is something more involved for individuals suffering from a personality disorder?
5. The authors suggested Clyde Barrow as an example of antisocial personality disorder and Blanche Dubois for histrionic personality disorder. Can you think of a person from the media or literature who would have at least some of the traits of narcissistic personality disorder?
9.02: Summary and Self-Test- Personality Disorders
Summary
Our personalities reflect our characteristic manner of thinking, feeling, behaving and relating to others. Personality traits are integral to a person’s sense of self.
While there are many theories of personality, one of the most well researched is the Five Factor Model, which organizes literally hundreds of traits into five broad dimensions: Neuroticism/Emotional Stability, Extraversion/Introversion, Openness/Closedness, Agreeableness/Antagonism, and Conscientiousness/Disinhibition.
When personality traits result in significant distress, social impairment, and/or occupational impairment, they might be considered to be a personality disorder. Personality disorders are characterized by a pervasive, consistent, and enduring pattern of behaviour and internal experience that differs significantly from that which is usually expected in the individual’s culture.
Personality disorders typically have an onset in adolescence or early adulthood, persist over time, and cause distress or impairment.
Each of the 10 personality disorders is a constellation of maladaptive personality traits, not one particular trait. In this regard, they are syndromes. These can be mapped onto the Five Factor Model.
The personality disorders are grouped into three clusters, based on their predominant symptoms. Cluster A personality disorders involve odd or eccentric thinking or behaviour (paranoid, schizoid, and schizotypal personality disorder). Cluster B personality disorders are marked by dramatic, overly emotional, or unpredictable thinking or behaviour (antisocial, borderline, histrionic, and narcissistic personality disorder). Cluster C personality disorders involve anxious, fearful thinking or behaviour (avoidant, dependent, and obsessive-compulsive personality disorder).
The validity of personality disorders is an issue of controversy.
Personality disorders are generally ego syntonic, meaning that people are largely comfortable with themselves and perceive their personality as serving them well.
One personality disorder for which we have a well-developed treatment is borderline personality disorder, which is treated with Dialectical Behaviour Therapy.
Cognitive Therapy can also be used to treat personality disorders.
Personality disorders are among the most difficult disorders to treat, because they involve well-established behaviours that are integral to a client’s self-image.
Self-Test
An interactive or media element has been excluded from this version of the text. You can view it online here:
https://openpress.usask.ca/abnormalpsychology/?p=488
Learning Objectives
• Explain what it means to display abnormal behavior.
• Identify types of mental health professionals
• Clarify the manner in which mental health professionals classify mental disorders.
• Describe the effect of stigma on those afflicted with mental illness.
• Outline the history of mental illness.
01: What is Abnormal Psychology
1.02: Understanding Abnormal Behavior
Section Learning Objectives
• Define abnormal psychology, psychopathology, and psychological disorders.
• Explain the concept of dysfunction as it relates to mental illness.
• Explain the concept of distress as it relates to mental illness.
• Explain the concept of deviance as it relates to mental illness.
• Explain the concept of dangerousness as it relates to mental illness.
• Define culture and social norms.
• Know the cost of mental illness to society.
• Identify and describe the various types of mental health professionals.
Definition of Abnormal Psychology and Psychopathology
The term abnormal psychology refers to the scientific study of people who are atypical or unusual, with the intent to reliably predict, explain, diagnose, identify the causes of, and treat maladaptive behavior. A more sensitive and less stigmatizing term for the scientific study of psychological disorders is psychopathology. These definitions raise two questions: what is considered abnormal, and what is a psychological or mental disorder?
Defining Psychological Disorders
It may be surprising to you, but the concept of mental or psychological disorders has proven very difficult to define. Even the American Psychiatric Association (APA, 2013), in its publication the Diagnostic and Statistical Manual of Mental Disorders, 5th edition (DSM-5 for short), states that though “no definition can capture all aspects of all disorders in the range contained in the DSM-5,” certain aspects are required. While no definition will ever be perfect, the concept is recognized as extremely important, and psychological disorders (aka mental disorders) have therefore been defined as a psychological dysfunction that causes distress or impaired functioning and deviates from typical or expected behavior according to societal or cultural standards. This definition includes three components (the 3 Ds). Let’s break these down now:
• Dysfunction – includes “clinically significant disturbance in an individual’s cognition, emotion regulation, or behavior that reflects a dysfunction in the psychological, biological, or developmental processes underlying mental functioning” (pg. 20). In other words, dysfunction refers to a breakdown in cognition, emotion, and/or behavior. For instance, an individual experiencing delusions that he is an omnipotent deity would have a breakdown in cognition because his thought processes are not consistent with reality. An individual who is unable to experience pleasure would have a breakdown in emotion. Finally, an individual who is unable to leave her home and attend work due to fear of having a panic attack would be exhibiting a breakdown in behavior. Abnormal behavior can make our well-being difficult to obtain and can be assessed by looking at an individual’s current performance and comparing it to what is expected in general or to how the person has performed in the past.
• Distress or Impairment – Distress can take the form of psychological or physical pain, or both concurrently. Simply put, distress refers to suffering. Alone, though, distress is not sufficient to describe behavior as abnormal. Why is that? The loss of a loved one would cause even the most “normally” functioning individual pain and suffering. An athlete who experiences a career-ending injury would display distress as well. Suffering is part of life and cannot be avoided. And some people who display abnormal behavior are generally positive while doing so. Typically, if distress is absent then impairment must be present to deem behavior abnormal. Impairment refers to when the person experiences a disabling condition “in social, occupational, or other important activities” (pg. 20). In other words, impairment refers to when a person loses the capacity to function normally in daily life (e.g., can no longer maintain minimum standards of hygiene, pay bills, attend social functions, or go to work). Once again, distress and/or impairment in functioning is typically required to consider behavior abnormal and to diagnose a psychological disorder.
• Deviance – A closer examination of the word abnormal shows that it indicates a move away from what is normal, typical, or average. Our culture – or the totality of socially transmitted behaviors, customs, values, technology, attitudes, beliefs, art, and other products that are particular to a group – determines what is normal and so a person is said to be deviant when he or she fails to follow the stated and unstated rules of society, called social norms. What is considered “normal” by society can change over time due to shifts in accepted values and expectations. For instance, just a few decades ago homosexuality was considered taboo in the U.S. and it was included as a mental disorder in the first edition of the DSM; but today, it is generally accepted. Likewise, PDAs, or public displays of affection, do not cause a second look by most people unlike the past when these outward expressions of love were restricted to the privacy of one’s own house or bedroom. In the U.S., crying is generally seen as a weakness for males but if the behavior occurs in the context of a tragedy such as the Vegas mass shooting on October 1, 2017, in which 58 people were killed and about 500 were wounded, then it is appropriate and understandable. Finally, consider that statistically deviant behavior is not necessarily negative. Genius is an example of behavior that is not the norm.
Though not part of the DSM-5’s conceptualization of what abnormal behavior is, many clinicians add a fourth D, dangerousness, to this list. Dangerousness refers to when behavior represents a threat to the safety of the person or others. Individuals expressing suicidal intent, those experiencing acute paranoid ideation combined with aggressive impulses (e.g., wanting to harm people who are perceived as “being out to get them”), and many individuals with antisocial personality disorder may be considered dangerous. Mental health professionals (and many other professionals, including researchers) have a duty to report to law enforcement when an individual expresses an intent to harm themselves or others. Nevertheless, individuals with depression, anxiety, and obsessive-compulsive disorder are typically no more a threat to others than individuals without these disorders. As such, it is important to note that having a mental disorder does not automatically make one dangerous, and most dangerous individuals are not mentally ill. Indeed, a review of the literature (Matthias & Angermeyer, 2002) found that only a small proportion of crimes are committed by individuals with severe mental disorders, that strangers are at a lower risk of being attacked by a person with a severe mental disorder than by someone who is mentally healthy, and that elevated risks of behaving violently are limited to a small number of symptom constellations. Similarly, Hiday and Burns (2010) showed that dangerousness is more the exception than the rule.
What is the Cost of Mental Illness to Society?
This leads us to consider the cost of mental illness to society. The National Alliance on Mental Illness (NAMI) indicates that depression is the number one cause of disability across the world “and is a major contributor to the global burden of disease.” Serious mental illness costs the United States an estimated $193 billion in lost earnings each year. NAMI also points out that suicide is the 10th leading cause of death in the U.S. and that 90% of those who die from suicide have an underlying mental illness. In relation to children and teens, 37% of students with a mental disorder age 14 and older drop out of school, the highest dropout rate of any disability group, and 70% of youth in state and local juvenile justice systems have at least one mental disorder (Source: https://www.nami.org/Learn-More/Mental-Health-By-the-Numbers). In terms of worldwide impact, the World Economic Forum used 2010 data to estimate $2.5 trillion in global costs that year, with projected costs of $6 trillion by 2030. The costs of mental illness are greater than the combined costs of cancer, diabetes, and respiratory disorders (Whiteford et al., 2013). And finally, “The Social Security Administration reports that in 2012, 2.6 and 2.7 million people under age 65 with mental illness-related disability received SSI and SSDI payments, respectively, which represents 43 and 27 percent of the total number of people receiving such support, respectively” (Source: https://www.nimh.nih.gov/about/directors/thomas-insel/blog/2015/mental-health-awareness-month-by-the-numbers.shtml). As you can see, the cost of mental illness is quite staggering for the United States and other countries.
Check this out: Seven Facts about America’s Mental Health-Care System
https://www.washingtonpost.com/news/...=.12de8bc56941
In conclusion, though there is no one behavior that we can use to classify people as abnormal, most clinical practitioners agree that any behavior that strays from what is considered the norm or is unexpected within the confines of one’s culture, that causes dysfunction in cognition, emotion, and/or behavior, and that causes distress and/or impairment in functioning, is abnormal behavior. Armed with this understanding, let’s discuss what mental disorders are.
Types of Mental Health Professionals
There are many types of mental health professionals that people may seek out for assistance. They include:
Table 1: Types of Mental Health Professionals
| Name | Degree Required | Function/Training | Can they prescribe medications? |
| --- | --- | --- | --- |
| Clinical Psychologist | Ph.D. | Trained to make diagnoses and can provide individual and group therapy | Only in select states |
| School Psychologist | Masters or Ph.D. | Trained to make diagnoses and can provide individual and group therapy, but also works with school staff | No |
| Counseling Psychologist | Ph.D. | Deals with adjustment issues primarily and less with mental illness | No |
| Clinical Social Worker | M.S.W. or Ph.D. | Trained to make diagnoses and can provide individual and group therapy; involved in advocacy and case management, usually in hospital settings | No |
| Psychiatrist | M.D. | Has specialized training in the diagnosis and treatment of mental disorders | Yes |
| Psychiatric Nurse Practitioner | M.R.N. | Has specialized training in the care and treatment of psychiatric patients | Yes |
| Occupational Therapist | M.S. | Has specialized training with individuals with physical or psychological conditions and helps them acquire needed resources | No |
| Drug Abuse and/or Alcohol Counselor | B.S. or higher | Trained in alcohol and drug abuse; can make diagnoses and can provide individual and group therapy | No |
| Child/Adolescent Psychiatrist | M.D. or Ph.D. | Specialized training in the diagnosis and treatment of mental illness in children | Yes |
| Marital and Family Therapist | Masters | Specialized training in marital and family therapy; can make diagnoses and can provide individual and group therapy | No |
Prescription Rights for Psychologists
To reduce inappropriate prescribing and over-prescribing, it has been proposed that appropriately trained psychologists be granted the right to prescribe. Psychologists can choose between therapy and medication, and so can make the best choice for their patients. The right has already been granted in New Mexico, Louisiana, Guam, the military, the Indian Health Services, and the U.S. Public Health Services. Measures in other states “have been opposed by the American Medical Association and American Psychiatric Association over concerns that inadequate training of psychologists could jeopardize patient safety.” Supporters of prescriptive authority for psychologists are quick to point out that there is no evidence to support these concerns (Smith, 2012).
For more information on types of mental health professionals, please visit:
http://www.mentalhealthamerica.net/types-mental-health-professionals
1.03: Chapter 1- What is Abnormal Psychology
Chapter Overview
Cassie is an 18-year-old female from suburban Seattle, WA. She was a successful student in high school, graduating valedictorian, and she obtained a National Merit Scholarship for her performance on the PSAT during her junior year. She was accepted to a university on the far eastern side of the state, where she received additional scholarships which together give her a free ride for all four years of her undergraduate education. Excited to start this new chapter in her life, Cassie and her parents make the five-hour drive to Pullman, where they will leave their only daughter on her own for the first time in her life. The semester begins as it always does in late August. Cassie meets the challenge head-on and does well in all of her classes for the first few weeks of the semester, as expected. Sometime around Week 6, her friends notice she is despondent, detached, and falling behind in her work. When asked about her condition she replies that she is “just a bit homesick.” Her friends accept the answer, as this is a typical response to leaving home and starting college for many students. A month later her condition has not improved but has actually worsened. She now regularly shirks her responsibilities around her apartment, in her classes, and on her job. Cassie does not hang out with friends like she did when she first arrived at college and stays in bed most of the day. Concerned, her friends contact Health and Wellness for help.
Cassie’s story, though hypothetical, is true of many freshmen leaving home for the first time to earn a higher education, whether in rural Washington state or in urban areas such as Chicago and Dallas. Most students recover from episodes of depression and go on to be functional members of their collegiate environment and accomplished scholars. Some learn to cope on their own while others seek assistance from their university’s health and wellness center or from friends who have already been through similar ordeals. This is a normal reaction. But in Cassie’s case and that of other students, the path to recovery is not as clear and, instead of learning how to cope, their depression increases until it reaches clinical levels and becomes an impediment to success in multiple domains of life such as home, work, school, and social circles.
In Chapter 1, we will explore what it means to display abnormal behavior, how mental disorders are classified and how society views them both today and has throughout history.
Chapter Outline
• 1.1. Understanding Abnormal Behavior
• 1.2. Classifying Mental Disorders
• 1.3. The History of Mental Illness
Chapter Learning Outcomes
• Explain what it means to display abnormal behavior.
• Identify types of mental health professionals
• Clarify the manner in which mental health professionals classify mental disorders.
• Describe the effect of stigma on those afflicted with mental illness.
• Outline the history of mental illness.
Section Learning Objectives
• Define and exemplify classification.
• Define nomenclature.
• Define epidemiology.
• Define presenting problem and clinical description.
• Differentiate prevalence and incidence and subtypes of prevalence.
• Define comorbidity.
• Define etiology.
• Define course.
• Define prognosis.
• Define treatment.
• Explain the concept of stigma and its three forms.
• Define courtesy stigma.
• Describe what the literature shows about stigma.
Classification and Definitions
Classification
Classification is not a foreign concept and as a student, you have likely taken at least one biology class that discussed the taxonomic classification system of Kingdom, Phylum, Class, Order, Family, Genus, and Species revolutionized by the Swedish botanist Carl Linnaeus. You probably even learned a witty mnemonic such as ‘King Phillip, Come Out For Goodness Sake’ to keep the order straight. The Library of Congress uses classification to organize and arrange their book collections and includes such categories as B – Philosophy, Psychology, and Religion; H – Social Sciences; N – Fine Arts; Q – Science; R – Medicine; and T – Technology.
Simply, classification is the way in which we organize or categorize things. The second author’s wife has been known to color code her DVD collection by genre, movie title, and at times release date. It is useful for us to do the same with abnormal behavior and classification provides us with a nomenclature, or naming system, to structure our understanding of mental disorders in a meaningful way. Of course, we want to learn as much as we can about a given disorder so we can understand its cause, predict its future occurrence, and develop ways to treat it.
Definitions
Epidemiology is the scientific study of the frequency and causes of diseases and other health-related states in specific populations such as a school, neighborhood, a city, country, or the entire world. Psychiatric or mental health epidemiology refers to the study of the frequency of occurrence of mental disorders in a population. In mental health facilities, we say that a patient presents with a specific problem, or the presenting problem, and we give a clinical description of it which includes information about the thoughts, feelings, and behaviors that constitute that mental disorder. We also seek to gain information about the occurrence of the disorder, its cause, course, and treatment possibilities.
Occurrence can be investigated in several ways. First, prevalence is the percentage of people in a population that has a mental disorder. It can also be conceptualized as the number of cases of the disorder per some number of people (usually 100). For instance, if 1 person out of 100 has schizophrenia, then the prevalence rate is 1% (or 1 in 100). Prevalence can be measured in several ways:
• Point prevalence indicates the percentage of a population that has the disorder at a specific point in time. In other words, it is the number of active cases at a given point in time.
• Period prevalence indicates the percentage of a population that has the disorder at any point during a given period of time, typically the past year (Note: when it is the past year it may also be referred to as the one-year prevalence).
• Lifetime prevalence indicates the percentage of a population that has had the disorder at any time during their lives.
According to the National Survey on Drug Use and Health (NSDUH), in 2015 there were an estimated 9.8 million U.S. adults aged 18 years or older with a serious mental illness (4% of all U.S. adults) and 43.4 million adults aged 18 years or older with any mental illness (17.9% of all U.S. adults).
Source: https://www.nimh.nih.gov/health/statistics/prevalence/index.shtml
Incidence indicates the number of new cases in a population over a specific period of time. This measure is usually lower since it does not include existing cases as prevalence does. If you wish to know the number of new cases of social phobia during the past year (going from say Aug 21, 2015 to Aug 20, 2016), you would only count cases that began during this time and ignore cases that emerged before the start date, even if people are currently afflicted with the mental disorder. Incidence is often studied by medical and public health officials so that causes can be identified and future cases prevented.
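These occurrence measures all reduce to counting cases against a population. The sketch below, using a hypothetical population of 1,000 people and invented case records (onset and remission dates are made up for illustration), computes point prevalence, one-year period prevalence, lifetime prevalence, and one-year incidence:

```python
from datetime import date

population = 1000  # hypothetical community sample

# Hypothetical case records: (onset, remission); remission None = still active
cases = [
    (date(2014, 3, 1), date(2014, 9, 1)),    # resolved before the survey year
    (date(2015, 11, 5), None),               # began during the year, still active
    (date(2016, 2, 10), None),               # began during the year, still active
    (date(2013, 6, 20), date(2016, 1, 15)),  # pre-existing, active into the year
]

survey_date = date(2016, 8, 20)
year_start, year_end = date(2015, 8, 21), date(2016, 8, 20)

def active_on(case, day):
    """A case counts as active if onset has occurred and remission has not."""
    onset, remission = case
    return onset <= day and (remission is None or remission > day)

# Point prevalence: active cases on one specific day
point = sum(active_on(c, survey_date) for c in cases) / population
# Period prevalence: active at any point during the year
period = sum(c[0] <= year_end and (c[1] is None or c[1] >= year_start)
             for c in cases) / population
# Lifetime prevalence: anyone who has ever had the disorder
lifetime = len(cases) / population
# Incidence: only NEW cases (onset within the year); existing cases are excluded
incidence = sum(year_start <= c[0] <= year_end for c in cases) / population

print(point, period, lifetime, incidence)  # → 0.002 0.003 0.004 0.002
```

Note how incidence (2 new cases) is lower than period prevalence (3 active cases), exactly because the pre-existing case is counted by prevalence but not by incidence.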
Comorbidity describes when two or more mental disorders occur at the same time in the same person. The National Comorbidity Survey Replication (NCS-R) study, conducted by the National Institute of Mental Health (NIMH) and published in the June 6, 2005 issue of the Archives of General Psychiatry, sought to discover trends in prevalence, impairment, and service use during the 1990s. It revealed that 45% of those with one mental disorder met the diagnostic criteria for two or more disorders. The authors also found that the severity of mental illness, with regard to disability, is strongly related to comorbidity and that substance use disorders often result from disorders such as anxiety and bipolar mood disorders. The implications of this are substantial, as services to treat substance abuse and mental disorders are often separate, despite the two appearing together.
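The NCS-R’s 45% figure is, at bottom, a proportion computed over people with at least one diagnosis. A toy illustration (the people and diagnosis sets below are invented, not survey data):

```python
# Hypothetical diagnosis records: person -> set of current diagnoses
diagnoses = {
    "p1": {"major depression"},
    "p2": {"social anxiety", "alcohol use disorder"},  # comorbid
    "p3": {"bipolar I", "substance use disorder"},     # comorbid
    "p4": set(),                                       # no disorder
    "p5": {"panic disorder"},
}

# Denominator: people with at least one disorder (p4 is excluded)
with_any = [d for d in diagnoses.values() if d]
# Numerator: people meeting criteria for two or more disorders
comorbid = [d for d in with_any if len(d) >= 2]

print(f"{len(comorbid) / len(with_any):.0%} of those with at least one "
      "disorder meet criteria for two or more")  # → 50% ...
```

The key design point is the denominator: comorbidity rates are computed among the already-diagnosed, not the whole population.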
The etiology is the cause of the disorder. As you will see later in this textbook, there is no single cause of any mental disorder. Rather, there are multiple factors that contribute to increase a person’s susceptibility to developing a mental disorder. These factors include social, biological, or psychological explanations which need to be understood to identify the appropriate treatment. Likewise, the effectiveness of a treatment may give some hint at the cause of the mental disorder. More on this later.
The course of the disorder is its particular pattern. A disorder may be chronic, meaning it lasts a long period of time, or episodic, meaning the disorder comes and goes (i.e., individuals tend to recover only to have later recurrences). Disorders can also be time-limited, meaning that recovery will occur in a short period of time regardless of whether any treatment occurs.
Prognosis is the anticipated course the mental disorder will take. A key factor in determining the course is age, with some disorders presenting differently in childhood than adulthood.
Finally, we will discuss several treatment strategies in this book in relation to specific disorders, and in a general fashion in Module 3. Treatment is any procedure intended to modify abnormal behavior into normal behavior. The person with the mental disorder seeks the assistance of a trained professional to obtain some degree of relief over a series of therapy sessions. The mental health professional may utilize psychotherapy, and/or medication may be prescribed, to bring about this change. Treatment may be sought from a primary care provider (e.g., medical doctor), in an outpatient fashion with a clinical psychologist or psychiatrist, or through inpatient care or hospitalization at a mental hospital or the psychiatric unit of a general hospital.
The Stigma of Mental Disorders
In the previous section, we indicated that care can be sought out in a variety of ways. The problem is that many people who need care never seek it out. Why is that? We already know that society dictates what is considered abnormal behavior through culture and social norms, and you can likely think of a few implications of that. But to fully understand society’s role in why people do not seek care, we need to consider the stigma that is often attached to the label mental disorder.
Stigma refers to when negative stereotyping, labeling, rejection, and loss of status occur. Stigma often takes on three forms as described below:
• Public stigma – when members of a society endorse negative stereotypes of people with a mental disorder and discriminate against them. They might avoid them altogether resulting in social isolation. An example is when an employer intentionally does not hire a person because their mental illness is discovered.
• Label avoidance – In order to avoid being labeled as “crazy,” people needing care may avoid seeking it altogether or stop care once started. Due to these labels, funding for mental health services could be restricted and physical health services funded instead.
• Self-stigma – When people with mental illnesses internalize the negative stereotypes and prejudice and, in turn, discriminate against themselves. They may experience shame, reduced self-esteem, hopelessness, low self-efficacy, and a reduction in coping mechanisms. An obvious consequence of these potential outcomes is the why try effect, or the person saying, ‘Why should I try and get that job? I am not worthy of it’ (Corrigan, Larson, & Rusch, 2009; Corrigan et al., 2016).
Another form of stigma that is worth noting is courtesy stigma, or when stigma affects people associated with the person with a mental disorder. Karnieli-Miller et al. (2013) found that families of the afflicted were often blamed, rejected, or devalued when others learned that one of their family members had a serious mental illness. Due to this, they felt hurt and betrayed, and an important source of social support during the difficult time had been removed, resulting in greater levels of stress. To cope, they decided to conceal their relative’s illness, and some parents struggled to decide whether it was their place to disclose information about their child’s mental illness or their child’s place to do so. Others wrestled with whether to confront the stigma through attempts at education or to simply ignore it, due to not having enough energy or a desire to maintain personal boundaries. There was also a need to understand the responses of others and to attribute those responses to a lack of knowledge, experience, and/or media coverage. In some cases, this reappraisal allowed family members to feel compassion for others rather than feeling put down or blamed. The authors concluded that each family “develops its own coping strategies which vary according to its personal experiences, values, and extent of other commitments” and that the “coping strategies families employ change over-time.”
Other effects of stigma include experiencing work-related discrimination resulting in higher levels of self-stigma and stress (Rusch et al., 2014), higher rates of suicide especially when treatment is not available (Rusch, Zlati, Black, and Thornicroft, 2014; Rihmer & Kiss, 2002), and a decreased likelihood of future help-seeking (Lally et al., 2013). The results of the latter study also showed that personal contact with someone with a history of mental illness led to a decreased likelihood of seeking help. This is important because 48% of the sample stated that they needed help for an emotional or mental health issue during the past year but did not seek help. Similar results have been reported in other studies (Eisenberg, Downs, Golberstein, & Zivin, 2009). It is important to also point out that social distance, a result of stigma, has also been shown to increase throughout the lifespan, suggesting that anti-stigma campaigns should focus primarily on older people (Schomerus, et al., 2015).
One potentially disturbing trend is that mental health professionals have been shown to hold negative attitudes toward the people they serve. Hansson et al. (2013) found that staff members at an outpatient clinic in southern Sweden held the most negative attitudes about whether an employer would accept an applicant for work, willingness to date a person who had been hospitalized, and hiring a patient to care for children. Attitudes were more negative among staff who treated patients with psychosis or who worked in inpatient settings. In a similar study, Martensson, Jacobsson, and Engstrom (2014) found that staff held more positive attitudes toward persons with mental illness if their knowledge of such disorders was less stigmatized; if their workplaces were in the county council, where they were more likely to encounter patients who recover and return to normal life in society, rather than in municipalities, where patients have long-term and recurrent mental illness; and if they had, or once had, a close friend with mental health issues.
To help deal with stigma in the mental health community, Papish et al. (2013) investigated the effect of a one-time contact-based educational intervention compared to a four-week mandatory psychiatry course on the stigma of mental illness among medical students at the University of Calgary. The course included two methods involving contact with people who had been diagnosed with a mental disorder: patient presentations, in which patients shared their stories of living with a mental illness across two one-hour oral sessions; and “clinical correlations,” in which students were mentored by a psychiatrist while directly interacting with patients with a mental illness in either inpatient or outpatient settings. Results showed that medical students did hold stigmatizing attitudes towards mental illness and that comprehensive medical education can reduce this stigma. As the authors stated, “These results suggest that it is possible to create an environment in which medical student attitudes towards mental illness can be shifted in a positive direction.” That said, the level of stigma was still higher for mental illness than for a stigmatized physical illness, type 2 diabetes mellitus.
What might happen if mental illness were presented as a treatable condition? McGinty, Goldman, Pescosolido, and Barry (2015) found that portraying schizophrenia, depression, and heroin addiction as untreated and symptomatic increased negative public attitudes toward people with these conditions. When the same people were portrayed as successfully treated, however, the desire for social distance was reduced, there was less willingness to discriminate against them, and belief in treatment’s effectiveness increased.
Self-stigma has also been shown to affect self-esteem, which then affects hope, which then affects the quality of life of people with serious mental illnesses. As such, hope should play a central role in recovery (Mashiach-Eizenberg et al., 2013). Narrative Enhancement and Cognitive Therapy (NECT) is an intervention designed to reduce internalized stigma and targets both hope and self-esteem (Yanos et al., 2011). The intervention replaces stigmatizing myths with facts about the illness and recovery which leads to hope in clients and greater levels of self-esteem. This may then reduce susceptibility to internalized stigma.
Stigma has been shown to lead to health inequities (Hatzenbuehler, Phelan, & Link, 2013), prompting calls for stigma change. Targeting stigma leads to two different agendas. The services agenda attempts to remove stigma so the person can seek mental health services, while the rights agenda tries to replace discrimination that “robs people of rightful opportunities with affirming attitudes and behavior” (Corrigan, 2016). The former is successful when there is evidence that people with mental illness are seeking services more or becoming better engaged, while the latter is successful when there is an increase in the number of people with mental illnesses in the workforce and receiving reasonable accommodations. The federal government has tackled this issue with landmark legislation such as the Patient Protection and Affordable Care Act of 2010, the Mental Health Parity and Addiction Equity Act of 2008, and the Americans with Disabilities Act of 1990. However, protections are not uniform across all subgroups due to “1) explicit language about inclusion and exclusion criteria in the statute or implementation rule, 2) vague statutory language that yields variation in the interpretation about which groups qualify for protection, and 3) incentives created by the legislation that affect specific groups differently” (Cummings, Lucas, & Druss, 2013).
Section Learning Objectives
• Describe prehistoric and ancient beliefs about mental illness.
• Describe Greco-Roman thought on mental illness.
• Describe thoughts on mental illness during the Middle Ages.
• Describe thoughts on mental illness during the Renaissance.
• Describe thoughts on mental illness during the 18th and 19th centuries.
• Describe thoughts on mental illness during the 20th and 21st centuries.
• Outline the use of psychoactive drugs throughout time and their impact.
• Outline Freud’s theories and approaches to mental illness.
• Describe and provide examples of the various defense mechanisms.
As we have seen so far, what is considered abnormal behavior is often dictated by the culture/society a person lives in, and unfortunately, the past has not treated the afflicted very well. In this section, we will examine how past societies viewed and dealt with mental illness.
Prehistoric and Ancient Beliefs
Prehistoric cultures often held a supernatural view of abnormal behavior and saw it as the work of evil spirits, demons, gods, or witches who took control of the person. This form of demonic possession was believed to occur when the person engaged in behavior contrary to the religious teachings of the time. Treatment by cave dwellers included a technique called trephination, in which a stone instrument known as a trephine was used to remove part of the skull, creating an opening. They believed that evil spirits could escape through the hole in the skull, thereby ending the person’s mental affliction and returning them to normal behavior. Early Greek, Hebrew, Egyptian, and Chinese cultures used a treatment method called exorcism in which evil spirits were cast out through prayer, magic, flogging, starvation, noise-making, or having the person ingest horrible tasting drinks.
Greco-Roman Thought
Rejecting the idea of demonic possession, Greek physician, Hippocrates (460-377 B.C.), said that mental disorders were akin to physical disorders and had natural causes. Specifically, he suggested that they arose from brain pathology, or head trauma/brain dysfunction or disease, and were also affected by heredity. Hippocrates classified mental disorders into three main categories – melancholia, mania, and phrenitis (brain fever) and gave detailed clinical descriptions of each. He also described four main fluids or humors that directed normal functioning and personality – blood which arose in the heart, black bile arising in the spleen, yellow bile or choler from the liver, and phlegm from the brain. Mental disorders occurred when the humors were in a state of imbalance such as an excess of yellow bile causing frenzy/mania and too much black bile causing melancholia/depression. Hippocrates believed mental illnesses could be treated as any other disorder and focused on the underlying pathology.
Also important was Greek philosopher, Plato (429-347 B.C.), who said that the mentally ill were not responsible for their own actions and so should not be punished. He emphasized the role of social environment and early learning in the development of mental disorders and believed it was the responsibility of the community and their families to care for them in a humane manner using rational discussions. Greek physician, Galen (A.D. 129-199) said mental disorders had either physical or mental causes that included fear, shock, alcoholism, head injuries, adolescence, and changes in menstruation.
In Rome, physician Asclepiades (124-40 BC) and philosopher Cicero (106-43 BC) rejected Hippocrates’ idea of the four humors and instead stated that melancholy arises from grief, fear, and rage, not excess black bile. Roman physicians treated mental disorders with massage and warm baths, with the hope that their patients would be as comfortable as possible. They practiced the concept of “contrariis contrarius,” meaning opposite by opposite, and introduced contrasting stimuli to bring about balance in the physical and mental domains. An example would be consuming a cold drink while in a warm bath.
The Middle Ages – 500 AD to 1500 AD
The progress made during the time of the Greeks and Romans was quickly reversed during the Middle Ages with the increase in power of the Church and the fall of the Roman Empire. Mental illness was yet again explained as possession by the Devil and methods such as exorcism, flogging, prayer, the touching of relics, chanting, visiting holy sites, and holy water were used to rid the person of the Devil’s influence. In extreme cases, the afflicted were confined, beat, and even executed. Scientific and medical explanations, such as those proposed by Hippocrates, were discarded at this time.
Group hysteria, or mass madness, was also seen in which large numbers of people displayed similar symptoms and false beliefs. This included the belief that one was possessed by wolves or other animals and imitated their behavior, called lycanthropy, and a mania in which large numbers of people had an uncontrollable desire to dance and jump, called tarantism. The latter was believed to have been caused by the bite of the wolf spider, now called the tarantula, and spread quickly from Italy to Germany and other parts of Europe where it was called Saint Vitus’s dance.
Perhaps the return to supernatural explanations during the Middle Ages makes sense given events of the time. The Black Death or Bubonic Plague had killed up to a third, and according to other estimates almost half, of the population. Famine, war, social oppression, and pestilence were also factors. Death was ever present which led to an epidemic of depression and fear. Nevertheless, near the end of the Middle Ages, mystical explanations for mental illness began to lose favor and government officials regained some of their lost power over nonreligious activities. Science and medicine were once again called upon to explain mental disorders.
The Renaissance – 14th to 16th Centuries
The most noteworthy development in the realm of philosophy during the Renaissance was the rise of humanism, or the worldview that emphasizes human welfare and the uniqueness of the individual. This helped continue the decline of supernatural views of mental illness. In the mid to late 1500s, Johann Weyer (1515-1588), a German physician, published his book On the Deceits of the Demons, which rebutted the Church’s witch-hunting handbook, the Malleus Maleficarum, and argued that many of those accused of being witches and subsequently imprisoned, tortured, hung, and/or burned at the stake were mentally disturbed and not possessed by demons or the Devil himself. He believed that, like the body, the mind was susceptible to illness. Not surprisingly, the book was met with vehement protest and was even banned by the Church. It should be noted that these types of acts occurred not only in Europe but also in the United States. The most famous example was the Salem Witch Trials of 1692, in which more than 200 people were accused of practicing witchcraft and 20 were killed.
The number of asylums, or places of refuge for the mentally ill where they could receive care, began to rise during the 16th century as the government realized there were far too many people afflicted with mental illness to be left in private homes. Hospitals and monasteries were converted into asylums. Though the intent was benign in the beginning, as the asylums began to overflow, patients came to be treated more like animals than people. In 1547, the Bethlem Hospital opened in London with the sole purpose of confining those with mental disorders. Patients were chained up, placed on public display, and often heard crying out in pain. The asylum became a tourist attraction, with sightseers paying a penny to view the more violent patients, and soon came to be called “Bedlam” by local people; a term that today means “a state of uproar and confusion” (https://www.merriam-webster.com/dictionary/bedlam).
Reform Movement – 18th to 19th Centuries
The rise of the moral treatment movement occurred in Europe in the late 18th century and then in the United States in the early 19th century. Its earliest proponent was Phillipe Pinel (1745-1826), who was assigned as the superintendent of la Bicetre, a hospital for mentally ill men in Paris. He emphasized the importance of affording the mentally ill respect, moral guidance, and humane treatment, all while considering their individual, social, and occupational needs. Arguing that the mentally ill were sick people, Pinel ordered that chains be removed, outside exercise be allowed, sunny and well-ventilated rooms replace dungeons, and patients be extended kindness and support. This approach led to considerable improvement for many of the patients, so much so that several were released.
Following Pinel’s lead in England, William Tuke (1732-1822), a Quaker tea merchant, established a pleasant rural estate called the York Retreat. The Quakers believed that all people should be accepted for who they were and treated kindly. At the retreat, patients could work, rest, talk out their problems, and pray (Raad & Makari, 2010). The work of Tuke and others led to the passage of the County Asylums Act of 1845 which required that every county in England and Wales provide asylum to the mentally ill. This was even extended to English colonies such as Canada, India, Australia, and the West Indies as word of the maltreatment of patients at a facility in Kingston, Jamaica spread, leading to an audit of colonial facilities and their policies.
Reform in the United States started with the figure largely considered to be the father of American psychiatry, Benjamin Rush (1745-1813). Rush advocated for the humane treatment of the mentally ill, showing them respect, and even giving them small gifts from time to time. Despite this, his practice included treatments such as bloodletting and purgatives, the invention of the “tranquilizing chair,” and a reliance on astrology, showing that even he could not escape from the beliefs of the time.
Due to the rise of the moral treatment movement in both Europe and the United States, asylums became habitable places where those afflicted with mental illness could recover. However, it is often said that the moral treatment movement was a victim of its own success. The number of mental hospitals greatly increased, leading to staffing shortages and a lack of funds to support them. Though treating patients humanely was a noble endeavor, it did not work for some, and other treatments were needed, though none had yet been developed. It was also recognized that the approach worked best when the facility had 200 or fewer patients, yet waves of immigrants arriving in the U.S. after the Civil War were overwhelming the facilities, with patient counts soaring to 1,000 or more. Prejudice against the new arrivals led to discriminatory practices in which immigrants were not afforded the moral treatment provided to native citizens, even when the resources were available to treat them.
Another leader in the moral treatment movement was Dorothea Dix (1802-1887), a New Englander who observed the deplorable conditions suffered by the mentally ill while teaching Sunday school to female prisoners. She instigated the mental hygiene movement, which focused on the physical well-being of patients. Over the span of 40 years, from 1841 to 1881, she motivated people and state legislators to do something about this injustice and raised millions of dollars to build over 30 more appropriate mental hospitals and improve others. Her efforts even extended beyond the U.S. to Canada and Scotland.
Finally, in 1908 Clifford Beers (1876-1943) published his book, A Mind that Found Itself, in which he described his personal struggle with bipolar disorder and the “cruel and inhumane treatment people with mental illnesses received. He witnessed and experienced horrific abuse at the hands of his caretakers. At one point during his institutionalization, he was placed in a straightjacket for 21 consecutive nights.” (http://www.mentalhealthamerica.net/our-history). His story aroused sympathy in the public and led him to found the National Committee for Mental Hygiene, known today as Mental Health America, which provides education about mental illness and the need to treat these people with dignity. Today, MHA has over 200 affiliates in 41 states and employs 6,500 affiliate staff and over 10,000 volunteers.
For more information on MHA, please visit: http://www.mentalhealthamerica.net/
20th – 21st Centuries
The decline of the moral treatment approach in the late 19th century led to the rise of two competing perspectives – the biological or somatogenic perspective and the psychological or psychogenic perspective.
Biological or Somatogenic Perspective
Recall that Greek physicians Hippocrates and Galen said that mental disorders were akin to physical disorders and had natural causes. Though the idea fell into oblivion for several centuries, it re-emerged in the late 19th century for two reasons. First, German psychiatrist Emil Kraepelin (1856-1926) discovered that symptoms occurred regularly in clusters, which he called syndromes. Each syndrome represented a unique mental disorder with its own cause, course, and prognosis. In 1883 he published his textbook, Compendium der Psychiatrie (Textbook of Psychiatry), in which he described a system for classifying mental disorders that became the basis of the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders (DSM), currently in its 5th edition (published in 2013).
Second, in 1825 the behavioral and cognitive symptoms of advanced syphilis, including delusions (e.g., false beliefs that everyone is plotting against you or that you are God), were identified and termed general paresis by French physician A.L.J. Bayle. In 1897, Viennese psychiatrist Richard von Krafft-Ebing injected patients with general paresis with matter from syphilitic sores and noted that none of the patients developed symptoms of syphilis, indicating they must have been previously exposed and were now immune. This led to the conclusion that syphilis (a bacterial infection) was the cause of general paresis. In 1906, August von Wassermann developed a blood test for syphilis, and in 1917 a cure was stumbled upon. Julius von Wagner-Jauregg noticed that patients with general paresis who contracted malaria recovered from their symptoms. To test this observation, he injected nine paretic patients with blood from a soldier afflicted with malaria. Three of the patients fully recovered, while three others showed great improvement in their paretic symptoms; the high fever caused by malaria had burned out the syphilis bacteria. By 1925, hospitals in the United States had begun incorporating this new cure for paresis into their treatment approach.
Also noteworthy was the work of American psychiatrist John P. Grey. Appointed as superintendent of the Utica State Hospital in New York, Grey asserted that insanity always had a physical cause. As such, the mentally ill should be seen as physically ill and treated with rest, proper room temperature and ventilation, and a proper diet.
The 1930s also saw the use of electric shock as a treatment method, an idea stumbled upon accidentally by Benjamin Franklin while experimenting with electricity in the mid-18th century. He noticed that after experiencing a severe shock his memories had changed, and in published work he suggested that physicians study electric shock as a treatment for melancholia.
Beginning in the 1950s, psychiatric or psychotropic drugs were used for the treatment of mental illness and made an immediate impact. Though drugs alone cannot cure mental illness, they can improve symptoms. Classes of psychiatric drugs include antidepressants, used to treat depression and anxiety; mood-stabilizing medications, to treat bipolar disorder; antipsychotic drugs, to treat schizophrenia and other psychotic disorders; and anti-anxiety drugs, used to treat generalized anxiety disorder or panic disorder.
Frank (2006) found that by 1996, psychotropic drugs were used in 77% of mental health cases, and spending on these drugs to treat mental disorders grew from $2.8 billion in 1987 to about $18 billion in 2001 (Coffey et al., 2000; Mark et al., 2005), a greater than sixfold increase. The largest classes of psychotropic drugs are antipsychotic and antidepressant medications, followed closely by anti-anxiety medications. Frank, Conti, and Goldman (2005) point out, “The expansion of insurance coverage for prescription drugs, the introduction and diffusion of managed behavioral health care techniques, and the conduct of the pharmaceutical industry in promoting their products all have influenced how psychotropic drugs are used and how much is spent on them.” Is it possible, then, that we are overprescribing these medications? Davey (2014) provides ten reasons why this may be so, including individuals coming to believe that recovery is not in their own hands but in those of their doctors, increased risk of relapse, drug companies causing the “medicalization of perfectly normal emotional processes, such as bereavement” to ensure their own survival, side effects, and a failure to change either the way the person thinks or the socioeconomic environments that may be the cause of the disorder. For more on this article, please see: https://www.psychologytoday.com/blog/why-we-worry/201401/overprescribing-drugs-treat-mental-health-problems. Smith (2012) echoed similar sentiments in an article on inappropriate prescribing, citing the approval of Prozac by the Food and Drug Administration (FDA) in 1987 as the beginning of the issue and the overmedication/overdiagnosis of children with ADHD as a more recent example.
A result of the use of psychiatric drugs was deinstitutionalization or the release of patients from mental health facilities. This shifted resources from inpatient to outpatient care and placed the spotlight back on the biological or somatogenic perspective. Today, when people with severe mental illness do need inpatient care, it is typically in the form of short-term hospitalization.
Psychological or Psychogenic Perspective
The psychological or psychogenic perspective states that emotional or psychological factors are the cause of mental disorders and represented a challenge to the biological perspective.
The History of Hypnosis
This perspective had a long history but did not gain favor until the work of Viennese physician Franz Anton Mesmer (1734-1815). Influenced heavily by Newton’s theory of gravity, he believed that the planets also affected the human body through the force of animal magnetism and that all people had a universal magnetic fluid that determined how healthy they were. He demonstrated the usefulness of his approach when he cured Franzl Oesterline, a 27-year-old woman experiencing what he described as a convulsive malady. Mesmer used a magnet to disrupt the gravitational tides that he believed were affecting his patient and produced a sensation of the magnetic fluid draining from her body. This removed the illness from her body and produced a near-instantaneous recovery. In reality, the patient had been placed in a trancelike state that made her highly suggestible. With other patients, Mesmer would have them sit in a darkened room filled with soothing music, into which he would enter dressed in a colorful robe and pass from person to person, touching the afflicted area of each patient’s body with his hand or a special rod/wand. He successfully cured deafness, paralysis, loss of bodily feeling, convulsions, menstrual difficulties, and blindness.
His approach gained him celebrity status as he demonstrated it at the courts of English nobility. The medical community was hardly impressed. A royal commission was formed to investigate his technique but could not find any proof for his theory of animal magnetism. Though he was able to cure patients when they touched his “magnetized” tree, the result was the same when “non-magnetized” trees were touched. As such, Mesmer was deemed a charlatan and forced to leave Paris. His technique was called mesmerism, and today we know it as an early form of hypnosis.
The psychological perspective gained popularity after two physicians practicing in the city of Nancy in France discovered that they could induce the symptoms of hysteria in perfectly healthy patients through hypnosis and then remove the symptoms in the same way. The work of Hippolyte-Marie Bernheim (1840-1919) and Ambroise-Auguste Liebault (1823-1904) came to be part of what was called the Nancy School and showed that hysteria was nothing more than a form of self-hypnosis. In Paris, this view was challenged by Jean Charcot (1825-1893), who stated that hysteria was caused by degenerative brain changes, reflecting the biological perspective. He was proven wrong and eventually came around to the Nancy School’s way of thinking.
The use of hypnosis to treat hysteria was also carried out by fellow Frenchman Pierre Janet (1859-1947), a student of Charcot who believed that hysteria had psychological, not biological, causes. Namely, these included unconscious forces, fixed ideas, and memory impairments. In Vienna, Josef Breuer (1842-1925) induced hypnosis and had patients speak freely about past events that upset them. Upon waking, he discovered that patients sometimes were free of their symptoms of hysteria. Success was even greater when patients not only recalled forgotten memories but also relived them emotionally. He called this the cathartic method, and our use of the word catharsis today indicates a purging or release, in this case of pent-up emotion. Sigmund Freud’s development of psychoanalysis followed on the heels of the work of Breuer and others who came before him.
Psychodynamic Theory
In 1895, the book Studies on Hysteria was published by Josef Breuer (1842-1925) and Sigmund Freud (1856-1939) and marked the birth of psychoanalysis, though Freud did not use the actual term until a year later. The book presented several case studies, including that of Anna O., born February 27, 1859, in Vienna to Jewish parents Siegmund and Recha Pappenheim, strict Orthodox adherents considered millionaires at the time. Bertha, known in the published case studies as Anna O., was expected to complete the formal education of a girl in the upper middle class, which included foreign language, religion, horseback riding, needlepoint, and piano. She felt confined and suffocated in this life and took to a fantasy world she called her “private theater.” Anna also developed hysteria, with symptoms including memory loss, paralysis, disturbed eye movements, reduced speech, nausea, and mental deterioration. Her symptoms appeared as she cared for her dying father, and her mother called on Breuer to diagnose her condition (note that Freud never actually treated her). Hypnosis was used at first and relieved her symptoms, as it had done for many patients (see Chapter 1). Breuer made daily visits and allowed her to share stories from her private theater, which she came to call the “talking cure” or “chimney sweeping.” Many of the stories she shared were actually thoughts or events she found troubling, and reliving them helped to relieve or eliminate the symptoms. Breuer’s wife, Mathilde, became jealous of her husband’s relationship with the young girl, leading Breuer to terminate treatment in June of 1882, before Anna had fully recovered. She relapsed and was admitted to Bellevue Sanatorium on July 1, eventually being released in October of the same year. With time, Anna O. did recover from her hysteria and went on to become a prominent member of the Jewish community, involving herself in social work, volunteering at soup kitchens, and becoming “House Mother” at an orphanage for Jewish girls in 1895. Bertha (Anna O.) became involved in the German feminist movement, and in 1904 founded the League of Jewish Women. She published many short stories; a play, Women’s Rights, in which she criticized the economic and sexual exploitation of women; and a book in 1900 called The Jewish Problem in Galicia, in which she blamed the poverty of the Jews of Eastern Europe on their lack of education. In 1935 she was diagnosed with a tumor, and in 1936 she was summoned by the Gestapo to explain anti-Hitler statements she had allegedly made. She died shortly after this interrogation on May 28, 1936. Freud considered the talking cure of Anna O. to be the origin of psychoanalytic therapy and what would come to be called the cathartic method.
The Structure of Personality.
Freud’s psychoanalysis was unique in the history of psychology because it did not arise within universities, as most of the major schools of thought in our history did, but from medicine and psychiatry; it dealt with psychopathology and examined the unconscious. Freud believed that consciousness had three levels: 1) the conscious, which was the seat of our awareness; 2) the preconscious, which included all of our sensations, thoughts, memories, and feelings; and 3) the unconscious, which was not available to us. The contents of the unconscious could move from the unconscious to the preconscious, but to do so they had to pass a gatekeeper. Content that was turned away was said by Freud to be repressed.
According to Freud, our personality has three parts: the id, the superego, and the ego; from these, our behavior arises. First, the id is the impulsive part that expresses our sexual and aggressive instincts. It is present at birth, completely unconscious, and operates on the pleasure principle, resulting in our selfishly seeking immediate gratification of our needs no matter what the cost. The second part of personality emerges after birth with early formative experiences and is called the ego. The ego attempts to mediate the desires of the id against the demands of reality, and eventually against the moral limitations or guidelines of the superego. It operates on the reality principle, or an awareness of the need to adjust behavior to meet the demands of our environment. The last part of personality to develop is the superego, which represents society’s expectations, moral standards, and rules, and serves as our conscience. It leads us to adopt our parents’ values as we come to realize that many of the id’s impulses are unacceptable. Still, we violate these values at times, which leads to feelings of guilt. The superego is partly conscious but mostly unconscious. The three parts of personality generally work together well and compromise, leading to a healthy personality, but if conflicts among these components are not resolved, intrapsychic conflicts can arise and lead to mental disorders.
The Development of Personality.
Freud also proposed that personality develops over the course of five distinct stages (oral, anal, phallic, latency, genital), in which the libido is focused on different parts of the body. The libido is the psychic energy that drives a person to pleasurable thoughts and behaviors. Our life instincts, or Eros, are manifested through it and are the creative forces that sustain life. They include hunger, thirst, self-preservation, and sex. In contrast, Thanatos, our death instinct, is either directed inward, as in the case of suicide and masochism, or outward, via hatred and aggression. Both types of instincts are sources of stimulation in the body and create an unpleasant state of tension, thereby motivating us to reduce them. Consider hunger and the associated rumbling of our stomach, fatigue, lack of energy, and so on, which motivate us to find and eat food. If we are angry at someone, we may engage in physical or relational aggression to alleviate this stimulation.
Freud’s psychosexual stages of personality development are listed below. Freud proposed that a person may become fixated at any stage, meaning they become stuck, thereby affecting later development and possibly leading to abnormal functioning, or psychopathology.
1. Oral Stage – Beginning at birth and lasting to 24 months, the libido is focused on the mouth and sexual tension is relieved by sucking and swallowing at first, and then later by chewing and biting as baby teeth come in. Fixation is linked to a lack of confidence, argumentativeness, and sarcasm.
2. Anal Stage – Lasting from 2-3 years, the libido is focused on the anus as toilet training occurs. If parents are too lenient children may become messy or unorganized. If parents are too strict, children may become obstinate, stingy, or orderly.
3. Phallic Stage – Occurring from about age 3 to 5-6 years, the libido is focused on the genitals. The Oedipus complex develops in boys and results in the son falling in love with his mother while fearing that his father will find out and castrate him. Meanwhile, girls fall in love with the father and fear that their mother will find out, called the Electra complex. A fixation at this stage may result in low self-esteem, feelings of worthlessness, and shyness.
4. Latency Stage – From 6-12 years of age, children lose interest in sexual behavior and boys play with boys and girls with girls. Neither sex pays much attention to the opposite sex.
5. Genital Stage – Beginning at puberty, sexual impulses reawaken and unfulfilled desires from infancy and childhood can be satisfied with sex.
Defense Mechanisms.
The ego has a challenging job to fulfill, balancing both the will of the id and the superego and the overwhelming anxiety and panic this creates. Defense mechanisms are in place to protect us from this pain; they operate unconsciously and distort reality, and they are considered maladaptive if they are misused and become our primary way of dealing with stress. Defense mechanisms include the following:
• Repression – When unacceptable ideas, wishes, desires, or memories are blocked from consciousness, such as forgetting a horrific car accident that you caused. Eventually, though, it must be dealt with or else the repressed memory can cause problems later in life.
• Reaction formation – When an impulse is repressed and then expressed by its opposite. As an example, if we are angry with our boss but cannot lash out at him/her, we may be overly friendly instead. Another example is having lustful thoughts about a coworker that you cannot express because you are married, and so you are mean to this person.
• Displacement – When we satisfy an impulse with a different object because focusing on the primary object may get us in trouble. A classic example is taking out your frustration with your boss on your wife and/or kids when you get home. If we lash out at our boss we could be fired. The substitute target is less dangerous than the primary target.
• Projection – When we attribute threatening desires or unacceptable motives to others. An example is when we do not have the skills necessary to complete a task but we blame the other members of our group for being incompetent and unreliable. Another example is projecting your feelings of love toward your therapist onto your therapist, believing he/she is in love with you.
• Sublimation – When we find a socially acceptable way to express a desire. If we are stressed out or upset, we may go to the gym and box or lift weights. A person who desires to cut things may become a surgeon.
• Denial – Sometimes life is so hard all we can do is deny how bad it is. An example is denying a diagnosis of lung cancer given by your doctor.
• Identification – When we find someone who has found a socially acceptable way to satisfy their unconscious wishes and desires and we model that behavior.
• Regression – When we move from a mature behavior to one that is infantile in nature. If your significant other is nagging you, you might regress by putting your hands over your ears and saying, “La la la la la la la la…”
• Rationalization – When we offer well-thought-out reasons for why we did what we did, but in reality, these are not the real reasons. Students sometimes rationalize not doing well in a class by stating that they really are not interested in the subject or saying the instructor writes impossible-to-pass tests, when in reality they are not putting enough effort into learning the material.
• Intellectualization – When we avoid emotion by focusing on intellectual aspects of a situation, such as ignoring the sadness we are feeling after the death of our mother by focusing on planning the funeral.
Psychodynamic Techniques.
Freud used three primary assessment techniques as part of psychoanalysis, or psychoanalytic therapy, to understand the personalities of his patients and to expose repressed material: free association, transference, and dream analysis. First, free association involves the patient describing whatever comes to mind during the session. The patient continues but always reaches a point when he/she cannot or will not proceed any further. The patient might change the subject, stop talking, or lose his/her train of thought. Freud called this resistance and believed it revealed where the patient’s issues lay.
Second, transference is the process through which a patient transfers to the therapist attitudes he/she held during childhood. These may be positive, including friendly, affectionate feelings, or negative, including hostile and angry feelings. The goal of therapy is to wean patients from their childlike dependency on the therapist.
Finally, Freud used dream analysis to understand a person’s innermost wishes. The content of dreams includes the person’s actual retelling of the dreams, called the manifest content, and the hidden or symbolic meaning, called the latent content. In terms of the latter, some symbols are linked to the person specifically while others are common to all people.
Evaluating Psychodynamic Theory.
Freud’s psychodynamic theory has made a lasting impact on the field of psychology but has also been criticized heavily. First, most of Freud’s observations were made in an unsystematic, uncontrolled way, and he relied on the case study method. Second, the participants in his studies were not representative of the larger population to which he tried to generalize, and he really based his theory on a few patients. Third, he relied solely on the reports of his patients and sought out no observer reports. Fourth, it is difficult to empirically study psychodynamic principles since most operate unconsciously, which raises the question of how we can really know that they exist. Finally, psychoanalytic treatment is expensive and time-consuming, and since Freud’s time, drug therapies have become more popular and successful. Still, the work of Sigmund Freud raised awareness about the role the unconscious plays in both normal and abnormal behavior, and he developed useful therapeutic tools for clinicians.
By the end of the 19th century, it had become evident that mental disorders were caused by a combination of biological and psychological factors and the investigation of how they develop began. Today, rather than arguing for a purely biological or psychological approach to understanding mental disorders we focus on a more integrative multidimensional approach. This contemporary approach is the focus of Chapter 2.
Chapter Recap
In Chapter 1, we undertook a fairly lengthy discussion of what abnormal behavior is by first looking at what normal behavior is. What emerged was a general set of guidelines focused on mental disorders as causing dysfunction, distress, deviance, and at times, being dangerous for the afflicted and others around him/her. We acknowledged that mental illness is stigmatized in our society and provided a basis for why this occurs and what to do about it. We introduced the various members of the mental health team and defined several key terms including occurrence, cause, course, prognosis, and treatment. We concluded with a lengthy discussion of the history of mental illness. It is with this foundation in mind that we move to examine contemporary models of mental disorders in Chapter 2.
Learning Objectives
• Differentiate uni- and multi-dimensional models of abnormality.
• Describe how the biological model explains mental illness.
• Describe how psychological perspectives explain mental illness.
• Describe how the sociocultural model explains mental illness.
In Chapter 2, we will discuss three models of abnormal behavior: the biological, psychological, and sociocultural models. Each is unique in its own right, and no one model can account for all aspects of abnormality. Hence, we will advocate a multi-dimensional rather than a uni-dimensional model.
Chapter 2: Contemporary Models of Abnormal Psychology
Section Learning Objectives
• Define the uni-dimensional model.
• Explain the need for a multi-dimensional model of abnormality.
• Define model.
• List and describe the three models of abnormality.
Uni-Dimensional Models
In order to effectively treat a mental disorder, it is helpful to understand its cause. This could be a single factor such as a chemical imbalance in the brain, a relationship with a parent, socioeconomic status (SES), a fearful event encountered during middle childhood, or the way in which the individual copes with life’s stressors. This single-factor explanation is called a uni-dimensional model. The problem with this approach is that mental disorders are not typically caused by a solitary factor, but instead by multiple factors. Admittedly, single factors do emerge during the course of the person’s life, but as they arise they become part of the individual, and in time the cause of the person’s disorder is due to all of these individual factors.
Multi-Dimensional Models
So, in reality, it is better to subscribe to a multi-dimensional model that integrates multiple causes of psychopathology and affirms that each cause comes to affect other causes over time. Uni-dimensional models alone are too simplistic to fully understand the etiology of something as complex as mental disorders.
Before introducing the main models subscribed to today, it is important to understand what a model is. In a general sense, a model is defined as a representation or imitation of an object (dictionary.com). Models help mental health professionals understand mental illness since disorders such as depression cannot be touched or experienced firsthand. To be considered distinct from other conditions, a mental illness must have its own set of symptoms. But as you will see, the individual does not have to present with the entire range of symptoms to be diagnosed with major depressive disorder, schizophrenia, avoidant personality disorder, or illness anxiety disorder. Five out of nine symptoms may be enough to diagnose a disorder, for example. There will be some variability in terms of which symptoms the afflicted displays, but in general all people with a specific mental disorder have symptoms from that group. We can also ask the patient probing questions, seek information from family members, examine medical records, and in time, organize and process all of this information to better understand the person’s condition and potential causes. Models aid us with doing all of this, but we must be cautious to remember that a model is a starting point for the researcher and, as such, determines what causes might be investigated, to the exclusion of other causes. Oftentimes, proponents of a given model find themselves in disagreement with proponents of other models. All forget that there is no one model that completely explains human behavior, or in this case, abnormal behavior, and so each model contributes in its own way. So what are the models we will examine in this chapter?
• Biological – Includes genetics, chemical imbalances in the brain, the functioning of the nervous system, etc.
• Psychological – Includes learning, personality, stress, cognition, self-efficacy, and early life experiences. We will examine several perspectives that make up the psychological model, including the psychodynamic, behavioral, cognitive, and humanistic-existential perspectives.
• Sociocultural – Includes factors such as one’s gender, religious orientation, race, ethnicity, and culture.
Section Learning Objectives
• Describe how communication in the nervous system occurs.
• List the parts of the nervous system.
• Describe the structure of the neuron and all key parts.
• Outline how neural transmission occurs.
• Identify and define important neurotransmitters.
• List the major structures of the brain.
• Clarify how specific areas of the brain are involved in mental illness.
• Describe the role of genes in mental illness.
• Describe the role of hormonal imbalances in mental illness.
• Describe commonly used treatments for mental illness.
• Evaluate the usefulness of the biological model.
Proponents of the biological model view mental illness as the result of a malfunction in the body, including issues with brain anatomy or chemistry. As such, we will need to establish a foundation for how communication in the nervous system occurs, what the parts of the nervous system are, what a neuron is and its structure, how neural transmission occurs, and what the parts of the brain are. While doing this, we will identify areas of concern for psychologists focused on the treatment of mental disorders.
Communication in the Nervous System
To really understand brain structure and chemistry, it is a good idea to understand how communication occurs within the nervous system. Simply:
1. Receptor cells in each of the five sensory systems detect energy.
2. This information is passed to the nervous system through the process of transduction, via sensory or afferent neurons, which are part of the peripheral nervous system.
3. The information is received by brain structures (central nervous system) and perception occurs.
4. Once the information has been interpreted, commands are sent out, telling the body how to respond, also via the peripheral nervous system.
Please note that we will not cover this process in full, but just the parts relevant to our topic of psychopathology.
The Nervous System
The nervous system consists of two main parts – the central and peripheral nervous systems. The central nervous system (CNS) is the control center for the nervous system, which receives, processes, interprets, and stores incoming sensory information. It consists of the brain and spinal cord. The peripheral nervous system consists of everything outside the brain and spinal cord. It handles the CNS’s input and output and divides into the somatic and autonomic nervous systems. The somatic nervous system allows for voluntary movement by controlling the skeletal muscles and it carries sensory information to the CNS. The autonomic nervous system regulates the functioning of blood vessels, glands, and internal organs such as the bladder, stomach, and heart. It consists of the sympathetic and parasympathetic nervous systems. The sympathetic nervous system is involved when a person is intensely aroused. It provides the strength to fight back or to flee (the fight-or-flight response). Eventually, the response brought about by the sympathetic nervous system must end, so the parasympathetic nervous system kicks in to calm the body.
Figure 2.1. The Structure of the Nervous System
The Neuron
The fundamental unit of the nervous system is the neuron, or nerve cell (see Figure 2.2). It has several structures in common with all cells in the body. The nucleus is the control center of the cell and the soma is the cell body. The structures that make a neuron distinct center on its ability to send and receive information. The axon sends signals/information through the neuron while the dendrites, which look like little trees, receive information from neighboring neurons. Notice the s on the end of dendrites and that axon has no such letter: a neuron has many dendrites but only one axon. Also of importance to the neuron is the myelin sheath, the white, fatty covering which: 1) provides insulation so that signals from adjacent neurons do not affect one another and, 2) increases the speed at which signals are transmitted. The axon terminals are the end of the axon, where the electrical impulse becomes a chemical message that is released into the synaptic cleft, the space between neurons.
Though not neurons, glial cells play an important part in helping the nervous system to be the efficient machine that it is. Glial cells are support cells in the nervous system that serve five main functions.
1. They act as a glue and hold the neuron in place.
2. They form the myelin sheath.
3. They provide nourishment for the cell.
4. They remove waste products.
5. They protect the neuron from harmful substances.
Finally, nerves are a group of axons bundled together like wires in an electrical cable.
Figure 2.2. The Structure of the Neuron
Neural Transmission
Transducers or receptor cells in the major organs of our five sensory systems – vision (the eyes), hearing (the ears), smell (the nose), touch (the skin), and taste (the tongue) – convert the physical energy that they detect or sense, and send it to the brain via the neural impulse. How so? We will cover this process in three parts.
Part 1. The Neural Impulse
• Step 1 – Neurons waiting to fire are said to be in resting potential and to be polarized (meaning they have a negative charge inside the neuron and a positive charge outside).
• Step 2 – If adequately stimulated, the neuron experiences an action potential and becomes depolarized. When this occurs, gated ion channels open, allowing positively charged Sodium (Na) ions to enter. This shifts the polarity to positive on the inside and negative outside.
• Step 3 – Once the action potential passes from one segment of the axon to the next, the previous segment begins to repolarize. This occurs because the Na channels close and Potassium (K) channels open. K ions carry a positive charge and flow out of the cell, and so the neuron becomes negative again on the inside and positive on the outside.
• Step 4 – Immediately after the neuron fires, it will not fire again, no matter how much stimulation it receives. This is called the absolute refractory period.
• Step 5 – After a short period of time, the neuron can fire again, but needs greater than normal levels of stimulation to do so. This is called the relative refractory period.
• Step 6 – Please note that the process is cyclical. Once the relative refractory period has passed the neuron returns to its resting potential.
Part 2. The Action Potential
Let’s look at the electrical portion of the process in another way and add some detail.
Figure 2.3. The Action Potential
• Recall that a neuron is normally at resting potential and polarized. The charge inside is -70mV at rest.
• If it receives sufficient stimulation, meaning that the potential inside the neuron rises from -70 mV to -55 mV, defined as the threshold of excitation, the neuron will fire or send an electrical impulse down the length of the axon (the action potential or depolarization). It should be noted that it either hits -55 mV and fires or it does not. This is the all-or-nothing principle. The threshold must be reached.
• Once the electrical impulse has passed from one segment of the axon to the next, the neuron begins the process of resetting called repolarization.
• During repolarization, the neuron will not fire no matter how much stimulation it receives. This is called the absolute refractory period.
• The neuron next moves into the relative refractory period, meaning it can fire, but needs greater than normal levels of stimulation. Notice how the line has dropped below -70 mV. Hence, to reach -55 mV and fire, it will need more than the normal gain of +15 mV (-70 to -55 mV).
• And then it returns to resting potential, as you saw in Figure 2.3.
Ions are charged particles found both inside and outside the neuron. It is positively charged Sodium (Na) ions that cause the neuron to depolarize and fire and positively charged Potassium (K) ions that exit and return the neuron to a polarized state.
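The numbers above lend themselves to a tiny illustrative calculation. The sketch below is a deliberate toy model, not a physiological simulation; the function name and structure are ours, but the values come from the figures quoted in this section. It simply checks whether a stimulus moves a resting neuron past the threshold of excitation:

```python
RESTING_POTENTIAL_MV = -70.0  # polarized resting state, in millivolts
THRESHOLD_MV = -55.0          # threshold of excitation

def neuron_fires(stimulation_mv):
    """Toy model of the all-or-nothing principle.

    The neuron either reaches -55 mV and fires, or it does not fire
    at all; there is no "partial" action potential.
    """
    membrane_potential = RESTING_POTENTIAL_MV + stimulation_mv
    return membrane_potential >= THRESHOLD_MV

print(neuron_fires(10.0))  # -60 mV, below threshold: False
print(neuron_fires(15.0))  # exactly -55 mV, the normal +15 mV gain: True
print(neuron_fires(40.0))  # well past threshold: True
```

Note that the last two calls return the same result: a stronger stimulus does not produce a bigger action potential. And during the relative refractory period, when the inside of the neuron sits below -70 mV, the normal +15 mV gain would no longer reach threshold, which is why greater than normal stimulation is needed then.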
Part 3. The Synapse
The electrical portion of the neural impulse is just the start. The actual code passes from one neuron to another in a chemical form called a neurotransmitter. The point where this occurs is called the synapse. The synapse consists of three parts – the axon terminals of the sending neuron (presynaptic neuron); the space in between called the synaptic cleft, space, or gap; and the dendrite of the receiving neuron (postsynaptic neuron). Once the electrical impulse reaches the end of the axon, called the axon terminal, it stimulates synaptic vesicles or neurotransmitter sacs to release the neurotransmitter. Neurotransmitters will only bind to their specific receptor sites, much like a key will only fit into the lock it was designed for. You might say neurotransmitters are part of a lock-and-key system. What happens to the neurotransmitters that do not bind to a receptor site? They might go through reuptake which is a process in which the presynaptic neuron takes back excess neurotransmitters in the synaptic space for future use or enzymatic degradation when enzymes destroy excess neurotransmitters in the synaptic space.
Neurotransmitters
What exactly are some of the neurotransmitters which are so critical for neural transmission, and are important to our discussion of psychopathology?
• Dopamine – controls voluntary movements and is associated with the reward mechanism in the brain
• Serotonin – controls pain, the sleep cycle, and digestion; contributes to a stable mood, and so low levels are linked to depression
• Norepinephrine – increases the heart rate and blood pressure and regulates mood
• GABA – an inhibitory neurotransmitter responsible for blocking the signals of excitatory neurotransmitters that produce anxiety and panic.
• Glutamate – an excitatory neurotransmitter associated with learning and memory
The critical thing to understand here is that there is a belief in the realm of mental health that chemical imbalances are responsible for many mental disorders. Chief among these are neurotransmitter imbalances. For instance, people with Seasonal Affective Disorder (SAD) have difficulty regulating serotonin. More on this throughout the book as we discuss each disorder.
The Brain
The central nervous system consists of the brain and spinal cord; the former we will discuss briefly and in terms of key structures which include:
• Medulla – regulates breathing, heart rate, and blood pressure
• Pons – acts as a bridge connecting the cerebellum and medulla and helps to transfer messages between different parts of the brain and spinal cord.
• Reticular formation – responsible for alertness and attention
• Cerebellum – involved in our sense of balance and for coordinating the body’s muscles so that movement is smooth and precise. Involved in the learning of certain kinds of simple responses and acquired reflexes.
• Thalamus – major sensory relay center for all senses except smell.
• Hypothalamus – involved in drives associated with the survival of both the individual and the species. It regulates temperature by triggering sweating or shivering and controls the complex operations of the autonomic nervous system
• Amygdala – responsible for evaluating sensory information and quickly determining its emotional importance
• Hippocampus – our “gateway” to memory. Allows us to form spatial memories so that we can accurately navigate through our environment and helps us to form new memories (involved in memory consolidation)
• The cerebrum has four distinct regions in each cerebral hemisphere. First, the frontal lobe contains the motor cortex which issues orders to the muscles of the body that produce voluntary movement. The frontal lobe is also involved in emotion and in the ability to make plans, think creatively, and take initiative. The parietal lobe contains the somatosensory cortex and receives information about pressure, pain, touch, and temperature from sense receptors in the skin, muscles, joints, internal organs, and taste buds. The occipital lobe contains the visual cortex and receives and processes visual information. Finally, the temporal lobe is involved in memory, perception, and emotion. It contains the auditory cortex which processes sound.
Figure 2.4. Anatomy of the Brain
Of course, this is not an exhaustive list of structures found in the brain but gives you a pretty good idea of function and which structures help to support those functions. What is important to mental health professionals is that for some disorders, specific areas of the brain are involved. For instance, individuals with borderline personality disorder have been shown to have structural and functional changes in brain areas associated with impulse control and emotional regulation while imaging studies reveal differences in the frontal cortex and subcortical structures of individuals with OCD.
Check out the following from Harvard Health for more on depression and the brain as a cause: https://www.health.harvard.edu/mind-and-mood/what-causes-depression
Genes, Hormonal Imbalances, and Viral Infections
Genetic Issues and Explanations
DNA, or deoxyribonucleic acid, is our heredity material and is found in the nucleus of each cell packaged in threadlike structures known as chromosomes. Most of us have 23 pairs of chromosomes or 46 total. Twenty-two of these pairs are the same in both sexes, but the 23rd pair is called the sex chromosome and differs between males and females. Males have X and Y chromosomes while females have two Xs. According to the Genetics Home Reference website as part of NIH’s National Library of Medicine, a gene is “the basic physical and functional unit of heredity” (https://ghr.nlm.nih.gov/primer/basics/gene). They act as the instructions to make proteins and it is estimated by the Human Genome Project that we have between 20,000 and 25,000 genes. We all have two copies of each gene and one is inherited from our mother and one from our father.
Recent research has discovered that autism, ADHD, bipolar disorder, major depression, and schizophrenia all share genetic roots. They “were more likely to have suspect genetic variation at the same four chromosomal sites. These included risk versions of two genes that regulate the flow of calcium into cells.” For more on this development, please check out the article at: https://www.nimh.nih.gov/news/science-news/2013/five-major-mental-disorders-share-genetic-roots.shtml. Likewise, twin and family studies have shown that people with first-degree relatives with OCD are at higher risk of developing the disorder themselves. The same is true of most mental disorders. Indeed, it is presently believed that genetic factors contribute to all mental disorders but typically account for less than half of the explanation. Moreover, most mental disorders are linked to abnormalities in many genes, rather than just one; that is, most are polygenic.
Moreover, there are important gene-environment interactions that are unique for every person (even twins) which help to explain why some people with a genetic predisposition toward a certain disorder develop that disorder and others do not (e.g., why one identical twin may develop schizophrenia but the other does not). The diathesis-stress model posits that people can inherit tendencies or vulnerabilities to express certain traits, behaviors, or disorders, which may then be activated under certain environmental conditions like stress (e.g., abuse, traumatic events). However, it is also important to note that certain protective factors (like being raised in a consistent, loving, supportive environment) may modify the response to stress and thereby help to protect individuals against mental disorders.
For more on the role of genes in the development of mental illness, check out this article from Psychology Today:
https://www.psychologytoday.com/blog/saving-normal/201604/what-you-need-know-about-the-genetics-mental-disorders
Hormonal Imbalances
The body has two coordinating and integrating systems: the nervous system and the endocrine system. The main difference between them is the speed with which they act. The nervous system moves quickly, with nerve impulses traveling in a few hundredths of a second. The endocrine system moves slowly, with hormones, released by endocrine glands, taking seconds or even minutes to reach their target. Hormones are important to psychologists because they organize the nervous system and body tissues at certain stages of development and activate behaviors such as alertness or sleepiness, sexual behavior, concentration, aggressiveness, reaction to stress, and a desire for companionship.
The pituitary gland is the “master gland” which regulates the other endocrine glands. It influences blood pressure, thirst, contractions of the uterus during childbirth, milk production, sexual behavior and interest, body growth, the amount of water in the body’s cells, and other functions as well. The pineal gland produces melatonin, which helps regulate the sleep-wake cycle and other circadian rhythms; overproduction of melatonin can lead to Seasonal Affective Disorder (a specific type of Major Depressive Disorder). The thyroid gland produces thyroxin, which facilitates energy, metabolism, and growth. Hypothyroidism is a condition in which the thyroid gland becomes underactive, and this condition can produce symptoms of depression. In contrast, hyperthyroidism is a condition in which the thyroid gland becomes overactive, and this condition can produce symptoms of mania. Therefore, it is important for individuals experiencing these symptoms to have their thyroid checked, because conventional treatments for depression and mania will not correct the problem with the thyroid and will therefore not resolve the symptoms. Rather, individuals with these conditions need to be treated with thyroid medications. Also of key importance to mental health professionals are the adrenal glands, which are located on top of the kidneys and release cortisol, a hormone that helps the body deal with stress. However, chronically elevated levels of cortisol can lead to increased weight gain, interfere with learning and memory, decrease the immune response, reduce bone density, increase cholesterol, and increase the risk of depression.
Figure 2.5. Hormone Systems
The Hypothalamic-Pituitary-Adrenal-Cortical Axis (HPA Axis) is the connection between the hypothalamus, pituitary gland, and adrenal glands. Specifically, the hypothalamus releases corticotropin-releasing factor (CRF), which stimulates the anterior pituitary to release adrenocorticotrophic hormone (ACTH), which in turn stimulates the adrenal cortex to release cortisol (see Figure 2.6). Malfunctioning of this system is implicated in a wide range of mental disorders including depression, anxiety, and post-traumatic stress disorder. Exposure to chronic, unpredictable stress during early development can sensitize this system, making it over-responsive to stress (meaning it activates too readily and does not shut down appropriately). Sensitization of the HPA axis leads to an overproduction of cortisol, which once again can damage the body and brain when it remains at chronically high levels.
Figure 2.6. The HPA Axis
For more on the link between cortisol and depression, check out this article:
https://www.psychologytoday.com/blog/the-athletes-way/201301/cortisol-why-the-stress-hormone-is-public-enemy-no-1
Viral Infections
Infections can cause brain damage and lead to the development of mental illness or an exacerbation of symptoms. For example, evidence suggests that contracting a strep infection can lead to the development of OCD, Tourette’s syndrome, and tic disorders in children (Mell, Davis, & Owens, 2005; Giedd et al., 2000; Allen et al., 1995; https://www.psychologytoday.com/blog/the-perfectionists-handbook/201202/can-infections-result-in-mental-illness). Influenza epidemics have also been linked to schizophrenia (Brown et al., 2004; McGrath & Castle, 1995; McGrath et al., 1994; O’Callaghan et al., 1991), though more recent research suggests this evidence is weak at best (Selten & Termorshuizen, 2017; Ebert & Kotler, 2005).
Treatments
Psychopharmacology and Psychotropic Drugs
One option to treat severe mental illness is psychotropic medications. These medications fall into five major categories.
Antidepressants are used to treat depression, but also anxiety, insomnia, and pain. The most common types of antidepressants are the selective serotonin reuptake inhibitors (SSRIs), which include Citalopram (Celexa), Paroxetine (Paxil), and Fluoxetine (Prozac). They often take 2-6 weeks to take effect. Possible side effects include weight gain, sleepiness, nausea and vomiting, panic attacks, and thoughts about suicide or dying.
Anti-anxiety medications help with the symptoms of anxiety and include the benzodiazepines such as Diazepam (Valium), Alprazolam (Xanax), and Lorazepam (Ativan). These medications reduce anxiety effectively in the short term and take effect faster than antidepressants, which are also commonly prescribed for anxiety. However, benzodiazepines can be quite addictive: tolerance develops quickly, and individuals may experience withdrawal symptoms (e.g., anxiety, panic, insomnia) when they stop taking the drugs. For this reason, benzodiazepines should not be used long-term. Side effects include drowsiness, dizziness, nausea, difficulty urinating, and irregular heartbeat, to name a few.
Stimulants increase alertness and attention and are frequently used to treat ADHD. They include Lisdexamfetamine (Vyvanse), the combination of dextroamphetamine and amphetamine (Adderall), and Methylphenidate (Ritalin). Stimulants are generally effective and, in individuals with ADHD, produce a calming effect. Possible side effects include loss of appetite, headache, motor or verbal tics, and personality changes such as appearing emotionless.
Antipsychotics are used to treat psychosis (i.e., hallucinations and delusions). They can also be used to treat eating disorders, severe depression, PTSD, OCD, ADHD, and Generalized Anxiety Disorder. Common antipsychotics include Chlorpromazine, Perphenazine, Quetiapine, and Lurasidone. Side effects include nausea, vomiting, blurred vision, weight gain, restlessness, tremors, and rigidity.
Mood stabilizers are used to treat bipolar disorder and, at times, depression, schizoaffective disorder, and disorders of impulse control. A common example is lithium; its side effects include loss of coordination, hallucinations, seizures, and frequent urination.
For more information on psychotropic medications, please visit:
https://www.nimh.nih.gov/health/topics/mental-health-medications/index.shtml
The use of these drugs has been generally beneficial to patients. Most report that their symptoms decline, leading them to feel better and function more effectively. Long-term hospitalizations are also less likely as a result, though medication by itself does not improve the individual’s daily living skills.
Electroconvulsive Therapy
According to Mental Health America, “Electroconvulsive therapy (ECT) is a procedure in which a brief application of electric stimulus is used to produce a generalized seizure.” Patients are placed on a padded bed and administered a muscle relaxant to avoid injury during the seizure. Annually, approximately 100,000 people are treated with ECT for conditions including severe depression, acute mania, and suicidality. The procedure remains the most controversial one available to mental health professionals due to “its effectiveness vs. the side effects, the objectivity of ECT experts, and the recent increase in ECT as a quick and easy solution, instead of long-term psychotherapy or hospitalization” (http://www.mentalhealthamerica.net/ect). Its popularity has declined since the 1940s and 1950s.
Psychosurgery
Another option for treating mental disorders is brain surgery. In the past, practitioners performed trephination and lobotomies, neither of which is used today. Modern techniques are much more sophisticated and have been used to treat schizophrenia, depression, and obsessive-compulsive disorder, though critics cite obvious ethical issues with conducting such surgeries, as well as scientific ones. Due to these concerns, psychosurgery is used only as a radical last resort when all other treatment options have failed to resolve a serious mental illness.
For more on psychosurgery, check out this article from Psychology Today:
https://www.psychologytoday.com/articles/199203/psychosurgery
Evaluation of the Model
The biological model is generally well respected today but suffers from a few key issues. First, consider the list of side effects given for the psychotropic medications. You might make the case that some of these side effects are worse than the condition they are treating. Second, the viewpoint that all human behavior is explainable in biological terms, and therefore that any issues that arise can be treated using biological methods, overlooks factors that are not biological in nature. More on that over the next two sections.
Section Learning Objectives
• Describe learning.
• Outline classical conditioning and the work of Pavlov and Watson.
• Outline operant conditioning and the work of Thorndike and Skinner.
• Outline observational learning/social-learning theory and the work of Bandura.
• Evaluate the usefulness of the behavioral model.
• Define the cognitive model.
• Exemplify the effect of maladaptive cognitions on creating abnormal behavior.
• List and describe cognitive therapies.
• Evaluate the usefulness of the cognitive model.
• Describe the humanistic perspective.
• Describe the existential perspective.
• Evaluate the usefulness of the humanistic and existential perspectives.
The Behavioral Model
What is Learning?
The behavioral model focuses on the process of learning. Simply put, learning is any relatively permanent change in behavior due to experience and practice, and it has two main forms – associative learning and observational learning. First, associative learning is the linking together of information sensed from our environment. Conditioning, a type of associative learning, occurs when two events are linked and has two forms – classical conditioning, or linking together two types of stimuli, and operant conditioning, or linking together a response with its consequence. Second, observational learning occurs when we learn by observing the world around us.
We should also note the existence of non-associative learning or when there is no linking of information or observing the actions of others around you. Types include habituation, or when we simply stop responding to repetitive and harmless stimuli in our environment such as a fan running in your laptop as you work on a paper, and sensitization, or when our reactions are increased due to a strong stimulus, such as an individual who experienced a mugging and now experiences panic when someone walks up behind him/her on the street.
Behaviorism is the school of thought associated with learning that began in 1913 with the publication of John B. Watson’s article, “Psychology as the Behaviorist Views It,” in the journal Psychological Review (Watson, 1913). It was Watson’s belief that the subject matter of psychology was to be observable behavior, and to that end he argued that psychology should focus on the prediction and control of behavior. Behaviorism was dominant from 1913 to 1990 before being absorbed into mainstream psychology. It went through three major stages – behaviorism proper under Watson, lasting from 1913-1930 (discussed as respondent conditioning); neobehaviorism under Skinner, lasting from 1930-1960 (discussed as operant conditioning); and sociobehaviorism under Bandura and Rotter, lasting from 1960-1990 (discussed as social learning theory).
Classical Conditioning
You have likely heard about Pavlov and his dogs, but what you may not know is that this was an accidental discovery. Ivan Petrovich Pavlov (1906, 1927, 1928), a Russian physiologist, was studying digestive processes in dogs in response to being fed meat powder. What he discovered was that the dogs would salivate even before the meat powder was presented. They would salivate at the sound of a bell, footsteps in the hall, a tuning fork, or the presence of a lab assistant. Pavlov realized there were some stimuli that automatically elicited responses (such as salivating to meat powder) and others that had to be paired with these automatic associations for the animal or person to respond to them (such as salivating to a bell). Armed with this stunning revelation, Pavlov spent the rest of his career investigating this learning phenomenon.
The important thing to understand is that not all behaviors occur due to reinforcement and punishment as operant conditioning says. In the case of classical conditioning, stimuli exert complete and automatic control over some behaviors. We see this in the case of reflexes. When a doctor strikes your knee with that little hammer it extends out automatically. You do not have to do anything but watch. Babies will root for a food source if the mother’s breast is placed near their mouth. If a nipple is placed in their mouth, they will also automatically suck, as per the sucking reflex. Humans have several of these reflexes though not as many as other animals due to our more complicated nervous system.
Classical conditioning (also called respondent or Pavlovian conditioning) occurs when we link a previously neutral stimulus with a stimulus that is unlearned or inborn, called an unconditioned stimulus. In respondent conditioning, learning occurs in three phases: preconditioning, conditioning, and postconditioning. See Figure 2.1 for an overview of Pavlov’s classic experiment.
Preconditioning. This stage signifies that some learning is already present. There is no need to learn it again, as in the case of primary reinforcers and punishers in operant conditioning. In Panel A, food makes a dog salivate. This does not need to be learned and is the relationship of an unconditioned stimulus (UCS) yielding an unconditioned response (UCR). Unconditioned means unlearned. In Figure 2.1, we also see that a neutral stimulus (NS) yields nothing. Dogs do not enter the world knowing to respond to the ringing of a bell (which they hear).
Conditioning. Conditioning is when learning occurs. Through the pairing of a neutral stimulus and unconditioned stimulus (bell and food, respectively) the dog will learn that the bell ringing (NS) signals food coming (UCS) and salivate (UCR). The pairing must occur more than once so that needless pairings are not learned such as someone farting right before your food comes out and now you salivate whenever someone farts (…at least for a while. Eventually the fact that no food comes will extinguish this reaction but still, it will be weird for a bit).
Postconditioning. Postconditioning, or after learning has occurred, establishes a new and not naturally occurring relationship of a conditioned stimulus (CS; previously the NS) and conditioned response (CR; the same response). So the dog now reliably salivates at the sound of the bell because he expects that food will follow, and it does.
Figure 2.1. Pavlov’s Classic Experiment
One of the most famous studies in psychology was conducted by Watson and Rayner (1920). Essentially, they wanted to explore the possibility of conditioning emotional responses. The researchers ran a 9-month-old child, known as Little Albert, through a series of trials in which he was exposed to a white rat. At first, he showed no response except curiosity. Then the researchers began to make a loud sound (UCS) whenever the rat was presented. Little Albert exhibited the normal fear response to this sound. After several conditioning trials like these, Albert responded with fear to the mere presence of the white rat.
As fears can be learned, so too can they be unlearned. Considered the follow-up to Watson and Rayner (1920), Jones (1924) wanted to see if a child (named Peter) who had learned to be afraid of white rabbits could be conditioned to become unafraid of them. Simply, she placed Peter at one end of a room and then brought in the rabbit. The rabbit was far enough away so as not to cause distress. Then, Jones gave Peter some pleasant food (i.e., something sweet such as cookies; remember, the response to the food is unlearned). She continued this procedure, bringing the rabbit a bit closer each time, until eventually Peter did not respond with distress to the rabbit. This process is called counterconditioning – the reversal of previous learning.
Another way to unlearn a fear is called flooding or exposing the person to the maximum level of stimulus and as nothing aversive occurs, the link between CS and UCS producing the CR of fear should break, leaving the person unafraid. This type of treatment is rather extreme and is not typically practiced by psychologists.
Operant Conditioning
Influential on the development of Skinner’s operant conditioning, Thorndike proposed the law of effect (Thorndike, 1905), or the idea that if our behavior produces a favorable consequence, then in the future, when the same stimulus is present, we will be more likely to make the response again, expecting the same favorable consequence. Likewise, if our action leads to dissatisfaction, then we will not repeat the same behavior in the future. Thorndike developed the law of effect thanks to his work with the puzzle box. Cats were food-deprived the night before the experimental procedure. The next morning, they were placed in the puzzle box, and a small amount of food was placed outside the box, close enough to be smelled but out of reach. To get out, a series of switches, buttons, levers, etc. had to be manipulated, and once this was done, the cat could escape the box and eat some of the food. But just some. The cat was then promptly placed back in the box to figure out how to get out again, the food being its reward for doing so. With each subsequent escape and re-insertion into the box, the cat became faster until it knew exactly what had to be done to escape. This is called trial-and-error learning, or making a response repeatedly if it leads to success. Thorndike also said that stimuli and responses were connected by the organism, and this led to learning. This approach to learning was called connectionism.
Operant conditioning is a type of associative learning which focuses on the consequences that follow a response or behavior that we make (anything we do, say, or think/feel) and whether those consequences make the behavior more or less likely to occur. This should sound much like what you just read about Thorndike’s work. Skinner talked about contingencies, or when one thing occurs because of another. Think of it as an if-then statement: if I do X, then Y will happen. For operant conditioning, this means that if I make a behavior, then a specific consequence will follow. The events (response and consequence) are linked in time.
What form do these consequences take? There are two main ways they can present themselves.
• Reinforcement – Due to the consequence, a behavior/response is more likely to occur in the future. It is strengthened.
• Punishment – Due to the consequence, a behavior/response is less likely to occur in the future. It is weakened.
Reinforcement and punishment can occur as two types – positive and negative. These words have no affective connotation to them meaning they do not imply good or bad. Positive means that you are giving something – good or bad. Negative means that something is being taken away – good or bad. Check out the figure below for how these contingencies are arranged.
Figure 2.2. Contingencies in Operant Conditioning
Let’s go through each:
• Positive Punishment (PP) – If something bad or aversive is given or added, then the behavior is less likely to occur in the future. If you talk back to your mother and she slaps your mouth, this is a PP. Your response of talking back led to the consequence of the aversive slap being delivered or given to your face. Ouch!!!
• Positive Reinforcement (PR) – If something good is given or added, then the behavior is more likely to occur in the future. If you study hard and earn an A on your exam, you will be more likely to study hard in the future. Similarly, your parents may give you money for your stellar performance. Cha Ching!!!
• Negative Reinforcement (NR) – This is a tough one for students to comprehend because the terms don’t seem to go together and are counterintuitive. But it is really simple and you experience NR all the time. This is when you are more likely to engage in a behavior that has resulted in the removal of something aversive in the past. For instance, what do you do if you have a headache? You likely answered take Tylenol. If you do this and the headache goes away, you will take Tylenol in the future when you have a headache. Another example is continually smoking marijuana because it temporarily decreases feelings of anxiety. The behavior of smoking marijuana is being reinforced because it reduces a negative state.
• Negative Punishment (NP) – This is when something good is taken away or subtracted making a behavior less likely in the future. If you are late to class and your professor deducts 5 points from your final grade (the points are something good and the loss is negative), you will hopefully be on time in all subsequent classes. Another example is taking away a child’s allowance when he misbehaves.
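The four contingencies above form a 2×2 grid – whether a stimulus is added or removed, crossed with whether that stimulus is pleasant or aversive. As a minimal sketch of that grid (the function name and boolean encoding are illustrative, not from the text), the classification can be written as:

```python
# Sketch of the 2x2 operant-conditioning grid from Figure 2.2.
# "Positive" = a stimulus is added; "negative" = a stimulus is removed.
# Reinforcement strengthens a behavior; punishment weakens it.
def classify_consequence(stimulus_added: bool, stimulus_pleasant: bool) -> str:
    if stimulus_added:
        # Adding something pleasant strengthens; adding something aversive weakens.
        return "positive reinforcement" if stimulus_pleasant else "positive punishment"
    # Removing something pleasant weakens; removing something aversive strengthens.
    return "negative punishment" if stimulus_pleasant else "negative reinforcement"

# The examples from the text:
print(classify_consequence(True, False))   # slap for talking back -> positive punishment
print(classify_consequence(True, True))    # money for an A -> positive reinforcement
print(classify_consequence(False, False))  # Tylenol removes a headache -> negative reinforcement
print(classify_consequence(False, True))   # allowance taken away -> negative punishment
```

Notice that “positive” and “negative” track only whether a stimulus is added or removed, never whether the outcome is good or bad – the common point of confusion the grid is meant to resolve.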
Observational Learning
There are times when we learn by simply watching others. This is called observational learning and is contrasted with enactive learning, which is learning by doing. There is no firsthand experience by the learner in observational learning. You can learn desirable behaviors such as exercising because your mother engaged in exercise every day and you can learn undesirable ones too. If your parents resort to alcohol consumption to deal with the stressors life presents, then you too might do the same. What is critical is what happens to the model in all of these cases. If my mother seems genuinely happy and pleased with herself after exercising, then I will be more likely to adopt this behavior. If my mother or father consumes alcohol to feel better when things are tough, and it works, then I might do the same. On the other hand, if we see a sibling constantly getting in trouble with the law then we may not model this behavior due to the negative consequences.
Albert Bandura conducted pivotal research on observational learning and you likely already know all about it from previous psychology courses. In Bandura’s experiment, children were first brought into a room to watch a video of an adult model playing nicely or aggressively with a Bobo doll. Next, the children were placed in a room with toys and a Bobo doll. Children who watched the aggressive model behaved aggressively with the Bobo doll while those who saw the nice model, played nice.
Figure 2.3. Bandura’s Classic Bobo Doll Experiment
Bandura said if all behaviors are learned by observing others and we model our behaviors on theirs, then undesirable behaviors can be altered or relearned in the same way. Modeling techniques are used to change behavior by having clients observe a model in a situation that usually causes them some anxiety. By seeing the model interact calmly with the fear-evoking stimulus, their fear should subside. This form of behavior therapy is widely used in clinical and classroom situations. In the classroom, we might use modeling to demonstrate to a student how to do a math problem. In fact, in many college classrooms, this is exactly what the instructor does.
But keep in mind that we do not model everything we see. Why? First, we cannot pay attention to everything going on around us. We are more likely to model behaviors by someone who commands our attention. Second, we must remember what a model does in order to imitate it. If a behavior is not memorable, it will not be imitated. Finally, we must try to convert what we see into action. If we are not motivated to perform an observed behavior, we probably will not show what we have learned.
Evaluating the Behavioral Model
Within the context of abnormal behavior or psychopathology, the behavioral perspective is useful because it suggests that maladaptive behavior occurs when learning goes awry. The good news is that what is learned can be unlearned or relearned through behavior modification, the process of changing behavior. To begin, an applied behavior analyst identifies a target behavior (the behavior to be changed), defines it, works with the client to develop goals, and conducts a functional assessment to understand what the undesirable behavior is, what causes it, and what maintains it. Armed with this knowledge, a plan is developed that consists of numerous strategies acting on one or all of these elements – antecedent, behavior, and/or consequence.
The greatest strength or appeal of the behavioral model is that its tenets are easily tested in the laboratory, unlike those of the psychodynamic model. Also, a large number of treatment techniques have been developed and proven effective over the years. For example, desensitization (Wolpe, 1997) teaches clients to respond calmly to fear-producing stimuli. It begins with the individual learning a relaxation technique such as diaphragmatic breathing. Next, a fear hierarchy, or list of feared objects and situations, is constructed, moving from least to most feared. Finally, the individual either imagines (systematic desensitization) or experiences in real life (in-vivo desensitization) each object or scenario from the hierarchy, using the relaxation technique while doing so. Each step pairs a feared object or situation with relaxation, so if there are ten objects/situations in the list, the client will experience ten such pairings and eventually be able to face each without fear. Outside of phobias, desensitization has been shown to be effective in the treatment of Obsessive-Compulsive Disorder symptoms (Hakimian & D’Souza, 2016) and, to a limited extent, in the treatment of depression that is comorbid with OCD (Masoumeh & Lancy, 2016).
Critics of the behavioral perspective point out that it oversimplifies behavior and often ignores inner determinants of behavior. Behaviorism has also been accused of being mechanistic and of seeing people as machines. Watson and Skinner defined behavior as what we do or say, but later behaviorists added what we think or feel. In terms of the latter, cognitive behavior modification procedures arose after the 1960s along with the rise of cognitive psychology. This led to a cognitive-behavioral perspective, which combines concepts from the behavioral and cognitive models; the latter is discussed in the next section.
The Cognitive Model

What is It?
As noted earlier, the idea of people being machines was a key feature of behaviorism and other schools of thought in psychology until about the 1960s or 1970s. In fact, behaviorism said psychology was to be the study of observable behavior. Any reference to cognitive processes was dismissed as this was not overt, but covert according to Watson and later Skinner. Of course, removing cognition from the study of psychology ignored an important part of what makes us human and separates us from the rest of the animal kingdom. Fortunately, the work of George Miller, Albert Ellis, Aaron Beck, and Ulrich Neisser demonstrated the importance of cognitive abilities in understanding thoughts, behaviors, and emotions, and in the case of psychopathology, they helped to show that people can create their own problems by how they come to interpret events experienced in the world around them. How so?
Maladaptive Cognitions
Irrational or dysfunctional thought patterns can be the basis of psychopathology. Throughout this book, we will discuss several treatment strategies that are used to change unwanted, maladaptive cognitions, whether they are present as an excess such as with paranoia, suicidal ideation, or feelings of worthlessness; or as a deficit such as with self-confidence and self-efficacy. More specifically, cognitive distortions/maladaptive cognitions can take the following forms:
• Overgeneralizing – You see a larger pattern of negatives based on one event.
• What if? – Asking yourself what if something happens without being satisfied by any of the answers.
• Blaming – Focusing on someone else as the source of your negative feelings and not taking any responsibility for changing yourself.
• Personalizing – Blaming yourself for negative events rather than seeing the role that others play.
• Inability to disconfirm – Ignoring any evidence that may contradict your maladaptive cognition.
• Regret orientation – Focusing on what you could have done better in the past rather than on making an improvement now.
• Dichotomous thinking – Viewing people or events in all-or-nothing terms.
For more on cognitive distortions, check out this website: http://www.goodtherapy.org/blog/20-cognitive-distortions-and-how-they-affect-your-life-0407154
Cognitive Therapies
According to the National Alliance on Mental Illness (NAMI), cognitive behavioral therapy (CBT) “focuses on exploring relationships among a person’s thoughts, feelings and behaviors. During CBT a therapist will actively work with a person to uncover unhealthy patterns of thought and how they may be causing self-destructive behaviors and beliefs.” CBT attempts to identify negative or false beliefs and restructure them. They add, “Oftentimes someone being treated with CBT will have homework in between sessions where they practice replacing negative thoughts with more realistic thoughts based on prior experiences or record their negative thoughts in a journal.” For more on CBT, visit: https://www.nami.org/Learn-More/Treatment/Psychotherapy. Some commonly used strategies include cognitive restructuring, cognitive coping skills training, and acceptance techniques.
First, cognitive restructuring (also called rational restructuring) involves replacing maladaptive cognitions with more adaptive ones. To do this, the client must be aware of the distressing thoughts, when they occur, and their effect on them. Next, the therapist helps the client stop thinking these thoughts and replace them with more rational ones. It’s a simple strategy, but an important one. Psychology Today published an article on January 21, 2013 that described four ways to change your thinking through cognitive restructuring. Briefly, these included:
1. Notice when you are having a maladaptive cognition such as making “negative predictions.” They suggest you figure out what is the worst thing that could happen and what other outcomes are possible.
2. Track the accuracy of the thought. For instance, if you believe that ruminating on a problem generates a solution, then write down each time you ruminate and what the result was. You can then compute the percentage of rumination episodes that actually produced a successful problem-solving strategy.
3. Behaviorally test your thought. As an example, if you think you don’t have time to go to the gym then figure out if you really do not have time. Record what you do each day and then look at open times of the day. Explore if you can make some minor, or major, adjustments to your schedule to free up an hour to exercise.
4. Examine the evidence both for and against your thought. If you do not believe you do anything right, list evidence of when you did not do something right and then evidence of when you did. Then write a few balanced statements such as the one the article suggests, “I’ve made some mistakes that I feel embarrassed about but a lot of the time, I make good choices.”
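Step 2 above amounts to simple record-keeping plus a percentage. As a hedged sketch (the log data here are invented purely for illustration), the accuracy tracking might look like:

```python
# Hypothetical rumination log: True = the episode actually produced a solution.
rumination_outcomes = [False, False, True, False, False]

# Percentage of rumination episodes that led to a successful strategy.
hit_rate = 100 * sum(rumination_outcomes) / len(rumination_outcomes)
print(f"{hit_rate:.0f}% of ruminations produced a solution")  # 20%
```

A low percentage like this gives the client concrete evidence against the belief that rumination is productive, which is the point of the tracking exercise.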
The article also suggested a few non-cognitive restructuring techniques to include mindfulness meditation and self-compassion. For more on these visit: https://www.psychologytoday.com/blog/in-practice/201301/cognitive-restructuring
A second major strategy is cognitive coping skills training. This strategy involves teaching social skills, communication, and assertiveness through direct instruction, role-playing, and modeling. For social skills, therapists identify appropriate social behaviors, such as making eye contact, saying no to a request, or striking up a conversation with a stranger, and examine whether the client is inhibited from engaging in the behavior due to anxiety. For communication, the therapist can help determine whether the problem lies with speaking, listening, or both and then develop a plan the client can use in various interpersonal situations. Finally, assertiveness training helps the client protect their rights and obtain what they want from others. Treatment starts with determining the situations in which assertiveness is lacking and generating a hierarchy of assertiveness opportunities. The least difficult situations are handled first, followed by more difficult ones, with the client rehearsing and mastering every situation in the hierarchy. For more on these techniques, visit http://cogbtherapy.com/cognitive-behavioral-therapy-exercises/.
Finally, acceptance techniques can be used to reduce a client’s worry and anxiety. Life involves a degree of uncertainty and at times we need to just accept this uncertainty. However, many clients, especially those with anxiety, have difficulty tolerating uncertainty. Acceptance techniques might include weighing the pros of fighting uncertainty against the cons of doing so. The cons should outweigh the pros and help the client to end the struggle and accept what is unknown. Chances are the client is already accepting the unknown in some areas of life and identifying those can help them to see why it is helpful to accept uncertainty which may help them to do so in more difficult areas. Finally, the therapist may help the client to question whether uncertainty necessarily leads to a negative end. The client may think so, but reviewing the evidence for and against this statement will show them that uncertainty does not always lead to negative outcomes which can help to reduce how threatening uncertainty seems.
Evaluating the Cognitive Model
The cognitive model made up for an obvious deficit in the behavioral model – overlooking the importance of our thoughts and the role cognitive processes play in our feelings and behaviors. Shortly before his death, Skinner (1990) reminded psychologists that the only thing we can truly know and study is observable behavior; in his view, cognitive processes cannot be empirically and reliably measured and so should be ignored. Is there merit to this view? The concept of social desirability holds that people sometimes do not tell us the truth about what they are thinking, feeling, or doing (or have done) because they do not want us to think less of them or judge them harshly for falling outside the social norm. In other words, they present themselves in a favorable light. If this is true, how can we really know what they are thinking? A person’s true intentions, thoughts, and feelings are not readily available to us – they are covert – and so do not make for good empirical data. Still, cognitive-behavioral therapies have proven their efficacy for the treatment of OCD (McKay et al., 2015), perinatal depression (Sockol, 2015), insomnia (de Bruin et al., 2015), bulimia nervosa (Poulsen et al., 2014), hypochondriasis (Olatunji et al., 2014), and social anxiety disorder (Leichsenring et al., 2014), to name a few. Other examples will be discussed throughout this book.
The Humanistic and Existential Perspectives
The Humanistic Perspective
The humanistic perspective, or third force psychology (psychoanalysis and behaviorism being the other two forces), emerged in the 1960s and 1970s as an alternative viewpoint to the largely deterministic view of personality espoused by psychoanalysis and the view of humans as machines advocated by behaviorism. Key features of the perspective include a belief in human perfectibility, personal fulfillment, valuing self-disclosure, placing feelings over intellect, an emphasis on the present, and hedonism. Its key figures were Abraham Maslow, who proposed the hierarchy of needs, and Carl Rogers, whom we will focus on here.
Rogers said that all people want to have positive regard from significant others in their life. When the individual is accepted as they are, they receive unconditional positive regard and become a fully functioning person. They are open to experience, live every moment to the fullest, are creative, accept responsibility for their decisions, do not derive their sense of self from others, strive to maximize their potential, and are self-actualized. Their family and friends may disapprove of some of their actions but overall, respect and love them. They then realize their worth as a person but also that they are not perfect. Of course, most people do not experience this but instead are made to feel that they can only be loved and respected if they meet certain standards, called conditions of worth. Hence, they experience conditional positive regard. According to Rogers, their self-concept is now seen as having worth only when these significant others approve, and so it becomes distorted, leading to a disharmonious state and psychopathology. Individuals in this situation are unsure of what they feel, value, or need, leading to dysfunction and the need for therapy. Rogers stated that the humanistic therapist should be warm, understanding, supportive, respectful, and accepting of his/her clients. This approach came to be called client-centered therapy.
The Existential Perspective
The existential perspective stresses the need for people to continually re-create themselves and be self-aware, acknowledges that anxiety is a normal part of life, focuses on free will and self-determination, emphasizes that each person has a unique identity known only through relationships and the search for meaning, and finally, holds that we should develop to our maximum potential. Abnormal behavior arises when we avoid making choices, do not take responsibility, and fail to actualize our full potential. Existential therapy is used to treat a myriad of disorders and problems including substance abuse, excessive anxiety, apathy, avoidance, despair, depression, guilt, anger, and rage. It also focuses on life-enhancing experiences such as love, caring, commitment, courage, creativity, spirituality, and acceptance, to name a few (For more information, please visit: https://www.psychologytoday.com/therapy-types/existential-therapy).
Evaluating the Humanistic and Existential Perspectives
The biggest criticism of these models is that their concepts are abstract and fuzzy and, as such, are very difficult to research. The exception was Rogers, who did try to scientifically investigate his propositions, though most other humanistic-existential psychologists rejected the use of the scientific method. These perspectives also have not developed much in the way of theory, and they tend to work best with people who have adjustment issues rather than severe mental illness. Still, the perspectives offer hope to people who have experienced tragedy by asserting that we control our own destiny and can make our own choices.
Section Learning Objectives
• Describe the sociocultural model.
• Clarify how socioeconomic factors affect mental illness.
• Clarify how gender factors affect mental illness.
• Clarify how environmental factors affect mental illness.
• Clarify how multicultural factors affect mental illness.
• Evaluate the sociocultural model.
Beyond the biological and psychological influences on mental illness, sociocultural factors such as race, ethnicity, gender, religious orientation, socioeconomic status, and sexual orientation also play a role; this is the basis of the sociocultural model. Next, we explore a few of these factors.
Socioeconomic Factors
Low socioeconomic status has been linked to higher rates of mental and physical illness (Ng, Muntaner, Chung, & Eaton, 2014) due to persistent concern over unemployment or under-employment, low wages, lack of health insurance, no savings, and the inability to put food on the table, which can then lead to feeling hopeless, helpless, and dependent on others. This situation places considerable stress on an individual and can lead to higher rates of anxiety disorders and depression. Borderline personality disorder has also been found to be higher in people in low-income brackets (Tomko et al., 2014).
Gender Factors
Gender plays an important, though at times, unclear role in mental illness. It is important to understand that gender is not the cause of mental illness, though differing demands placed on males and females by society and their culture can influence the development and course of a disorder. Consider the following:
• Rates of eating disorders are higher among women than men, though both genders are affected. In the case of men, muscle dysmorphia is of concern and is characterized by extreme worry over not being muscular enough.
• OCD has an earlier age of onset in boys than girls, with most people being diagnosed by age 19.
• Women are at greater risk for developing an anxiety disorder than men.
• ADHD is more common in males than females, though females are more likely to have inattention issues.
• Boys are more likely to be diagnosed with Autism Spectrum Disorder.
• Depression occurs with greater frequency in women than men.
• Women are more likely to develop PTSD compared to men.
• Rates of SAD (Seasonal Affective Disorder) are four times greater in women than men.
Consider this…
In relation to men: “Men and women experience many of the same mental disorders but their willingness to talk about their feelings may be very different. This is one of the reasons that their symptoms may be very different as well. For example, some men with depression or an anxiety disorder hide their emotions and may appear to be angry or aggressive while many women will express sadness. Some men may turn to drugs or alcohol to try to cope with their emotional issues.”
https://www.nimh.nih.gov/health/topics/men-and-mental-health/index.shtml
In relation to women: “Some women may experience symptoms of mental disorders at times of hormone change, such as perinatal depression, premenstrual dysphoric disorder, and perimenopause-related depression. When it comes to other mental disorders such as schizophrenia and bipolar disorder, research has not found differences in rates that men and women experience these illnesses. But, women may experience these illnesses differently – certain symptoms may be more common in women than in men, and the course of the illness can be affected by the sex of the individual.”
https://www.nimh.nih.gov/health/topics/women-and-mental-health/index.shtml
Environmental Factors
Environmental factors also play a role in the development of mental illness. How so?
• In the case of borderline personality disorder, many people report experiencing traumatic life events such as abandonment, abuse, unstable relationships or hostility, and adversity during childhood.
• Cigarette smoking, alcohol use, and drug use during pregnancy are risk factors for ADHD.
• Divorce or the death of a spouse can increase the risk of developing an anxiety disorder.
• Trauma, stress, and other extreme stressors are predictive of depression.
• Malnutrition before birth, exposure to viruses, and other psychosocial factors are believed to contribute to the risk of developing schizophrenia.
• Seasonal Affective Disorder (SAD) occurs with greater frequency for those living far north or south of the equator (Melrose, 2015). Horowitz (2008) found that rates of SAD are just 1% for those living in Florida while 9% of Alaskans are diagnosed with the disorder. This is due to differences in exposure to sunlight in these regions.
Source: https://www.nimh.nih.gov/health/topics/index.shtml
Multicultural Factors
Racial, ethnic, and cultural factors are also relevant to understanding the development and course of mental disorders. Multicultural psychologists assert that both normal behavior and abnormal behavior need to be understood in relation to the individual’s unique culture and the group’s value system. Racial and ethnic minorities must contend with prejudice, discrimination, racism, economic hardships, etc. as part of their daily life and these stressors can increase vulnerability to a mental disorder (Lo & Cheng, 2014; Jones, Cross, & DeFour, 2007; Satcher, 2001), though some research suggests that ethnic identity can buffer against these stressors and protect mental health (Mossakowski, 2003). To address this unique factor, culture-sensitive therapies have been developed and include increasing the therapist’s awareness of cultural values, hardships, stressors, and/or prejudices faced by their client; the identification of suppressed anger and pain; and raising the client’s self-worth (Prochaska & Norcross, 2013).
Evaluation of the Model
The sociocultural model has contributed greatly to our understanding of the nuances of diagnosis, prognosis, course, and treatment of mental disorders across races, cultures, genders, and ethnicities. In Chapter 3 we will discuss diagnosing and classifying abnormal behavior from the perspective of the DSM-5 (Diagnostic and Statistical Manual of Mental Disorders, 5th edition). Important here is that specific culture- and gender-related diagnostic issues are discussed for each disorder, demonstrating increased awareness of the impact of these factors. Still, the sociocultural model suffers from findings that are difficult to interpret and that do not allow for the establishment of causal relationships, due to a reliance on more qualitative data gathered from case studies and ethnographic analyses (one such example is Zafra, 2016).
Chapter Recap
In Chapter 2, we first distinguished uni- and multi-dimensional models of abnormality and made a case that the latter was better to subscribe to. We then discussed biological, psychological, and sociocultural models of abnormality. In terms of the biological model, neurotransmitters, brain structures, hormones, genes, and viral infections were discussed as potential causes of mental disorders and several treatment options were described. In terms of psychological perspectives, behavioral, cognitive, humanistic and existential perspectives were discussed. Finally, the sociocultural model indicated the roles that socioeconomic status, gender, environmental, and multicultural factors can play in abnormal behavior.
Learning Objectives
• Describe clinical assessment and methods used in it.
• Clarify how mental health professionals diagnose mental disorders in a standardized way.
• Discuss reasons to seek treatment and the importance of psychotherapy.
Chapter 3 covers the issues of clinical assessment, diagnosis, and treatment. We will define assessment and then describe key issues such as reliability, validity, standardization, and specific methods that are used. In terms of clinical diagnosis, we will discuss the two main classification systems used around the world – the DSM-5 and ICD-10. Finally, we discuss reasons why people may seek treatment and what to expect when doing so.
Chapter 3: Clinical Assessment, Diagnosis, and Treatment
Section Learning Objectives
• Define clinical assessment.
• Clarify why clinical assessment is an ongoing process.
• Define and exemplify reliability.
• Define and exemplify validity.
• Define standardization.
• List and describe six methods of assessment.
What is Clinical Assessment?
In order for a mental health professional to be able to effectively treat a client and know that the selected treatment actually worked (or is working), he/she first must engage in the clinical assessment of the client. Clinical assessment refers to collecting information and drawing conclusions through the use of observation, psychological tests, neurological tests, and interviews to determine what the person’s problem is and what symptoms he/she is presenting with. This collection of information involves learning about the client’s skills, abilities, personality characteristics, cognitive and emotional functioning, social context (e.g., environmental stressors), and cultural factors particular to them such as their language or ethnicity. Clinical assessment is not just conducted at the beginning of the process of seeking help but all throughout the process. Why is that?
Consider this. First, we need to determine if a treatment is even needed. By having a clear accounting of the person’s symptoms and how they affect daily functioning we can determine to what extent the individual is adversely affected. Assuming treatment is needed, our second reason to engage in clinical assessment is to determine what treatment will work best. As you will see later in this chapter, there are numerous approaches to treatment. These include Behavior Therapy, Cognitive Therapy, Cognitive-Behavioral Therapy (CBT), Humanistic-Experiential Therapies, Psychodynamic Therapies, Couples and Family Therapy, and biological treatments (e.g., psychopharmacology). Of course, for any mental disorder, some of the aforementioned therapies will have greater efficacy than others. Even if several can work well, it does not mean a particular therapy will work well for that specific client. Assessment can help the clinician figure this out. Finally, we need to know if the treatment worked. This will involve measuring symptoms and behavior before any treatment is used and then measuring symptoms and behavior while the treatment is in place. We will even want to measure symptoms and behavior after the treatment ends to make sure symptoms do not return. Knowing what the person’s baselines are for different aspects of psychological functioning will help us to see when improvement occurs. To recap, obtaining the baselines happens in the beginning, implementing the treatment plan happens more so in the middle, and then making sure the treatment produces the desired outcome occurs at the end. It should be clear from this discussion that clinical assessment is an ongoing process.
Key Concepts in Assessment
Important to the assessment process are three critical concepts – reliability, validity, and standardization. Actually, these three are important to science in general. First, we want assessment to be reliable or consistent. Outside of clinical assessment, when our car has an issue and we take it to the mechanic, we want to make sure that what one mechanic says is wrong with our car is the same as what another says or even two others. If not, the measurement tools they use to assess cars are flawed. The same is true of a patient who is experiencing a mental disorder. If one mental health professional says the person has major depressive disorder and another says the issue is borderline personality disorder, then there is an issue with the assessment tool being used. Ensuring that two different raters (e.g., mechanics, mental health professionals) are consistent in their assessments is called interrater reliability. Another type of reliability occurs when a person takes a test one day, and then the same test on another day. We would expect the person’s answers to be consistent with one another, which is called test-retest reliability. An example is if the person takes the Minnesota Multiphasic Personality Inventory (MMPI) on Tuesday and then the same test on Friday, then unless something miraculous or tragic happened over the two days in between tests, the scores on the MMPI should be nearly identical to one another. In other words, the two scores (test and retest) should be correlated with one another. If the test is reliable, the correlation should be very high (remember, a correlation goes from -1.00 to +1.00 and positive means as one score goes up, so does the other, so the correlation for the two tests should be high on the positive side).
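The test-retest logic above can be illustrated numerically: reliability is simply the correlation between the two administrations. Below is a minimal sketch in plain Python; the scores are hypothetical, and the function is a straightforward implementation of the Pearson correlation described in the text.

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two score lists (ranges from -1.00 to +1.00)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical MMPI-style scale scores for five clients,
# tested on Tuesday and retested on Friday.
tuesday = [52, 61, 45, 70, 58]
friday  = [54, 60, 47, 69, 59]

r = pearson_r(tuesday, friday)
print(round(r, 3))  # a value close to +1.00 indicates good test-retest reliability
```

If the Friday scores were unrelated to the Tuesday scores, r would fall toward zero, signaling that the test is not measuring anything stable.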
In addition to reliability, we want to make sure the test measures what it says it measures. This is called validity. Let’s say a new test is developed to measure symptoms of depression. It is compared against an existing, proven test, such as the Beck Depression Inventory (BDI). If the new test measures depression, then the scores on it should be highly correlated with the ones obtained by the BDI. This is called concurrent or descriptive validity. We might even ask if an assessment tool looks valid. If we answer yes, then it has face validity, though it should be noted that this is not based on any statistical or evidence-based method of assessing validity. An example would be a personality test that asks about how people behave in certain situations; it seems, on its face, to measure personality because we have an overall feeling that it measures what we expect it to measure.
A tool should also be able to accurately predict what will happen in the future, called predictive validity. Let’s say we want to tell if a high school student will do well in college. We might create a national exam to test needed skills and call it something like the Scholastic Aptitude Test (SAT). We would have high school students take it by their senior year and then wait until they are in college for a few years and see how they are doing. If they did well on the SAT, we would expect that at that point, they should be doing well in college. If so, then the SAT accurately predicts college success. The same would be true of a test such as the Graduate Record Exam (GRE) and its ability to predict graduate school performance.
Finally, we want to make sure that the experience one patient has when taking a test or being assessed is the same as another patient taking the test the same day or on a different day, and with either the same tester or another tester. This is accomplished with the use of clearly laid out rules, norms, and/or procedures, and is called standardization. Equally important is that mental health professionals interpret the results of the testing in the same way or otherwise it will be unclear what the meaning of a specific score is.
Methods of Assessment
So how do we assess patients in our care? We will discuss psychological tests, neurological tests, the clinical interview, behavioral assessment, and a few others in this section.
The Clinical Interview
A clinical interview is a face-to-face encounter between a mental health professional and a patient in which the former observes the latter and gathers data about the person’s behavior, attitudes, current situation, personality, and life history. The interview may be unstructured in which open-ended questions are asked, structured in which a specific set of questions according to an interview schedule are asked, or semi-structured, in which there is a pre-set list of questions but clinicians are able to follow up on specific issues that catch their attention.
A mental status examination is used to organize the information collected during the interview and to systematically evaluate the client through a series of observations and questions assessing appearance and behavior (e.g., grooming and body language), thought processes and content (e.g., disorganized speech or thought and false beliefs), mood and affect (e.g., hopelessness or elation), intellectual functioning (e.g., speech and memory), and awareness of surroundings (e.g., does the client know where he/she is, what day it is, and who he/she is?). The exam covers areas not normally part of the interview and allows the mental health professional to determine which areas need to be examined further. The limitation of the interview is that it lacks reliability, especially in the case of the unstructured interview.
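The reliability limitation noted above can be quantified. One standard statistic for interrater agreement (not named in this module, but widely used for exactly this problem) is Cohen’s kappa, which corrects the raw percentage of agreement for the agreement expected by chance alone. A sketch with hypothetical diagnoses from two clinicians:

```python
def cohens_kappa(rater1, rater2):
    """Cohen's kappa: agreement between two raters, corrected for chance agreement."""
    n = len(rater1)
    labels = set(rater1) | set(rater2)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement: product of each rater's marginal rate, summed over labels.
    expected = sum(
        (rater1.count(label) / n) * (rater2.count(label) / n) for label in labels
    )
    return (observed - expected) / (1 - expected)

# Hypothetical diagnoses of ten clients by two clinicians
# ("MDD" = major depressive disorder, "BPD" = borderline personality disorder).
clin_a = ["MDD", "MDD", "BPD", "MDD", "BPD", "MDD", "MDD", "BPD", "MDD", "MDD"]
clin_b = ["MDD", "MDD", "BPD", "MDD", "MDD", "MDD", "MDD", "BPD", "MDD", "MDD"]

print(round(cohens_kappa(clin_a, clin_b), 2))  # → 0.74
```

Here the two clinicians agree on 9 of 10 clients (90%), but because most clients receive the more common diagnosis, chance alone would produce substantial agreement; kappa discounts that, yielding a more honest 0.74.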
Psychological Tests and Inventories
Psychological tests are used to assess the client’s personality, social skills, cognitive abilities, emotions, behavioral responses, or interests and can be administered either individually or to groups. Projective tests consist of simple ambiguous stimuli that can elicit an unlimited number of responses. They include the Rorschach, or inkblot, test and the Thematic Apperception Test, which requires the individual to write a complete story about each of 20 cards shown to them and give details about what led up to the scene depicted, what the characters are thinking, what they are doing, and what the outcome will be. From these responses, the clinician gains perspective on the patient’s worries, needs, emotions, and conflicts. Another projective test, the sentence completion test, asks individuals to finish an incomplete sentence, such as ‘My mother …’ or ‘I hope ….’
Personality inventories ask clients to state whether each item in a long list of statements applies to them, and can ask about feelings, behaviors, or beliefs. Examples include the MMPI, or Minnesota Multiphasic Personality Inventory, and the NEO-PI-R, which is a concise measure of the five major domains of personality – Neuroticism, Extroversion, Openness, Agreeableness, and Conscientiousness. Six facets define each of the five domains, and the measure assesses emotional, interpersonal, experimental, attitudinal, and motivational styles (Costa & McCrae, 1992). These inventories have the advantage of being easy to administer by either a professional or the individual taking them, are standardized, are objectively scored, and are completed either on the computer or through paper and pencil. That said, personality cannot be directly assessed, and so you can never completely know the individual on the basis of these inventories.
Neurological Tests
Neurological tests are also used to diagnose cognitive impairments caused by brain damage due to tumors, infections, or head injury; or changes in brain activity. Positron Emission Tomography or PET is used to study the brain’s functioning and begins by injecting the patient with a radionuclide which collects in the brain. Patients then lie on a scanning table while a ring-shaped machine is positioned over their head. Images are produced that yield information about the functioning of the brain. Magnetic Resonance Imaging or MRI produces 3D images of the brain or other body structures using magnetic fields and computers. They are used to detect structural abnormalities such as brain and spinal cord tumors or nervous system disorders such as multiple sclerosis. Finally, computed tomography or the CT scan involves taking X-rays of the brain at different angles that are then combined. They are used to detect structural abnormalities such as brain tumors and brain damage caused by head injuries.
Physical Examination
Many mental health professionals recommend the patient see their family physician for a physical examination, which is much like a check-up. Why is that? Some organic conditions, such as hyperthyroidism or hormonal irregularities, manifest behavioral symptoms similar to those of mental disorders, so ruling such conditions out can spare the client costly and unneeded therapy or surgery.
Behavioral Assessment
Within the realm of behavior modification and applied behavior analysis is behavioral assessment, which is simply the measurement of a target behavior. The target behavior is whatever behavior we want to change, and it can be in excess (needing to be reduced) or in a deficit state (needing to be increased). During behavioral assessment we assess the ABCs of behavior:
• Antecedents are the environmental events or stimuli that trigger a behavior;
• Behaviors are what the person does, says, thinks/feels; and
• Consequences are the outcome of a behavior that either encourages it to be made again in the future or discourages its future occurrence.
Though we might try to change another person’s behavior using behavior modification, we can also change our own behavior using self-monitoring which refers to measuring and recording one’s own ABCs. In the context of psychopathology, behavior modification can be useful in treating phobias, reducing habit disorders, and ridding the person of maladaptive cognitions.
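The ABC framework lends itself to a simple self-monitoring log. The sketch below is purely illustrative (the client, target behavior, and entries are hypothetical): each entry records an antecedent, behavior, and consequence, from which we can tally a baseline frequency for the target behavior and identify its most common triggers.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class ABCRecord:
    """One self-monitoring entry: antecedent, behavior, consequence."""
    antecedent: str
    behavior: str
    consequence: str

# Hypothetical log kept by a client whose target behavior is nail biting.
log = [
    ABCRecord("watching TV alone", "bit nails", "felt brief relief"),
    ABCRecord("studying for exam", "bit nails", "felt brief relief"),
    ABCRecord("watching TV alone", "bit nails", "partner commented"),
    ABCRecord("talking with friend", "kept hands busy", "felt proud"),
]

# Baseline frequency of the target behavior, and which antecedents trigger it.
target = [rec for rec in log if rec.behavior == "bit nails"]
print(len(target))                            # baseline count of the target behavior
print(Counter(rec.antecedent for rec in target))  # most common triggers
```

A log like this supplies the baseline measurements discussed earlier in the module: the counts taken before treatment can later be compared with counts taken during and after treatment.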
A limitation of this method is that the process of observing and/or recording a behavior can cause the behavior to change, called reactivity. Have you ever noticed someone staring at you while you sat and ate your lunch? If you have, what did you do? Did you change your behavior? Did you become self-conscious? Likely yes, and this is an example of reactivity. Another issue is that a behavior made in one situation may not be made in other situations, such as your significant other only acting out at their favorite team’s football game and not at home. This form of validity is called cross-sectional validity.
Intelligence Tests
Intelligence testing is occasionally used to determine the client’s level of cognitive functioning. It consists of a series of tasks asking the patient to use both verbal and nonverbal skills. An example is the Stanford-Binet Intelligence test, which is used to assess fluid reasoning, knowledge, quantitative reasoning, visual-spatial processing, and working memory. These tests are rather time-consuming and require specialized training to administer; as such, they are typically used only in cases of suspected cognitive disorder or intellectual disability. Intelligence tests have been criticized for failing to predict future behaviors such as achievement and for reflecting social or cultural factors/biases rather than actual intelligence.
Section Learning Objectives
• Explain what it means to make a clinical diagnosis.
• Define syndrome.
• Clarify and exemplify what a classification system does.
• Identify the two most used classification systems.
• Outline the history of the DSM.
• Identify and explain the elements of a diagnosis.
• Outline the major disorder categories of the DSM-5.
• Describe the ICD-10.
• Clarify why the DSM-5 and ICD-11 need to be harmonized.
Clinical Diagnosis and Classification Systems
To begin any type of treatment, the client/patient must be clearly diagnosed with a mental disorder. Clinical diagnosis is the process of using assessment data to determine if the pattern of symptoms the person presents with is consistent with the diagnostic criteria for a specific mental disorder set forth in an established classification system such as the DSM-5 or ICD-10 (both will be described shortly). Any diagnosis should have clinical utility, meaning it aids the mental health professional in determining the prognosis, the treatment plan, and possible outcomes of treatment (APA, 2013). Receiving a diagnosis does not necessarily mean the person requires treatment. This decision is made based upon how severe the symptoms are, the level of distress caused by the symptoms, symptom salience such as expressing suicidal ideation, risks and benefits of treatment, disability, and other factors (APA, 2013). Likewise, a patient may not meet full criteria for a diagnosis but require treatment nonetheless.
Symptoms that cluster together on a regular basis are called a syndrome. If they also follow the same, predictable course, we say that they are characteristic of a specific disorder. Classification systems for mental disorders provide mental health professionals with an agreed upon list of disorders falling in distinct categories for which there are clear descriptions and criteria for making a diagnosis. Distinct is the key word here. People experiencing delusions, hallucinations, disorganized speech, catatonia, and/or negative symptoms are different from people presenting with a primary clinical deficit in cognitive functioning that is not developmental in nature but has been acquired (i.e., they have shown a decline in cognitive functioning over time). The former would likely be diagnosed with a schizophrenia spectrum disorder while the latter likely has a neurocognitive disorder (NCD). The latter can be further distinguished from neurodevelopmental disorders which manifest early in development and involve developmental deficits that cause impairments in social, academic, or occupational functioning (APA, 2013). These three disorder groups or categories can be clearly distinguished from one another. Classification systems also permit the gathering of statistics for the purpose of determining incidence and prevalence rates, they facilitate research on the etiology and treatment of disorders, and they conform to the requirements of insurance companies for the payment of claims.
The most widely used classification system in the United States is the Diagnostic and Statistical Manual of Mental Disorders currently in its 5th edition and produced by the American Psychiatric Association (APA, 2013). Alternatively, the World Health Organization (WHO) produces the International Statistical Classification of Diseases and Related Health Problems (ICD) currently in its 10th edition with an 11th edition expected to be published in 2018. We will begin by discussing the DSM and then move to the ICD.
The DSM Classification System
A Brief History of the DSM
The DSM 5 was published in 2013 and took the place of the DSM IV-TR (TR means Text Revision; published in 2000) but the history of the DSM goes back to 1844 when the American Psychiatric Association published a predecessor of the DSM which was a “statistical classification of institutionalized mental patients” and “…was designed to improve communication about the types of patients cared for in these hospitals” (APA, 2013, p. 6). However, the first official version of the DSM was not published until 1952. The DSM evolved through four subsequent editions after World War II into a diagnostic classification system to be used by psychiatrists and physicians, but also other mental health professionals. The Herculean task of revising the DSM IV-TR began in 1999 when the APA embarked upon an evaluation of the strengths and weaknesses of the DSM in coordination with the World Health Organization (WHO) Division of Mental Health, the World Psychiatric Association, and the National Institute of Mental Health (NIMH). This resulted in the publication of a monograph in 2002 called, A Research Agenda for DSM-V. From 2003 to 2008, the APA, WHO, NIMH, the National Institute on Drug Abuse (NIDA), and the National Institute on Alcoholism and Alcohol Abuse (NIAAA) convened 13 international DSM-5 research planning conferences, “to review the world literature in specific diagnostic areas to prepare for revisions in developing both DSM-5 and the International Classification of Disease, 11th Revision (ICD-11)” (APA, 2013).
After the naming of a DSM-5 Task Force Chair and Vice-Chair in 2006, task force members were selected and approved by 2007 and workgroup members were approved in 2008. What resulted from this was an intensive process of “conducting literature reviews and secondary analyses, publishing research reports in scientific journals, developing draft diagnostic criteria, posting preliminary drafts on the DSM-5 Web site for public comment, presenting preliminary findings at professional meetings, performing field trials, and revisiting criteria and text” (APA, 2013).
What resulted was a “common language for communication between clinicians about the diagnosis of disorders” along with a realization that the criteria and disorders contained within were based on current research and may undergo modification with new evidence gathered (APA, 2013). Additionally, some disorders were not included within the main body of the document because they did not have the scientific evidence to support their widespread clinical use, but were included in Section III under “Conditions for Further Study” to “highlight the evolution and direction of scientific advances in these areas to stimulate further research” (APA, 2013).
Elements of a Diagnosis
The DSM-5 states that the following make up the key elements of a diagnosis (APA, 2013):
• Diagnostic Criteria and Descriptors – Diagnostic criteria are the guidelines for making a diagnosis. When the full criteria are met, mental health professionals can add severity and course specifiers to indicate the patient’s current presentation. If the full criteria are not met, designators such as “other specified” or “unspecified” can be used. If applicable, an indication of severity (mild, moderate, severe, or extreme), descriptive features, and course (type of remission – partial or full – or recurrent) can be provided with the diagnosis. The final diagnosis is based on the clinical interview, text descriptions, criteria, and clinical judgment.
• Subtypes and Specifiers – Since the same disorder can be manifested in different ways in different individuals, the DSM uses subtypes and specifiers to better characterize an individual’s disorder. Subtypes denote “mutually exclusive and jointly exhaustive phenomenological subgroupings within a diagnosis” (APA, 2013). For example, non-rapid eye movement sleep arousal disorders can have either a sleepwalking or sleep terror type. Enuresis is nocturnal only, diurnal only, or both. Specifiers are not mutually exclusive or jointly exhaustive, and so more than one specifier can be given. For instance, binge eating disorder has remission and severity specifiers. Major depressive disorder has a wide range of specifiers that can be used to characterize the severity, course, or symptom clusters. Again, the fundamental distinction between subtypes and specifiers is that there can be only one subtype but multiple specifiers.
• Principal Diagnosis – A principal diagnosis is used when more than one diagnosis is given for an individual (i.e., when an individual has comorbid disorders). The principal diagnosis is the reason for the admission in an inpatient setting or the reason for a visit resulting in ambulatory care medical services in outpatient settings. The principal diagnosis is generally the main focus of treatment.
• Provisional Diagnosis – If not enough information is available for a mental health professional to make a definitive diagnosis, but there is a strong presumption that the full criteria will be met with additional information or time, then the provisional specifier can be used.
DSM-5 Disorder Categories
The DSM-5 includes the following categories of disorders:
Table 3.1. DSM-5 Classification System of Mental Disorders
Disorder Category Short Description
Neurodevelopmental Disorders A group of conditions that arise in the developmental period and include intellectual disability, communication disorders, autism spectrum disorder, motor disorders, and ADHD
Schizophrenia Spectrum and Other Psychotic Disorders Disorders characterized by one or more of the following: delusions, hallucinations, disorganized thinking and speech, disorganized motor behavior, and negative symptoms
Bipolar and Related Disorders Characterized by mania or hypomania and possibly depressed mood; includes Bipolar I and II, cyclothymic disorder
Depressive Disorders Characterized by sad, empty, or irritable mood, as well as somatic and cognitive changes that affect functioning; includes major depressive and persistent depressive disorders
Anxiety Disorders Characterized by excessive fear and anxiety and related behavioral disturbances; includes phobias, separation anxiety disorder, panic disorder, and generalized anxiety disorder
Obsessive-Compulsive and Related Disorders Characterized by obsessions and compulsions and includes OCD, hoarding, and body dysmorphic disorders
Trauma- and Stressor-Related Disorders Characterized by exposure to a traumatic or stressful event; PTSD, acute stress disorder, and adjustment disorders
Dissociative Disorders Characterized by a disruption or disturbance in memory, identity, emotion, perception, or behavior; dissociative identity disorder, dissociative amnesia, and depersonalization/derealization disorder
Somatic Symptom and Related Disorders Characterized by prominent somatic symptoms; includes illness anxiety disorder, somatic symptom disorder, and conversion disorder
Feeding and Eating Disorders Characterized by a persistent disturbance of eating or eating-related behavior to include bingeing and purging
Elimination Disorders Characterized by the inappropriate elimination of urine or feces; usually first diagnosed in childhood or adolescence
Sleep-Wake Disorders Characterized by sleep-wake complaints about the quality, timing, and amount of sleep; includes insomnia, sleep terrors, narcolepsy, and sleep apnea
Sexual Dysfunctions Characterized by sexual difficulties and include premature ejaculation, female orgasmic disorder, and erectile disorder
Gender Dysphoria Characterized by distress associated with the incongruity between one’s experienced or expressed gender and the gender assigned at birth
Disruptive, Impulse-Control, and Conduct Disorders Characterized by problems in the self-control of emotions and behavior that violate the rights of others or place the individual in violation of societal norms; includes oppositional defiant disorder, antisocial personality disorder, kleptomania, etc.
Substance-Related and Addictive Disorders Characterized by the continued use of a substance despite significant problems related to its use
Neurocognitive Disorders Characterized by an acquired decline in cognitive functioning over time, i.e., one that has not been present since birth or early in life
Personality Disorders Characterized by a pattern of stable traits that are inflexible and pervasive and that lead to distress or impairment
Paraphilic Disorders Characterized by recurrent and intense sexual fantasies that can cause harm to the individual or others; includes exhibitionism, voyeurism, and sexual sadism
The ICD-10
In 1893, the International Statistical Institute adopted the International List of Causes of Death which was the first edition of the ICD. The World Health Organization was entrusted with the development of the ICD in 1948 and published the 6th version (ICD-6), which was the first version to include mental disorders. The ICD-10 was endorsed in May 1990 by the 43rd World Health Assembly. The WHO states:
ICD is the foundation for the identification of health trends and statistics globally, and the international standard for reporting diseases and health conditions. It is the diagnostic classification standard for all clinical and research purposes. ICD defines the universe of diseases, disorders, injuries and other related health conditions, listed in a comprehensive, hierarchical fashion that allows for:
• easy storage, retrieval and analysis of health information for evidence-based decision-making;
• sharing and comparing health information between hospitals, regions, settings, and countries;
• and data comparisons in the same location across different time periods.
Source: http://www.who.int/classifications/icd/en/
The ICD lists many types of diseases and disorders and includes Chapter V: Mental and Behavioral Disorders. The list of mental disorders is broken down as follows:
• Organic, including symptomatic, mental disorders
• Mental and behavioral disorders due to psychoactive substance use
• Schizophrenia, schizotypal and delusional disorders
• Mood (affective) disorders
• Neurotic, stress-related and somatoform disorders
• Behavioral syndromes associated with physiological disturbances and physical factors
• Disorders of adult personality and behavior
• Mental retardation
• Disorders of psychological development
• Behavioral and emotional disorders with onset usually occurring in childhood and adolescence
• Unspecified mental disorder
Harmonization of DSM-5 and ICD-11
As noted earlier, the ICD-11 is currently in development, with an expected publication date in 2018. According to the DSM-5, there is an effort to harmonize the two classification systems to allow a more accurate collection of national health statistics and design of clinical trials, to increase the ability to replicate scientific findings across national boundaries, and to rectify the lack of agreement between DSM-IV and ICD-10 diagnoses (APA, 2013).
Section Learning Objectives
• Clarify reasons why an individual may need to seek treatment.
• Critique myths about psychotherapy.
Seeking Treatment
Who Seeks Treatment?
Would you describe the people who seek treatment as being on the brink, crazy, or desperate? Or can the ordinary Joe in need of advice seek out mental health counseling? The answer is that anyone can. David Sack, M.D. (2013) writes in an article entitled 5 Signs It’s Time to Seek Therapy, published in Psychology Today, that “most people can benefit from therapy at some point in their lives” and that though the signs one needs to seek help are obvious at times, people often try “to sustain [their] busy life until it sets in that life has become unmanageable.” So when should we seek help? First, if we feel sad, angry, or not like ourselves. We might be withdrawing from friends and family or sleeping more or less than we usually do. Second, if we are abusing drugs, alcohol, food, or sex to deal with life’s problems. In this case, our coping skills may need some work. Third, in instances when we have lost a loved one or something else important to us, whether due to a death or divorce, the grief may be too much to process. Fourth, a traumatic event may have occurred, such as abuse, a crime, an accident, chronic illness, or rape. Finally, if we have stopped doing the things we enjoy the most. Sack (2013) says, “If you decide that therapy is worth a try, it doesn’t mean you’re in for a lifetime of ‘head shrinking.’ In fact, a 2001 study in the Journal of Counseling Psychology found that most people feel better within seven to 10 visits. In another study, published in 2006 in the Journal of Consulting and Clinical Psychology, 88 percent of therapy-goers reported improvements after just one session.”
For more on this article, please visit:
https://www.psychologytoday.com/blog...e-seek-therapy
When Friends, Family, and Self-Healing are Not Enough
If you are experiencing any of the aforementioned issues, you should seek help. Instead of facing the potential stigma of talking to a mental health professional, many people think that talking through their problems with friends or family is just as good. Though you will ultimately need these people to see you through your recovery, they do not have the training and years of experience that a psychologist or similar professional has. “Psychologists can recognize behavior or thought patterns objectively, more so than those closest to you who may have stopped noticing — or maybe never noticed. A psychologist might offer remarks or observations similar to those in your existing relationships, but their help may be more effective due to their timing, focus or your trust in their neutral stance” (http://www.apa.org/helpcenter/psychotherapy-myths.aspx). You also should not wait to recover on your own. It is not a failure to admit you need help and there could be a biological issue that makes it almost impossible to heal yourself.
Prevention
As a society, we often wait for a mental or physical health issue to emerge and then scramble to treat it. More recently, medicine and science have taken a prevention stance, which involves identifying the factors that cause specific mental health issues and implementing interventions to stop them from happening, or at least minimize their deleterious effects. Our focus has shifted from individuals to the population. Mental health promotion programs have been instituted with success in schools (Shoshani & Steinmetz, 2014; Weare & Nind, 2011; Berkowitz & Bier, 2007), in the workplace (Czabała, Charzyńska, & Mroziak, 2011), with undergraduate and graduate students (Conley et al., 2017; Bettis et al., 2017), in relation to bullying (Bradshaw, 2015), and with the elderly (Forsman et al., 2011). Many researchers believe the time is ripe to move from knowledge to action and to expand public mental health initiatives (Wahlbeck, 2015).
So What Exactly is Psychotherapy?
APA states that in psychotherapy, “psychologists apply scientifically validated procedures to help people develop healthier, more effective habits.” Several different approaches can be utilized to include behavior, cognitive and cognitive-behavior, humanistic-experiential, psychodynamic, couples and family, and biological therapies/treatments. (article quoted can be found at: http://www.apa.org/helpcenter/understanding-psychotherapy.aspx)
The Client-Therapist Relationship
What is key is the client-therapist relationship. APA says, “Psychotherapy is a collaborative treatment based on the relationship between an individual and a psychologist. Grounded in dialogue, it provides a supportive environment that allows you to talk openly with someone who’s objective, neutral and nonjudgmental. You and your psychologist will work together to identify and change the thought and behavior patterns that are keeping you from feeling your best.” It’s not just about solving the problem you saw the therapist for, but also about learning new skills to better help you cope in the future when faced with the same or similar environmental stressors.
So how do you find a psychotherapist? Several strategies may prove fruitful. You could ask family and friends, your primary care physician (PCP), look online, consult an area community mental health center, your local university’s psychology department, state psychological association, or use APA’s Psychologist Locator Service (https://locator.apa.org/?_ga=2.160567293.1305482682.1516057794-1001575750.1501611950). Once you find a list of psychologists or other practitioners, choose the right one for you by determining if you plan on attending alone or with family, what you wish to get out of your time with a psychotherapist, how much your insurance company pays for (and if you have to pay out of pocket how much you can afford), when you can attend sessions, and how far you are willing to travel. Once you have done this, make your first appointment.
But what should you bring? APA suggests, “To make the most of your time, make a list of the points you want to cover in your first session and what you want to work on in psychotherapy. Be prepared to share information about what’s bringing you to the psychologist. Even a vague idea of what you want to accomplish can help you and your psychologist proceed efficiently and effectively.” Additionally, they suggest taking report cards, a list of medications, information on the reasons for a referral, a notebook, a calendar to schedule future visits if needed, and a form of payment.
What should you expect? Your therapist and you will work to develop a full history which could take several visits. From this, a treatment plan will be developed. “This collaborative goal-setting is important, because both of you need to be invested in achieving your goals. Your psychologist may write down the goals and read them back to you, so you’re both clear about what you’ll be working on. Some psychologists even create a treatment contract that lays out the purpose of treatment, its expected duration and goals, with both the individual’s and psychologist’s responsibilities outlined.”
After the initial visit, the mental health professional may conduct tests to further understand your condition but will definitely continue talking through the issue. He/she may even suggest involving others, especially in cases of relationship issues. Resilience is a skill that will be taught so that you can better handle future situations.
Does it Work?
APA writes, “Reviews of these studies show that about 75 percent of people who enter psychotherapy show some benefit. Other reviews have found that the average person who engages in psychotherapy is better off by the end of treatment than 80 percent of those who don’t receive treatment at all.” Treatment works due to finding an evidence-based treatment that is specific for the person’s problem; the expertise of the therapist; and the characteristics, values, culture, preferences, and personality of the client.
How Do You Know You are Finished?
“How long psychotherapy takes depends on several factors: the type of problem or disorder, the patient’s characteristics and history, the patient’s goals, what’s going on in the patient’s life outside psychotherapy and how fast the patient is able to make progress.” It is important to note that psychotherapy is not a lifelong commitment and it is a joint decision of client and therapist as to when it ends. Once over, expect to have a periodic check-up with your therapist. This might be weeks or even months after your last session. If you need to see him/her sooner, schedule an appointment. APA calls this a “mental health tune up” or a “booster session.”
For more on psychotherapy, please see the very interesting APA article on this matter:
http://www.apa.org/helpcenter/unders...hotherapy.aspx
Chapter Recap
With the conclusion of Chapter 3, you now have the necessary foundation to understand each of the groups of disorders we discuss in the remaining chapters. In Chapter 3 we discussed clinical assessment, diagnosis, and treatment. In terms of assessment, we covered key concepts such as reliability, validity, and standardization; and discussed methods of assessment such as the clinical interview, psychological tests, personality inventories, neurological tests, the physical examination, behavioral assessment, and intelligence tests. In terms of diagnosis, we discussed the classification systems of the DSM-5 and ICD-10. For treatment, we discussed reasons why someone may seek treatment, self-treatment, psychotherapy, the client-therapist relationship, and evidence for the success of psychotherapy. We discussed some of the specific therapies in Chapter 3 but will cover others throughout this book and in terms of the disorders they are used to treat.
Learning Objectives
• Describe the various anxiety disorders and their symptoms.
• Describe the epidemiology of anxiety disorders.
• Describe comorbidity in relation to anxiety disorders.
• Describe treatment options for anxiety disorders.
• Describe the etiology of anxiety disorders.
In Chapter 4, we will discuss matters related to anxiety disorders including their clinical presentation, epidemiology, comorbidity, treatment options, and etiology. Our discussion will include Panic Disorder, Generalized Anxiety Disorder, Specific Phobias, Social Anxiety Disorder, and Agoraphobia. Be sure you refer to Chapters 1-3 for explanations of key terms (Chapter 1), an overview of the various models to explain psychopathology (Chapter 2), and descriptions of the various therapies (Chapter 3).
04: Anxiety Disorders
Section Learning Objectives
• Describe how panic disorder presents itself.
• Describe the epidemiology of panic disorder.
• Indicate which disorders are commonly comorbid with panic disorder.
• Describe the treatment options for panic disorder.
Clinical Description
Panic disorder consists of a series of recurrent, unexpected panic attacks coupled with the fear of future panic attacks. A panic attack is defined as a sudden or abrupt surge of fear or impending doom along with at least four physical or cognitive symptoms (e.g., heart palpitations, sweating, trembling, shortness of breath, dizziness, or fear of losing control or dying). The symptoms generally peak within a few minutes, although it seems much longer for the individual experiencing the panic attack.
There are two key components to panic disorder—the attacks are unexpected, meaning there is nothing that triggers them, and they are recurrent, meaning they occur multiple times. Because these panic attacks occur frequently and essentially “out of the blue,” they cause significant worry or anxiety in the individual, as they are unsure of when the next attack will occur. In some individuals, significant behavioral changes occur, such as fear of leaving their home or attending large events, as the individual is fearful an attack will happen in one of these situations, causing embarrassment. Additionally, individuals report worry that others will think they are “going crazy” or losing control if they were to observe an individual experiencing a panic attack. Occasionally, an additional diagnosis of agoraphobia is given to an individual with panic disorder if their behaviors meet diagnostic criteria for this disorder as well (see more below).
The frequency and intensity of these panic attacks vary widely among individuals. Some people report panic attacks occurring once a week for months on end; others report more frequent attacks multiple times a day, but then experience weeks or months without any attacks. The intensity of symptoms also varies among individuals, with some individuals reporting experiencing nearly all 14 symptoms and others only reporting the minimum 4 required for the diagnosis. Furthermore, individuals report variability within their own panic attack symptoms, with some panic attacks presenting with more symptoms than others. It should be noted that at this time, there is no identifying information (i.e., demographic information) to suggest why some individuals experience panic attacks more frequently or more severely than others.
Epidemiology
Prevalence rates for panic disorder are estimated at around 2-3% in adults and adolescents. Higher rates of panic disorder are found in American Indians and non-Latino whites. Females are more commonly diagnosed than males with a 2:1 diagnosis rate—this gender discrepancy is seen throughout the lifespan. Although panic disorder can occur in young children, it is generally not observed in individuals younger than 14 years of age.
Comorbidity
Panic disorder rarely occurs in isolation, as many individuals also report symptoms of other anxiety disorders, major depression, and substance abuse. There is mixed evidence as to whether panic disorder precedes other comorbid psychological disorders—estimates suggest that 1/3 of individuals with panic disorder will experience depressive symptoms prior to panic symptoms whereas the remaining 2/3 will experience depressive symptoms concurrently or after the onset of panic disorder (APA, 2013).
Unlike some of the other anxiety disorders, there is a high comorbid diagnosis with general medical symptoms. More specifically, individuals with panic disorder are more likely to report somatic symptoms such as dizziness, cardiac arrhythmias, asthma, irritable bowel syndrome, and hyperthyroidism (APA, 2013). The relationship between panic symptoms and somatic symptoms is unclear; however, there does not appear to be a direct medical cause between the two.
Treatment
Cognitive Behavioral Therapy (CBT)
CBT is the most effective treatment option for individuals with panic disorder as the focus is on correcting misinterpretations of bodily sensations (Craske & Barlow, 2014). Nearly 80 percent of people with panic disorder report complete remission of symptoms after mastering the following five components of CBT for panic disorder (Craske & Barlow, 2014).
1. Psychoeducation. Treatment begins by educating the client on the nature of panic disorder, the underlying causes of panic disorder, as well as the mechanisms that maintain the disorder such as the physical, cognitive, and behavioral response systems (Craske & Barlow, 2014). This part of treatment is fundamental in correcting any myths or misconceptions about panic symptoms, as they often contribute to the exacerbation of panic symptoms.
2. Self-monitoring. Self-monitoring, or self-observation, is essential to the CBT treatment process for panic disorder. In this part of treatment, the individual is taught to identify the physiological cues immediately leading up to and during a panic attack. The client is then encouraged to identify and document/record the thoughts and behaviors associated with these physiological symptoms. By bringing awareness to the symptoms, as well as the relationship between physical arousal and cognitive/behavioral responses, the client is learning the fundamental processes by which they can manage their panic symptoms (Craske & Barlow, 2014).
3. Relaxation training. Prior to engaging in exposure training, the individual must learn a relaxation technique to apply during the onset of panic attacks. While breathing training was once included as the relaxation training technique of choice for panic disorder, due to the high report of hyperventilation during panic attacks, more recent research has failed to support this technique as effective in the treatment of panic disorder (Schmidt et al., 2000). Findings suggest that breathing retraining is more commonly misused as a means for avoiding physical symptoms as opposed to an effective physiological response to stress (Craske & Barlow, 2014). To replace the breathing retraining, Craske & Barlow (2014) suggest progressive muscle relaxation (PMR). In PMR, the client learns to tense and relax various large muscle groups throughout the body. Generally speaking, the client is encouraged to start at either the head or the feet and gradually work their way through the entire body, holding the tension for roughly 10 seconds before relaxing. The theory behind PMR is that in tensing the muscles for a prolonged period of time, the individual exhausts those muscles, forcing them (and eventually the entire body) to engage in relaxation (McCallie, Blum, & Hood, 2006).
4. Cognitive restructuring. Cognitive restructuring, or the ability to recognize cognitive errors and replace them with alternate, more appropriate thoughts, is likely the most powerful part of CBT treatment for panic disorder, aside from exposure. Cognitive restructuring involves identifying the role of thoughts in generating and maintaining emotions. The clinician encourages the individual to view these thoughts as “hypotheses” as opposed to facts, which allows the thoughts to be questioned and challenged. This is where the detailed recordings from the self-monitoring portion of treatment are helpful. By discussing specifically what the client has recorded about the relationship between physiological arousal and thoughts/behaviors, the clinician is able to help the individual restructure the maladaptive thought processes into more positive thought processes, which in turn helps to reduce fear and anxiety.
5. Exposure. Next, the client is encouraged to engage in a variety of exposure techniques, such as in vivo exposure and interoceptive exposure, while also incorporating the cognitive restructuring and relaxation techniques previously learned in an effort to reduce and eliminate ongoing distress. Interoceptive exposure involves inducing panic-specific symptoms in the individual repeatedly, for a prolonged time period, so that maladaptive thoughts about the sensations can be disconfirmed and conditioned anxiety responses are extinguished (Craske & Barlow, 2014). Some examples of these exposure techniques are spinning a client repeatedly in a chair to induce dizziness and breathing into a paper bag to induce hyperventilation. These treatment approaches can be presented in a gradual manner; however, the client must endure the physiological sensations for at least 30 seconds to 1 minute to ensure adequate time for applying cognitive strategies to the misappraisal of symptoms (Craske & Barlow, 2014). Interoceptive exposure is continued both in and outside of treatment until panic symptoms remit. Over time, the habituation of fear within an exposure session will ultimately lead to habituation across treatment, which leads to long-term remission of panic symptoms (Foa & McNally, 1996). Occasionally, panic symptoms will return in individuals who report complete remission of panic disorder. Follow-up booster sessions reviewing the steps above are generally effective in eliminating symptoms again.
Pharmacological Interventions
According to Craske & Barlow (2014), nearly half of people with panic disorder present to psychotherapy already on medication, likely prescribed by their primary care physician. Some researchers argue that anti-anxiety medications impede the progress of CBT treatment as the individual is not able to fully experience the physiological sensations during exposure sessions, thus limiting their ability to modify maladaptive thoughts maintaining the panic symptoms. Results from large clinical trials suggest no advantage during or immediately after treatment of combining CBT and medication (Craske & Barlow, 2014). Additionally, when medications were discontinued post-treatment, the CBT+ medication groups fared worse than the CBT treatment alone groups, thus supporting the theory that immersion in interoceptive exposure is limited by the use of medication. Therefore, it is suggested that medications are reserved for those who do not respond to CBT therapy alone (Kampman, Keijers, Hoogduin & Hendriks, 2002). | textbooks/socialsci/Psychology/Psychological_Disorders/Essentials_of_Abnormal_Psychology_(Bridley_and_Daffin)/04%3A_Anxiety_Disorders/4.01%3A_Panic_Disorder.txt |
Section Learning Objectives
• Describe how generalized anxiety disorder presents itself.
• Describe the epidemiology of generalized anxiety disorder.
• Indicate which disorders are commonly comorbid with generalized anxiety disorder.
• Describe the treatment options for generalized anxiety disorder.
Clinical Description
Generalized anxiety disorder, commonly referred to as GAD, is a disorder characterized by an underlying excessive worry related to a wide range of events or activities. While many individuals experience some level of worry throughout the day, individuals with GAD experience worry of a greater intensity and for longer periods of time than the average person. Additionally, they are often unable to control their worry through various coping strategies, which directly interferes with their ability to engage in daily social and occupational tasks. There are six characteristic symptoms of generalized anxiety disorder, and in order to be diagnosed with the disorder, individuals must experience at least three of them: feeling restless, being easily fatigued, having difficulty concentrating, feeling irritable, having muscle tension, and experiencing problems with sleep.
Epidemiology
The prevalence rate for generalized anxiety disorder is estimated to be 3% of the general population, with nearly 6% of individuals experiencing GAD sometime during their lives. While it can present at any age, it generally appears first in childhood or adolescence. Similar to most anxiety-related disorders, females are twice as likely to be diagnosed with GAD as males (APA, 2013).
Comorbidity
There is a high comorbidity between generalized anxiety disorder and the other anxiety-related disorders, as well as major depressive disorder, suggesting they all share common vulnerabilities, both biological and psychological.
Treatment
Psychopharmacology
Benzodiazepines, a class of sedative-hypnotic drugs, originally replaced barbiturates as the leading anti-anxiety medication due to their less addictive nature and equally effective ability to calm individuals at low dosages. Unfortunately, as more research was conducted on benzodiazepines, serious side effects, as well as physical dependence, have routinely been documented (NIMH, 2013). Due to these negative effects, selective serotonin reuptake inhibitors (SSRIs) and serotonin-norepinephrine reuptake inhibitors (SNRIs) are generally considered to be first-line medication options for those with GAD. Findings indicate a 30-50% positive response rate to these psychopharmacological interventions (Reinhold & Rickels, 2015). Unfortunately, none of these medications continue to provide any benefit once they are stopped; therefore, other, more effective treatment options such as CBT, relaxation training, and biofeedback are often encouraged before the use of pharmacological interventions.
Rational-Emotive Therapy
Rational emotive therapy was developed by Albert Ellis in the mid-1950s as one of the first forms of cognitive-behavioral therapy. Ellis proposed that individuals were not aware of the effect their negative thoughts had on their behaviors and various relationships, and thus identified a treatment aimed at addressing these thoughts in an effort to provide relief to those experiencing anxiety and depression. The goal of rational emotive therapy is to identify irrational, self-defeating assumptions, challenge the rationality of those assumptions, and replace them with new, more productive thoughts and feelings. It is proposed that by identifying and replacing these assumptions, one will experience relief of GAD symptoms (Ellis, 2014).
Cognitive Behavioral Therapy (CBT)
CBT is among the most effective treatment options for a variety of anxiety disorders, including GAD. In fact, findings suggest that 60% of individuals report a significant reduction in or elimination of anxious thoughts one year post-treatment (Hanrahan, Field, Jones, & Davy, 2013). CBT combines cognitive and behavioral strategies aimed at identifying and restructuring maladaptive thoughts while also providing opportunities to apply these more effective thought patterns through exposure-based experiences. Through repetition, the individual becomes able to identify and replace anxious thoughts outside of therapy sessions, ultimately reducing their overall anxiety levels (Borkovec & Ruscio, 2001).
Biofeedback
Biofeedback provides a visual representation of a client’s physiological arousal. To achieve this feedback, the client is connected to a computer that provides continuous information on their physiological states. There are several ways a client can be connected to the computer. Among the most common is electromyography (EMG), which measures the amount of muscle activity the individual is currently experiencing. An electrode is placed on the individual’s skin just above a major muscle group, commonly the forearm or the forehead. Other common types of measurement are electroencephalography (EEG), which measures neurofeedback or brain activity; heart rate variability (HRV), which measures autonomic activity such as heart rate or blood pressure; and galvanic skin response (GSR), which measures sweat.
Once the client is connected to the biofeedback machine, the clinician is able to walk the client through a series of relaxation scripts or techniques as the computer simultaneously measures the changes in muscle tension. The theory behind biofeedback is that by providing a client with a visual representation of changes in their physiological state, they become more skilled at voluntarily reducing their physiological arousal, and thus, their overall sense of anxiety or stress. While research has identified only a modest effect of biofeedback on anxiety levels, clients do report a positive experience with the treatment due to the visual feedback of their physiological arousal (Brambrink, 2004).
Section Learning Objectives
• Describe how specific phobia presents itself.
• Describe the epidemiology of specific phobia.
• Indicate which disorders are commonly comorbid with specific phobia.
• Describe the treatment options for specific phobia.
Clinical Description
Specific phobia is distinguished by an individual’s fear or anxiety specific to an object or a situation. While the amount of fear or anxiety related to the specific object or situation varies among individuals, it also varies with the proximity of the object/situation. When individuals are face-to-face with their specific phobia, immediate fear is present. It should also be noted that these fears are more excessive and more persistent than a “normal” fear, often severely impacting one’s daily functioning (APA, 2013).
Individuals can experience multiple specific phobias at one time. In fact, nearly 75% of individuals with a specific phobia report fearing more than one object (APA, 2013). When making a diagnosis of specific phobia, it is important to identify the specific phobic stimulus. Among the most commonly diagnosed specific phobias are animals, natural environments (heights, storms, water), blood-injection-injury (needles, invasive medical procedures), and situational (airplanes, elevators, enclosed places; APA, 2013). Given the high percentage of individuals who experience more than one specific phobia, all specific phobias should be listed as diagnoses in an effort to identify an appropriate treatment plan.
Epidemiology
The prevalence rate for specific phobias is 7-9% within the United States. While young children have a prevalence rate of approximately 5%, adolescents have a rate of roughly 16%, nearly double that of the general population. There is a 2:1 ratio of females to males diagnosed with specific phobia; however, this ratio varies across the different phobic stimuli. More specifically, animal, natural environment, and situational specific phobias are more commonly diagnosed in females, whereas blood-injection-injury phobia is reportedly diagnosed equally between genders.
Comorbidity
Seeing as the onset of specific phobias occurs at a younger age than most other anxiety disorders, it is generally the primary diagnosis, with generalized anxiety disorder as an occasional comorbid diagnosis. It should be noted that children/teens diagnosed with a specific phobia are at an increased risk for additional psychopathology later in life, more specifically other anxiety disorders, depressive disorders, substance-related disorders, and somatic symptom disorders.
Treatment
Exposure Treatments
While there are many treatment options for specific phobias, research routinely supports the behavioral techniques as the most effective treatment strategies. Seeing as the behavioral theory suggests phobias are developed via classical conditioning, the treatment approach revolves around breaking the maladaptive association developed between the object and fear. This is generally accomplished through exposure treatments. As the name implies, the individual is exposed to their feared stimuli. This can be done using several different approaches: systematic desensitization, flooding, and modeling.
Systematic desensitization is an exposure technique that utilizes relaxation strategies to help calm the individual as they are presented with the feared object. The notion behind this technique is that fear and relaxation cannot exist at the same time; therefore, the individual is taught how to replace their fearful reaction with a calm, relaxing reaction. To begin, the client, with assistance from the clinician, will identify a fear hierarchy, or a list of feared objects/situations ordered from least fearful to most fearful. After the client learns intensive relaxation techniques, the clinician will present items from the fear hierarchy, starting with the least feared object/situation, while the patient practices using the learned relaxation techniques. The presentation of the feared object/situation can be in person (in vivo exposure) or it can be imagined (imaginal exposure). Imaginal exposure tends to be less intensive than in vivo exposure; however, it is also less effective in eliminating the phobia. Depending on the phobia, in vivo exposure may not be an option, such as with a fear of a tornado. Once the patient is able to effectively employ relaxation techniques to reduce their fear/anxiety to a manageable level, the clinician will slowly move up the fear hierarchy until the individual does not experience excessive fear of any objects on the list.
Another exposure technique is flooding. In flooding, the clinician does not utilize a fear hierarchy, but rather repeatedly exposes the individual to their most feared object/situation. As with systematic desensitization, flooding can be done using either in vivo or imaginal exposure. Clearly, this technique is more intensive than systematic, gradual exposure to feared objects. Because of this, patients are at a greater likelihood of dropping out of treatment, and thus of not successfully overcoming their phobias.
Finally, modeling is a common technique that is used to treat specific phobias (Kelly, Barker, Field, Wilson, & Reynolds, 2010). In this technique, the clinician approaches the feared object/situation while the patient observes. As the name implies, the clinician models appropriate behaviors when exposed to the feared stimulus, demonstrating that the phobia is irrational. After modeling several times, the clinician encourages the patient to confront the feared stimulus with the clinician, and then ultimately, without the clinician.
Section Learning Objectives
• Describe how social anxiety disorder presents itself.
• Describe the epidemiology of social anxiety disorder.
• Indicate which disorders are commonly comorbid with social anxiety disorder.
• Describe the treatment options for social anxiety disorder.
Clinical Description
For social anxiety disorder (formerly known as social phobia), the anxiety is directed toward the fear of social situations, particularly those in which an individual can be evaluated by others. More specifically, the individual is worried that they will be judged negatively and viewed as stupid, anxious, crazy, unlikeable, or boring to name a few. Some individuals report feeling concerned that their anxiety symptoms will be obvious to others via blushing, stuttering, sweating, trembling, etc. These fears severely limit an individual’s behavior in social settings. For example, an individual may avoid holding drinks or plates if they know they will tremble in fear of dropping or spilling food/water. Additionally, if one is known to sweat a lot in social situations, they may limit physical contact with others, refusing to shake hands.
Unfortunately, for those with social anxiety disorder, all or nearly all social situations provoke this intense fear. Some individuals even report significant anticipatory fear days or weeks before a social event is to occur. This anticipatory fear often leads to avoidance of social events in some individuals; others will attend social events with a marked fear of possible threats. Because of these fears, there is a significant impact on one’s social and occupational functioning.
It is important to note that the cognitive interpretation of these social events is often excessive and out of proportion to the actual risk of being negatively evaluated. There are instances where one may experience anxiety toward a real threat such as bullying or ostracism. In such instances, social anxiety disorder would not be diagnosed, as the negative evaluation and threat are real.
Epidemiology
The overall prevalence rate of social anxiety disorder is significantly higher in the United States than in other countries worldwide, with an estimated 7% of the US population diagnosed with social anxiety disorder. Within the US, the prevalence rate remains the same from childhood through adulthood; however, there appears to be a significant decrease in the diagnosis of social anxiety disorder among older individuals. With regards to gender, there is a higher diagnosis rate in females than males. This gender discrepancy appears to be larger in children/adolescents than in adults.
Comorbidity
Among the most common comorbid diagnoses with social anxiety disorder are other anxiety-related disorders, major depressive disorder, and substance-related disorders. Generally speaking, social anxiety disorder will precede other mental health disorders, with the exception of separation anxiety disorder and specific phobia, seeing as these two disorders are more commonly diagnosed in childhood (APA, 2013). The high comorbidity rate between anxiety-related disorders and substance-related disorders is likely related to efforts at self-medicating. For example, an individual with social anxiety disorder may consume larger amounts of alcohol in social settings in an effort to alleviate the anxiety of the social situation.
Treatment
Exposure
A hallmark treatment approach for all anxiety disorders is exposure. Specific to social anxiety disorder, the individual is encouraged to engage in social situations where they are likely to experience increased anxiety. Initially, the clinician will engage in role-playing of various social situations with the client so that he/she can practice social interactions in a safe, controlled environment (Rodebaugh, Holaway, & Heimberg, 2004). As the client becomes habituated to the interaction with the clinician, the clinician and client may venture outside of the treatment room and engage in social settings with random strangers at various locations such as fast food restaurants, local stores, libraries, etc. The client is encouraged to continue these exposure-based social interactions outside of treatment to help reduce anxiety related to social situations.
Social Skills Training
This treatment is specific to social anxiety disorder, as it focuses on the skill deficits or inadequate social interactions displayed by the client that contribute to negative social experiences and anxiety. The clinician may use a combination of techniques such as modeling, corrective feedback, and positive reinforcement to provide feedback and encouragement to the client regarding his/her behavioral interactions (Rodebaugh, Holaway, & Heimberg, 2004). By incorporating the clinician’s feedback into their social repertoire, the client can engage in positive social behaviors outside of the treatment room in hopes of improving overall social interactions and reducing ongoing social anxiety.
Cognitive Restructuring
While exposure and social skills training are helpful treatment options, research routinely supports the need to incorporate cognitive restructuring as an additive component of treatment to provide substantial symptom reduction. Here the client works with the therapist to identify the negative, automatic thoughts that contribute to distress in social situations. The clinician can then help the client establish new, positive thoughts to replace these negative thoughts. Research indicates that implementing cognitive restructuring techniques before, during, and after exposure sessions enhances the overall effects of treatment of social anxiety disorder (Heimberg & Becker, 2002).
Section Learning Objectives
• Describe how agoraphobia presents itself.
• Describe the epidemiology of agoraphobia.
• Indicate which disorders are commonly comorbid with agoraphobia.
• Describe the treatment options for agoraphobia.
Clinical Description
Similar to GAD, agoraphobia is defined as an intense fear triggered by a wide range of situations; however, unlike GAD, agoraphobic fears are related to situations in which the individual is in public and escape may be difficult. To receive a diagnosis of agoraphobia, fear must be present in at least two of the following situations: using public transportation such as planes, trains, ships, or buses; being in large, open spaces such as parking lots or on bridges; being in enclosed spaces like stores or movie theaters; being in a large crowd similar to those at a concert; or being outside of the home in general (APA, 2013). When an individual is in one (or more) of these situations, they experience significant fear, often reporting panic-like symptoms (see Panic Disorder). It should be noted that fear- and anxiety-related symptoms are present every time the individual encounters these situations. Should symptoms occur only occasionally, a diagnosis of agoraphobia is not warranted.
Due to the intense fear and somatic symptoms, individuals will go to great lengths to avoid these situations, often preferring to remain within their home where they feel safe, thus causing significant impairment in their daily functioning. They may also engage in active avoidance, intentionally avoiding agoraphobic situations. These avoidance behaviors may be behavioral, such as having food delivered to avoid going to the grocery store or only taking a job that does not require the use of public transportation, or cognitive, such as using distraction and various other cognitive techniques to successfully get through the agoraphobic situation.
Epidemiology
The yearly prevalence rate for agoraphobia across the lifespan is roughly 1.7%. Females are twice as likely as males to be diagnosed with agoraphobia (notice the trend…). While it can occur in childhood, agoraphobia typically does not develop until late adolescence/early adulthood and typically tapers off in later adulthood.
Comorbidity
Similar to the other anxiety disorders, comorbid diagnoses include other anxiety disorders, depressive disorders, and substance use disorders, all of which typically occur after the onset of agoraphobia (APA, 2013). Additionally, there is also a high comorbidity between agoraphobia and PTSD. While agoraphobia can be a symptom of PTSD, an additional diagnosis of agoraphobia is made when all symptoms of agoraphobia are met in addition to the PTSD symptoms.
Treatment
Similar to the treatment approaches for specific phobias, exposure-based treatment techniques are among the most effective treatment options for individuals with agoraphobia; however, unlike the high success rate in specific phobias, exposure-based treatment for agoraphobia has been less effective in providing complete relief of the disorder. The success rate may be impacted by the high comorbidity rate of agoraphobia and panic disorder. Because of the additional presentation of panic symptoms, exposure-based treatments alone are not the most effective in eliminating symptoms as residual panic symptoms often remain (Craske & Barlow, 2014). Therefore, the best treatment approach for those with agoraphobia and panic disorder is a combination of exposure and CBT techniques (see panic disorder treatment).
For individuals with agoraphobia without panic symptoms, the use of group therapy in combination with individual exposure-based therapy has been identified as a successful treatment option. The group therapy format allows the individual to engage in exposure-based field trips to various community locations while maintaining a sense of support and security from a group of individuals whom they know. Research indicates that this exposure-based treatment provides improvement for nearly 60% to 80% of patients with agoraphobia; however, there is a relatively high rate of partial relapse, suggesting that long-term treatment, or booster sessions at a minimum, should be continued for several years (Craske & Barlow, 2014).
Section Learning Objectives
• Describe the biological causes of anxiety disorders.
• Describe the psychological causes of anxiety disorders.
• Describe the sociocultural causes of anxiety disorders.
Biological
Genetic Influences
While genetics have been known to contribute to the presentation of anxiety symptoms, the interaction between genetics and stressful environmental influences accounts for more of the variance in anxiety disorders than genetics alone (Bienvenu, Davydow, & Kendler, 2011). The quest to identify specific genes that may predispose individuals to develop anxiety disorders has led researchers to the serotonin transporter gene (5-HTTLPR). Mutation of the 5-HTTLPR gene has been found to be related to a reduction in serotonin activity and an increase in anxiety-related personality traits (Munafo, Brown, & Hariri, 2008).
Neurobiological Structures
Researchers have identified several brain structures and pathways that are likely responsible for anxiety responses. Among those structures is the amygdala, the area of the brain responsible for storing memories related to emotional events (Gorman, Kent, Sullivan, & Coplan, 2000). When presented with a fearful situation, the amygdala initiates a reaction to prepare the body for a response. First, the amygdala triggers the hypothalamic-pituitary-adrenal (HPA) axis to prepare for immediate action: fight or flight. The second pathway is activated by the feared stimulus itself, which sends a sensory signal to the hippocampus and prefrontal cortex to determine if the threat is real or imagined. If it is determined that no threat is present, the amygdala sends a calming response to the HPA axis, thus reducing the level of fear. If a threat is present, the amygdala is activated, producing a fear response.
Specific to panic disorder is the implication of the locus coeruleus, the brain structure that serves as an “on-off” switch for norepinephrine neurotransmission. It is believed that increased activation of the locus coeruleus results in panic-like symptoms; therefore, individuals with panic disorder may have a hyperactive locus coeruleus, leaving them susceptible to more intense and frequent physiological arousal than the general public (Gorman, Kent, Sullivan, & Coplan, 2000). This theory is supported by studies in which individuals experienced increased panic symptoms following injection of norepinephrine (Bourin, Malinge, & Guitton, 1995). Unfortunately, norepinephrine and the locus coeruleus fail to fully explain the development of panic disorder, as treatment would be much easier if only norepinephrine were implicated. Therefore, researchers argue that a more complex neuropathway is likely implicated in the development of panic disorder. More specifically, the corticostriatal-thalamocortical (CSTC) circuit, also known as the fear-specific circuit, is theorized to be a major contributor to panic symptoms (Gutman, Gorman, & Hirsch, 2004). When an individual is presented with a frightening object or situation, the amygdala is activated, sending a fear response to the anterior cingulate cortex and the orbitofrontal cortex. Additional projection from the amygdala to the hypothalamus activates endocrinologic responses to fear, releasing adrenaline and cortisol to help prepare the body to fight or flee (Gutman, Gorman, & Hirsch, 2004). This complex pathway supports the theory that panic disorder is mediated by several neuroanatomical structures and their associated neurotransmitters.
Psychological
Cognitive
The cognitive perspective on the development of anxiety disorders centers around dysfunctional thought patterns. Maladaptive assumptions are routinely observed in individuals with anxiety disorders, as they often interpret events as dangerous and overreact to potentially stressful events, which contributes to a heightened overall anxiety level. These negative appraisals, in combination with a biological predisposition to anxiety likely contribute to the development of anxiety symptoms (Gallagher et al., 2013).
Sensitivity to physiological arousal not only contributes to anxiety disorders in general, but is particularly relevant to panic disorder, in which individuals experience various physiological sensations and misinterpret them as catastrophic. One explanation for this theory is that individuals with panic disorder are actually more susceptible to more frequent and intense physiological symptoms than the general public (Nillni, Rohan, & Zvolensky, 2012). Others argue that these individuals have had more trauma-related experiences in the past and are therefore quick to misevaluate their physical symptoms as a potential threat. This misevaluation of symptoms as signs of impending disaster likely maintains the symptoms, as the cognitive misinterpretation of physiological arousal creates a negative feedback loop, leading to further physiological changes.
Social anxiety is also largely explained by cognitive theorists. Individuals with social anxiety disorder tend to hold unattainable or extremely high social beliefs and expectations. Furthermore, they often hold preconceived maladaptive assumptions that they will behave incompetently in social situations and that their behaviors will lead to terrible consequences. Because of these beliefs, they anticipate that social disasters will occur and therefore avoid social encounters (or limit them to close friends/family members) in an effort to prevent the disaster (Moscovitch et al., 2013). Unfortunately, these cognitive appraisals are not isolated to before and during the event. Individuals with social anxiety disorder will also evaluate the social event after it has taken place, often obsessively reviewing the details (i.e., ruminating over social events). This over-evaluation of social performance negatively reinforces future avoidance of social situations.
Behavioral
The behavioral explanation for the development of anxiety disorders is largely reserved for phobias, both specific and social. More specifically, behavioral theorists focus on classical conditioning, in which two events that occur close together become strongly associated with one another despite their lack of a causal relationship. Watson and Rayner’s (1920) infamous Little Albert experiment is an example of how classical conditioning can be used to induce fear through associations. In this study, Little Albert developed a fear of white rats through the pairing of a white rat with a loud sound. This experiment, although lacking ethical standards, was groundbreaking in demonstrating the development of learned behaviors. Over time, researchers have been able to replicate these findings (in more ethically sound ways) to provide further evidence of the role of classical conditioning in the development of phobias.
Modeling
Modeling is another behavioral explanation of the development of specific and social phobias. In modeling, an individual acquires a fear through observation and imitation (Bandura & Rosenthal, 1966). For example, when a young child observes their parent display irrational fears of an animal, the child may then begin to display similar behaviors. Similarly, observing another individual being ridiculed in a social setting may increase the chances of developing social anxiety, as the individual may become fearful that they will experience a similar situation in the future. It is speculated that these phobias are maintained by avoidance of the feared item or social setting, which prevents the individual from learning that the item or social situation is not something to be feared.
While modeling and classical conditioning largely explain the development of phobias, there is some speculation that the accumulation of a large number of these learned fears will develop into GAD. Through stimulus generalization, the tendency for stimuli similar to the conditioned stimulus to evoke similar conditioned responses, a fear of one item (such as a dog) may become generalized to other items (such as all animals). As these fears begin to grow, a more generalized anxiety may present, as opposed to a specific phobia.
Sociocultural
Finally, we will review the social constructs that contribute to and maintain anxiety disorders. While characteristics such as living in poverty, experiencing significant daily stressors, and increased exposure to traumatic events are all identified as major contributors to anxiety disorders, additional sociocultural influences such as gender and discrimination have also received a great deal of attention.
Gender has largely been researched within anxiety disorders due to the consistent discrepancy in diagnosis rates between men and women. As previously discussed, women are routinely diagnosed with anxiety disorders more often than men, a trend observed throughout the entire lifespan. One potential explanation for this discrepancy is the influence of social pressures on women. Women are more likely to experience traumatic events throughout their lives, which may contribute to anxious appraisals of future events. Furthermore, women are more likely to use emotion-focused coping, which is less effective in reducing distress than problem-focused coping (McLean & Anderson, 2009). These factors may increase levels of stress hormones (e.g., cortisol) in women that leave them susceptible to developing symptoms of anxiety. Therefore, it appears a combination of genetic, environmental, and social factors may explain why women tend to be diagnosed with anxiety disorders more often than men.
Exposure to discrimination and prejudice, particularly relevant to ethnic minority and other marginalized groups, can also impact an individual’s anxiety level. Discrimination and prejudice contribute to negative interactions, which is directly related to negative affect and an overall decline in mental health (Gibbons et al., 2014). The repeated exposure to discrimination and prejudice over time can lead to fear responses in individuals, along with subsequent avoidance of social situations in efforts to protect themselves emotionally.
Chapter Recap
Chapter 4 covered the topic of anxiety disorders. This discussion included Generalized Anxiety Disorder, Specific Phobias, Agoraphobia, Social Anxiety Disorder, and Panic Disorder. As with other chapters in this book, we discussed the clinical presentation, epidemiology, comorbidity, and treatment of the anxiety disorders. Etiology was also discussed in the context of biological, psychological, and sociocultural theories.
Learning Objectives
• Describe obsessive-compulsive disorder and body dysmorphic disorder.
• Describe the epidemiology of obsessive-compulsive disorder.
• Describe comorbidities of obsessive-compulsive disorder and body dysmorphic disorder.
• Describe the etiology of these disorders.
• Describe treatment options for these disorders.
In Chapter 5, we will discuss matters related to obsessive-compulsive and related disorders, including their clinical presentation, diagnostic criteria, epidemiology, comorbidity, etiology, and treatment options. Our discussion will include obsessive-compulsive disorder (OCD) and body dysmorphic disorder (BDD). However, it is worth noting that hoarding disorder, trichotillomania (excessive hair pulling), and excoriation disorder (excessive skin picking) were recently added to the new obsessive-compulsive and related disorders section of the DSM-5. Be sure to refer to Chapters 1-3 for explanations of key terms (Chapter 1), an overview of the various models used to explain psychopathology (Chapter 2), and descriptions of the various therapies (Chapter 3).
05: Obsessive-Compulsive and Related Disorders
Section Learning Objectives
• Describe how obsessive-compulsive disorder presents itself.
• Describe the epidemiology of obsessive-compulsive disorder.
• Indicate which disorders are commonly comorbid with obsessive-compulsive disorder.
• Describe the biological, cognitive, and behavioral theories for the etiology of obsessive-compulsive disorder.
• Describe the treatment options for obsessive-compulsive disorder.
Clinical Description
Obsessive-compulsive disorder, more commonly known as OCD, requires the presence of obsessions and/or compulsions. Obsessions are defined as repetitive and intrusive thoughts, urges, or images. These obsessions are persistent, time-consuming, and unwanted, often causing significant distress and impairment in an individual’s daily functioning. Common obsessions are contamination (dirt on self or objects), errors of uncertainty regarding daily behaviors (locking a door, turning off appliances), thoughts of physical harm or violence, and orderliness, to name a few (Cisler, Adams, et al., 2011; Yadin & Foa, 2009). Often the individual will try to ignore these thoughts, urges, or images. When they are unable to ignore them, the individual will engage in compulsive behaviors to alleviate the anxiety.
Compulsions are defined as repetitive behaviors or mental acts that an individual typically performs in response to an obsession. Common examples of compulsions are checking (i.e. repeatedly checking if the stove is turned off even though the first four times they checked it was off), counting (i.e. flicking the lights off and on 5 times), hand washing, organizing objects in a symmetrical manner, and repeating specific words. These compulsive behaviors are typically performed in an attempt to alleviate the anxiety associated with the obsessive thoughts. For example, an individual may feel as though his hands are dirty after using utensils at a restaurant. He may obsess over this thought for a period of time, impacting his ability to interact with others or complete a specific task. This obsession will ultimately lead to the individual performing a compulsion where he will wash his hands with extremely hot water to rid all the germs, or even wash his hands a specified number of times if he also has a counting compulsion. At this point, the individual’s anxiety may be temporarily relieved.
These obsessions and compulsions are more excessive than the typical “cleanliness” as they consume a large part of the individual’s day. Indeed, in order to be considered clinical OCD, the obsessions or compulsions must consume more than 1 hour per day, cause distress, or result in impairment in functioning. Given the example above, an individual with a fear of contamination may refuse to eat out at restaurants or may bring his own utensils with him and insist on using them when he is not eating at home.
Epidemiology
The one-year prevalence rate for OCD is approximately 1.2%, both in the U.S. and worldwide (APA, 2013). OCD has a balanced sex ratio in adults; however, in childhood, boys are diagnosed more frequently than girls (APA, 2013). With respect to gender and symptoms, females are more likely to be diagnosed with cleaning-related obsessions and compulsions, whereas males are more likely to display symptoms related to forbidden thoughts and symmetry (APA, 2013). Additionally, males have an earlier age of onset (5-15 years) compared to females (20-24 years; Rasmussen & Eisen, 1990). Approximately two-thirds of all individuals with OCD had some symptoms present before the age of 15 (Rasmussen & Eisen, 1990). Overall, the average age of onset of OCD is 19.5 years.
Comorbidity
There is a high comorbidity rate between OCD and other anxiety disorders. Nearly 76% of individuals with OCD will be diagnosed with another anxiety disorder, most commonly panic disorder, social anxiety disorder, generalized anxiety disorder, or a specific phobia (APA, 2013). Additionally, 63% of those with OCD will also be diagnosed with a mood disorder (APA, 2013).
There is a high comorbidity rate between OCD and tic disorder, particularly in males with an onset of OCD in childhood. Children presenting with early-onset OCD typically have a different presentation of symptoms than traditional OCD. Research has also indicated a strong triad of OCD, tic disorder, and attention-deficit/hyperactivity disorder in children. Due to this triad of psychological disorders, it is believed that a neurobiological mechanism underlies their development and maintenance.
It should be noted that there are several disorders (schizophrenia, bipolar disorder, eating disorders, and Tourette’s disorder) in which the incidence of OCD is higher than in the general public (APA, 2013). Therefore, clinicians who have a client diagnosed with one of these disorders should also routinely assess him/her for OCD.
Etiology
Biological
There are a few biological explanations for obsessive-compulsive related disorders including: hereditary transmission, neurotransmitter deficits, and abnormal functioning in brain structures.
Hereditary Transmission
With regards to heritability studies, twin studies routinely support the role of genetics in the development of obsessive-compulsive behaviors, as monozygotic twins have a substantially greater concordance rate (80-87%) than dizygotic twins (47-50%; Carey & Gottesman, 1981; van Grootheest, Cath, Beekman, & Boomsma, 2005). Additionally, first degree relatives of individuals diagnosed with OCD are twice as likely to develop OCD (APA, 2013).
Interestingly, a study conducted by Nestadt and colleagues (2000) exploring the familial role in the development of obsessive-compulsive disorder found that family members of individuals with OCD had higher rates of both obsessions and compulsions than control families; however, obsessions were more specific to the family members than that of the disorder. This suggests that there is a stronger heritability association for obsessions than compulsions. This study also found a relationship between age of onset of OCD symptoms and family heritability. Individuals who experienced an earlier age of onset, particularly before age 17, were found to have more first-degree relatives diagnosed with OCD. In fact, after the age of 17, there was no relationship between family diagnoses, suggesting those who develop OCD at an older age may have a different diagnostic origin (Nestadt, et al., 2000).
Neurotransmitters
Neurotransmitters, particularly serotonin, have been identified as a contributing factor to obsessive and compulsive behaviors. This discovery was actually made by accident. When individuals with depression and comorbid OCD were given the antidepressant medications clomipramine and/or fluoxetine (both of which increase levels of serotonin) to mediate symptoms of depression, not only did they report a significant reduction in their depressive symptoms, but they also experienced significant improvement in their symptoms of OCD (Bokor & Anderson, 2014). Interestingly enough, antidepressant medications that do not affect serotonin levels are not effective in managing obsessive and compulsive symptoms, thus offering additional support for deficits of serotonin levels as an explanation of obsessive and compulsive behaviors (Sinopoli, Burton, Kronenberg, & Arnold, 2017; Bokor & Anderson, 2014). More recently, there has been some research implicating the involvement of additional neurotransmitters – glutamate, GABA, and dopamine – in the development and maintenance of OCD, although future studies are still needed to draw definitive conclusions (Marinova, Chuang, & Fineberg, 2017).
Brain Structures
Seeing as neurotransmitters have a direct involvement in the development of obsessive-compulsive behaviors, it’s only logical that brain structures that house these neurotransmitters also likely play a role in symptom development. Neuroimaging studies implicate the brain structures and circuits in the frontal lobe, more specifically, the orbitofrontal cortex, which is located just above each eye (Marsh et al., 2014). This brain region is responsible for mediating strong emotional responses and converts them into behavioral responses. Once the orbitofrontal cortex receives sensory/emotional information via sensory inputs, it transmits this information through impulses. These impulses are then passed on to the caudate nuclei which filter through the many impulses received, passing along only the strongest impulses to the thalamus. Once the impulses reach the thalamus, the individual essentially reassesses the emotional response and decides whether or not to act behaviorally (Beucke et al., 2013). It is believed that individuals with obsessive-compulsive behaviors experience overactivity of the orbitofrontal cortex and a lack of filtering in the caudate nuclei, thus causing too many impulses to be transferred to the thalamus (Endrass et al., 2011). Further support for this theory has been shown when individuals with OCD experience brain damage to the orbitofrontal cortex or caudate nuclei and experience remission of OCD symptoms (Hofer et al., 2013).
Cognitive
Cognitive theorists believe that OCD behaviors occur due to an individual’s distorted thinking and negative cognitive biases. More specifically, individuals with OCD are more likely to overestimate the probability of threat and harm, to have an inflated sense of responsibility for preventing harm, to think thoughts are important and need to be controlled, and to be perfectionistic. Additionally, some research has indicated that those with OCD also experience disconfirmatory bias, which causes the individual to seek out evidence that they performed the ritual or compensatory behavior incorrectly (Sue, Sue, Sue, & Sue, 2017). Finally, individuals with OCD often report the inability to trust themselves and their instincts, and therefore, feel the need to repeat the compulsive behavior multiple times to ensure it is done correctly. These cognitive biases are supported throughout research studies that repeatedly find that individuals with OCD experience more intrusive thoughts than those without OCD (Jacob, Larson, & Storch, 2014).
Now that we have identified that individuals with OCD experience cognitive biases and that these biases contribute to the obsessive and compulsive behaviors, we have yet to identify why these cognitive biases occur. Everyone has times when they have repetitive or intrusive thoughts such as: “Did I turn the oven off after cooking dinner?” or “Did I remember to lock the door before I left home?” Fortunately, most individuals are able to either check once or even forgo checking after they confidently talk themselves through their actions, ensuring that the behavior in question was or was not completed. Unfortunately, individuals with OCD are unable to neutralize these thoughts without performing a ritual as a way to put themselves at ease. As you will see in more detail in the behavioral section below, the behaviors (compulsions) used to neutralize the thoughts (obsessions) provide a temporary relief to the individual. As the individual is continually exposed to the obsession and repeatedly engages in the compulsive behaviors to neutralize the anxiety, the behavior is repeatedly reinforced, thus becoming a compulsion. This theory is supported by studies where individuals with OCD report using more neutralizing strategies and report significant reductions in anxiety after employing these neutralizing techniques (Jacob, Larson, & Storch, 2014; Salkovskis, et al., 2003).
Behavioral
The behavioral explanation of obsessive-compulsive disorder focuses on the explanation of compulsions rather than obsessions. Behaviorists believe that these compulsions begin with and are maintained by classical conditioning. As you may remember, classical conditioning occurs when an unconditioned stimulus is paired with a conditioned stimulus to produce a conditioned response. How does this help explain OCD? Well, an individual with OCD may experience negative thoughts or anxieties related to an unpleasant event (obsession; unconditioned stimulus). These thoughts/anxieties cause significant distress to the individual, and therefore, they seek out some kind of behavior (compulsion) to alleviate these threats (conditioned stimulus). This provides temporary relief to the individual, thus reinforcing the compulsive behaviors used to alleviate the threat. Over time, the conditioned stimulus (the compulsive behavior) is reinforced through repeated exposure to the obsession and the temporary relief that comes with engaging in the compulsive behavior.
Strong support for this theory is the fact that the behavioral treatment option for OCD – exposure and response prevention – is among the most effective treatments for these disorders. As you will read below, this treatment essentially breaks the classical conditioning associated with the obsessions and compulsions through extinction (by preventing the individual from engaging in the compulsive behavior until anxiety is reduced).
Treatment
Exposure and Response Prevention
Treatment of OCD has come a long way in recent years. Among the most effective treatment options is exposure and response prevention (March, Frances, Kahn, & Carpenter, 1997). First developed by psychiatrist Victor Meyer (1966), this treatment repeatedly exposes individuals to their obsessions, thus provoking anxiety/fears, while simultaneously preventing them from engaging in their compulsive behaviors. Exposure sessions are often done in vivo (in real life), via videos, or even imaginally, depending on the type of obsession.
Prior to beginning the exposure and response prevention exercises, the clinician must teach the client relaxation techniques to use to cope with the distress of being exposed to the obsession. Once relaxation techniques are taught, the clinician and client will develop a hierarchy of obsessions. Treatment will start with the obsessions causing the lowest amount of distress, to ensure the client has success with treatment and to reduce the likelihood that the client will withdraw from treatment.
Within the hierarchy of obsessions, the individual is gradually exposed to their obsession. For example, an individual obsessed with germs might first watch a person sneeze on the computer in session. Once anxiety is managed and compulsions are resisted at this level of exposure, the individual would move on to being present in the same room as a sick individual, to eventually shaking hands with someone obviously sick, each time helping the client resist the urge to engage in the compulsive behavior. Once this level of the hierarchy is managed, they would move on to the next obsession and so forth until the entire list is complete.
Exposure and response prevention is very effective in treating individuals with OCD. In fact, some studies suggest up to an 86% response rate when treatment is completed (Foa et al., 2005). The largest barrier to treatment with OCD is getting clients to commit to treatment, as the repeated exposures and prevention of compulsive behaviors can be quite distressing to clients.
Psychopharmacology
There has been minimal support for the treatment of OCD with medication alone. This is likely due to the temporary resolution of symptoms during medication use. Among the most effective medications are those that inhibit the reuptake of serotonin (e.g., clomipramine or SSRIs). Reportedly, up to 60% of people show improvement in symptoms while taking these medications; however, symptoms are quick to return when medications are discontinued (Dougherty, Rauch, & Jenike, 2002). While there has been some promise in a combined treatment option of exposure and response prevention and SSRIs, these findings were not superior to exposure and response prevention alone, suggesting that the inclusion of medication in treatment does not provide any added benefit (Foa et al., 2005).
Section Learning Objectives
• Describe how body dysmorphic disorder presents itself.
• Describe the epidemiology of body dysmorphic disorder.
• Indicate which disorders are commonly comorbid with body dysmorphic disorder.
• Describe the theories for the etiology of body dysmorphic disorder.
• Describe the treatment for body dysmorphic disorder.
Clinical Description
Body Dysmorphic Disorder (BDD) is another obsessive-compulsive related disorder; however, the focus of the obsessions is on a perceived defect or flaw in physical appearance. A key feature of these obsessions is that the perceived defect or flaw is not observable to others. An individual who has a congenital facial defect or a burn victim who is concerned about scars would not be diagnosed with BDD, as their concerns involve observable flaws. The obsessions related to one’s appearance can run the spectrum from feeling “unattractive” to “looking hideous.” While any part of the body can be a concern for an individual with BDD, the most commonly reported areas are skin (e.g., acne, wrinkles, skin color), hair (e.g., thinning hair or excessive body hair), or nose (e.g., size, shape).
The distressing nature of the obsessions regarding one’s body often drives individuals with BDD to engage in compulsive behaviors that take up a considerable amount of time. For example, an individual may repeatedly compare her body to other people’s bodies in the general public, repeatedly look at herself in the mirror, or engage in excessive grooming, which includes using make-up to modify her appearance. Some individuals with BDD will go as far as having numerous plastic surgeries in attempts to obtain the “perfect” appearance. The problem is that plastic surgery does not usually resolve the issue; after all, the perceived defect or flaw is not observable to others. While most of us are guilty of engaging in some of these behaviors, to meet criteria for BDD, one must spend a considerable amount of time preoccupied with his/her appearance (i.e., on average 3-8 hours a day), as well as display significant impairment in social, occupational, or other areas of functioning.
Muscle Dysmorphia.
While muscle dysmorphia is not a formal diagnosis, it is a common type of BDD, particularly within the male population. Muscle dysmorphia refers to the belief that one’s body is too small, or lacks an appropriate amount of muscle definition (Ahmed, Cook, Genen & Schwartz, 2014). While the severity of BDD between individuals with and without muscle dysmorphia appears to be the same, some studies have found higher rates of substance abuse (e.g., steroid use), poorer quality of life, and increased reports of suicide attempts in those with muscle dysmorphia (Pope, Pope, Menard, Fay, Olivardia, & Phillips, 2005).
Epidemiology
The point prevalence rate for BDD among U.S. adults is 2.4% (APA, 2013). Internationally, this rate drops to 1.7%-1.8% (APA, 2013). Despite the difference between the national and international prevalence rates, the symptoms across races and cultures are similar.
Gender-based prevalence rates indicate a fairly balanced sex ratio (2.5% females; 2.2% males; APA, 2013). While the diagnosis rates may be different, general symptoms of BDD appear to be the same across genders with one exception: males tend to report genital preoccupations, while females are more likely to present with a comorbid eating disorder.
Comorbidity
While research on BDD is still in its infancy, initial studies suggest that major depressive disorder is the most common comorbid psychological disorder (APA, 2013). Major depressive disorder typically occurs after the onset of BDD. Additionally, there are some reports of social anxiety, OCD, and substance-related disorders (likely related to muscle enhancement; APA, 2013).
Etiology
Initial studies exploring genetic factors for BDD indicate a hereditary influence as the prevalence of BDD is elevated in first degree relatives of people with BDD. Interestingly, the prevalence of BDD is also heightened in first degree relatives of individuals with OCD (suggesting a shared genetic influence to these disorders).
However, environmental factors appear to play a larger role in the development of BDD than OCD (Ahmed, et al., 2014; Lervolino et al., 2009). Specifically, it is believed that negative life experiences such as teasing in childhood, negative social evaluations about one’s body, and even childhood neglect and abuse may contribute to BDD. Cognitive research has further discovered that people with BDD tend to have an attentional bias towards beauty and attractiveness, selectively attending to words related to beauty and attractiveness. Cognitive theories have also proposed that individuals with BDD have dysfunctional beliefs that their worth is inherently tied to their attractiveness and hold attractiveness as one of their primary core values. These beliefs are further reinforced by our society, which overly values and emphasizes beauty.
Treatment
Seeing as though there are strong similarities between OCD and BDD, it should not come as a surprise that the only two effective treatments for BDD are those that are effective in OCD. Exposure and response prevention has been successful in treating symptoms of BDD, as clients are repeatedly exposed to their body imperfections/obsessions and prevented from engaging in compulsions used to reduce their anxiety (Veale, Gournay, et al., 1996; Wilhelm, Otto, Lohr, & Deckersbach, 1999).
The other treatment option, psychopharmacology, has also been shown to reduce symptoms in individuals diagnosed with BDD. Similar to OCD, medications such as clomipramine and other SSRIs are generally prescribed. While these are effective in reducing BDD symptoms, once the medication is discontinued, symptoms resume nearly immediately, suggesting this is not an effective long-term treatment option for those with BDD.
Treatment of BDD appears to be difficult, with one study finding that only 9% of clients had full remission at a 1-year follow-up, and 21% reported partial remission (Phillips, Pagano, Menard & Stout, 2006). A more recent study reported more promising results, with 76% of participants reporting full remission over an 8-year period (Bjornsson, Dyck, et al., 2011).
Plastic surgery and medical treatments
It should not come as a surprise that many individuals with BDD seek out plastic surgery to attempt to correct their perceived defects. Phillips and colleagues (2001) evaluated treatments of clients with BDD and found that 76.4% reported some form of plastic surgery or medical treatment, with dermatology treatment being the most commonly reported (45%), followed by plastic surgery (23%). The problem with this type of treatment is that the individual is rarely satisfied with the outcome of the procedure, thus leading them to seek out additional surgeries on the same defect (Phillips, et al., 2001). Therefore, it is important that medical professionals thoroughly screen patients for BDD before completing any type of medical treatment.
Learning Objectives
• Describe how depressive disorders present and be able to distinguish between the different types of depressive disorders.
• Describe how bipolar disorders present and be able to distinguish between the different types of bipolar disorders.
• Describe the epidemiology of mood disorders.
• Describe comorbidity in relation to mood disorders.
• Describe the etiology of mood disorders.
• Describe treatment options for mood disorders.
In Chapter 6, we will discuss matters related to mood disorders to include their clinical presentation, epidemiology, comorbidity, etiology, and treatment options. Our discussion will include Major Depressive Disorder, Persistent Depressive Disorder (formerly called Dysthymia), Bipolar I Disorder, Bipolar II Disorder, and Cyclothymic Disorder.
06: Mood Disorders
Section Learning Objectives
• Classify the symptoms of depression.
• Identify and describe the two types of depressive disorders.
• Identify the disorders that are commonly comorbid with depressive disorders.
• Describe the epidemiology of depressive disorders.
• Discuss the factors that contribute to depressive disorders.
• Describe treatment options for depressive disorders.
Clinical Description
Symptoms of Depressive Disorders
Symptoms of depression can generally be categorized into four categories to include mood, behavioral, physical and cognitive symptoms.
Mood
While clinical depression can vary in its presentation among individuals, most if not all individuals with depression will report significant mood disturbances such as a depressed mood (e.g., feeling sad, hopeless, discouraged) and/or feelings of anhedonia, which is the loss of interest or pleasure in previously interesting activities. These feelings occur transiently in all of us; therefore, to be considered symptoms of depression, they must be present most of the day, nearly every day.
Behavioral
Fatigue and/or decreased energy is a symptom of depression that can make even the simplest of tasks (e.g., showering, getting off the couch to get the T.V. remote) seem difficult. Behavioral issues such as decreased physical activity and reduced productivity – both at home and at work – can result from this fatigue and cause disruptions in daily functioning (e.g., difficulty maintaining social interactions and employment responsibilities).
Physical
Changes in sleep patterns are common in those experiencing depression. This can occur at various points throughout the night – either difficulty falling asleep (initial insomnia), waking up in the middle of the night (middle insomnia), or even waking too early and not being able to fall back asleep before having to wake for the day (terminal insomnia). Excessive sleeping can also occur (hypersomnia).
Additional physical symptoms, such as changes in weight or eating behaviors, are also symptoms of depression. Some individuals who are experiencing depression report a lack of appetite, often forcing themselves to eat something during the day. In contrast, others eat excessively, often seeking “comfort foods” such as those high in carbohydrates. Due to these changes in eating behaviors, there may be associated changes in weight. Changes in weight of more than 5% of one’s body weight are considered a symptom of depression.
Finally, psychomotor agitation, which is purposeless physical movement of the body (e.g., pacing around a room, tapping toes, restlessness), and its opposite, psychomotor retardation (e.g., slowed speech, thinking, and movement), are symptoms of depression.
Cognitive
It should not come as a surprise that there are serious disruptions in cognitions as individuals with depressive disorders typically hold a negative view of themselves and the world around them. They are quick to blame themselves when things go wrong, and rarely take credit when they experience positive achievements. Feelings of worthlessness and guilt are common symptoms of depression. These distorted cognitions can create a negative feedback loop and further contribute to feelings of depression. Finally, thoughts of suicide and self-harm do occasionally occur in those with depressive disorders and are considered one of the most severe symptoms of depression.
Individuals with depressive disorders also report difficulty thinking, making decisions, and/or concentrating on tasks. This is supported by research that has found individuals with depression perform worse than those without depression on tasks of memory, attention, and reasoning (Chen et al., 2013).
Types of Depressive Disorders
The two most common types of depressive disorders are major depressive disorder (MDD) and persistent depressive disorder (PDD). PDD (previously known as Dysthymia) is thought to be a more chronic, and potentially less severe, form of depression. More specifically, although symptoms for MDD and PDD are nearly identical, the duration and number of symptoms required to diagnose the two disorders differ substantially. First, while symptoms of MDD must persist for a minimum of two weeks to make a diagnosis, symptoms of PDD must persist continuously for a minimum of two years for a diagnosis. Second, five symptoms are required to diagnose MDD, while only two are required for a diagnosis of PDD.
As with most of the disorders listed in the DSM-5, diagnoses of both MDD and PDD require that the symptoms cause significant distress or impairment. Moreover, the clinician needs to rule out that the symptoms are caused by something secondary (e.g., a medical condition, a substance, a schizophrenia spectrum disorder) and establish that the individual has never had a manic or hypomanic episode before a diagnosis of MDD or PDD can be made.
It is important to note that these are not the only depressive disorders recognized by the DSM-5. Indeed, the DSM-5 added two new depressive disorders – Disruptive Mood Dysregulation Disorder and Premenstrual Dysphoric Disorder. Since these are new disorders, less is known about them, and as such we will not consider them in any further detail here.
Epidemiology
According to the DSM-5 (APA, 2013), the prevalence of MDD is approximately 7% within the U.S. (which represents the highest prevalence of depression in the world). The prevalence rate for PDD is much lower, with a 0.5% rate among adults in the U.S. There is a difference among demographics, with individuals in the 18- to 29-year-old age bracket reporting higher rates of depression than any other age group. Similarly, depression is approximately 1.5 to 3 times higher in females than in males. The estimated lifetime prevalence for MDD in women is 21.3% compared to 12.7% in men (Nolen-Hoeksema, 2001).
Suicidality in depressive disorders is much higher than in the general public. Males and those with a past history of suicide attempts/threats are most at risk for attempting suicide.
Comorbidity
I’m sure it does not come as a surprise that studies exploring depression symptoms among the general population show a substantial pattern of comorbidity with other mental disorders (Kessler, Berglund, et al., 2003). In fact, in a large-scale research study nearly three-fourths of participants with lifetime MDD also met criteria for at least one other DSM disorder (Kessler, Berglund, et al., 2003). Among those that are the most common are anxiety disorders, ADHD, and substance abuse.
Given the extent of comorbidity among individuals with MDD, researchers have tried to identify which disorder precipitated the other. The majority of studies have found that most cases of depression occur secondary to another mental health disorder, suggesting that the onset of depression is a direct result of the onset of another disorder (Gotlib & Hammen, 2009).
Etiology
Biological
Research throughout the years continues to provide evidence that depressive disorders have some biological cause. Most individuals who develop depression have some predisposition to develop a depressive disorder. Among the biological factors are genetic factors, biochemical factors, endocrine factors, and brain structure.
Genetics
As with other disorders, researchers often explore the prevalence rate of depressive disorders among family members, in efforts to determine whether there is some genetic component. If there is a genetic predisposition to developing depressive disorders, one would expect a higher rate of depression within families than that of the general population. Research supports this, as the risk of depression in relatives of individuals diagnosed with depression is nearly 30 percent, compared to 10 percent in the general population (Levinson & Nichols, 2014).
Another way to study the genetic component of a disorder is via twin studies. One would expect identical twins to show a higher concordance rate than fraternal twins, as identical twins share the same genetic make-up whereas fraternal twins, like ordinary siblings, share only about 50% of their genes. A large-scale study found that if one identical twin was diagnosed with depression, there was nearly a 46% chance that the other was as well; for fraternal twins, the rate was only 20%. This study provided strong evidence of a genetic link in the development of depression (McGuffin et al., 1996).
Finally, scientists have more recently been studying depression at a molecular level, exploring possible gene abnormalities underlying depressive disorders. While much of this research is speculative due to sampling issues and low power, there is some evidence that depression may be tied to the 5-HTT gene on chromosome 17, which regulates serotonin activity (Jansen et al., 2016).
Biochemical
As you will read in the treatment section, there is strong evidence of a biochemical deficit in depression. More specifically, low activity levels of norepinephrine and serotonin have long been documented as contributing factors to developing depressive disorders. This was discovered accidentally in the 1950s, when monoamine oxidase inhibitors (MAOIs) were given to patients with tuberculosis and their depressed moods improved as well. Soon thereafter, medical providers found that medications used to treat high blood pressure by reducing norepinephrine also caused depression in their patients (Ayd, 1956).
While these initial findings were premature in identifying exactly how neurotransmitters affect the development of depressive symptoms, they did indicate which neurotransmitters are involved. Researchers are still working out the exact pathways; however, it appears that both norepinephrine and serotonin contribute to the development of symptoms, whether through their interaction with each other or with other neurotransmitters (Ding et al., 2014).
Endocrine System
As described in Chapter 2, the endocrine system is a collection of glands responsible for regulating hormones, metabolism, growth and development, sleep, and mood, among other things. Some research has implicated hormones, particularly cortisol (a stress hormone), in the development of depression (Owens et al., 2014). Additionally, elevated levels of melatonin (a hormone released in darkness to assist the transition to sleep) may also be related to depressive symptoms, particularly seasonal affective disorder, a type of depression prominent in northern latitudes where there is little winter sunlight.
Brain Anatomy
Given that neurotransmitters are involved in depressive disorders, it should not be a surprise that brain anatomy is also involved. While the exact structures and pathways are yet to be determined, research studies implicate the prefrontal cortex, the hippocampus, and the amygdala. More specifically, drastic changes in blood flow throughout the prefrontal cortex have been linked with depressive symptoms. Similarly, a smaller hippocampus, and consequently a smaller number of neurons, has also been linked to depressive symptoms (which may help account for the memory problems commonly reported in depression). Finally, heightened activity and blood flow in the amygdala, the brain area responsible for the fight-or-flight response, are also consistently found in individuals with depressive symptoms.
Cognitive
The cognitive model, arguably the best-supported model of depressive disorders, focuses on the negative thoughts and perceptions that may contribute to and maintain symptoms of depression. One theory often equated with the cognitive model of depression is learned helplessness. The concept of learned helplessness was developed from Seligman’s (1972) laboratory experiments involving dogs. In these studies, Seligman restrained dogs in an apparatus and shocked them regardless of their behavior. The following day, the dogs were placed in a similar apparatus; this time, however, the dogs were not restrained and a small barrier separated the “shock” floor from the “safe” floor. Seligman observed that despite the opportunity to escape the shock, the dogs scrambled briefly and then ultimately lay down and whimpered while being shocked. Based on this study, Seligman concluded that the animals had learned the previous day that they were unable to avoid the shock and had therefore learned that they were helpless. When they were placed in a similar environment that did offer an escape, this learned helplessness carried over and they continued to believe they could not escape the shocks.
The concept of learned helplessness has been extended to humans through research on attributional styles (Nolen-Hoeksema, Girgus & Seligman, 1992). There are two types of attributional styles – positive and negative. A negative attributional style attributes daily events to internal, stable, and global causes, whereas a positive attributional style attributes them to external, unstable, and specific causes. Research has found that individuals with a negative attributional style are more likely to experience depression, likely due to their negative interpretation of daily events. For example, if something bad happens to them, they conclude that it is their fault (internal), that bad things always happen to them (stable), and that bad things happen in every area of their life (global). Unfortunately, this maladaptive thinking style often takes over their view of themselves and the world, making them more vulnerable to depression.
In addition to attributional style, Aaron Beck identified negative thinking as a precursor to depressive disorders (Beck, 2002, 1991, 1967). Often viewed as the grandfather of Cognitive-Behavioral Therapy, Beck coined the terms maladaptive attitudes, cognitive triad, errors in thinking, and automatic negative thoughts – all of which combine to explain the cognitive model of depressive disorders.
Maladaptive attitudes – negative attitudes about oneself, others, and the world – are often present in those experiencing depression. These attitudes are inaccurate and often global. For example: “If I fail my exam, the world will know I’m stupid.” Will the entire world really know you failed your exam? Not likely. Because you failed the exam, are you stupid? No. Individuals with depressive symptoms often develop these maladaptive attitudes about everything in their life, indirectly isolating themselves from others. The cognitive triad also plays into maladaptive attitudes: the individual harbors negative thoughts about themselves, their experiences, and their future. For example, someone who is dumped might think, “I am worthless, no one loves me or treats me well, and my future is hopeless,” rather than concluding that they were simply a bad match with the other person.
Cognitive distortions, also known as errors in thinking, are a key component of Beck’s cognitive theory. Beck identified 15 errors in thinking that are most common in individuals with depression. Among the most common are catastrophizing (believing things are far worse than they actually are), jumping to conclusions, and overgeneralization. I always like to use my dad as an example of overgeneralization – whenever we go to the grocery store, he comments that whatever line he chooses, at every store, is always the slowest and takes the longest. Does this happen every time he is at the store? I’m doubtful, but his error in thinking makes him perceive it to be true.
Finally, automatic negative thoughts – a constant stream of negative thoughts – also lead to symptoms of depression as individuals regularly think in a pessimistic manner. While some cognitions are deliberately interpreted in a negative light, Beck noted that other negative thoughts, such as these, occur automatically. Research studies have continually supported Beck’s maladaptive attitudes, errors in thinking, and automatic negative thoughts as fundamental factors that contribute to and help maintain depressive disorders (Possel & Black, 2014; Lai et al., 2014). Furthermore, as you will see in the treatment section, cognitive strategies are among the most effective forms of treatment for depressive disorders.
Behavioral
The behavioral model explains depression as a result of a change in the number of rewards and punishments one receives throughout life. This change can come from work, intimate relationships, family, or the environment in general. Among the most influential theorists in this area is Peter Lewinsohn, who proposed that depression occurs in most people due to reduced positive rewards in their life. Because they are not being positively rewarded, their constructive behaviors occur more and more infrequently until they stop engaging in them completely (Lewinsohn et al., 1990; 1984). An example is a student who continues to receive bad grades on exams despite studying for hours; over time, the student reduces the amount of study time, thus continuing to earn poor grades.
Sociocultural
In the sociocultural model, family and one’s social environment play a strong role in the development of depressive disorders. These topics are explored next.
Social Support
Depression is commonly found to be related to a lack of social support. Research shows that separated and divorced individuals are three times more likely to experience depressive symptoms than those who are married or even widowed (Schultz, 2007). While many factors lead a couple to separate or end a marriage, some relationships end due to a spouse’s mental health issues, particularly depressive symptoms. Depressive symptoms have been positively related to increased interpersonal conflicts, reduced communication, and intimacy issues, all of which are commonly reported as causal factors in divorce (Najman et al., 2014). The relationship between depression and marital problems appears to be bidirectional, with stress and marital discord leading to increased rates of depression in one or both spouses (Nezlek et al., 2000). Further, while some research indicates that having children provides a positive influence in one’s life, it can also create stress both within the individual and between partners, due to the division of work and potential differences in discipline strategies and beliefs. Research has shown that women with three or more young children who also lacked a close confidante and outside employment were more likely than other mothers to become depressed (Brown, 2002).
Multi-Cultural Perspective
While depression is experienced across the entire world, one’s cultural background may influence which symptoms of depression are presented. Common depressive symptoms such as feeling sad, lack of energy, anhedonia, difficulty concentrating, and thoughts of suicide are the hallmark in most societies, but other symptoms may be more specific to one’s nationality. More specifically, individuals from Asian cultures often focus on the physical symptoms of depression – tiredness, weakness, sleep issues – with less emphasis on the cognitive symptoms. Individuals from Latino and Mediterranean cultures often experience problems with “nerves” and headaches as primary symptoms of depression (APA, 2013).
Within the United States, many researchers have explored potential differences across ethnic and racial groups in both the rate of depression and the presenting symptoms of those diagnosed with depression. These studies have continually failed to identify significant differences between ethnic and racial groups; however, one major study identified a difference in the rate of recurrence of depression among Hispanic and African Americans (Gonzalez et al., 2010). While the exact reason for this is unclear, the researchers propose a lack of treatment opportunities as a possible explanation. According to Gonzalez and colleagues (2010), approximately 54% of depressed Caucasian Americans seek treatment, compared to 34% of Hispanic Americans and 40% of African Americans. Such a large discrepancy in treatment use suggests that minority Americans are not receiving the effective treatment necessary to resolve the disorder, leaving them more vulnerable to repeated depressive episodes.
Gender Differences
As previously discussed, there is a significant difference between rates of depression in men and women, with women being twice as likely to experience an episode of depression as men (Schuch et al., 2014). Several theories attempt to explain this imbalance.
The first theory – artifact theory – posits that the difference between genders is due to clinicians or diagnostic systems being more sensitive to diagnosing women with depression than men. While women are often thought to be more “emotional,” easily expressing their feelings and more willing to discuss their symptoms with clinicians and physicians, men often withhold their symptoms or present with more traditionally “masculine” symptoms of anger or aggression. While this theory offers a possible explanation for the gender difference in rates of depression, research has failed to support it, suggesting that men and women are equally likely to seek out treatment and discuss their depressive symptoms (McSweeney, 2004; Rieker & Bird, 2005).
The second theory – hormone theory – suggests that variations in hormone levels trigger depression in women more than in men (Graziottin & Serafini, 2009). While there is biological evidence supporting the changes in hormone levels during various phases of the menstrual cycle and their impact on women’s ability to integrate and process emotional information, research has failed to support this theory as the reason for higher rates of depression in women (Whiffen & Demidenko, 2006).
The third theory – life stress theory – suggests that women are more likely to experience chronic stressors than men, thus accounting for their higher rate of depression (Astbury, 2010). Women are at an increased risk for facing poverty, lower employment opportunities, discrimination, and poorer quality of housing than men, all of which are strong predictors of depressive symptoms (Garcia-Toro et al., 2013).
The fourth theory – gender roles theory – suggests that social and/or psychological factors related to traditional gender roles also influence the rate of depression in women. For example, men are often encouraged to develop personal autonomy, seek out activities that interest them, and pursue achievement-oriented goals, while women are encouraged to empathize with and care for others. This fosters interdependent functioning, which may lead women to value the opinions of others more highly than their male counterparts do.
The final theory – rumination theory – suggests that women are more likely than men to ruminate, or intently focus and dwell on their depressive symptoms, thus making them more vulnerable to developing depression at a clinical level (Nolen-Hoeksema, 2012). Several studies have supported this theory and shown that rumination of negative thoughts is positively related to an increase in depression symptoms (Hankin, 2009).
While many theories have been proposed to explain the gender discrepancy in depression, no single theory has produced enough evidence to fully explain why women experience depression more often than men. Because of this, gender differences in depression remain a highly researched topic and, simultaneously, one of the least understood phenomena in clinical psychology.
Treatment
Given that MDD is among the most frequent and debilitating psychiatric disorders, it should not be surprising that the research on its treatment is quite extensive. The most efficacious treatments include antidepressant medications, Cognitive-Behavioral Therapy (CBT; Beck et al., 1979), Behavioral Activation (BA; Jacobson et al., 2001), and Interpersonal Therapy (IPT; Klerman et al., 1984). Although CBT is the most widely known and used treatment for depression, there is minimal evidence to support one treatment modality over the others; rather, treatment is generally dictated by therapist competence, availability, and client preference (Craighead & Dunlop, 2014).
Psychopharmacology
Antidepressant medications are often the first-line treatment for depression for a few reasons. Oftentimes an individual first presents with symptoms to their primary care provider (a medical doctor), who prescribes an antidepressant medication. Medication is often seen as an “easier” treatment for depression because the individual can take it at home rather than attending weekly therapy sessions; however, this also leaves room for adherence problems, as a large percentage of individuals do not take their prescription medication as directed by their physician. Further, antidepressant medications take 3-6 weeks to begin to take effect.
There are a few different classes of antidepressant medications, each categorized by their structural or functional relationships. It should be noted that no specific antidepressant medication has been proven more effective in treating MDD than the others (APA, 2010). In fact, many people try several different antidepressant medications until they find one that is effective for them with minimal side effects.
Selective serotonin reuptake inhibitors (SSRIs)
SSRIs are among the most common medications used to treat depression due to their relatively benign side effects. Additionally, the dose required to reach therapeutic levels is low compared to other medication options. Possible side effects of SSRIs include, but are not limited to, nausea, insomnia, weight gain, and reduced sex drive. SSRIs improve symptoms of depression by blocking the reuptake of serotonin by presynaptic neurons, thus leaving more serotonin available for postsynaptic neurons (the closely related SNRIs additionally block the reuptake of norepinephrine). While this is the general mechanism through which all SSRIs work, there are minor biological differences among the medications within the SSRI family. These minor differences benefit clients in that there are several treatment options for maximizing medication benefits while minimizing side effects.
Tricyclic Antidepressants
Although originally developed to treat schizophrenia, tricyclic antidepressants were adapted to treat depression after they failed to manage schizophrenia symptoms (Kuhn, 1958). The term tricyclic refers to the drugs’ molecular structure, which contains three rings. Tricyclic antidepressants are similar to SSRIs in that they alter brain chemistry, changing the availability of neurotransmitters. More specifically, they block the reuptake of serotonin and norepinephrine, thus increasing their availability for postsynaptic neurons. Tricyclic antidepressants have been shown to be more effective in treating traditionally resistant depression and PDD. While effective, tricyclics have been increasingly replaced by SSRIs because of the SSRIs’ milder side effects. While most side effects of tricyclics are minimal – dry mouth, blurry vision, constipation – others can be serious: sexual dysfunction, weight gain, tachycardia, and cognitive and/or memory impairment, to name a few. Tricyclic antidepressants should not be used in cardiac patients, as they have been shown to exacerbate cardiac arrhythmias (Roose & Spatz, 1999).
Monoamine Oxidase Inhibitors (MAOIs)
The utility of MAOIs was discovered serendipitously in the early 1950s, after they produced antidepressant effects in a tuberculosis patient. Although MAOIs are still prescribed, they are not typically first-line medications because of potentially lethal interactions with common foods such as aged cheese and wine, which can trigger a hypertensive crisis. Because of this, individuals on MAOIs must follow strict dietary restrictions to reduce their risk of hypertensive crises (Shulman, Herrman & Walker, 2013).
How do MAOIs work? In basic terms, the enzyme monoamine oxidase removes excess norepinephrine, serotonin, and dopamine in the brain. MAOIs prevent monoamine oxidase (hence the name monoamine oxidase inhibitors) from removing these neurotransmitters, thereby increasing the availability of the neurotransmitters associated with depression (Shulman, Herrman & Walker, 2013). While these drugs are effective, they come with serious side effects. In addition to hypertensive episodes, they can cause nausea, headaches, drowsiness, involuntary muscle jerks, reduced sexual desire, and weight gain, to name a few (American Psychiatric Association, 2010). Despite these side effects, studies have shown that individuals prescribed MAOIs for depression have a treatment response rate of 50-70% (Krishnan, 2007). Overall, MAOIs are probably best reserved for later-stage, treatment-resistant depression in individuals who have exhausted other treatment options (Krishnan, 2007).
It should be noted that antipsychotic medications are occasionally used for individuals with MDD; however, this use is limited to individuals presenting with psychotic features.
Psychotherapy
Cognitive Behavioral Therapy (CBT)
CBT was founded by Aaron Beck in the 1960s and is a widely practiced therapeutic approach for treating depression. The basics of CBT involve what Beck called the cognitive triad – cognitions (thoughts), behaviors, and emotions. Beck believed that these three components are interconnected and therefore affect one another. CBT is thought to improve emotions in individuals with depression by changing both cognitions and behaviors, which in turn improves mood. Common cognitive interventions in CBT include monitoring and recording thoughts, identifying cognitive errors, examining the evidence supporting or negating cognitions, and creating rational alternatives to maladaptive thought patterns. Behavioral interventions include activity planning, pleasant event scheduling, task assignments, and coping-skills training.
Cognitive behavioral therapy generally follows three phases of treatment:
• Phase 1: Increasing pleasurable activities. Similar to behavioral activation (read below), the clinician encourages the client to identify and engage in activities that are pleasurable to the individual. The clinician is able to help the client identify the activity and plan when they will engage in that activity.
• Phase 2: Identifying automatic negative thoughts. During this stage, the clinician provides psychoeducation about the automatic negative thoughts that can maintain symptoms of depression. The client learns to identify these thoughts on their own and maintains a thought journal of these cognitions to review with the clinician in session.
• Phase 3: Challenging automatic negative thoughts. Once the individual is consistently able to identify these negative thoughts on a daily basis, the clinician helps the client see how the thoughts maintain their depressive symptoms. At this point the client begins to have direct insight into how their cognitions contribute to their disorder. Finally, the client is taught to challenge the negative thoughts and replace them with positive thoughts.
CBT typically requires 10-20 sessions and not only assists in recovery from depression but also helps prevent relapse. Evidence shows lower relapse rates following CBT (20-35%) compared to individuals who received no treatment (70%) and those who were on antidepressant medications and stopped taking them (50%).
Rates of relapse following any treatment for MDD are often associated with individuals whose onset was at a younger age (particularly adolescents), those who have already experienced multiple major depressive episodes, and those with more severe symptomology, especially those presenting with severe suicidal ideation and psychotic features (APA, 2013).
Behavioral Activation (BA)
BA is similar to the behavioral component of CBT in that the goal of treatment is to alleviate depression and prevent future relapse by changing an individual’s behavior. Founded by Ferster (1973) and by Lewinsohn and colleagues (Lewinsohn, 1974; Lewinsohn, Biglan, & Zeiss, 1976), BA aims to increase the frequency of behaviors that bring individuals into greater contact with sources of reward in their lives. To do this, the clinician helps the client develop a list of pleasurable activities to engage in outside of treatment (e.g., going for a walk, going shopping, having dinner with a friend). Additionally, the clinician helps the client identify and monitor negative behaviors – crying, sleeping too much, avoiding friends – so that they do not undermine the pleasurable activities. Finally, the clinician works with the client on effective social skills. The idea is that if negative behaviors are minimized and pleasurable activities are maximized, the individual will receive more positive reinforcement from others and their environment, thus improving their overall mood.
Interpersonal Therapy (IPT)
IPT was developed by Klerman, Weissman, and colleagues in the 1970s as a treatment arm for a pharmacotherapy study of depression (Klerman & Weissman, 1994). The treatment was created based on data from post-World War II individuals whose depression was significantly affected by psychosocial life events. Klerman and colleagues noticed a significant relationship between the development of depression and complicated bereavement, role disputes, role transitions, and interpersonal deficits in these individuals (Weissman, 1995). The idea behind IPT is that depression compromises interpersonal functioning, which in turn makes it difficult to manage stressful life events. The basic mechanism of IPT is to establish effective strategies for managing interpersonal issues, which in turn ameliorates depressive symptoms. There are two main principles of IPT. First, depression is a common medical illness with a complex and multi-determined etiology; because depression is a medical illness, it is treatable and is not the individual’s fault. Second, depression is connected to a current or recent life event. The goal of IPT is to identify the interpersonal problem connected to the depressive symptoms and resolve this crisis so the individual can improve their life situation while relieving depressive symptoms.
Multimodal Treatment
While pharmacological and psychological treatments alone are each effective in treating depression, a combination of the two may help individuals who have not achieved wellness with a single modality.
Multimodal treatment can be offered in three different ways: the treatments can be initiated concurrently, they can be initiated sequentially, or a stepped approach can be used (McGorry et al., 2010). In stepped treatment, pharmacological therapy is used initially to treat depressive symptoms; once the client reports some relief, psychosocial treatment is added to address the remaining symptoms. While all three methods are effective in managing depressive symptoms, matching clients to their treatment preference may produce better outcomes than clinician-driven treatment decisions.
Section Learning Objectives
• Identify the symptoms of bipolar disorder.
• Identify and distinguish between the three types of bipolar disorders.
• Identify the disorders that are commonly comorbid with bipolar disorders.
• Describe the epidemiology of bipolar disorders.
• Discuss the factors that contribute to bipolar disorders.
• Describe treatment options for bipolar disorders.
Clinical Description
There are three bipolar disorders – bipolar I disorder, bipolar II disorder, and cyclothymic disorder.
A diagnosis of bipolar I disorder is made when there is at least one manic episode. This manic episode may be preceded or followed by a hypomanic or major depressive episode, but neither is required for the diagnosis. In contrast, a diagnosis of bipolar II disorder requires both a hypomanic episode and a major depressive episode; a manic episode (past or present) rules out bipolar II disorder. In simpler terms: if an individual has ever experienced a manic episode, they qualify for a bipolar I diagnosis, and if the criteria have been met for both a hypomanic and a depressive episode (but never a manic episode), they qualify for a bipolar II diagnosis.
So, what defines a manic episode? The key feature is an abnormally euphoric or irritable mood that persists for at least one week. To qualify as a manic episode, the individual must also experience at least three other symptoms of a manic episode. These symptoms include inflated self-esteem, decreased need for sleep, pressured speech, racing thoughts, distractibility, psychomotor agitation, and involvement in pleasurable activities that are likely to result in negative consequences (e.g., risky sexual behavior, gambling).
With regard to mood, an individual in a manic episode appears excessively happy, often engaging haphazardly in sexual or personal interactions. They also display rapid shifts in mood, known as mood lability, ranging from happy to neutral to irritable. Inflated self-esteem or grandiosity is also common during a manic episode and can occasionally reach delusional proportions: individuals may believe they are friends with a celebrity, do not need to abide by laws, or are even God or famous.
Despite their increased activity level, individuals experiencing a manic episode also typically have a decreased need for sleep, sleeping as little as a few hours a night yet still feeling rested. In fact, a decreased need for sleep may signal that a manic episode is about to begin.
It is not uncommon for those in a manic episode to exhibit rapid, pressured speech. Their conversation can be difficult to follow due to the speed of their talking and their tangential storytelling (i.e., jumping from topic to topic). They can also be difficult to interrupt, often disregarding the reciprocal nature of conversation. If the individual is more irritable than expansive, speech can become hostile or even punctuated by angry tirades, particularly if they are interrupted or prevented from pursuing an activity they are seeking out. Given this speech pattern, it should not be surprising that manic episodes are also marked by racing thoughts, commonly referred to as a flight of ideas. Because of these rapid thoughts, speech may become disorganized or incoherent.
Hypomanic episodes are milder versions of manic episodes. While the symptoms of the two are the same, a hypomanic episode requires only four days of symptoms rather than the full week required to diagnose a manic episode. Moreover, while manic episodes must cause impairment in functioning, cause significant distress, or require hospitalization, hypomanic episodes must not; if any of these three features is present, the episode is considered manic rather than hypomanic.
It should be noted that there is a subclass of individuals who experience periods of hypomanic symptoms that do not fully meet DSM-5 criteria for a hypomanic episode and depressive symptoms that likewise do not fully meet DSM-5 criteria for a depressive episode. These individuals are diagnosed with cyclothymic disorder (APA, 2013). Cyclothymic disorder is further distinguished from bipolar II disorder by its duration. Specifically, cyclothymic disorder requires a minimum of two years of subthreshold depressive and hypomanic symptoms before a diagnosis can be made.
Epidemiology
Compared to depression, epidemiological studies suggest a significantly lower prevalence rate for both bipolar I and bipolar II disorder. The difference between the two disorders is minimal, with yearly prevalence rates reported as 0.6% for bipolar I disorder and 0.8% for bipolar II disorder in the U.S. (APA, 2013). There are no apparent differences in the frequency of men and women diagnosed with bipolar I or bipolar II disorder; however, rapid-cycling episodes (where four or more mood episodes are experienced in a one-year period) are more common in women (Bauer & Pfenning, 2005).
Individuals with bipolar disorder are approximately 15 times more likely than the general population to attempt suicide. The prevalence of suicide attempts among individuals with bipolar disorder is estimated to be 33%. Furthermore, bipolar disorder may account for one-quarter of all completed suicides (APA, 2013).
While only a small percentage of the population develops cyclothymic disorder (lifetime prevalence estimates range from 0.4 to 1%), it can eventually progress into bipolar I or bipolar II disorder (Zeschel et al., 2015).
As stated previously, bipolar II disorder requires a past or present depressive episode and, while not required, depressive episodes are commonly experienced in bipolar I disorder. The depressive episode can occur before or after the manic/hypomanic episode, and the two types of episodes can alternate or “cycle” throughout one’s life.
Comorbidity
The bipolar disorders also have a high comorbidity rate with other mental disorders, particularly anxiety disorders and disruptive/impulse-control disorders such as ADHD and conduct disorder. Substance abuse disorders are also commonly seen in individuals with bipolar disorder. In fact, over half of those with bipolar disorder also meet diagnostic criteria for substance abuse disorder, particularly alcohol abuse. The combination of bipolar disorder and substance abuse disorder places individuals at a greater risk of suicide attempt (APA, 2013). While these comorbidities are high across both bipolar I and bipolar II, bipolar II appears to have more comorbidities, with 60% of individuals with bipolar II disorder meeting criteria for three or more co-occurring mental disorders (APA, 2013).
Etiology
Biological
As is typical with most mental disorders, there is an elevated prevalence of bipolar disorders among first-degree biological relatives of people with bipolar I or bipolar II disorder. Specifically, first-degree biological relatives of individuals with bipolar I or II disorder have a 10-fold increased risk of developing bipolar disorder. Twin studies of bipolar disorder yield concordance rates as high as 72% for identical twins and as high as 20% for fraternal twins. Both of these percentages are significantly higher than that of the general population, suggesting a strong genetic component to bipolar disorder (Edvardsen et al., 2008). Indeed, as some of these statistics demonstrate, the genetic contribution to bipolar disorder is believed to be greater than the genetic contribution to depressive disorders. There also seems to be a shared genetic component to the bipolar disorders and major depressive disorder (MDD), as relatives of individuals with bipolar disorder have elevated rates of MDD and MDD is more common in relatives of individuals with cyclothymic disorder.
Due to the close relationship between depression and bipolar disorder, researchers initially believed that norepinephrine, serotonin, and dopamine were all implicated in the development of bipolar disorder, with a drastic increase in serotonin thought to occur during manic episodes. Research, however, actually supports the opposite: manic episodes may, in fact, be explained by low levels of serotonin and high levels of norepinephrine (Soreff & McInnes, 2014). Moreover, following evidence that drugs like cocaine, which stimulate dopamine, produce manic-like symptoms, it is further believed that elevated dopamine may be implicated in bipolar I disorder. Additional research in this area is needed to conclusively determine exactly what is responsible for the manic episodes that characterize bipolar I disorder.
Psychological
Stressful life events are believed to trigger early episodes of mania and depression, but as the disorder progresses, the cycling from mania to depression can take on a life of its own and become more removed from stressors. Nevertheless, stressful life events can always provoke a relapse. As we saw with the depressive disorders, separated and divorced people have higher rates of bipolar I disorder than do people who are married or who were never married. Once again, the direction of this relationship is not clear, but it is likely bidirectional: the symptoms of bipolar disorder can contribute to marital discord, while the termination of a marriage represents a severe psychosocial stressor that can contribute to the onset of a bipolar disorder or trigger a relapse of the disorder. Finally, a lack of social support is associated with more depressive episodes in bipolar disorder, as it was for the depressive disorders.
Treatment
Psychopharmacology
Unlike treatment for MDD, there is some controversy over the most effective treatment for bipolar disorder. One suggestion is to treat bipolar disorder aggressively with mood stabilizers such as Lithium or Depakote as these medications do not induce pharmacological mania/hypomania. These mood stabilizers are occasionally combined with antidepressant medications later in treatment only if absolutely necessary (Ghaemi, Hsu, Soldani & Goodwin, 2003). Research has shown that mood stabilizers are less powerful in treating depressive symptoms in those with bipolar disorder, and therefore, this combination approach is believed to help reduce the occurrence of both manic and depressive episodes (Nivoli et al., 2011).
The other treatment option is to forgo the mood stabilizer and treat symptoms with newer antidepressants early in treatment. Unfortunately, large-scale research studies have not shown great support for this method (Gijsman, Geddes, Rendell, Nolen, & Goodwin, 2004; Moller, Grunze & Broich, 2006). In fact, antidepressants can trigger a manic or hypomanic episode in people with bipolar disorder. Because of this, the first-line treatment for bipolar disorder is mood stabilizers, particularly Lithium.
Lithium and other mood stabilizers are very effective in managing the symptoms of bipolar disorder. Unfortunately, adherence to the medication regimen can be problematic. The euphoric highs associated with manic and hypomanic episodes are often desired and can lead individuals with bipolar disorder to stop taking their medication. A combination of psychopharmacology and psychotherapy aimed at increasing medication adherence may be the most effective treatment option for bipolar I and II disorder.
Psychological Treatment
Although psychopharmacology is the first and most widely used treatment for bipolar disorders, psychological interventions are occasionally paired with medication, as psychotherapy alone is not a sufficient treatment option. The majority of psychological interventions are aimed at medication adherence, as many people with bipolar disorder stop taking their mood stabilizers either when they “feel better” (Advokat et al., 2014) or, as described above, to induce a manic or hypomanic episode. CBT can also be used to help treat and reduce the recurrence of depressive episodes. Social skills training and problem-solving skills can also be helpful techniques to address in the therapeutic setting, as individuals with bipolar disorder may struggle in these areas. Finally, individuals with bipolar disorder may be advised to stabilize their routines, especially their sleep routines, to reduce the risk of relapse.
Chapter Recap
That concludes our discussion of mood disorders. You should now have a good understanding of the two major types of mood disorders – depressive and bipolar disorders. Be sure you are clear on what makes them different from one another in terms of their clinical presentation, diagnostic criteria, epidemiology, comorbidity, and etiology. Also be sure to understand how the different depressive disorders (MDD and PDD) are distinguished as well as how the various bipolar disorders (bipolar I disorder, bipolar II disorder, and cyclothymic disorder) differ from one another.
07: Dissociative Disorders
Chapter Overview
In Chapter 7, we will discuss dissociative disorders, including their clinical presentation, diagnostic criteria, epidemiology, comorbidity, etiology, and treatment options. Our discussion will include depersonalization/derealization disorder, dissociative amnesia, and dissociative identity disorder. Be sure you refer to Chapters 1-3 for explanations of key terms (Chapter 1), an overview of the various models to explain psychopathology (Chapter 2), and descriptions of the various therapies (Chapter 3).
Chapter Outline
• 7.2 Depersonalization/derealization disorder
• 7.3 Dissociative amnesia
• 7.4 Dissociative identity disorder
Chapter Learning Outcomes
• Describe the dissociative disorders and their symptoms.
• Describe the epidemiology of dissociative disorders.
• Indicate which disorders are commonly comorbid with dissociative disorders.
• Describe the etiology of dissociative disorders.
• Describe treatment options for dissociative disorders.
Chapter Introduction
Dissociative disorders are a group of disorders characterized by symptoms of disruption in consciousness, memory, identity, emotion, perception, motor control, or behavior (APA, 2013). These symptoms are likely to appear following a significant stressor or years of ongoing stress (e.g., abuse; Maldonado & Spiegel, 2014). Occasionally, one may experience temporary dissociative symptoms due to lack of sleep or ingestion of a substance; however, these would not qualify as a dissociative disorder due to the lack of impairment in functioning. Furthermore, individuals with acute stress disorder and post-traumatic stress disorder (PTSD) often experience dissociative symptoms, such as amnesia, flashbacks, depersonalization, and/or derealization; however, because of the identifiable stressor (and lack of the additional symptoms listed below), they meet diagnostic criteria for a stress disorder as opposed to a dissociative disorder.
There are three main types of dissociative disorders, which will be described in the next three sections: Depersonalization/Derealization Disorder, Dissociative Amnesia, and Dissociative Identity Disorder.
7.02: Depersonalization Derealization Disorder
Section Learning Objectives
• Describe how derealization/depersonalization disorder presents itself.
• Describe the epidemiology of derealization/depersonalization disorder.
• Indicate which disorders are commonly comorbid with derealization/depersonalization disorder.
• Describe factors that may contribute to the etiology of derealization/depersonalization disorder.
• Describe the treatment of derealization/depersonalization disorder.
\(1\): Clinical Description
Depersonalization/Derealization disorder is characterized by recurrent episodes of depersonalization and/or derealization. Depersonalization can be defined as a feeling of unreality or detachment from oneself. Individuals describe this feeling as an out-of-body experience where they are an outside observer of their own thoughts, feelings, and physical being. Furthermore, some people report feeling as though they lack speech or motor control, thus feeling at times like a robot. Distortions of one’s physical body have also been reported, with various body parts appearing enlarged or shrunken. Individuals may also feel detached from their own feelings.
Symptoms of derealization include feelings of unreality or detachment from the world—whether it be individuals, objects, or their surroundings. For example, people experiencing derealization may feel as though they are unfamiliar with their surroundings, even though they are in a place they have been to many times before. Feeling emotionally disconnected from close friends or family members whom they have strong feelings for is another common symptom experienced during derealization episodes. Sensory changes have also been reported such as feeling as though the environment is distorted, blurry, or even artificial. Distortions of time, distance, and size/shape of objects may also occur.
These episodes can last anywhere from a few hours to days, weeks, or even months (APA, 2013). The onset is generally sudden and, similar to the other dissociative disorders, is often triggered by intense stress or trauma. As one can imagine, depersonalization/derealization disorder can cause significant emotional distress, as well as impairment in one’s daily functioning (APA, 2013).
\(2\): Epidemiology
While many individuals experience brief episodes of depersonalization/derealization at some point in their life (about 50% of adults have experienced depersonalization/derealization at least once), the proportion of individuals who experience these symptoms to a clinically significant degree is estimated to be 2%, with an equal ratio of men and women experiencing these symptoms (APA, 2013). The mean age of onset is 16 years, with only a minority developing the disorder after the age of 25. About one-third of people with the disorder have discrete episodes, one-third have continuous symptoms from onset, and one-third have an episodic course that progresses to continuous.
\(3\): Comorbidity
Depersonalization/derealization disorder has been found to be comorbid with depression and anxiety disorders. With respect to the personality disorders, depersonalization/derealization disorder is most commonly comorbid with avoidant, borderline, and obsessive-compulsive personality disorders. Some evidence indicates that comorbidity with post-traumatic stress disorder is low (APA, 2013).
\(4\): Etiology
The causes of depersonalization/derealization disorder are largely unknown. Very little is understood about the potential genetic underpinnings; however, there is some suggestion that heritability rates for dissociative experiences range from 50-60% (Pieper, Out, Bakermans-Kranenburg, & Van Ijzendoorn, 2011). However, as with other psychological disorders, it is suggested that the combination of genetic and environmental factors may play a larger role in the development of dissociative disorders than genetics alone (Pieper et al., 2011).
There are clear associations between all of the dissociative disorders and childhood trauma and abuse but the association is slightly weaker for depersonalization/derealization disorder than it is for the other dissociative disorders (i.e., dissociative amnesia and dissociative identity disorder). Emotional abuse, emotional neglect, physical abuse, witnessing domestic violence, being raised by a parent who is seriously impaired and/or mentally ill, and experiencing the unexpected death or suicide of a family member or close friend are early-life stressors that have been identified to be associated with the disorder. The onset of the disorder is commonly triggered by severe stress, depression, anxiety, panic attacks, and drug use (particularly cannabis, hallucinogens, ketamine, ecstasy, and salvia).
\(5\): Treatment
Depersonalization/derealization disorder symptoms generally occur for an extensive period of time before the individual seeks out treatment. Because of this, there is some evidence that receiving the diagnosis alone is effective in reducing symptom intensity, as it relieves the individual’s anxiety surrounding the baffling nature of the symptoms (Medford, Sierra, Baker, & David, 2005).
Due to the high comorbidity of depersonalization/derealization disorder and anxiety/depression, the goal of treatment is often alleviating these secondary mental health symptoms related to the depersonalization/derealization symptoms. While there has been some evidence to suggest treatment with an SSRI is effective in improving mood, the evidence for a combined treatment method of psychopharmacological and psychological treatment is even more compelling (Medford, Sierra, Baker, & David, 2005). The psychological treatment of preference is cognitive-behavioral therapy as it addresses the negative attributions and appraisals contributing to the depersonalization/derealization symptoms (Medford, Sierra, Baker, & David, 2005). By challenging these catastrophic attributions in response to stressful situations, the individual is able to reduce overall anxiety levels, which in return, reduces depersonalization/derealization symptoms.
Section Learning Objectives
• Describe how dissociative amnesia presents itself.
• Describe the epidemiology of dissociative amnesia.
• Indicate which disorders are commonly comorbid with dissociative amnesia.
• Describe factors that may contribute to the etiology of dissociative amnesia.
• Describe the treatment of dissociative amnesia.
\(1\): Clinical Description
Dissociative amnesia is identified by amnesia for autobiographical information, particularly for traumatic events. This type of amnesia differs from what one would consider permanent amnesia in that the information was successfully stored in memory; however, the individual cannot retrieve it. Additionally, permanent amnesia often has a neurobiological cause, whereas dissociative amnesia does not (APA, 2013).
There are a few types of amnesia that people with dissociative amnesia can experience. Localized amnesia, the most common type of dissociative amnesia, is the inability to recall events during a specific period of time. The length of time within a localized amnesia episode can vary—it can be as short as the time immediately surrounding a traumatic event, or as long as months or years, should the trauma continue for that long (as commonly seen in abuse and combat situations). Selective amnesia is, in a sense, a component of localized amnesia in that the individual can recall some, but not all, of the details during a specific time period. For example, a soldier may experience dissociative amnesia for the time they were deployed, yet still have some memories of positive experiences such as celebrating Thanksgiving or Christmas dinner with their unit. The onset of localized and selective amnesia may immediately follow the acute stress or be delayed for hours, days, or longer.
Conversely, some individuals experience generalized amnesia, a complete loss of memory for their entire life history, including their own identity. Individuals who experience this type of amnesia show deficits in both semantic and procedural knowledge. This means that they have no common knowledge of the world (e.g., they cannot identify songs, the current president, or the names of colors), nor can they engage in learned skills (e.g., tying shoes, driving a car). The onset of generalized amnesia is typically acute.
While generalized amnesia is extremely rare, it is also extremely frightening. The onset is acute, and the individual is often found wandering in a state of disorientation. Many times, these individuals are brought into emergency rooms by law enforcement following a dangerous situation, such as walking aimlessly on a busy road.
Dissociative fugue is considered to be the most extreme type of dissociative amnesia: not only does the individual forget personal information, but they also flee to a different location (APA, 2013). The degree of the fugue varies among individuals – some experience symptoms for only hours, while for others the fugue lasts years, allowing individuals to take on new identities, careers, and even relationships. Similar to their sudden onset, dissociative fugues also end abruptly. After the fugue, the individual generally regains most of their memory. Emotional adjustment after the fugue depends on the time the individual spent in the fugue state – those who were in a fugue state longer experience more emotional distress than those whose fugue was shorter (Kopelman, 2002).
\(2\): Epidemiology
A large community sample suggested dissociative amnesia occurs in approximately 1.8% of the population, with females about 2.5 times more likely to be diagnosed than males (APA, 2013). Similar to trauma-related disorders, it is believed that more women experience dissociative amnesia because women are at greater risk than men of experiencing significant stress or trauma.
\(3\): Comorbidity
Given that dissociative amnesia is often precipitated by a traumatic experience, many people develop PTSD after the traumatic events are finally recalled. Similarly, a wide range of emotions related to their inability to recall memories during the episode often presents once their memories return (APA, 2013). These emotions often contribute to the development of a depressive episode.
Due to the rarity of these disorders with respect to other mental health disorders, it is often difficult to truly determine comorbid diagnoses. There has been some evidence of comorbid somatic symptom disorder and conversion disorder. Furthermore, dependent, avoidant, and borderline personality disorders have all been suspected as co-occurring disorders among the dissociative disorder family.
\(4\): Etiology
\(4.1\): Biological
As previously indicated, heritability rates for dissociative experiences range from 50-60% (Pieper, Out, Bakermans-Kranenburg, & Van Ijzendoorn, 2011). However, it is suggested that the combination of genetic and environmental factors may play a larger role in the development of dissociative disorders than genetics alone (Pieper et al., 2011).
\(4.2\): Cognitive
One cognitive theory of dissociative amnesia, proposed by Kopelman (2000), is that the combination of psychological stress and various other biopsychosocial predispositions affects the frontal lobe executive system’s ability to retrieve autobiographical memories (Picard et al., 2013). Neuroimaging studies have supported this theory by showing deficits in several prefrontal regions, an area responsible for memory retrieval (Picard et al., 2013). Despite these findings, there is still some debate over which specific brain regions within the executive system are responsible for the retrieval difficulties, as research studies have reported mixed findings.
\(4.3\): Environmental/Behavioral
Severe trauma and/or stress commonly precipitate the disorder. The most common precipitating stressors for fugues are marital discord, financial and occupational problems, natural disasters, and combat in war. The likelihood that dissociative amnesia is experienced increases with higher numbers of adverse childhood experiences (e.g., physical and/or sexual abuse), and more severe and frequent interpersonal violence. According to the behavioral perspective, the amnesia is negatively reinforced by avoiding/removing the distressing thoughts and feelings associated with the trauma/stressor.
\(4.4\): Psychodynamic
The psychodynamic theory of dissociative amnesia assumes that dissociative disorders are caused by an individual’s repressed thoughts and feelings related to an unpleasant or traumatic event (Richardson, 1998). In blocking, or dissociating from, these thoughts and feelings, the individual is subconsciously protecting themselves from painful memories.
\(5\): Treatment
Treatment for dissociative amnesia is limited, in part because many individuals recover on their own without any type of intervention. Occasionally treatment is sought out after recovery due to the traumatic nature of the memory loss. Further, the rarity of the disorder has offered limited opportunities for research on both the development and effectiveness of treatment methods. While there is no evidence-based treatment for dissociative amnesia, both hypnosis and treatment with barbiturates have been shown to produce some positive effects in clients with dissociative amnesia.
Hypnosis. One theory of dissociative amnesia is that it is a form of self-hypnosis and that individuals hypnotize themselves to forget information or events that are unpleasant (Dell, 2010). Based on this theory, one type of treatment that has routinely been implemented for individuals with dissociative amnesia is hypnosis. Through hypnosis, the clinician can help the individual contain, modulate, and reduce the intensity of the amnesia symptoms, thus allowing them to process the traumatic or unpleasant events underlying the amnesic episode (Maldonado & Spiegel, 2014). To do this, the clinician will encourage the client to think of memories from just prior to the amnesic episode as though it were the present time. The clinician will then slowly walk them through the events of the amnesic time period in an effort to reorient the individual to these events. This technique is essentially a way to encourage a controlled recall of dissociated memories, something that is particularly helpful when the memories include traumatic experiences (Maldonado & Spiegel, 2014).
Another form of “hypnosis” is the use of barbiturates, also known as “truth serums,” to help relax the individual and free their inhibitions. Although not always effective, the theory is that these drugs reduce the anxiety surrounding the unpleasant events enough to allow the individual to recall and process these memories in a safe environment (Ahern et al., 2000).
Section Learning Objectives
• Describe how dissociative identity disorder presents itself.
• Describe the epidemiology of dissociative identity disorder.
• Indicate which disorders are commonly comorbid with dissociative identity disorder.
• Describe factors that may contribute to the etiology of dissociative identity disorder.
• Describe the treatment of dissociative identity disorder.
\(1\): Clinical Description
Dissociative Identity Disorder (DID) is what people commonly refer to as multiple personality disorder, as it was labeled in the DSM-III. The key diagnostic criterion for DID is the presence of two or more distinct personality states or expressions. The identities are distinct in that they often have their own tone of voice, engage in different physical gestures (including a different gait), and have their own behaviors – ranging anywhere from cooperative and sweet to defiant and aggressive. Additionally, the identities can be of varying ages and genders.
While personalities can present at any time, there is generally a dominant or primary personality that is present the majority of the time. From there, an individual may have several alternate personality states, or alters. Although it is hard to identify how many alters an individual may have at one time, it is believed that there are on average 15 alters for women and 8 for men (APA, 2000).
The presentation of switching between alternate personality states varies among individuals and can be as simple as the individual appearing to fall asleep, or very dramatic, involving excessive bodily movements. While often sudden and unexpected, switching is generally precipitated by a significant stressor, as the alter best equipped to handle the current stressor will present. The relationship between alters varies between individuals – some individuals report knowledge of their other alters, while others have a one-way amnesic relationship with their alters, meaning they are not aware of the other personality states (Barlow & Chu, 2014). These individuals experience episodes of “amnesia” when the primary personality is not present.
\(2\): Epidemiology
Dissociative disorders were once believed to be extremely rare; however, more recent research suggests that they may be more prevalent in the general population than once believed. The estimated prevalence rate of DID is 1.5% (APA, 2013), with more women than men experiencing the disorder. Due to the high comorbidity between childhood abuse and DID, it is believed that symptoms begin in early childhood following repeated exposure to abuse; however, full onset of the disorder may not be observed (or noticed by others) until adolescence (Sar et al., 2014) or later in life. Over 70% of people with DID have attempted suicide, and other self-injurious behaviors are common (APA, 2013).
\(3\): Comorbidity
People with DID commonly experience a large number of comorbid disorders including PTSD and other trauma and stressor-related disorders, depressive disorders, somatic symptom disorders (e.g., conversion disorder), eating disorders, substance-related disorders, obsessive-compulsive disorder, sleep disturbances, as well as avoidant personality disorder and borderline personality disorder.
\(4\): Etiology
\(4.1\): Biological
Once again, heritability rates for dissociative experiences range from 50-60% (Pieper, Out, Bakermans-Kranenburg, & Van Ijzendoorn, 2011). However, it is suggested that the combination of genetic and environmental factors may play a larger role in the development of dissociative disorders than genetics alone (Pieper et al., 2011).
\(4.2\): Cognitive
Neuroimaging studies have revealed differences in hippocampus activation between alters (Tsai, Condie, Wu & Chang, 1999). As you may recall, the hippocampus is responsible for storing information from short-term to long-term memory. It is hypothesized that this brain region is responsible for the generation of dissociative states and amnesia (Staniloiu & Markowitsch, 2010).
\(4.3\): Sociocultural
The sociocultural model of dissociative disorders has largely been influenced by Lilienfeld and colleagues (1999), who argue that mass media portrayals of dissociative disorders provide a model for individuals not only to learn about dissociative disorders but also to engage in similar dissociative behaviors. This theory has been supported by the significant increase in DID cases after the publication of Sybil, a book documenting a woman’s 16 personalities (Goff & Simms, 1993).
These mass media productions are not just suggestive to patients; mass media also influences the way clinicians gather information regarding dissociative symptoms of patients. For example, therapists may unconsciously use questions or techniques in sessions that evoke dissociative types of problems in their patients following exposure to a media source discussing dissociative disorders.
\(4.4\): Psychodynamic
Once again, the psychodynamic theory of dissociative disorders assumes that dissociative disorders are caused by an individual's repressed thoughts and feelings related to an unpleasant or traumatic event (Richardson, 1998). By blocking these thoughts and feelings, the individual subconsciously protects themselves from painful memories.
While dissociative amnesia may be explained by a single repression, psychodynamic theorists believe that DID results from repeated exposure to traumatic experiences, such as severe childhood abuse, neglect, or abandonment (Dalenberg et al., 2012). According to the psychodynamic perspective, children who experience repeated traumatic events such as physical abuse or parental neglect lack the support and resources to cope with these experiences. In efforts to escape from their current situations, children develop different personalities to essentially flee the dangerous situation they are in. While there is limited scientific evidence to support this theory, the nature of severe childhood psychological trauma is consistent with this theory, as individuals with DID have the highest rate of childhood psychological trauma compared to all other psychiatric disorders (Sar, 2011).
\(5\): Treatment
The ultimate treatment goal for DID is integration of the alternate personalities to a point of final fusion (Chu et al., 2011). Integration refers to the ongoing process of merging alternate personalities into one personality. Psychoeducation is paramount for integration, as the individual must have an understanding of their disorder and acknowledge their alternate personalities. As mentioned above, many individuals have a one-way amnesic relationship with their alters, meaning they are not aware of one another. Therefore, the clinician must first make the individual aware of the various alters that present across different situations.
Achieving integration requires several steps. First, the clinician needs to build a relationship and strong rapport with the primary personality. From there, the clinician can begin to encourage gradual communication and coordination between the alternate personalities. Making the alternate personalities aware of one another, as well as addressing their conflicts, is an essential component of the integration of these personalities, and the core of DID treatment (Chu et al., 2011).
Once the individual is aware of their personalities, treatment can continue with the goal of fusion. Fusion occurs when two or more alternate identities join together (Chu et al., 2011). When this happens, there is a complete loss of separateness. Depending on the number of personalities, this process can take quite a while. Once all alternate personalities are fused together and the individual identifies themselves as one unified self, it is believed the patient has reached final fusion.
It should be noted that final fusion is difficult to attain. As you can imagine, some clients do not find final fusion a desirable outcome, particularly those with extremely painful histories; chronic, serious stressors; advanced age; or comorbid medical and psychiatric disorders, to name a few. For individuals for whom final fusion is not the treatment goal, the clinician may work toward resolution, or sufficient integration and coordination of the alternate personalities to allow the individual to function independently (Chu et al., 2011). Unfortunately, individuals who do not achieve final fusion are at greater risk for relapse of symptoms, particularly those whose DID appears to stem from traumatic experiences.
Once an individual reaches final fusion, ongoing treatment is essential to maintain this status. In general, treatment focuses on social and positive coping skills. These skills are particularly helpful for individuals with a history of traumatic events, as it can help them process these events, as well as help to prevent future relapses.
Chapter Recap
In this chapter, we discussed Depersonalization/Derealization Disorder, Dissociative Amnesia, and Dissociative Identity Disorder, in terms of their clinical presentation, diagnostic criteria, epidemiology, comorbidity, etiology, and treatment approaches. This represents the final class of disorders covered in this book.
Learning Objectives
• Describe the symptoms of schizophrenia spectrum disorders.
• Distinguish between the various schizophrenia spectrum disorders.
• Describe the epidemiology of schizophrenia spectrum disorders.
• Describe comorbidity in relation to schizophrenia spectrum disorders.
• Describe the etiology of schizophrenia spectrum disorders.
• Describe treatment options for schizophrenia spectrum disorders.
In Chapter 8, we will discuss matters related to schizophrenia spectrum disorders, including their clinical presentation, diagnostic criteria, epidemiology, comorbidity, etiology, and treatment options. Our discussion will include schizophrenia, schizophreniform disorder, brief psychotic disorder, schizoaffective disorder, and delusional disorder. We will depart from our usual convention of describing the epidemiology, comorbidity, etiology, and treatment for each disorder separately because there is a great deal of overlap among the various schizophrenia spectrum disorders. Be sure to refer to Chapters 1-3 for explanations of key terms (Chapter 1), an overview of the various models used to explain psychopathology (Chapter 2), and descriptions of the various therapies (Chapter 3).
08: Schizophrenia Spectrum and Other Psychotic Disorders
Section Learning Objectives
• Identify and describe the five symptoms of schizophrenia spectrum disorders.
• Describe how schizophrenia presents itself.
• Describe how schizophreniform disorder presents itself.
• Describe how brief psychotic disorder presents itself.
• Describe how schizoaffective disorder presents itself.
• Describe how delusional disorder presents itself.
• Be able to distinguish the five disorders from one another.
\(1\): Symptoms of Schizophrenia Spectrum Disorders
Individuals diagnosed with a schizophrenia spectrum disorder experience psychosis, which is defined as a loss of contact with reality and is manifested by delusions and/or hallucinations. These episodes of psychosis can make it difficult for individuals to perceive and respond to environmental stimuli, which can cause significant disturbances in everyday functioning. While there are a number of symptoms displayed in schizophrenia spectrum disorders, presentation of symptoms varies greatly among individuals, as there are rarely two cases similar in presentation, triggers, course, or responsiveness to treatment (APA, 2013). We will now turn our attention to the five major symptoms associated with these disorders: delusions, hallucinations, disorganized speech, disorganized behavior, and negative symptoms.
\(1.1\): Delusions
Delusions are defined as “fixed beliefs that are not amenable to change in light of conflicting evidence” (APA, 2013, p. 87). This means that despite evidence contradicting their beliefs, the individual is unable to distinguish them from reality. There are a variety of delusions that can present in many different ways:
• Delusions of grandeur – beliefs they have exceptional abilities, wealth, or fame; the belief they are God or other religious saviors
• Delusions of persecution – beliefs they are going to be harmed, harassed, plotted or discriminated against by either an individual or an institution
• Delusions of reference – beliefs that specific gestures, comments, or even larger environmental cues (e.g., an ad in the newspaper, a terrorist attack) are directed at them
• Delusions of control – beliefs that their thoughts/feelings/actions are controlled by others
• Delusions of thought broadcasting – beliefs that one’s thoughts are transparent and everyone knows what they are thinking
• Delusions of thought withdrawal – belief that one’s thoughts have been removed by another source
Delusions of persecution are the most common type (APA, 2013). It is believed that the content of a delusion is largely related to the social, emotional, educational, and cultural background of the individual (Arango & Carpenter, 2010). For example, an individual with schizophrenia who comes from a highly religious family is more likely to experience religious delusions (e.g., delusions of grandeur) than another type of delusion.
\(1.2\): Hallucinations
Hallucinations can occur in any of the five senses: hearing (auditory hallucinations), seeing (visual hallucinations), smelling (olfactory hallucinations), touching (tactile hallucinations), or tasting (gustatory hallucinations). Additionally, they can occur in a single modality or across a combination of modalities (e.g., experiencing both auditory and visual hallucinations). For the most part, individuals recognize that their hallucinations are not real and attempt to engage in normal behavior while simultaneously combating them.
According to various research studies, nearly half of all people with schizophrenia report auditory hallucinations, 15% report visual hallucinations, and 5% report tactile hallucinations (DeLeon, Cuesta, & Peralta, 1993). Among the most common types of auditory hallucinations are voices talking to the individual or various voices talking to one another. Generally, these hallucinations are not attributable to any one person that the individual knows. However, they are usually clear, objective, and definite (Arango & Carpenter, 2010). Additionally, the auditory hallucinations can be pleasurable, providing comfort to the individuals; however, in other individuals, the auditory hallucinations can be unsettling as they produce commands or have malicious intent.
\(1.3\): Disorganized Speech
Among the most common cognitive impairments displayed in individuals with schizophrenia are disorganized speech, communication, and thought. More specifically, thoughts and speech patterns may appear circumstantial or tangential. For example, individuals with circumstantial speech may give unnecessary details in response to a question before they finally produce the desired response. While individuals with circumstantial speech eventually answer the question, those with tangential speech never reach the point or answer the question. Another common cognitive symptom is speech retardation, in which the individual may take a long period of time before answering a question. Derailment, the illogical connection in a chain of thoughts, is another common type of disorganized thinking. The most severe form of disorganized speech is incoherence, or word salad, in which speech is completely incomprehensible and meaningful sentences are not produced.
These types of distorted thought patterns are often related to concrete thinking; that is, the individual focuses on one aspect of a concept or thing and neglects all other aspects. This type of thinking makes treatment difficult, as individuals lack insight into their illness and symptoms (APA, 2013).
\(1.4\): Disorganized Behavior
Psychomotor symptoms can also be observed in individuals with schizophrenia spectrum disorders. These behaviors may manifest as awkward movements or even ritualistic/repetitive behaviors. They are often unpredictable and overwhelming, severely impacting the ability to perform daily activities (APA, 2013). Catatonic behavior, a decrease in or even lack of reactivity to the environment, is among the most commonly seen forms of disorganized motor behavior in schizophrenia spectrum disorders. Catatonic behaviors include:
• Negativism – resistance to instruction
• Mutism – complete lack of verbal responses
• Stupor – complete lack of motor responses
• Rigidity – maintaining a rigid or upright posture while resisting efforts to be moved
• Posturing – holding odd, awkward postures for long periods of time
On the opposite side of the spectrum is catatonic excitement, where the individual experiences a hyperactivity of motor behavior. This can include echolalia (mimicking the speech of others) and echopraxia (mimicking the movement of others) but may also simply be manifested through excessive and/or purposeless motor behaviors.
\(1.5\): Negative Symptoms
Up until this point, all of the symptoms described can be categorized as positive symptoms (symptoms that involve the presence of something that should not be there, such as hallucinations and delusions) or disorganized symptoms (disorganized speech and behavior). The final symptom included in the diagnostic criteria of several of the schizophrenia spectrum disorders is negative symptoms, defined as the inability, or decreased ability, to initiate actions or speech, express emotion, or feel pleasure (Barch, 2013). Negative symptoms are typically present before positive symptoms and often remain once positive symptoms remit. They account for much of the morbidity in schizophrenia but are not as prominent in the other spectrum disorders (indeed, as you will see, they are not included as a symptom in some of these other disorders). Because of their persistence through the course of schizophrenia, they are also more indicative of prognosis, with more negative symptoms suggesting a poorer prognosis. The poorer prognosis may be explained by the lack of effect that traditional antipsychotic medications have on negative symptoms (Kirkpatrick, Fenton, Carpenter, & Marder, 2006).
There are five main types of negative symptoms seen in individuals with schizophrenia:
• Affective flattening – a reduced display of emotional expression
• Alogia – poverty of speech or speech content
• Anhedonia – decreased ability to experience pleasure
• Asociality – lack of interest in social relationships
• Avolition – lack of motivation for goal-directed behavior
\(2\): Types of Schizophrenia Spectrum Disorders
\(2.1\): Schizophrenia
As stated above, the hallmark symptoms of schizophrenia include the presence of at least two of the following symptoms for at least one month: delusions, hallucinations, disorganized speech, disorganized/abnormal behavior, negative symptoms. These symptoms must create significant impairment in the individual’s ability to engage in normal daily functioning such as work, school, relationships with others, or self-care. It should be noted that presentation of schizophrenia varies greatly among individuals, as it is a heterogeneous clinical syndrome (APA, 2013).
While the presence of active-phase symptoms must persist for a minimum of one month to meet criteria for a schizophrenia diagnosis, the total duration of symptoms must persist for at least six months before a diagnosis of schizophrenia can be made. This six-month period can comprise a combination of active, prodromal, and residual phase symptoms. Active-phase symptoms represent the “full-blown” symptoms previously described. Prodromal symptoms are “subthreshold” symptoms that precede the active phase of the disorder, and residual symptoms are subthreshold symptoms that follow the active phase. These prodromal and residual symptoms are milder forms of symptoms that do not cause significant impairment in functioning, with the exception of negative symptoms (Lieberman et al., 2001). Due to the severity of psychotic symptoms, mood disorder symptoms are also common among individuals with schizophrenia; however, to diagnose schizophrenia, either there must be no mood symptoms or, if mood symptoms have occurred, they must have been present for only a minority of the total duration of the illness. The latter requirement helps to distinguish schizophrenia from a mood disorder with psychotic features, in which psychotic symptoms are limited to the context of the mood episodes and do not extend beyond those episodes.
\(2.2\): Schizophreniform Disorder
Schizophreniform disorder is similar to schizophrenia with the exception of the required duration of symptoms and the requirement for impairment in functioning. As described above, a diagnosis of schizophrenia requires impairment in functioning and a six-month minimum duration of symptoms. In contrast, impairment in functioning is not required to diagnose schizophreniform disorder. While many individuals with schizophreniform disorder do display impaired functioning, it is not essential for diagnosis. Moreover, symptoms must last at least one month but less than six months to diagnose schizophreniform disorder. In this way, schizophreniform disorder is considered “intermediate” in duration between schizophrenia and brief psychotic disorder (which we will consider next).
Approximately two-thirds of individuals who are initially diagnosed with schizophreniform disorder will have symptoms that last longer than six months, at which time their diagnosis is changed to schizophrenia (APA, 2013). The other one-third will recover within the six-month period, and schizophreniform disorder will be their final diagnosis.
Finally, as with schizophrenia, any major mood episodes that are present concurrently with the psychotic features must be present for only a small portion of the total duration of the illness; otherwise, a diagnosis of schizoaffective disorder may be more appropriate.
\(2.3\): Brief Psychotic Disorder
A diagnosis of brief psychotic disorder requires one or more of the following symptoms: delusions, hallucinations, disorganized speech, or disorganized behavior. Moreover, at least one of these symptoms must be delusions, hallucinations, or disorganized speech. Notice that negative symptoms are not included in this list. Also notice that while schizophrenia and schizophreniform disorder require a minimum of two symptoms, only one is required for a diagnosis of brief psychotic disorder. To diagnose brief psychotic disorder, the symptom(s) must be present for at least one day but less than one month (recall that one month is the minimum duration of symptoms required to diagnose schizophreniform disorder). After one month, individuals return to their full premorbid level of functioning. Also, while very severe impairment in functioning is typically associated with brief psychotic disorder, it is not required for a diagnosis.
\(2.4\): Schizoaffective Disorder
Schizoaffective disorder is characterized by two or more of the symptoms of schizophrenia (delusions, hallucinations, disorganized speech, disorganized behavior, negative symptoms) and a concurrent uninterrupted period of a major mood episode, either a depressive or manic episode. Those who experience only depressive episodes are diagnosed with the depressive type of schizoaffective disorder, while those who experience manic episodes (with or without depressive episodes) are diagnosed with the bipolar type. It should be noted that because a loss of interest in pleasurable activities is a common symptom of schizophrenia, to meet criteria for a depressive episode within schizoaffective disorder the individual must present with a pervasive depressed mood (not just anhedonia). While schizophrenia and schizophreniform disorder do not have a significant mood component, schizoaffective disorder requires the presence of a depressive or manic episode for the majority of, if not the total, duration of the disorder. While psychotic symptoms are sometimes present in depressive episodes, they often remit once the depressive episode is resolved. For individuals with schizoaffective disorder, psychotic symptoms must continue for at least two weeks in the absence of a major mood episode (APA, 2013). This is the key distinguishing feature between schizoaffective disorder and major depressive disorder with psychotic features.
\(2.5\): Delusional Disorder
As its title suggests, delusional disorder requires the presence of at least one delusion lasting at least one month. It is important to note that the presence of any other symptom of schizophrenia (i.e., hallucinations, disorganized behavior, disorganized speech, or negative symptoms) rules out a diagnosis of delusional disorder; therefore, the only symptom that can be present is delusions. Unlike in most other schizophrenia-related disorders, daily functioning is not overtly impacted in individuals with delusional disorder. Additionally, if symptoms of depressive or manic episodes present during the delusions, they are typically brief in duration compared to the duration of the delusions.
The DSM-5 (APA, 2013) identifies several subtypes of delusional disorder in an effort to better categorize an individual's symptoms. When making a diagnosis of delusional disorder, one of the following specifiers is included:
• Erotomanic delusion – the individual reports a delusion that another person is in love with them. Generally speaking, the individual whom the delusion concerns is of higher status, such as a celebrity.
• Grandiose delusion – involves the conviction of having a great talent or insight. Occasionally, individuals will report that they have made an important discovery that benefits the general public. Grandiose delusions may also take on a religious quality, as some people believe they are prophets or even God himself.
• Jealous delusion – revolves around the conviction that one’s spouse or partner is/has been unfaithful. While many individuals may have this suspicion at some point in their relationship, a jealous delusion is much more extensive and generally based on incorrect inferences that lack evidence.
• Persecutory delusion – involves beliefs that they are being conspired against, spied on, followed, poisoned or drugged, maliciously maligned, harassed, or obstructed in pursuit of their long-term goals (APA, 2013). Of all subtypes of delusional disorder, those experiencing persecutory delusions are the most at risk of becoming aggressive or hostile, likely due to the persecutory nature of their distorted beliefs.
• Somatic delusion – involves delusions regarding bodily functions or sensations. While these delusions can vary significantly, the most common beliefs are that the individual emits a foul odor despite attempts to rectify their smell; there is an infestation of insects on the skin; or that they have an internal parasite (APA, 2013).
• Mixed delusions – there are several themes of delusions (e.g., jealous and persecutory)
• Unspecified delusion – these are delusions that don’t fit into one of the categories above (e.g., referential delusions without a persecutory or grandiose nature to them).
• Bizarre delusion – delusions that are clearly not plausible and do not stem from ordinary experience (e.g., the delusion that one is an alien/vampire hybrid).
Section Learning Objectives
• Indicate the prevalence of schizophrenia spectrum disorders.
• Describe the sex ratios for these disorders.
• Identify the disorders that are commonly comorbid with schizophrenia spectrum disorders.
\(1\): Epidemiology
Schizophrenia occurs in approximately 0.3%-0.7% of the general population (APA, 2013). There is some discrepancy in rates of diagnosis between genders; these differences appear to be related to the emphasis of various symptoms. For example, men typically present with more negative symptoms, whereas women present with more mood-related symptoms. Despite gender differences in the presentation of symptoms, the risk of developing the disorder appears to be equal for both genders.
Schizophrenia typically emerges between the late teens and mid-30s, with onset typically occurring slightly earlier for males than for females (APA, 2013). Earlier onset of the disorder is generally predictive of a worse overall prognosis. The onset of symptoms is typically gradual, with initial symptoms presenting similarly to those of depressive disorders; however, some individuals will present with an abrupt onset of the disorder. Negative symptoms appear to be more predictive of poor prognosis than other symptoms. This may be because negative symptoms are the most persistent and, therefore, the most difficult to treat. Overall, an estimated 20% of individuals diagnosed with schizophrenia report complete recovery of symptoms (APA, 2013).
The prevalence rates of schizoaffective disorder, schizophreniform disorder, brief psychotic disorder, and delusional disorder are all significantly lower than that of schizophrenia, each occurring in less than 0.3% of the general population. While the depressive type of schizoaffective disorder is diagnosed more often in females than males, schizophreniform disorder and delusional disorder appear to be diagnosed equally between genders. The gender discrepancy in schizoaffective disorder is likely due to the higher rate of depressive symptoms in females than males, as this sex discrepancy is not evident in the bipolar type of the disorder (APA, 2013).
\(2\): Comorbidity
There is a high comorbidity rate between schizophrenia spectrum disorders and substance use disorders. Furthermore, there is some evidence to suggest that the use of various substances (specifically marijuana) may place an individual at an increased risk of developing schizophrenia if a genetic predisposition is also present (Corcoran et al., 2003). Additionally, anxiety-related disorders, specifically obsessive-compulsive disorder and panic disorder, appear to be more common among individuals with schizophrenia than in the general public.
It should also be noted that individuals diagnosed with a schizophrenia spectrum disorder are at an increased risk for associated medical conditions such as weight gain, diabetes, metabolic syndrome, and cardiovascular and pulmonary disease (APA, 2013). This predisposition to various medical conditions, which is likely related to medications and poor lifestyle choices, also places individuals at risk for a reduced life expectancy.
8.03: Etiology
Section Learning Objectives
• Describe the biological causes of schizophrenia spectrum disorders.
• Describe the psychological causes of schizophrenia spectrum disorders.
• Describe the sociocultural causes of schizophrenia spectrum disorders.
\(1\): Biological
\(1.1\): Genetics
Twin and family studies consistently support the biological theory. More specifically, if one identical twin develops schizophrenia, there is roughly a 50% chance that the other will also develop the disorder within their lifetime (Coon & Mitter, 2007). This percentage drops to 17% in fraternal twins. Similarly, family studies have found similarities in brain abnormalities between individuals with schizophrenia and their relatives; the more similarities there are, the higher the likelihood that the family member also developed schizophrenia (Scognamiglio & Houenou, 2014).
\(1.2\): Neurobiological
There is consistent and reliable evidence of a neurobiological component in the transmission of schizophrenia. More specifically, neuroimaging studies have found significant reductions in overall brain volume, in the volumes of specific brain regions, and in tissue density in individuals with schizophrenia compared to healthy controls (Brugger & Howes, 2017). Additionally, there is evidence of ventricular enlargement as well as volume reductions in the medial temporal lobe. As you may recall, structures such as the amygdala (involved in emotion regulation), the hippocampus (involved in memory), and the neocortical surface of the temporal lobes (involved in processing auditory information) all lie within the medial temporal lobe (Kurtz, 2015). Additional studies also indicate a reduction in the orbitofrontal regions of the brain, a part of the frontal lobe that is responsible for response inhibition (Kurtz, 2015).
\(1.3\): Stress Cascade
The stress-vulnerability model suggests that individuals have a genetic or biological predisposition to develop the disorder; however, symptoms will not present unless a stressful precipitating factor elicits the onset of the disorder. Researchers have identified the HPA axis and its downstream neurological effects as the neurobiological component likely responsible for this stress cascade.
The HPA (hypothalamic-pituitary-adrenal) axis is one of the main neurobiological structures that mediate stress. It involves the regulation of three chemical messengers (corticotropin-releasing hormone (CRH), adrenocorticotropic hormone (ACTH), and glucocorticoids) as the body responds to a stressful situation (Corcoran et al., 2003). Glucocorticoids, the most prominent of which is cortisol, are the final hormones released, and they are responsible for the physiological changes that accompany stress and prepare the body for “fight or flight.”
It is hypothesized that, in combination with abnormal brain structures, persistently increased levels of glucocorticoids in brain structures may be the key to the onset of psychosis in individuals in a prodromal phase (Corcoran et al., 2003). More specifically, stress exposure (and the accompanying increase in glucocorticoids) affects the neurotransmitter system and exacerbates psychotic symptoms through changes in dopamine activity (Walker & Diforio, 1997). While research continues to explore the relationship between stress and the onset of schizophrenia spectrum disorders, the evidence implicating stress in symptom relapse is strong. More specifically, individuals with schizophrenia experience more stressful life events in the period leading up to a relapse of symptoms. Similarly, it is hypothesized that the worsening or exacerbation of symptoms is itself a source of stress, as symptoms interfere with daily functioning (Walker & Diforio, 1997). This stress alone may be enough to initiate a relapse.
\(2\): Psychological
The cognitive model utilizes aspects of the diathesis-stress model in proposing that premorbid neurocognitive impairment places individuals at risk for aversive work, academic, and interpersonal experiences. These experiences in turn lead to dysfunctional beliefs and cognitive appraisals, ultimately producing maladaptive behaviors such as delusions and hallucinations (Beck & Rector, 2005).
Beck proposed a diathesis-stress model of the development of schizophrenia. Based on his theory, an underlying neurocognitive impairment makes an individual more vulnerable to aversive life events such as homelessness or conflict within the family. Individuals with schizophrenia are more likely to evaluate these aversive life events with dysfunctional attitudes and maladaptive cognitive distortions. The combination of the aversive events and negative interpretations of them produces a stress response in the individual, thus igniting hyperactivation of the HPA axis. According to Beck and Rector (2005), it is the culmination of these events that leads to the development of schizophrenia.
\(3\): Sociocultural
\(3.1\): Expressed Emotion
Research on the family environment suggests that high expressed emotion, meaning a family climate that is highly hostile, critical, or emotionally overinvolved, predicts relapse (Bebbington & Kuipers, 2011). In fact, individuals who return post-hospitalization to families with high criticism and emotional overinvolvement are twice as likely to relapse as those who return to families with low expressed emotion (Corcoran et al., 2003). Several meta-analyses have concluded that family atmosphere is causally related to relapse in individuals with schizophrenia and that these outcomes improve when the family environment improves (Bebbington & Kuipers, 2011). Therefore, one major treatment goal for families of people with schizophrenia is to reduce expressed emotion within family interactions.
\(3.2\): Family Dysfunction
Even for families with low levels of expressed emotion, there is often an increase in family stress due to the secondary effects of schizophrenia. Having a family member who has been diagnosed with schizophrenia increases the likelihood of a disruptive family environment due to managing their symptoms and ensuring their safety while they are home (Friedrich & Wancata, 2015). Because of the severity of symptoms, families with a loved one diagnosed with schizophrenia often report more conflict in the home as well as more difficulty communicating with one another (Kurtz, 2015).
Section Learning Objectives
• Describe psychopharmacological treatment options for schizophrenia spectrum disorders.
• Describe psychological treatment options for schizophrenia spectrum disorders.
• Describe family interventions for schizophrenia spectrum disorders.
\(1\): Psychopharmacological
Among the first antipsychotic medications used for the treatment of schizophrenia was chlorpromazine (Thorazine). Developed as a derivative of antihistamines, chlorpromazine was the first treatment that produced a calming effect on even the most severely agitated individuals and allowed for the organization of thoughts. Despite their effectiveness in managing psychotic symptoms, conventional or first-generation antipsychotics such as chlorpromazine also produced significant side effects resembling those of neurological disorders. In effect, psychotic symptoms were replaced with muscle tremors, involuntary movements, and muscle rigidity. Additionally, these conventional antipsychotics could produce tardive dyskinesia, which involves involuntary movements isolated to the tongue, mouth, and face (Tenback et al., 2006). While only 10% of clients reported developing tardive dyskinesia, this percentage increased the longer clients were on the medication and the higher the dose (Achalia, Chaturvedi, Desai, Rao, & Prakash, 2014). In an effort to avoid these symptoms, clinicians have been careful not to exceed the clinically effective dose of conventional antipsychotic medications. Should psychotic symptoms not be managed at this level, alternative medications are often added to produce a synergistic effect (Roh et al., 2014).
Due to the harsh side effects of conventional antipsychotic drugs, newer and arguably more effective second-generation, or atypical, antipsychotic drugs have been developed. Atypical antipsychotics appear to act on both dopamine and serotonin receptors, whereas conventional antipsychotics act only on dopamine receptors. Because of this, common atypical antipsychotic medications such as clozapine (Clozaril), risperidone (Risperdal), and aripiprazole (Abilify) appear to be more effective in managing both positive and negative symptoms. While there is still a risk of developing side effects such as tardive dyskinesia, recent studies suggest this risk is much lower than with conventional antipsychotics (Leucht, Heres, Kissling, & Davis, 2011). Due to their effectiveness and milder side effects, atypical antipsychotic medications are typically the first line of treatment for schizophrenia (Barnes & Marder, 2011).
It should be noted that because of the harsh side effects of antipsychotic medications in general, many individuals, nearly one-half to three-quarters, discontinue use of antipsychotic medications (Leucht, Heres, Kissling, & Davis, 2011). Because of this, it is also important to incorporate psychological treatment along with psychopharmacological treatment to both address medication adherence, as well as provide additional support for symptom management.
\(2\): Psychological Interventions
\(2.1\): Cognitive Behavioral Therapy (CBT)
CBT has been thoroughly discussed in previous chapters, and it should be clear that the goal of this treatment is to identify the negative biases and attributions that influence an individual's interpretations of events, along with the consequences of these thoughts and behaviors. When used in the context of a schizophrenia spectrum disorder, CBT focuses on the maladaptive emotional and behavioral responses to psychotic experiences, which are directly related to distress and disability. Therefore, the goal of CBT is not symptom reduction per se, but rather improving the interpretation and understanding of these symptoms (and experiences), which in turn reduces the associated distress (Kurtz, 2015). Common features of CBT in this context include psychoeducation about the disorder and the course of symptoms (e.g., ways to recognize the onset and remission of delusions/hallucinations), challenging and replacing the negative thoughts and behaviors associated with delusions/hallucinations with more positive ones, and, finally, learning positive coping strategies for dealing with unpleasant symptoms (Veiga-Martinez, Perez-Alvarez, & Garcia-Montes, 2008).
Findings from studies exploring CBT as a supportive treatment have been promising. One study conducted by Aaron Beck (the founder of CBT) and colleagues (Grant, Huh, Perivoliotis, Stolar, & Beck, 2012) found that recovery-oriented CBT produced a marked improvement in overall functioning as well as symptom reduction in clients diagnosed with schizophrenia. This study suggests that by focusing on targeted goals such as independent living, securing employment, and improving social relationships, individuals were able to slowly move closer to these targeted goals. By also including a variety of CBT strategies such as role-playing, scheduling community outings, and addressing negative cognitions, individuals were also able to address cognitive and social skill deficits.
\(2.2\): Family Interventions
Family interventions have been largely influenced by the diathesis-stress model of schizophrenia. As previously discussed, the emergence of the disorder and/or exacerbation of symptoms is likely related to environmental stressors and psychological factors. While the degree in which environmental stress stimulates an exacerbation of symptoms varies among individuals, there is significant evidence to conclude that overall stress does impact illness presentation (Haddock & Spaulding, 2011). Therefore, the overall goal of family interventions is to reduce the stress on the individual that is likely to elicit the relapse of symptoms.
Unlike many other psychological interventions, there is not a specific outline for family-based interventions related to schizophrenia. However, the majority of programs include the following three components: psychoeducation, problem-solving skills, and cognitive-behavioral therapy.
• Psychoeducation is important for both the client and family members as it is reported that more than half of those recovering from a psychotic episode reside with their family (Haddock & Spaulding, 2011). Therefore, educating families on the course of the illness, as well as ways to recognize the onset of psychotic symptoms is important to ensure optimal recovery.
• Problem-solving is a very important component of the family intervention model. Seeing as family conflict can increase stress within the home, which in turn can lead to exacerbation and relapse of psychotic symptoms, family members benefit from learning effective methods of problem-solving to address family conflicts. Additionally, teaching positive coping strategies for dealing with the symptoms and their direct effect on the family environment may also alleviate some conflict within the home.
• CBT is similar to that described above. The goal of family-based CBT is to reduce negativity among family member interactions, as well as help family members adjust to living with someone with psychotic symptoms. These three components within the family intervention program have been shown to reduce re-hospitalization rates, as well as slow the worsening of schizophrenia-related symptoms (Pitschel-Walz, Leucht, Baumi, Kissling, & Engel, 2001).
\(2.3\): Social Skills Training
Given the poor interpersonal functioning among individuals with schizophrenia, social skills training is another treatment commonly suggested to improve psychosocial functioning. Research indicates that poor interpersonal skills not only predate the onset of the disorder but also persist even when symptoms are managed with antipsychotic medications. An impaired ability to interact with others in social, occupational, or recreational settings is related to poorer psychological adjustment (Bellack, Morrison, Wixted, & Mueser, 1990), and can lead to greater isolation and poorer social support among individuals with schizophrenia. As previously discussed, social support is a protective factor against symptom exacerbation, as it buffers the psychosocial stressors that are often responsible for worsening symptoms. Learning how to interact appropriately with others (e.g., establishing eye contact, engaging in reciprocal conversations) through role play in a group therapy setting is one effective way to teach positive social skills.
\(2.4\): Inpatient Hospitalizations
Although less common than community-based treatments, inpatient hospitalization programs are essential for stabilizing individuals experiencing acute psychotic episodes. Generally speaking, individuals are treated on an outpatient basis; however, there are times when their symptoms exceed what outpatient services can manage. Short-term hospitalizations are used to adjust antipsychotic medications and implement additional psychological treatments so that the individual can safely return home. These hospitalizations generally last a few weeks, as opposed to long-term treatment options that would last months or years (Craig & Power, 2010).
In addition to short-term hospitalizations, there are also partial hospitalizations where an individual enrolls in a full-day program but returns home for the evening/night. These programs provide individuals with intensive therapy, organized activities, and group therapy programs that enhance social skills training. Research supports the use of partial hospitalizations as individuals enrolled in these programs tend to do better than those who enroll in outpatient care (Bales et al., 2014).
While a combination of psychopharmacological, psychological, and family interventions is the most effective treatment in managing symptoms of schizophrenia spectrum disorders, rarely do these treatments restore the individual to premorbid levels of functioning (Kurtz, 2015; Penn et al., 2004). Although more recent advancements in treatment for schizophrenia spectrum disorders appear promising, the disorder itself is viewed as one that requires lifelong treatment and care.
Chapter Recap
In Chapter 8, we discussed the schizophrenia spectrum disorders, including schizophrenia, schizophreniform disorder, brief psychotic disorder, schizoaffective disorder, and delusional disorder. We started by describing the common symptoms of these disorders: delusions, hallucinations, disorganized speech, disorganized behavior, and negative symptoms. We then identified how the various schizophrenia spectrum disorders are distinguished from one another. This led to our usual discussion of the epidemiology, comorbidity, etiology, and treatment options for these disorders. In our final chapter, we will discuss personality disorders.
Learning Objectives
• Describe how personality disorders present and be able to distinguish between each.
• Identify the disorders included in each cluster and the characterization of each cluster.
• Describe the epidemiology of personality disorders.
• Describe comorbidity in relation to personality disorders.
• Describe the etiology of personality disorders.
• Describe treatment options for personality disorders.
In Chapter 9, we will discuss matters related to personality disorders, including their clinical presentation, epidemiology, comorbidity, etiology, and treatment options. Our discussion will include the Cluster A personality disorders of paranoid, schizoid, and schizotypal; the Cluster B personality disorders of antisocial, borderline, histrionic, and narcissistic; and the Cluster C personality disorders of avoidant, dependent, and obsessive-compulsive. As always, be sure to refer to Chapters 1-3 for explanations of key terms (Chapter 1), an overview of the various models used to explain psychopathology (Chapter 2), and descriptions of the various therapies (Chapter 3).
09: Personality Disorders
Section Learning Objectives
• Describe the symptoms associated with each cluster A personality disorder.
• Describe the epidemiology of cluster A personality disorders.
• Describe the treatments for cluster A personality disorders.
\(1\): Overview
In order to be diagnosed with any personality disorder, the individual must exhibit a pervasive and long-lasting pattern of inflexible behavior that violates cultural norms and is manifested in at least two of the following four areas: distorted thinking patterns, problematic emotional responses, over- or under-regulated impulse control, and interpersonal difficulties. While these four core features are common among all ten personality disorders, the DSM-5 divides the personality disorders into three clusters based on symptom similarities. The pattern of behavior must have persisted since adolescence or early adulthood and must result in significant distress or impairment. Without distress or impairment, the pattern should be considered a personality trait rather than a disorder.
\(2\): Cluster A
Cluster A is described as the odd/eccentric cluster and consists of paranoid personality disorder, schizoid personality disorder, and schizotypal personality disorder. The common feature of these three disorders is social awkwardness and social withdrawal (APA, 2013). Often these behaviors are similar to those seen in schizophrenia. In fact, cluster A personality disorders are more common among individuals who have a relative diagnosed with schizophrenia (Chemerinski & Siever, 2011). However, the symptoms of cluster A personality disorders tend to be less extensive and less impactful on daily functioning than those experienced in schizophrenia.
\(3\): Cluster B
Cluster B is typically described as the dramatic, emotional, or erratic cluster and consists of antisocial personality disorder, borderline personality disorder, histrionic personality disorder, and narcissistic personality disorder. Individuals with these personality disorders often experience problems with impulse control and emotional regulation (APA, 2013). Due to the dramatic, emotional, and erratic nature of these disorders, it is nearly impossible for individuals to establish healthy relationships with others.
\(4\): Cluster C
Cluster C is characterized as the anxious/fearful cluster and consists of avoidant personality disorder, dependent personality disorder, and obsessive-compulsive personality disorder. As you read through the descriptions of these disorders, you will see an overlap with symptoms of anxiety and depressive disorders. Likely due to the similarity in symptoms with mental health disorders that have effective treatment options, cluster C disorders have the most treatment options of all personality disorders. | textbooks/socialsci/Psychology/Psychological_Disorders/Essentials_of_Abnormal_Psychology_(Bridley_and_Daffin)/09%3A_Personality_Disorders/9.01%3A_Overview_of_Clusters_and_Personality_Disorders.txt |
Section Learning Objectives
• Describe the symptoms associated with each cluster A personality disorder.
• Describe the epidemiology of cluster A personality disorders.
• Describe the treatments for cluster A personality disorders.
\(1\): Clinical Descriptions
\(1.1\): Paranoid Personality Disorder
Paranoid personality disorder is characterized by severe distrust or suspicion of others. Individuals interpret and believe that others' motives and interactions are intended to harm them, and therefore they are skeptical about establishing close relationships outside of family members, although at times even family members' actions are believed to be malevolent (APA, 2013). Individuals with paranoid personality disorder often feel they have been deeply and irreversibly hurt by others, even though there is little to no evidence that others intended to, or actually did, hurt them. Because of these persistent suspicions, they will doubt relationships that show true loyalty or trustworthiness.
Individuals with paranoid personality disorder are also hesitant to share any personal information or confide in others as they fear the information will be used against them (APA, 2013). Additionally, benign remarks or events are often interpreted as demeaning or threatening. For example, if an individual with paranoid personality disorder was accidentally bumped into at the store, they would interpret this action as intentional, with the purpose of causing them injury. Because of this, individuals with paranoid personality disorder are quick to hold grudges and unwilling to forgive insults or injuries – whether intentional or not (APA, 2013). They are known to quickly, and angrily counterattack either verbally or physically in situations where they feel they were insulted.
\(1.2\): Schizoid Personality Disorder
Individuals with schizoid personality disorder display a persistent pattern of avoidance from social relationships along with a limited range of emotion among social relationships (APA, 2013). Similar to those with paranoid personality disorder, individuals with schizoid personality disorder do not have many close relationships; however, unlike paranoid personality disorder, this lack of relationship is not due to suspicious feelings, but rather, the lack of desire to engage with others and the preference to engage in solitary behaviors. Individuals with schizoid personality disorder are often viewed as “loners” and prefer activities where they do not have to engage with others (APA, 2013). Established relationships rarely extend outside that of the family as those diagnosed with schizoid personality disorder make no effort to start or maintain friendships. This lack of establishing social relationships also extends to sexual behaviors, as those with schizoid personality disorder report a lack of interest in engaging in sexual experiences with others.
With regard to the limited range of emotion, individuals with schizoid personality disorder are often indifferent to criticisms or praises of others and appear to not be affected by what others think of them (APA, 2013). They will rarely show any feelings or expression of emotions and are often described as having a “bland” exterior (APA, 2013). In fact, individuals with schizoid personality disorder rarely reciprocate facial expressions or gestures typically displayed in normal conversations such as smiles or nods. Because of this lack of emotions, there is limited need for attention or acceptance.
\(1.3\): Schizotypal Personality Disorder
Schizotypal personality disorder is characterized by a range of impairment in social and interpersonal relationships due to discomfort in relationships, along with odd cognitive and/or perceptual distortions and eccentric behaviors (APA, 2013). Similar to those with schizoid personality disorder, these individuals also seek isolation and have few, if any established relationships outside of family members.
One of the most prominent features of schizotypal personality disorder is ideas of reference, or the belief that unrelated events pertain to the individual in a particular and unusual way. This is a milder version of the delusions of reference discussed in the previous chapter. Ideas of reference can also lead to superstitious behaviors or a preoccupation with paranormal phenomena not generally accepted in the individual's culture (APA, 2013). The perception of special or magical powers, such as the ability to read minds or control others' thoughts, has also been documented in individuals with schizotypal personality disorder. Unusual perceptual experiences, such as sensing the presence of another person or hearing one's name (subthreshold hallucinations), as well as unusual speech patterns such as derailment or incoherence, are also symptoms of this disorder.
Similar to the other personality disorders within cluster A, schizotypal personality disorder also involves paranoia or suspiciousness of others' motives. Additionally, individuals with this disorder display inappropriate or restricted affect, impairing their ability to interact appropriately with others in social contexts. Significant social anxiety is also often present in social situations, particularly those involving unfamiliar people. The combination of limited affect and social anxiety contributes to their inability to establish and maintain personal relationships; most individuals with schizotypal personality disorder prefer to keep to themselves in an effort to reduce this anxiety.
\(2\): Epidemiology
The cluster A personality disorders have a prevalence rate of around 3-5%. More specifically, paranoid personality disorder is estimated to affect approximately 4.4% of the general population, with no reported diagnosis discrepancy between genders (APA, 2013). Schizoid personality disorder occurs in 3.1% of the general population, whereas the prevalence rate for schizotypal personality disorder is 3.9%. Both schizoid and schizotypal personality disorders are more commonly diagnosed in males than females, with males also reportedly being more impaired by the disorder than females (APA, 2013).
Note: Due to the overlap among comorbidities and etiological factors we will reserve our discussion of those until the end of the chapter and will proceed directly to the treatment of the cluster A personality disorders.
\(3\): Treatment
Individuals with cluster A personality disorders often do not seek out treatment, as they do not identify themselves as someone who needs help (Millon, 2011). Of those who do seek treatment, the majority do not enter it willingly. Furthermore, due to the nature of these disorders, individuals in treatment often struggle to trust the clinician, whom they suspect of ulterior motives (paranoid and schizotypal personality disorders), or remain emotionally distant from the clinician because they lack emotion and the desire for relationships (schizoid personality disorder; Kellett & Hardy, 2014; Colli, Tanzilli, Dimaggio, & Lingiardi, 2014). Because of this, treatment tends to move very slowly, with many clients dropping out before any resolution of symptoms is achieved.
When clients are enrolled in treatment, cognitive-behavioral strategies are most commonly used, with the primary intention of reducing anxiety-related symptoms. Additionally, cognitive restructuring, which involves both identifying and changing maladaptive thought patterns, is helpful in addressing misinterpretations of others' words and actions, particularly in those with paranoid personality disorder (Kellett & Hardy, 2014). Clients with schizoid personality disorder may engage in CBT techniques to help them experience more positive emotions and more satisfying social experiences, whereas the goal of CBT for schizotypal personality disorder is to evaluate unusual thoughts or perceptions objectively and to disregard the inappropriate thoughts (Beck & Weishaar, 2011). Finally, behavioral techniques such as social-skills training may also be implemented to address the ongoing interpersonal problems displayed in these disorders.
Section Learning Objectives
• Describe the symptoms of each cluster B personality disorder.
• Describe the epidemiology of cluster B personality disorders.
• Describe the treatments for cluster B personality disorders.
\(1\): Antisocial Personality Disorder
\(1.1\): Clinical Description
The defining feature of antisocial personality disorder is a consistent pattern of disregard for, and violation of, the rights of others (APA, 2013). While antisocial personality disorder can only be diagnosed in individuals who are 18 years of age or older, a diagnosis can only be made if there is evidence of conduct disorder prior to the age of 15. Although not discussed in this book, conduct disorder is a disorder of childhood that involves a repetitive and persistent pattern of behaviors that violate the rights of others (APA, 2013). Common behaviors exhibited by individuals with conduct disorder that go on to develop antisocial personality disorder are aggression toward people or animals, destruction of property, deceitfulness or theft, or serious violation of rules (APA, 2013).
Although individuals with antisocial personality disorder are commonly referred to as "psychopaths" or "sociopaths," psychopathy and sociopathy are separate (but related) constructs that are not recognized by the DSM. However, much like those with psychopathy and sociopathy, individuals with antisocial personality disorder fail to conform to social norms. This includes legal rules, as individuals with antisocial personality disorder are often repeatedly arrested for crimes such as property destruction, harassing or assaulting others, and stealing (APA, 2013). Deceitfulness is another hallmark symptom of antisocial personality disorder, as individuals often lie repeatedly, generally as a means to gain profit or pleasure. There is also a pattern of impulsivity: decisions are made spontaneously, without forethought of personal consequences or consideration for others (Lang et al., 2015). This impulsivity also contributes to an inability to maintain employment, as these individuals are more likely to impulsively quit their jobs (Hengartner et al., 2014). Employment instability, along with impulsivity, also impacts their ability to manage finances; it is not uncommon for individuals with antisocial personality disorder to accumulate large debts that they are unable to pay (Derefinko & Widiger, 2016).
Also likely related to impulsivity, individuals with antisocial personality disorder tend to be extremely irritable and aggressive, repeatedly getting into fights. Their disregard for their own safety, as well as the safety of others, is also observed in reckless behaviors such as speeding, driving under the influence, and engaging in sexual and substance-related behavior that may put themselves and others at risk (APA, 2013).
Of course, one of the better-known symptoms of antisocial personality disorder is the lack of remorse for the consequences of one's actions, regardless of how severe they may be (APA, 2013). Individuals with this disorder often rationalize their actions by blaming the victim, minimize the harmfulness of the consequences of their behaviors, or simply display indifference (APA, 2013). Overall, individuals with antisocial personality disorder have limited personal relationships due to their selfish desires and lack of moral conscience.
\(1.2\): Epidemiology
Antisocial personality disorder has an estimated prevalence rate of up to 3.3% of the population, with men comprising 75% of cases (APA, 2013). It is more commonly diagnosed in men, particularly those with substance use disorders, and is observed more often in those from disadvantaged socioeconomic settings. While the majority of individuals with antisocial personality disorder are incarcerated at some point in their lifetime, criminal activity appears to decline after the age of 40 (APA, 2013).
\(1.3\): Treatment
Treatment options for antisocial personality disorder are limited and generally not effective (Black, 2015). As with cluster A disorders, many individuals are forced into treatment, which undermines their engagement and retention. Cognitive therapists have attempted to address the lack of moral conscience and encourage clients to think about the needs of others (Beck & Weishaar, 2011). Medications, including lithium, atypical antipsychotics, and SSRIs, are sometimes prescribed to help reduce impulsive and aggressive behaviors, but there is very little research on this topic, and medication compliance can be a major issue.
\(2\): Borderline Personality Disorder
\(2.1\): Clinical Description
Individuals with borderline personality disorder display a persistent pattern of instability in interpersonal relationships, self-image, and affect (APA, 2013). The key characteristic of borderline personality disorder is unstable and/or intense relationships. For example, individuals may idealize or experience intense feelings for another person immediately after meeting them and then switch to devaluing them. It is not uncommon for people with borderline personality disorder to experience intense fluctuations in mood (i.e., mood lability), often observed as volatile interactions with family and friends (Herpertz & Bertsch, 2014). Those with borderline personality disorder may be friendly one day and hostile the next. The combination of these symptoms causes significant impairment in establishing and maintaining personal relationships.
Individuals with this disorder will often go to great lengths to avoid real or imagined abandonment. Fears related to abandonment can lead to inappropriate anger, as they often interpret the abandonment as a reflection of their own behaviors. In an effort to prevent abandonment, individuals with borderline personality disorder often engage in impulsive behaviors such as self-harm and suicidal behaviors. In fact, individuals with borderline personality disorder make more suicide attempts, and rates of completed suicide are higher among these individuals than in the general population (Linehan et al., 2015). Other impulsive behaviors, such as non-suicidal self-injury (e.g., cutting) and sexual promiscuity, are often seen within this population, typically during high-stress periods (Sansone & Sansone, 2012). Occasionally, hallucinations and delusions are present, particularly of a paranoid nature; however, these symptoms are often transient and recognized as unacceptable by the individual (Sieswerda & Arntz, 2007).
\(2.2\): Epidemiology
Borderline personality disorder, one of the more commonly diagnosed personality disorders, is observed in 1.6%–5.9% of the general population, with women making up 75% of diagnoses (APA, 2013). Approximately 10% of individuals with borderline personality disorder have been seen in an outpatient mental health clinic, and nearly 20% have sought treatment in a psychiatric inpatient unit (APA, 2013). This high percentage of inpatient treatment is likely related to the high incidence of suicidal and self-harm behaviors.
\(2.3\): Treatment
Borderline personality disorder is the one personality disorder with the most effective treatment option – Dialectical Behavioral Therapy (DBT). DBT is a form of cognitive behavioral therapy developed by Marsha Linehan (Linehan, Armstrong, Suarez, Allmon, & Heard, 1991). There are four main goals of DBT: reduce suicidal behavior, reduce therapy interfering behavior, improve quality of life, and reduce post-traumatic stress symptoms.
Within DBT, there are five main treatment components that together help reduce harmful behaviors (i.e., self-mutilation and suicidal behaviors) and replace them with effective, life-enhancing behaviors (Gonidakis, 2014). The first component is skills training. Generally performed in a group therapy setting, individuals practice mindfulness, distress tolerance, interpersonal effectiveness, and emotion regulation. Second, individuals focus on enhancing motivation and applying the skills learned in the previous component to specific challenges and events in their everyday lives. The third, and often the most distinctive, component of DBT is the use of telephone and in vivo coaching; it is not uncommon for clients to have the cell phone number of their clinician for 24/7 in-the-moment support. The fourth component, case management, consists of allowing clients to become their own "case managers" and effectively use the learned DBT techniques to problem-solve ongoing issues; within this component, the clinician intervenes only when absolutely necessary. Finally, there is the consultation team, a service for the clinicians providing DBT. Because of the high demands of clients with borderline personality disorder, the consultation team supports providers in their work, helping them remain motivated and competent in DBT principles so that they can provide the best treatment possible.
Support for the effectiveness of DBT in the treatment of borderline personality disorder has been demonstrated in a number of randomized controlled trials (Harned, Korslund, & Linehan, 2014; Neacsiu, Eberle, Kramer, Wisemeann, & Linehan, 2014). More specifically, DBT has been shown to significantly reduce suicidality and self-harm behaviors in those with borderline personality disorder. It also reduces anger and hospitalizations and improves emotion regulation and interpersonal functioning. Additionally, drop-out rates for treatment are extremely low, suggesting that clients value the treatment components and find them effective in managing symptoms.
3 Histrionic Personality Disorder
3.1 Clinical Description
Histrionic personality disorder is characterized by a persistent and excessive need for attention from others. Individuals with this disorder are uncomfortable in social settings unless they are the center of attention. In efforts to gain attention, they are often very lively and dramatic, using emotional displays, physical gestures, and mannerisms along with grandiose language. These behaviors are initially very charming to their audience; however, the charm begins to wear thin due to the individual’s constant need to keep attention on themselves.
If their theatrical nature does not gain the attention they desire, individuals with histrionic personality disorder may go to great lengths to get it, such as making up a story or creating a dramatic scene (APA, 2013). Similarly, they often dress and behave in sexually seductive or provocative ways. These sexually charged behaviors are directed not only at those with whom they have a sexual or romantic interest but at the general public as well (APA, 2013). They often spend significant amounts of time on their physical appearance to gain the attention they desire.
Individuals with histrionic personality disorder are easily suggestible; their opinions and feelings are influenced not only by their friends but also by current fads (APA, 2013). They also have a tendency to exaggerate the intimacy of relationships, considering casual acquaintances more intimate than they really are.
3.2 Epidemiology
Histrionic personality disorder is one of the least common personality disorders, occurring in only 1.84% of the general population (APA, 2013). While it was once believed to be diagnosed more often in females than in males, more recent findings suggest the diagnosis rate is equal between genders.
3.3 Treatment
Individuals with histrionic personality disorder are actually more likely to seek out treatment than those with other personality disorders. Unfortunately, due to the nature of the disorder, they are very difficult to treat, as they are quick to deploy their demanding and seductive behaviors within the treatment setting. The overall goal for treatment of histrionic personality disorder is to help the individual identify their dependency and become more self-reliant. Cognitive therapists use techniques to help clients change their helpless beliefs and improve their problem-solving skills (Beck & Weishaar, 2011).
4 Narcissistic Personality Disorder
4.1 Clinical Description
The key features of narcissistic personality disorder are a need for admiration, a pattern of grandiosity, and a lack of empathy for others (APA, 2013). The grandiose sense of self often leads to an overvaluation of abilities and accomplishments. These individuals often come across as boastful and pretentious, repeatedly proclaiming their superior achievements. These proclamations may also be exaggerated fantasies that serve to enhance their sense of success or power. Oftentimes they identify themselves as “special” and will interact only with others of high status.
Given the grandiose sense of self, it is not surprising that individuals with narcissistic personality disorder need excessive admiration from others. While it appears that their self-esteem is extremely inflated, it is actually very fragile and dependent on how others perceive them (APA, 2013). Because of this, they may constantly seek out compliments and expect favorable treatment from others. When this sense of entitlement is not upheld, they can become irritated or angry that their needs are not being met.
A lack of empathy is also displayed by individuals with narcissistic personality disorder, as they often fail to recognize the desires or needs of others. This lack of empathy also leads to the exploitation of interpersonal relationships, as they are unable to empathize with others’ feelings (Marcoux et al., 2014). They often become envious of others who achieve greater success or have nicer possessions than they do. Conversely, they believe everyone should be envious of their own achievements, regardless of how small those achievements may actually be.
4.2 Epidemiology
Narcissistic personality disorder is reportedly diagnosed in 0%–6.2% of the general population, with 75% of these individuals being men (APA, 2013).
4.3 Treatment
Of all the personality disorders, narcissistic personality disorder is among the most difficult to treat (with the possible exception of antisocial personality disorder). In fact, most individuals with narcissistic personality disorder seek out treatment only for disorders secondary to their personality disorder, such as depression (APA, 2013). The focus of treatment is to address the grandiose, self-centered thinking while also trying to teach clients how to empathize with others (Beck & Weishaar, 2014).
Section Learning Objectives
• Describe the symptoms associated with each of the cluster C personality disorders.
• Describe the epidemiology of cluster C personality disorders.
• Describe the treatment for cluster C personality disorders.
9.4.1 Avoidant Personality Disorder
9.4.1.1 Clinical Description
Individuals with avoidant personality disorder display social anxiety stemming from feelings of inadequacy and heightened sensitivity to negative evaluation (APA, 2013). The fear of being rejected drives their reluctance to engage in social situations, an effort to prevent others from evaluating them negatively. This fear extends so far that it can prevent individuals from maintaining employment, due to their intense fear of negative evaluation or rejection.
Individuals with this disorder have very few, if any, friends, despite their desire to establish social relationships. They actively avoid social situations in which they might establish new friendships, out of fear of being disliked or ridiculed. Similarly, they are cautious of new activities and relationships, as they often exaggerate the potential negative consequences and embarrassment that may occur; this is likely a result of their ongoing preoccupation with being criticized or rejected by others.
You may recall that schizoid personality disorder is also associated with social isolation; however, the two disorders differ in an important way. While those with schizoid personality disorder do not desire social connections, those with avoidant personality disorder very much want relationships with others; they avoid them only because of their feelings of inadequacy and their fears of criticism and negative evaluation.
9.4.1.2 Epidemiology
Avoidant personality disorder occurs in 2.4% of the general population and is diagnosed equally among men and women (APA, 2013).
9.4.1.3 Treatment
While many individuals with avoidant personality disorder seek out treatment to address their anxiety- or depressive-like symptoms, it is often difficult to keep them in treatment due to their fear of rejection by the clinician. Treatment goals for avoidant personality disorder are similar to those for social anxiety disorder. CBT techniques, such as identifying and challenging distressing thoughts, have been effective in reducing anxiety-related symptoms (Weishaar & Beck, 2006). Behavioral treatments such as gradual exposure to various social settings, combined with social skills training, have been shown to improve individuals’ confidence prior to engaging in social outings (Herbert, 2007). Anti-anxiety and antidepressant medications commonly used to treat anxiety disorders have also been used, with minimal efficacy; moreover, symptoms resume as soon as the medication is discontinued.
9.4.2 Dependent Personality Disorder
9.4.2.1 Clinical Description
Dependent personality disorder is characterized by a persistent and excessive need to be taken care of by others (APA, 2013). This intense need leads to submissive and clinging behaviors, as the individual fears being abandoned by or separated from their parent, spouse, or another person on whom they feel dependent. They are so dependent on this other individual that they cannot make even the smallest decisions without first consulting them and gaining their approval or reassurance. They often allow others to assume complete responsibility for their lives, making decisions in nearly all areas on their behalf. Rarely will they challenge these decisions, as their fear of losing the relationship greatly outweighs their desire to express their own opinions. Should the relationship end, they experience significant feelings of helplessness and quickly, often indiscriminately, seek out another relationship to replace the old one (APA, 2013).
Individuals with dependent personality disorder have difficulty initiating and engaging in tasks on their own. They lack self-confidence and feel helpless when left to care for themselves or complete tasks alone. In an effort to avoid working alone, they will go to great lengths to seek out the support of others, often volunteering for unpleasant tasks if it means they will get the reassurance they need (APA, 2013).
9.4.2.2 Epidemiology
Dependent personality disorder occurs in less than 1% of the population (APA, 2013). Women are more frequently diagnosed with dependent personality disorder than men (APA, 2013), but this may reflect biases among clinicians making the diagnoses more than a true difference in the prevalence of the disorder between men and women.
9.4.2.3 Treatment
Unlike individuals with other personality disorders, who tend to avoid treatment and are skeptical of the clinician, individuals with dependent personality disorder are likely to seek treatment and to place the clinician at the center of it. Therefore, one of the main treatment goals for individuals with dependent personality disorder is to teach them to accept responsibility for themselves, both in and outside of treatment (Colli, Tanzilli, Dimaggio, & Lingiardi, 2014). Cognitive strategies, such as challenging and changing thoughts about helplessness and the inability to care for oneself, have been minimally effective in establishing independence. Additionally, behavioral techniques such as assertiveness training have shown some promise in teaching individuals how to express themselves within a relationship. Some argue that family or couples therapy would be particularly helpful for those with dependent personality disorder, given the dysfunctional relationship between the individual and the person on whom they depend; however, research on this treatment method has not yielded consistently positive results (Nichols, 2013).
9.4.3 Obsessive-Compulsive Personality Disorder
9.4.3.1 Clinical Description
Obsessive-Compulsive Personality Disorder (OCPD) is defined by a preoccupation with orderliness, perfectionism, and control at the expense of flexibility, openness, and efficiency in everyday life (APA, 2013). A preoccupation with details, rules, lists, order, organization, or schedules overshadows the larger picture of the task or activity. In fact, the self-imposed high standards and need to complete tasks perfectly often prevent those tasks from ever being completed. The desire to complete tasks perfectly often causes individuals to spend excessive amounts of time on them, occasionally repeating them in an attempt to meet some perfectionistic standard. Due to this repetition and attention to fine detail, individuals with OCPD often feel they do not have time for leisure activities or social relationships. Despite the excessive amount of time spent on activities or tasks, individuals with OCPD will not seek help from others, as they are convinced that others are incompetent and will not complete the tasks to their standards.
Individuals with OCPD are rigid and stubborn, particularly with regard to their morals, ethics, and values. Not only do they hold these standards for themselves, but they also expect others to meet them, which causes significant disruption in their social interactions. Their rigid and stubborn behaviors extend to their finances, as they are known to live significantly below their means in order to prepare for potential catastrophes (APA, 2013). Similarly, they may have difficulty discarding worn-out or worthless items, even when the items have no sentimental value.
Unfortunately, the term OCPD leads many to believe it is a disorder similar to OCD, but there is a distinct difference: OCPD lacks the obsessions and compulsions that characterize and define OCD (APA, 2013). Although many individuals are diagnosed with both OCD and OCPD, research indicates that individuals with OCPD are more likely to be diagnosed with major depression, generalized anxiety disorder, or a substance abuse disorder than with OCD (APA, 2013).
9.4.3.2 Epidemiology
OCPD is the most commonly diagnosed personality disorder, occurring in 7.9% of individuals. Men are twice as likely as women to be diagnosed with OCPD (APA, 2013).
9.4.3.3 Treatment
Individuals with OCPD often seek out treatment to address their anxiety- or depressive-like symptoms. Cognitive techniques aimed at changing dichotomous thinking (see the etiology section), perfectionism, and chronic worry are helpful in managing symptoms of OCPD. CBT may also be used to challenge and reduce perfectionistic beliefs and standards as well as rigid behaviors. Clients are often taught relaxation techniques to overcome the anxiety that arises when they attempt to break their rigid schedules and other behaviors.
Section Learning Objectives
• Describe the comorbidity of personality disorders.
• Describe the various factors that contribute to personality disorders.
Comorbidity
Among the most common diagnoses comorbid with personality disorders are other personality disorders, mood disorders, anxiety disorders, and substance abuse disorders (Lenzenweger, Lane, Loranger, & Kessler, 2007). Indeed, many individuals are diagnosed with more than one personality disorder.
A large meta-analysis of data on the comorbidity of personality disorders and mood disorders indicated a high level of comorbidity with MDD, PDD, and bipolar disorder (Friborg, Martinsen, Martinussen, Kaiser, Overgard, & Rosenvinge, 2014). Further exploration of MDD suggested the lowest rate of comorbid diagnosis in cluster A disorders, a higher rate in cluster B disorders, and the highest rate in cluster C disorders. While the relationship between bipolar disorders and personality disorders has not been consistently clear, the most recent findings report high comorbidity with OCPD as well as with the cluster B personality disorders.
Clear relationships between personality disorders and anxiety disorders have also been established (Skodol, Geier, Grant, & Hasin, 2014). More specifically, individuals diagnosed with borderline and schizotypal personality disorders were found to have elevated rates of additional diagnoses of each of the four main anxiety disorders. Individuals with narcissistic personality disorder were more likely to be diagnosed with GAD and panic disorder. Schizoid and avoidant personality disorders were associated with significant rates of GAD, and avoidant personality disorder with a higher diagnosis rate of social phobia.
Finally, substance abuse disorders are frequently found in individuals diagnosed with antisocial, borderline, and schizotypal personality disorders (Grant et al., 2015).
Etiology
Research regarding the development of personality disorders is limited compared to that of other mental disorders. The following is a general overview of the factors that contribute to personality disorders as a whole. While some research points to specific causes of specific personality disorders, here we review the overall contribution of biological, psychological, and social factors across all of the personality disorders.
Biological
Research across the personality disorders suggests some underlying biological or genetic component. However, specific mechanisms have not been identified for most personality disorders, with the exception of the cluster A personality disorders, which show a genetic link with schizophrenia. Because of this lack of specific evidence for biological causes, researchers argue that it is difficult to determine what role genetics plays in the development of these disorders compared to environmental influences. Therefore, while there is likely a biological predisposition to personality disorders, exact causes cannot be determined at this time.
Research on the development of schizotypal personality disorder has identified similar biological causes to that of schizophrenia, specifically, high activity of dopamine and enlarged brain ventricles (Lener et al., 2015). Similar differences in neuroanatomy may explain the high similarity of behaviors in both schizophrenia and schizotypal personality disorder.
Antisocial personality disorder and borderline personality disorder are also related to neurological dysfunctions. More specifically, individuals with both disorders reportedly show deficits in serotonin activity (Thompson, Ramos, & Willett, 2014). These low levels of serotonin activity in combination with deficient functioning of the frontal lobes, particularly the prefrontal cortex which is used in planning, self-control, and decision making, as well as an overly reactive amygdala, may explain the impulsive and aggressive nature of individuals with antisocial and borderline personality disorder (Stone, 2014).
Psychological
Psychodynamic, cognitive, and behavioral theories are among the most common models used to explain the development of personality disorders. Although much is still speculation, the following are general etiological views with regards to each specific theory.
Psychodynamic
The psychodynamic theory places a large emphasis on negative early childhood experiences and their impact on an individual’s ability to establish healthy relationships in adulthood. More specifically, individuals with personality disorders report higher levels of childhood stress such as living in impoverished environments, exposure to family/domestic violence, and experiencing repeated abuse and maltreatment (Kumari et al., 2014). Additionally, high levels of neglect and parental rejection are observed in people with personality disorders, with early parental loss and rejection leading to fears of abandonment throughout life (Caligor & Clarkin, 2010; Newnham & Janca, 2014; Roepke & Varter, 2014).
Psychodynamic theorists believe that, because of these negative early experiences, individuals’ sense of self, and consequently their beliefs about others, is negatively impacted, leading to the development of a personality disorder. For example, an individual who was neglected as a young child and deprived of love may report a lack of trust in others as an adult, a characteristic of paranoid and antisocial personality disorders (Meloy & Yakeley, 2010). Difficulty trusting others, or the belief that one is unlovable, may also impact the ability or desire to establish social relationships, as seen in many personality disorders, particularly schizoid, avoidant, and dependent personality disorders. Because of these early childhood deficits, individuals may also overcompensate in their relationships in an effort to convince themselves that they are worthy of love and affection, as may be the case in histrionic and narcissistic personality disorders (Celani, 2014). Conversely, individuals may respond to their early childhood experiences by becoming emotionally distant, using relationships as an outlet for power and destructiveness.
Cognitive
While psychodynamic theory places an emphasis on early childhood experiences, cognitive theorists focus on the maladaptive thought patterns and cognitive distortions displayed by those with personality disorders. Overall deficiencies in thinking place individuals with personality disorders in a position to develop inaccurate perceptions of others (Beck, 2015). These dysfunctional beliefs likely originate from the interaction between a biological predisposition and undesirable environmental experiences. Maladaptive thought patterns and strategies are strengthened during aversive life events as a protective mechanism and ultimately come together to form patterns of behaviors displayed in personality disorders (Beck, 2015).
Cognitive distortions such as dichotomous thinking, also known as all or nothing thinking, are observed in several personality disorders. More specifically, dichotomous thinking helps to explain rigidity and perfectionism in OCPD, and the lack of independence observed in those with dependent and borderline personality disorders (Weishaar & Beck, 2006). Discounting the positive helps explain the underlying mechanisms for avoidant personality disorder (Weishaar & Beck, 2006). For example, individuals who have been routinely criticized or rejected during childhood may have difficulty accepting positive feedback from others, expecting to only receive rejection and harsh criticism. In fact, they may employ these misattributions to support their ongoing theory that they are constantly rejected and criticized by others.
Behavioral
There are three major behavioral theories of the etiology of personality disorders: modeling, reinforcement, and a lack of social skills. With regard to modeling, personality disorders are explained by an individual learning maladaptive social relationship patterns and behaviors through directly observing family members engaging in similar behaviors (Gaynor & Baird, 2007). While we cannot discount the biological component of the familial influence, research does support an additive modeling or imitation component in the development of personality disorders (especially antisocial personality disorder; APA, 2013).
Second, the reinforcement, or rewarding, of maladaptive behaviors can also help explain personality disorders. Parents may unintentionally reward aggressive behaviors by giving in to a child’s demands in an effort to defuse the situation or prevent the behaviors from escalating. When this is done repeatedly over time, children (and later, as adults, particularly those with antisocial and borderline personality disorders) continue to display these maladaptive behaviors because the behaviors are effective in meeting their needs and wants. On the other hand, there is some speculation that excessive reinforcement or praise during childhood may contribute to the grandiose sense of self observed in individuals with narcissistic personality disorder (Millon, 2011).
Finally, a failure to develop normal social skills may explain the development of some personality disorders, such as avoidant personality disorder (Kantor, 2010). While there is some debate as to whether a lack of social skills leads to avoidance of social settings or whether social skills deficits develop as a result of avoiding social situations, most researchers agree that the avoidance of social situations contributes to the development of personality disorders, whereas underlying deficits in social skills may contribute more to social anxiety disorder (APA, 2013).
Social
Family Dysfunction
High levels of psychological or social dysfunction within families have also been identified as a contributing factor to the development of personality disorders. High levels of poverty, unemployment, and family separation, as well as witnessing domestic violence, are routinely observed in individuals diagnosed with personality disorders (Paris, 1996). While formal research has yet to further explore the relationship between socioeconomic status and personality disorders, correlational studies suggest a relationship between poverty, unemployment, and poor academic achievement and increased rates of personality disorder diagnoses (Alwin, 2006).
Childhood Maltreatment
Childhood maltreatment is among the most influential factors proposed to explain the development of personality disorders in adulthood. Individuals with personality disorders often struggle with a sense of self and with the ability to relate to others, capacities that generally develop during the first four to six years of a child’s life and are affected by the emotional environment in which the child is raised. The sense of self is the mechanism by which individuals view themselves within their social context, while also informing their attitudes toward, and expectations of, others. A child who experiences significant maltreatment, whether through neglect or physical, emotional, or sexual abuse, is at risk for an underdeveloped or absent sense of self. Due to the lack of affection, discipline, or autonomy during childhood, these individuals are unable to engage in appropriate relationships as adults, as seen across the spectrum of personality disorders.
Another way childhood maltreatment contributes to personality disorders is through the emotional bonds, or attachments, developed with primary caregivers. The relationship between attachment and emotional development was thoroughly researched by John Bowlby, whose attachment theory drew in part on Harry Harlow’s studies of the need for affection in infant monkeys (Bowlby, 1998). Based on this research, four attachment styles have been identified: secure, anxious, ambivalent, and disorganized. While securely attached children generally do not develop personality disorders, those with anxious, ambivalent, or disorganized attachment are at increased risk of developing various disorders. More specifically, those with an anxious attachment are at risk for developing internalizing disorders, those with an ambivalent attachment are at risk for developing externalizing disorders, and those with disorganized attachment are at risk for dissociative symptoms and personality disorders (Alwin, 2006).
Chapter Recap
Chapter 9 covered the personality disorders, which are arranged in three clusters: cluster A, which includes paranoid, schizoid, and schizotypal; cluster B, which includes antisocial, borderline, histrionic, and narcissistic; and cluster C, which includes avoidant, dependent, and obsessive-compulsive. We covered the clinical description, diagnostic criteria, epidemiology, treatment, comorbidity, and etiology of the personality disorders.
Learning Objectives
• Explain what it means to display abnormal behavior.
• Clarify how mental health professionals classify mental disorders.
• Describe the effect of stigma on those who have a mental illness.
• Outline the history of mental illness.
• Describe the research methods used to study abnormal behavior and mental illness.
• Identify types of mental health professionals, societies they may join, and journals they can publish their work in.
Cassie is an 18-year-old female from suburban Seattle, WA. She was a successful student in high school, graduating valedictorian and earning a National Merit Scholarship for her performance on the PSAT during her junior year. She was accepted to a university on the opposite side of the state, where she received additional scholarships giving her a free ride for her entire undergraduate education. Excited for her to start this new chapter in her life, Cassie’s parents begin the 5-hour drive to Pullman, where they will leave their only daughter on her own for the first time in her life.
The semester begins as it always does in mid to late August. Cassie meets the challenge with enthusiasm and does well in her classes for the first few weeks of the semester, as expected. Sometime around Week 6, her friends notice she is despondent, detached, and falling behind in her work. After being asked about her condition, she replies that she is “just a bit homesick,” and her friends accept this answer as it is a typical response to leaving home and starting college for many students. A month later, her condition has not improved but worsened. She now regularly shirks her responsibilities around her apartment, in her classes, and on her job. Cassie does not hang out with friends like she did when she first arrived for college and stays in bed most of the day. Concerned, Cassie’s friends contact Health and Wellness for help.
Cassie’s story, though hypothetical, is true for many freshmen leaving home for the first time to pursue higher education, whether in rural Washington State or in urban areas such as Chicago and Dallas. Most students recover from this depression and go on to be functional members of their collegiate environment and accomplished scholars. Some students learn to cope on their own, while others seek assistance from their university’s health and wellness center or from friends who have already been through the same ordeal. These are normal reactions. However, in cases like Cassie’s, the path to recovery is not as clear. Instead of learning how to cope, these students’ depression deepens until it reaches clinical levels and becomes an impediment to success in multiple domains of life, such as home, work, school, and social circles.
In Module 1, we will explore what it means to display abnormal behavior, what mental disorders are, and the way society views mental illness today and how it has been regarded throughout history. Then we will review research methods used by psychologists in general and how they are adapted to study abnormal behavior/mental disorders. We will conclude with an overview of what mental health professionals do.
01: What is Abnormal Psychology
Learning Objectives
• Describe the disease model and its impact on the field of psychology throughout history.
• Describe positive psychology.
• Define abnormal behavior.
• Explain the concept of dysfunction as it relates to mental illness.
• Explain the concept of distress as it relates to mental illness.
• Explain the concept of deviance as it relates to mental illness.
• Explain the concept of dangerousness as it relates to mental illness.
• Define culture and social norms.
• Clarify the cost of mental illness on society.
• Define abnormal psychology, psychopathology, and mental disorders.
Understanding Abnormal Behavior
To understand what abnormal behavior is, we first have to understand what normal behavior is. Normal really is in the eye of the beholder, and most psychologists have found it easier to explain what is wrong with people than what is right. How so?
Psychology worked with the disease model for over 60 years, from about the late 1800s into the middle part of the 20th century. The focus was simple – curing mental disorders – and included such pioneers as Freud, Adler, Klein, Jung, and Erikson. These names are synonymous with the psychoanalytical school of thought. In the 1930s, behaviorism, under B.F. Skinner, presented a new view of human behavior. Simply put, human behavior could be modified if the correct combination of reinforcements and punishments was used. This viewpoint espoused the dominant worldview of the time – mechanism – which presented the world as a great machine explained through the principles of physics and chemistry. In it, human beings serve as smaller machines in the larger machine of the universe.
Moving into the mid to late 1900s, we developed a more scientific investigation of mental illness, which allowed us to examine the roles of both nature and nurture and to develop drug and psychological treatments to “make miserable people less miserable.” Though this was an improvement, there were three consequences as pointed out by Martin Seligman in his 2008 TED Talk entitled, “The new era of positive psychology.” These are:
• “The first was moral; that psychologists and psychiatrists became victimologists, pathologizers; that our view of human nature was that if you were in trouble, bricks fell on you. And we forgot that people made choices and decisions. We forgot responsibility. That was the first cost.”
• “The second cost was that we forgot about you people. We forgot about improving normal lives. We forgot about a mission to make relatively untroubled people happier, more fulfilled, more productive. And “genius,” “high-talent,” became a dirty word. No one works on that.”
• “And the third problem about the disease model is, in our rush to do something about people in trouble, in our rush to do something about repairing damage, it never occurred to us to develop interventions to make people happier — positive interventions.”
Starting in the 1960s, figures such as Abraham Maslow and Carl Rogers sought to overcome the limitations of psychoanalysis and behaviorism by establishing a “third force” psychology, also known as humanistic psychology. As Maslow said,
“The science of psychology has been far more successful on the negative than on the positive side; it has revealed to us much about man’s shortcomings, his illnesses, his sins, but little about his potentialities, his virtues, his achievable aspirations, or his full psychological height. It is as if psychology had voluntarily restricted itself to only half its rightful jurisdiction, and that the darker, meaner half.” (Maslow, 1954, p. 354).
Humanistic psychology instead addressed the full range of human functioning and focused on personal fulfillment, valuing feelings over intellect, hedonism, a belief in human perfectibility, emphasis on the present, self-disclosure, self-actualization, positive regard, client centered therapy, and the hierarchy of needs. Again, these topics were in stark contrast to much of the work being done in the field of psychology up to and at this time.
In 1996, Martin Seligman became the president of the American Psychological Association (APA) and called for a positive psychology, or one that had a more positive conception of human potential and nature. Building on Maslow's and Rogers's work, he ushered in the scientific study of such topics as happiness, love, hope, optimism, life satisfaction, goal setting, leisure, and subjective well-being. Though positive and humanistic psychology have similarities, their methodologies are much different. While humanistic psychology generally relied on qualitative methods, positive psychology utilizes a quantitative approach and aims to help people make the most out of life's setbacks, relate well to others, find fulfillment in creativity, and find lasting meaning and satisfaction (https://www.positivepsychologyinstitute.com.au/what-is-positive-psychology).
So, to understand what normal behavior is, do we look to positive psychology for an indication, or do we first define abnormal behavior and then reverse engineer a definition of what normal is? Our preceding discussion gave suggestions about what normal behavior is, but could the darker elements of our personality also make up what is normal to some extent? Possibly. The one truth is that no matter what behavior we display, if taken to the extreme, it can become disordered – whether trying to control others through social influence or helping people in an altruistic fashion. As such, we can consider abnormal behavior to be a combination of personal distress, psychological dysfunction, deviance from social norms, dangerousness to self and others, and costliness to society.
How Do We Determine What Abnormal Behavior Is?
In the previous section we showed that what we might consider normal behavior is difficult to define. Equally challenging is understanding what abnormal behavior is, which may be surprising to you. A publication which you will become intimately familiar with throughout this book, the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders 5th edition, Text Revision (DSM-5-TR; 2022), states that, “Although no definition can capture all aspects of the range of disorders contained in DSM-5” (pg. 13) certain aspects are required. These include:
• Dysfunction – Includes “clinically significant disturbance in an individual’s cognition, emotion regulation, or behavior that reflects a dysfunction in the psychological, biological, or developmental processes underlying mental functioning” (pg. 14). Abnormal behavior, therefore, has the capacity to make well-being difficult to obtain and can be assessed by looking at an individual’s current performance and comparing it to what is expected in general or how the person has performed in the past. As such, a good employee who suddenly demonstrates poor performance may be experiencing an environmental demand leading to stress and ineffective coping mechanisms. Once the demand resolves itself, the person’s performance should return to normal according to this principle.
• Distress – When the person experiences a disabling condition “in social, occupational, or other important activities” (pg. 14). Distress can take the form of psychological or physical pain, or both concurrently. Alone, though, distress is not sufficient to describe behavior as abnormal. Why is that? The loss of a loved one would cause even the most “normally” functioning individual pain. An athlete who experiences a career-ending injury would display distress as well. Suffering is part of life and cannot be avoided. And some people who exhibit abnormal behavior are generally positive while doing so.
• Deviance – Closer examination of the word abnormal indicates a move away from what is normal, or the mean (i.e., what would be considered average and in this case in relation to behavior), and so is behavior that infrequently occurs (sort of an outlier in our data). Our culture, or the totality of socially transmitted behaviors, customs, values, technology, attitudes, beliefs, art, and other products that are particular to a group, determines what is normal. Thus, a person is said to be deviant when he or she fails to follow the stated and unstated rules of society, called social norms. Social norms change over time due to shifts in accepted values and expectations. For instance, homosexuality was taboo in the U.S. just a few decades ago, but today, it is generally accepted. Likewise, PDAs, or public displays of affection, do not cause a second look by most people unlike the past when these outward expressions of love were restricted to the privacy of one’s own house or bedroom. In the U.S., crying is generally seen as a weakness for males. However, if the behavior occurs in the context of a tragedy such as the Las Vegas mass shooting on October 1, 2017, in which 58 people were killed and about 500 were wounded while attending the Route 91 Harvest Festival, then it is appropriate and understandable. Finally, consider that statistically deviant behavior is not necessarily negative. Genius is an example of behavior that is not the norm.
Though not part of the DSM conceptualization of what abnormal behavior is, many clinicians add dangerousness to this list when behavior represents a threat to the safety of the person or others. It is important to note that having a mental disorder does not imply a person is automatically dangerous. The depressed or anxious individual is often no more a threat than someone who is not depressed, and as Hiday and Burns (2010) showed, dangerousness is more the exception than the rule. Still, mental health professionals have a duty to report to law enforcement when a mentally disordered individual expresses intent to harm another person or themselves. It is important to point out that people seen as dangerous are also not automatically mentally ill.
The Costs of Mental Illness
This leads us to wonder what the cost of mental illness is to society. The National Alliance on Mental Illness (NAMI) states that mental illness affects a person’s life which then ripples out to the family, community, and world. For instance, people with serious mental illness are at increased risk for diabetes, cancer, and cardiometabolic disease while 18% of those with a mental illness also have a substance use disorder. Within the family, an estimated 8.4 million Americans provide care to an adult with an emotional or mental illness with caregivers spending about 32 hours a week providing unpaid care. At the community level 21% of the homeless also have a serious mental illness while 70% of youth in the juvenile justice system have at least one mental health condition. And finally, depression is a leading cause of disability worldwide and depression and anxiety disorders cost the global economy $1 trillion each year in lost productivity (Source: NAMI, The Ripple Effect of Mental Illness infographic; https://www.nami.org/Learn-More/Mental-Health-By-the-Numbers).
In terms of worldwide impact, data from 2010 estimate $2.5 trillion in global costs, with $1.7 trillion being indirect costs (i.e., invisible costs “associated with income losses due to mortality, disability, and care seeking, including lost production due to work absence or early retirement”) and the remainder being direct (i.e., visible costs to include “medication, physician visits, psychotherapy sessions, hospitalization,” etc.). It is now projected that mental illness costs will be around $16 trillion by 2030. The authors add, “It should be noted that these calculations did not include costs associated with mental disorders from outside the healthcare system, such as legal costs caused by illicit drug abuse” (Trautmann, Rehm, & Wittchen, 2016). The costs for mental illness have also been found to be greater than the combined costs of somatic diseases such as cancer, diabetes, and respiratory disorders (Whiteford et al., 2013).
Christensen et al. (2020) did a review of 143 cost-of-illness studies that covered 48 countries and several types of mental illness. Their results showed that mental disorders are a substantial economic burden for societies and that certain groups of mental disorders are more costly than others. At the higher cost end were developmental disorders to include autism spectrum disorders followed by schizophrenia and intellectual disabilities. They write, “However, it is important to note that while disorders such as mood, neurotic and substance use disorders were less costly according to societal cost per patient, these disorders are much more prevalent and thus would contribute substantially to the total national cost in a country.” And much like Trautmann, Rehm, & Wittchen (2016) other studies show that indirect costs are higher than direct costs (Jin & Mosweu, 2017; Chong et al., 2016).
Defining Key Terms
Our discussion so far has concerned what normal and abnormal behavior is. We saw that the study of normal behavior falls under the province of positive psychology. Similarly, the scientific study of abnormal behavior, with the intent to be able to predict reliably, explain, diagnose, identify the causes of, and treat maladaptive behavior, is what we refer to as abnormal psychology. Abnormal behavior can become pathological and has led to the scientific study of psychological disorders, or psychopathology. From our previous discussion we can fashion the following definition of a psychological or mental disorder: mental disorders are characterized by psychological dysfunction, which causes physical and/or psychological distress or impaired functioning, and is not an expected behavior according to societal or cultural standards.
Key Takeaways
You should have learned the following in this section:
• Abnormal behavior is a combination of personal distress, psychological dysfunction, deviance from social norms, dangerousness to self and others, and costliness to society.
• Abnormal psychology is the scientific study of abnormal behavior, with the intent to be able to predict reliably, explain, diagnose, identify the causes of, and treat maladaptive behavior.
• The study of psychological disorders is called psychopathology.
• Mental disorders are characterized by psychological dysfunction, which causes physical and/or psychological distress or impaired functioning, and is not an expected behavior according to societal or cultural standards.
Review Questions
1. What is the disease model and what problems existed with it? What arose to overcome its limitations?
2. Can we adequately define normal behavior? What about abnormal behavior?
3. What aspects are part of the American Psychiatric Association’s definition of abnormal behavior?
4. How costly is mental illness?
5. What is abnormal psychology?
6. What is psychopathology?
7. How do we define mental disorders?
Learning Objectives
• Define and exemplify classification.
• Define nomenclature.
• Define epidemiology.
• Define the presenting problem and clinical description.
• Differentiate prevalence, incidence, and any subtypes.
• Define comorbidity.
• Define etiology.
• Define course.
• Define prognosis.
• Define treatment.
Classification
Classification is not a foreign concept and as a student you have likely taken at least one biology class that discussed the taxonomic classification system of Kingdom, Phylum, Class, Order, Family, Genus, and Species revolutionized by Swedish botanist Carl Linnaeus. You probably even learned a witty mnemonic such as ‘King Phillip, Come Out For Goodness Sake’ to keep the order straight. The Library of Congress uses classification to organize and arrange their book collections and includes such categories as B – Philosophy, Psychology, and Religion; H – Social Sciences; N – Fine Arts; Q – Science; R – Medicine; and T – Technology.
Simply, classification is how we organize or categorize things. The second author’s wife has been known to color-code her Blu-ray collection by genre, movie title, and release date. It is useful for us to do the same with abnormal behavior, and classification provides us with a nomenclature, or naming system, to structure our understanding of mental disorders in a meaningful way. Of course, we want to learn as much as we can about a given disorder so we can understand its cause, predict its future occurrence, and develop ways to treat it.
Determining Occurrence of a Disorder
Epidemiology is the scientific study of the frequency and causes of diseases and other health-related states in specific populations such as a school, neighborhood, a city, country, and the world. Psychiatric or mental health epidemiology refers to the occurrence of mental disorders in a population. In mental health facilities, we say that a patient presents with a specific problem, or the presenting problem, and we give a clinical description of it, which includes information about the thoughts, feelings, and behaviors that constitute that mental disorder. We also seek to gain information about the occurrence of the disorder, its cause, course, and treatment possibilities.
Occurrence can be investigated in several ways. First, prevalence is the percentage of people in a population that has a mental disorder or can be viewed as the number of cases divided by the total number of people in the sample. For instance, if 20 people out of 100 have bipolar disorder, then the prevalence rate is 20%. Prevalence can be measured in several ways:
• Point prevalence indicates the proportion of a population that has the characteristic at a specific point in time. In other words, it is the number of active cases.
• Period prevalence indicates the proportion of a population that has the characteristic at any point during a given period of time, typically the past year.
• Lifetime prevalence indicates the proportion of a population that has had the characteristic at any time during their lives.
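The three prevalence measures above are the same ratio (cases divided by population) applied to different time windows. The sketch below is a minimal, hypothetical illustration in Python — the case records and dates are invented purely to show how the three definitions differ, not real epidemiological data:

```python
from datetime import date

# Hypothetical case records for a sample population of 100 people:
# each tuple is (onset date, recovery date) with None meaning still active.
cases = [
    (date(2019, 3, 1), date(2019, 9, 1)),   # recovered before 2020
    (date(2020, 1, 15), None),              # still active
    (date(2020, 6, 10), None),              # still active
]
population = 100

def point_prevalence(cases, population, on):
    """Proportion with an ACTIVE case on one specific date."""
    active = sum(1 for onset, end in cases
                 if onset <= on and (end is None or end > on))
    return active / population

def period_prevalence(cases, population, start, stop):
    """Proportion with a case at ANY point during [start, stop]."""
    in_period = sum(1 for onset, end in cases
                    if onset <= stop and (end is None or end >= start))
    return in_period / population

def lifetime_prevalence(cases, population):
    """Proportion who have EVER had the characteristic."""
    return len(cases) / population

print(point_prevalence(cases, population, date(2020, 7, 1)))   # 0.02 (2 active cases)
print(period_prevalence(cases, population,
                        date(2020, 1, 1), date(2020, 12, 31)))  # 0.02 (2 cases in 2020)
print(lifetime_prevalence(cases, population))                   # 0.03 (3 cases ever)
```

Note how the recovered case counts toward lifetime prevalence but not toward the 2020 point or period figures — widening the window can only keep prevalence the same or raise it.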
According to a 2020 infographic by the National Alliance on Mental Illness (NAMI), for U.S. adults, 1 in 5 experienced a mental illness, 1 in 20 had a serious mental illness, 1 in 15 experienced both a substance use disorder and mental disorder, and over 12 million had serious thoughts of suicide (2020 Mental Health By the Numbers: US Adults infographic). In terms of adolescents aged 12-17, in 2020, 1 in 6 experienced a major depressive episode, 3 million had serious thoughts of suicide, and there was a 31% increase in mental health-related emergency department visits. Among U.S. young adults aged 18-25, 1 in 3 experienced a mental illness, 1 in 10 had a serious mental illness, and 3.8 million had serious thoughts of suicide (2020 Mental Health By the Numbers: Youth and Young Adults infographic). These numbers would represent period prevalence rates during the pandemic, for the year 2020. In the You Are Not Alone infographic, NAMI reported the following 12-month prevalence rates for U.S. adults: 19% having an anxiety disorder, 8% having depression, 4% having PTSD, 3% having bipolar disorder, and 1% having schizophrenia.
Source: https://www.nami.org/mhstats
Incidence indicates the number of new cases in a population over a specific period. This measure is usually lower since it does not include existing cases as prevalence does. If you wish to know the number of new cases of social phobia during the past year (going from say Aug 21, 2015 to Aug 20, 2016), you would only count cases that began during this time and ignore cases before the start date, even if people are currently afflicted with the mental disorder. Incidence is often studied by medical and public health officials so that causes can be identified, and future cases prevented.
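The key difference from prevalence is that incidence counts only cases whose onset falls inside the window; pre-existing cases are excluded even if the person is still afflicted. A minimal sketch, reusing the text's example window of Aug 21, 2015 to Aug 20, 2016, with invented onset dates and population size:

```python
from datetime import date

# Hypothetical onset dates of new diagnoses in a sample population of 1,000.
onsets = [
    date(2015, 5, 2),    # onset before the window opens -> excluded
    date(2015, 9, 30),   # inside the window -> counted
    date(2016, 2, 14),   # inside the window -> counted
    date(2016, 8, 25),   # onset after the window closes -> excluded
]
population = 1000

def incidence(onsets, population, start, stop):
    """New cases per person in the population during [start, stop].
    Unlike prevalence, cases that began before `start` do not count."""
    new = sum(1 for d in onsets if start <= d <= stop)
    return new / population

print(incidence(onsets, population, date(2015, 8, 21), date(2016, 8, 20)))  # 0.002
```

Only two of the four onsets land inside the window, giving an incidence of 2/1,000, which is why incidence figures are usually lower than prevalence figures for the same disorder and period.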
Finally, comorbidity describes when two or more mental disorders are occurring at the same time and in the same person. The National Comorbidity Survey Replication (NCS-R) study conducted by the National Institute of Mental Health (NIMH) and published in the June 6, 2005 issue of the Archives of General Psychiatry, sought to discover trends in prevalence, impairment, and service use during the 1990s. The first study, conducted from 1980 to 1985, surveyed 20,000 people from five different geographical regions in the U.S. A second study followed from 1990-1992 and was called the National Comorbidity Survey (NCS). The third study, the NCS-R, used a new nationally representative sample of the U.S. population, and found that 45% of those with one mental disorder met the diagnostic criteria for two or more disorders. The authors also found that the severity of mental illness, in terms of disability, is strongly related to comorbidity, and that substance use disorders often result from disorders such as anxiety and bipolar disorders. The implications of this are significant as services to treat substance abuse and mental disorders are often separate, despite the disorders appearing together.
Other Key Factors Related to Mental Disorders
The etiology is the cause of the disorder. There may be social, biological, or psychological explanations for the disorder which need to be understood to identify the appropriate treatment. Likewise, the effectiveness of a treatment may give some hint at the cause of the mental disorder. More on this in Module 2.
The course of the disorder is its particular pattern. A disorder may be acute, meaning that it lasts a short time, or chronic, meaning it persists for a long time. It can also be classified as time-limited, meaning that recovery will occur after some time regardless of whether any treatment occurs.
Prognosis is the anticipated course the mental disorder will take. A key factor in determining the course is age, with some disorders presenting differently in childhood than adulthood.
Finally, we will discuss several treatment strategies in this book in relation to specific disorders, and in a general fashion in Module 3. Treatment is any procedure intended to modify abnormal behavior into normal behavior. The person suffering from the mental disorder seeks the assistance of a trained professional to provide some degree of relief over a series of therapy sessions. The trained mental health professional may prescribe medication or utilize psychotherapy to bring about this change. Treatment may be sought from the primary care provider, in an outpatient facility, or through inpatient care or hospitalization at a mental hospital or psychiatric unit of a general hospital. According to NAMI, the average delay between symptom onset and treatment is 11 years with 45% of adults with mental illness, 66% of adults with serious mental illness, and 51% of youth with a mental health condition seeking treatment in a given year. They also report that 50% of white, 49% of lesbian/gay and bisexual, 43% of mixed/multiracial, 34% of Hispanic or Latinx, 33% of black, and 23% of Asian adults with a mental health diagnosis received treatment or counseling in the past year (Source: Mental Health Care Matters infographic, https://www.nami.org/mhstats).
Key Takeaways
You should have learned the following in this section:
• Classification, or how we organize or categorize things, provides us with a nomenclature, or naming system, to structure our understanding of mental disorders in a meaningful way.
• Epidemiology is the scientific study of the frequency and causes of diseases and other health-related states in specific populations.
• Prevalence is the percentage of people in a population that has a mental disorder or can be viewed as the number of cases divided by the total number of people in the sample.
• Incidence indicates the number of new cases in a population over a specific period.
• Comorbidity describes when two or more mental disorders are occurring at the same time and in the same person.
• The etiology is the cause of a disorder while the course is its particular pattern and can be acute, chronic, or time-limited.
• Prognosis is the anticipated course the mental disorder will take.
Review Questions
1. What is the importance of classification for the study of mental disorders?
2. What information does a clinical description include?
3. In what ways is occurrence investigated?
4. What is the etiology of a mental illness?
5. What is the relationship of course and prognosis to one another?
6. What is treatment and who seeks it?
Learning Objectives
• Clarify the importance of social cognition theory in understanding why people do not seek care.
• Define categories and schemas.
• Define stereotypes and heuristics.
• Describe social identity theory and its consequences.
• Differentiate between prejudice and discrimination.
• Contrast implicit and explicit attitudes.
• Explain the concept of stigma and its three forms.
• Define courtesy stigma.
• Describe what the literature shows about stigma.
In the previous section, we discussed the fact that care can be sought out in a variety of ways. The problem is that many people who need care never seek it out. Why is that? We already know that society dictates what is considered abnormal behavior through culture and social norms, and you can likely think of a few implications of that. But to fully understand society’s role in why people do not seek care, we need to determine the psychological processes underlying this phenomenon in the individual.
Social cognition is the process through which we collect information from the world around us and then interpret it. The collection process occurs through what we know as sensation – or detecting physical energy emitted or reflected by physical objects. Detection occurs courtesy of our eyes, ears, nose, skin, and mouth; or via vision, hearing, smell, touch, and taste, respectively. Once collected, the information is relayed to the brain via neural impulses, where it is processed and interpreted – that is, meaning is added to this raw sensory data – which we call perception.
One way meaning is added is by taking the information we just detected and using it to assign people to categories, or groups. For each category, we have a schema, or a set of beliefs and expectations about a group of people, believed to apply to all members of the group, and based on experience. You might think of them as organized ways of making sense of experience. So, it is during our initial interaction with someone that we collect information about them, assign the person to a category for which we have a schema, and then use that to affect how we interact with them. First impressions, called the primacy effect, are important because even if we obtain new information that should override an incorrect initial assessment, the initial impression is unlikely to change. We call this the perseverance effect, or belief perseverance.
Stereotypes are special types of schemas that are very simplistic, very strongly held, and not based on firsthand experience. They are heuristics, or mental shortcuts, that allow us to assess this collected information very quickly. One piece of information, such as skin color, can be used to assign the person to a schema for which we have a stereotype. This can affect how we think or feel about the person and behave toward them. Again, human beings tend to imply things about an individual solely due to a distinguishing feature and disregard anything inconsistent with the stereotype.
Social identity theory (Tajfel, 1982; Turner, 1987) states that people categorize their social world into meaningfully simplistic representations of groups of people. These representations are then organized as prototypes, or “fuzzy sets of a relatively limited number of category-defining features that not only define one category but serve to distinguish it from other categories” (Foddy and Hogg, as cited in Foddy et al., 1999). We construct in-groups and out-groups and categorize the self as an in-group member. The self is assimilated into the salient in-group prototype, which indicates what cognitions, affect, and behavior we may exhibit. Stereotyping, out-group homogeneity, in-group/out-group bias, normative behavior, and conformity are all based on self-categorization.
How so? Out-group homogeneity occurs when we see all members of an outside group as the same. This leads to a tendency to show favoritism to, and exclude or hold a negative view of, members outside of, one’s immediate group, called the in-group/out-group bias. The negative view or set of beliefs about a group of people is what we call prejudice, and this can result in acting in a way that is negative against a group of people, called discrimination. It should be noted that a person can be prejudicial without being discriminatory since most people do not act on their attitudes toward others due to social norms against such behavior. Likewise, a person or institution can be discriminatory without being prejudicial. For example, a company may require that an applicant have a certain education level or be able to lift 80 pounds as part of typical job responsibilities. Individuals without a degree or the ability to lift will be removed from consideration for the job, but this discriminatory act does not mean that the company holds negative views of people without degrees or of people unable to lift heavy weights. You might even hold a negative view of a specific group of people and not be aware of it. An attitude we are unaware of is called an implicit attitude, which stands in contrast to explicit attitudes, which are the views within our conscious awareness.
We have spent quite a lot of space and time understanding how people gather information about the world and people around them, process this information, use it to make snap judgements about others, form groups for which stereotypes may exist, and then potentially hold negative views of this group and behave negatively toward them as a result. Just one piece of information can be used to set this series of mental events into motion. Outside of skin color, the label associated with having a mental disorder can be used. Stereotypes about people with a mental disorder can quickly and easily transform into prejudice when people in a society determine the schema to be correct and form negative emotions and evaluations of this group (Eagly & Chaiken, 1993). This, in turn, can lead to discriminatory practices such as an employer refusing to hire, a landlord refusing to rent an apartment, or avoiding a romantic relationship, all due to the person having a mental illness.
Overlapping with prejudice and discrimination in terms of how people with mental disorders are treated is stigma, or when negative stereotyping, labeling, rejection, and loss of status occur. Stigma takes on three forms as described below:
• Public stigma – When members of a society endorse negative stereotypes of people with a mental disorder and discriminate against them. They might avoid them altogether, resulting in social isolation. An example is when an employer intentionally does not hire a person because their mental illness is discovered.
• Label avoidance – To avoid being labeled as “crazy” or “nuts,” people needing care may avoid seeking it altogether or stop care once started. Due to these labels, funding for mental health services could be restricted and, instead, physical health services funded.
• Self-stigma – When people with mental illnesses internalize the negative stereotypes and prejudice, and in turn, discriminate against themselves. They may experience shame, reduced self-esteem, hopelessness, low self-efficacy, and a reduction in coping mechanisms. An obvious consequence of these potential outcomes is the why try effect, or the person saying ‘Why should I try and get that job? I am not worthy of it’ (Corrigan, Larson, & Rusch, 2009; Corrigan, et al., 2016).
Another form of stigma that is worth noting is that of courtesy stigma or when stigma affects people associated with a person who has a mental disorder. Karnieli-Miller et al. (2013) found that families of the afflicted were often blamed, rejected, or devalued when others learned that a family member had a serious mental illness (SMI). Due to this, they felt hurt and betrayed, and an important source of social support during a difficult time had disappeared, resulting in greater levels of stress. To cope, some families concealed their relative’s illness, and some parents struggled to decide whether it was their place to disclose their child’s condition. Others fought with the issue of confronting the stigma through attempts at education versus just ignoring it due to not having enough energy or desiring to maintain personal boundaries. There was also a need to understand the responses of others and to attribute it to a lack of knowledge, experience, and/or media coverage. In some cases, the reappraisal allowed family members to feel compassion for others rather than feeling put down or blamed. The authors concluded that each family “develops its own coping strategies which vary according to its personal experiences, values, and extent of other commitments” and that “coping strategies families employ change over-time.”
Other effects of stigma include experiencing work-related discrimination resulting in higher levels of self-stigma and stress (Rusch et al., 2014), higher rates of suicide especially when treatment is not available (Rusch, Zlati, Black, and Thornicroft, 2014; Rihmer & Kiss, 2002), and a decreased likelihood of future help-seeking intention (Lally et al., 2013). The results of the latter study also showed that personal contact with someone with a history of mental illness led to a decreased likelihood of seeking help. This is important because 48% of the university sample stated that they needed help for an emotional or mental health issue during the past year but did not seek help. Similar results have been reported in other studies (Eisenberg, Downs, Golberstein, & Zivin, 2009). It is also important to point out that social distance, a result of stigma, has also been shown to increase throughout the life span, suggesting that anti-stigma campaigns should focus on older people primarily (Schomerus, et al., 2015).
One potentially disturbing trend is that mental health professionals have been shown to hold negative attitudes toward the people they serve. Hansson et al. (2011) found that staff members at an outpatient clinic in southern Sweden held the most negative attitudes about whether an employer would accept an applicant for work, willingness to date a person who had been hospitalized, and hiring a patient to care for children. Attitudes were stronger when staff treated patients with psychosis or in inpatient settings. In a similar study, Martensson, Jacobsson, and Engstrom (2014) found that staff held more positive attitudes toward persons with mental illness if their beliefs about such disorders were less stigmatizing; if their workplaces were in the county council, where they were more likely to encounter patients who recover and return to normal life in society, rather than in municipalities, where patients have long-term and recurrent mental illness; and if they had, or had had, a close friend with mental health issues.
To help deal with stigma in the mental health community, Papish et al. (2013) investigated the effect of a one-time contact-based educational intervention compared to a four-week mandatory psychiatry course on the stigma of mental illness among medical students at the University of Calgary. The curriculum included two methods requiring contact with people diagnosed with a mental disorder: patient presentations, or two one-hour oral presentations in which patients shared their story of having a mental illness, and “clinical correlations” in which a psychiatrist mentored students while they interacted with patients in either inpatient or outpatient settings. Results showed that medical students held a stigma towards mental illness and that comprehensive medical education reduced this stigma. As the authors stated, “These results suggest that it is possible to create an environment in which medical student attitudes towards mental illness can be shifted in a positive direction.” That said, the level of stigma was still higher for mental illness than it was for the stigmatized physical illness, type 2 diabetes mellitus.
What might happen if mental illness is presented as a treatable condition? McGinty, Goldman, Pescosolido, and Barry (2015) found that portraying schizophrenia, depression, and heroin addiction as untreated and symptomatic increased negative public attitudes towards people with these conditions. Conversely, when the same people were portrayed as successfully treated, the desire for social distance was reduced, there was less willingness to discriminate against them, and belief in treatment effectiveness increased among the public.
Self-stigma has also been shown to affect self-esteem, which then affects hope, which then affects the quality of life among people with severe mental illness. As such, hope should play a central role in recovery (Mashiach-Eizenberg et al., 2013). Narrative Enhancement and Cognitive Therapy (NECT) is an intervention designed to reduce internalized stigma and targets both hope and self-esteem (Yanos et al., 2011). The intervention replaces stigmatizing myths with facts about illness and recovery, which leads to hopefulness and higher levels of self-esteem in clients. This may then reduce susceptibility to internalized stigma.
Stigma leads to health inequities (Hatzenbuehler, Phelan, & Link, 2013), prompting calls for stigma change. Targeting stigma involves two different agendas: The services agenda attempts to remove stigma so people can seek mental health services, and the rights agenda tries to replace discrimination that “robs people of rightful opportunities with affirming attitudes and behavior” (Corrigan, 2016). The former is successful when there is evidence that people with mental illness are seeking services more or becoming better engaged. The latter is successful when there is an increase in the number of people with mental illnesses in the workforce who are receiving reasonable accommodations. The federal government has tackled this issue with landmark legislation such as the Patient Protection and Affordable Care Act of 2010, Mental Health Parity and Addiction Equity Act of 2008, and the Americans with Disabilities Act of 1990. However, protections are not uniform across all subgroups due to “1) explicit language about inclusion and exclusion criteria in the statute or implementation rule, 2) vague statutory language that yields variation in the interpretation about which groups qualify for protection, and 3) incentives created by the legislation that affect specific groups differently” (Cummings, Lucas, and Druss, 2013). More on this in Module 15.
Key Takeaways
You should have learned the following in this section:
• Stigma is when negative stereotyping, labeling, rejection, and loss of status occur and take the form of public or self-stigma, and label avoidance.
Review Questions
1. How does social cognition help us to understand why stigmatization occurs?
2. Define stigma and describe its three forms. What is courtesy stigma?
3. What are the effects of stigma on the afflicted?
4. Is stigmatization prevalent in the mental health community? If so, what can be done about it?
5. How can we reduce stigmatization?
Learning Objectives
• Describe prehistoric and ancient beliefs about mental illness.
• Describe Greco-Roman thought on mental illness.
• Describe thoughts on mental illness during the Middle Ages.
• Describe thoughts on mental illness during the Renaissance.
• Describe thoughts on mental illness during the 18th and 19th centuries.
• Describe thoughts on mental illness during the 20th and 21st centuries.
• Describe the status of mental illness today.
• Outline the use of psychoactive drugs throughout time and their impact.
• Clarify the importance of managed health care for the treatment of mental illness.
• Define and clarify the importance of multicultural psychology.
• State the issue surrounding prescription rights for psychologists.
• Explain the importance of prevention science.
As we have seen so far, what is considered abnormal behavior is often dictated by the culture/society a person lives in, and unfortunately, the past has not treated the afflicted very well. In this section, we will examine how past societies viewed and dealt with mental illness.
Prehistoric and Ancient Beliefs
Prehistoric cultures often held a supernatural view of abnormal behavior and saw it as the work of evil spirits, demons, gods, or witches who took control of the person. This form of demonic possession often occurred when the person engaged in behavior contrary to the religious teachings of the time. Treatment by cave dwellers included a technique called trephination, in which a stone instrument known as a trephine was used to remove part of the skull, creating an opening through which the evil spirits could escape, thereby ending the person’s mental affliction and returning them to normal behavior. Early Greek, Hebrew, Egyptian, and Chinese cultures used a treatment method called exorcism, in which evil spirits were cast out through prayer, magic, flogging, starvation, having the person ingest horrible-tasting drinks, or noisemaking.
Greco-Roman Thought
Rejecting the idea of demonic possession, Greek physician Hippocrates (460-377 B.C.) said that mental disorders were akin to physical ailments and had natural causes. Specifically, they arose from brain pathology, or head trauma/brain dysfunction or disease, and were also affected by heredity. Hippocrates classified mental disorders into three main categories – melancholia, mania, and phrenitis (brain fever) – and gave detailed clinical descriptions of each. He also described four main fluids or humors that directed normal brain functioning and personality – blood which arose in the heart, black bile arising in the spleen, yellow bile or choler from the liver, and phlegm from the brain. Mental disorders occurred when the humors were in a state of imbalance such as an excess of yellow bile causing frenzy and too much black bile causing melancholia or depression. Hippocrates believed mental illnesses could be treated as any other disorder and focused on the underlying pathology.
Also noteworthy was the Greek philosopher Plato (429-347 B.C.), who said that the mentally ill were not responsible for their actions and should not be punished. It was the responsibility of the community and their families to care for them. The Greek physician Galen (A.D. 129-199) said mental disorders had either physical or psychological causes, including fear, shock, alcoholism, head injuries, adolescence, and changes in menstruation.
In Rome, physician Asclepiades (124-40 BC) and philosopher Cicero (106-43 BC) rejected Hippocrates’ idea of the four humors and instead stated that melancholy arises from grief, fear, and rage; not excess black bile. Roman physicians treated mental disorders with massage or warm baths, the hope being that their patients would be as comfortable as they could be. They practiced the concept of contrariis contrarius, meaning opposite by opposite, and introduced contrasting stimuli to bring about balance in the physical and mental domains. An example would be consuming a cold drink while in a warm bath.
The Middle Ages – 500 AD to 1500 AD
The progress made during the time of the Greeks and Romans was quickly reversed during the Middle Ages with the increase in power of the Church and the fall of the Roman Empire. Mental illness was yet again explained as possession by the Devil and methods such as exorcism, flogging, prayer, the touching of relics, chanting, visiting holy sites, and holy water were used to rid the person of demonic influence. In extreme cases, the afflicted were exposed to confinement, beatings, and even execution. Scientific and medical explanations, such as those proposed by Hippocrates, were discarded.
Group hysteria, or mass madness, was also seen when large numbers of people displayed similar symptoms and false beliefs. This included the belief that one was possessed by wolves or other animals and imitated their behavior, called lycanthropy, and a mania in which large numbers of people had an uncontrollable desire to dance and jump, called tarantism. The latter was believed to have been caused by the bite of the wolf spider, now called the tarantula, and spread quickly from Italy to Germany and other parts of Europe where it was called Saint Vitus’s dance.
Perhaps the return to supernatural explanations during the Middle Ages makes sense given events of the time. The black death (bubonic plague) killed up to a third, or according to other estimates almost half, of the population. Famine, war, social oppression, and pestilence were also factors. The constant presence of death led to an epidemic of depression and fear. Near the end of the Middle Ages, mystical explanations for mental illness began to lose favor, and government officials regained some of their lost power over nonreligious activities. Science and medicine were again called upon to explain psychopathology.
The Renaissance – 14th to 16th centuries
The most noteworthy development in the realm of philosophy during the Renaissance was the rise of humanism, or the worldview that emphasizes human welfare and the uniqueness of the individual. This perspective helped continue the decline of supernatural views of mental illness. In the mid to late 1500s, German physician Johann Weyer (1515-1588) published his book, On the Deceits of the Demons, which rebutted the Church’s witch-hunting handbook, the Malleus Maleficarum, and argued that many of those accused of being witches, and subsequently imprisoned, tortured, and/or burned at the stake, were mentally disturbed and not possessed by demons or the Devil himself. He believed that, like the body, the mind was susceptible to illness. Not surprisingly, the book was vehemently protested and banned by the Church. It should be noted that these types of acts occurred not only in Europe, but also in the United States. The most famous example, the Salem Witch Trials of 1692, resulted in more than 200 people being accused of practicing witchcraft and 20 deaths.
The number of asylums, or places of refuge for the mentally ill where they could receive care, began to rise during the 16th century as the government realized there were far too many people afflicted with mental illness to be left in private homes. Hospitals and monasteries were converted into asylums. Though the intent was benign in the beginning, as the facilities became overcrowded, the patients came to be treated more like animals than people. In 1547, the Bethlem Hospital opened in London with the sole purpose of confining those with mental disorders. Patients were chained up, placed on public display, and often heard crying out in pain. The asylum became a tourist attraction, with sightseers paying a penny to view the more violent patients, and soon came to be called “Bedlam” by local people; a term that today means “a state of uproar and confusion” (https://www.merriam-webster.com/dictionary/bedlam).
Reform Movement – 18th to 19th centuries
The rise of the moral treatment movement occurred in Europe in the late 18th century and then in the United States in the early 19th century. The earliest proponent was Philippe Pinel (1745-1826), the superintendent of la Bicêtre, a hospital for mentally ill men in Paris. Pinel stressed respectful treatment and moral guidance for the mentally ill while considering their individual, social, and occupational needs. Arguing that the mentally ill were sick people, Pinel ordered that chains be removed, outside exercise be allowed, sunny and well-ventilated rooms replace dungeons, and patients be extended kindness and support. This approach led to considerable improvement for many of the patients, so much so that several were released.
Following Pinel’s lead, William Tuke (1732-1822), a Quaker tea merchant, established a pleasant rural estate called the York Retreat. The Quakers believed that all people should be accepted for who they are and treated kindly. At the retreat, patients could work, rest, talk out their problems, and pray (Raad & Makari, 2010). The work of Tuke and others led to the passage of the County Asylums Act of 1845, which required that every county provide asylum for the mentally ill. This sentiment extended to English colonies such as Canada, India, Australia, and the West Indies as word of the maltreatment of patients at a facility in Kingston, Jamaica spread, leading to an audit of colonial facilities and their policies.
Reform in the United States started with the figure largely considered to be the father of American psychiatry, Benjamin Rush (1745-1813). Rush advocated for the humane treatment of the mentally ill, showing them respect, and even giving them small gifts from time to time. Despite this, his practice included treatments such as bloodletting and purgatives, the invention of the “tranquilizing chair,” and reliance on astrology, showing that even he could not escape from the beliefs of the time.
Due to the rise of the moral treatment movement in both Europe and the United States, asylums became habitable places where those afflicted with mental illness could recover. Regrettably, its success was responsible for its decline. The number of mental hospitals greatly increased, leading to staffing shortages and a lack of funds to support them. Though treating patients humanely was a noble endeavor, it did not work for some patients and other treatments were needed, though they had not been developed yet. Staff recognized that the approach worked best when the facility had 200 or fewer patients, but waves of immigrants arriving in the U.S. after the Civil War overwhelmed the facilities, and patient counts soared to 1,000 or more. Prejudice against the new arrivals led to discriminatory practices in which immigrants were not afforded the same moral treatments as native citizens, even when the resources were available to treat them.
The moral treatment movement also fell due to the rise of the mental hygiene movement, which focused on the physical well-being of patients. Its leading proponent in the United States was Dorothea Dix (1802-1887), a New Englander who observed the deplorable conditions suffered by the mentally ill while teaching Sunday school to female prisoners. Over the next 40 years, from 1841 to 1881, she motivated people and state legislators to do something about this injustice and raised millions of dollars to build over 30 more appropriate mental hospitals and improve others. Her efforts even extended beyond the U.S. to Canada and Scotland.
Finally, in 1908 Clifford Beers (1876-1943) published his book, A Mind that Found Itself, in which he described his struggle with bipolar disorder and the “cruel and inhumane treatment people with mental illnesses received. He witnessed and experienced horrific abuse at the hands of his caretakers. At one point during his institutionalization, he was placed in a straitjacket for 21 consecutive nights” (https://www.mhanational.org/our-history). His story aroused sympathy from the public and led him to found the National Committee for Mental Hygiene, known today as Mental Health America, which provides education about mental illness and the need to treat these people with dignity. Today, MHA has over 200 affiliates in 41 states and employs 6,500 affiliate staff and over 10,000 volunteers.
“In the early 1950s, Mental Health America issued a call to asylums across the country for their discarded chains and shackles. On April 13, 1953, at the McShane Bell Foundry in Baltimore, Md., Mental Health America melted down these inhumane bindings and recast them into a sign of hope: the Mental Health Bell.
Now the symbol of Mental Health America, the 300-pound Bell serves as a powerful reminder that the invisible chains of misunderstanding and discrimination continue to bind people with mental illnesses. Today, the Mental Health Bell rings out hope for improving mental health and achieving victory over mental illnesses.”
For more information on MHA, please visit: https://www.mhanational.org/
20th – 21st Centuries
The decline of the moral treatment approach in the late 19th century led to the rise of two competing perspectives – the biological or somatogenic perspective and the psychological or psychogenic perspective.
1.4.6.1. Biological or Somatogenic Perspective. Recall that Greek physicians Hippocrates and Galen said that mental disorders were akin to physical disorders and had natural causes. Though the idea fell into oblivion for several centuries, it re-emerged in the late 19th century for two reasons. First, German psychiatrist Emil Kraepelin (1856-1926) discovered that symptoms occurred regularly in clusters, which he called syndromes. Each syndrome represented a unique mental disorder with a distinct cause, course, and prognosis. In 1883 he published his textbook, Compendium der Psychiatrie (Textbook of Psychiatry), in which he described a system for classifying mental disorders that became the basis of the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders (DSM), currently in its 5th edition Text Revision (published in 2022).
Second, in 1825, the behavioral and cognitive symptoms of advanced syphilis were identified to include a belief that everyone is plotting against you or that you are God (a delusion of grandeur), and were termed general paresis by French physician A.L.J. Bayle. In 1897, Viennese psychiatrist Richard von Krafft-Ebing injected patients suffering from general paresis with matter from syphilitic sores and noted that none of the patients developed symptoms of syphilis, indicating they must have been previously exposed and were now immune. This led to the conclusion that syphilis was the cause of the general paresis. In 1906, August von Wassermann developed a blood test for syphilis, and in 1917 a cure was found: Julius von Wagner-Jauregg had noticed that patients with general paresis who contracted malaria recovered from their symptoms. To test this hypothesis, he injected nine patients with blood from a soldier afflicted with malaria. Three of the patients fully recovered, while three others showed great improvement in their paretic symptoms; the high fever caused by the malaria had burned out the syphilis bacteria. Hospitals in the United States began incorporating this new cure for paresis into their treatment approach by 1925.
Also noteworthy was the work of American psychiatrist John P. Grey. Appointed as superintendent of the Utica State Hospital in New York, Grey asserted that insanity always had a physical cause. As such, the mentally ill should be seen as physically ill and treated with rest, proper room temperature and ventilation, and a nutritive diet.
The 1930s also saw the use of electric shock as a treatment method, an effect stumbled upon accidentally by Benjamin Franklin while experimenting with electricity in the mid-18th century. He noticed that after suffering a severe shock his memories had changed, and in published work he suggested that physicians study electric shock as a treatment for melancholia.
1.4.6.2. Psychological or Psychogenic Perspective. The psychological or psychogenic perspective states that emotional or psychological factors are the cause of mental disorders and represented a challenge to the biological perspective. This perspective had a long history but did not gain favor until the work of Viennese physician Franz Anton Mesmer (1734-1815). Influenced heavily by Newton’s theory of gravity, he believed that the planets also affected the human body through the force of animal magnetism and that all people had a universal magnetic fluid that determined how healthy they were. He demonstrated the usefulness of his approach when he cured Franzl Oesterline, a 27-year-old woman suffering from what he described as a convulsive malady. Mesmer used a magnet to disrupt the gravitational tides that were affecting his patient and produced a sensation of the magnetic fluid draining from her body. This procedure removed the illness from her body and provided a near-instantaneous recovery. In reality, the patient was placed in a trancelike state which made her highly suggestible. With other patients, Mesmer would have them sit in a darkened room filled with soothing music, into which he would enter dressed in a colorful robe and pass from person to person touching the afflicted area of their body with his hand or a rod/wand. He successfully cured deafness, paralysis, loss of bodily feeling, convulsions, menstrual difficulties, and blindness.
His approach gained him celebrity status as he demonstrated it at the courts of English nobility. However, the medical community was hardly impressed. A royal commission was formed to investigate his technique but could not find any proof for his theory of animal magnetism. Though he was able to cure patients when they touched his “magnetized” tree, the result was the same when “non-magnetized” trees were touched. As such, Mesmer was deemed a charlatan and forced to leave Paris. His technique was called mesmerism, better known today as hypnosis.
The psychological perspective gained popularity after two physicians practicing in the city of Nancy in France discovered that they could induce the symptoms of hysteria in perfectly healthy patients through hypnosis and then remove the symptoms in the same way. The work of Hippolyte-Marie Bernheim (1840-1919) and Ambroise-Auguste Liebault (1823-1904) came to be part of what was called the Nancy School and showed that hysteria was nothing more than a form of self-hypnosis. In Paris, this view was challenged by Jean Charcot (1825-1893), who stated that hysteria was caused by degenerative brain changes, reflecting the biological perspective. He was proven wrong and eventually turned to their way of thinking.
The use of hypnosis to treat hysteria was also carried out by fellow Frenchman Pierre Janet (1859-1947), a student of Charcot, who believed that hysteria had psychological, not biological, causes. Namely, these included unconscious forces, fixed ideas, and memory impairments. In Vienna, Josef Breuer (1842-1925) induced hypnosis and had patients speak freely about past events that upset them. Upon waking, he discovered that patients sometimes were free of their symptoms of hysteria. Success was even greater when patients not only recalled forgotten memories but also relived them emotionally. He called this the cathartic method, and our use of the word catharsis today indicates a purging or release, in this case, of pent-up emotion.
By the end of the 19th century, it had become evident that mental disorders were caused by a combination of biological and psychological factors, and the investigation of how they develop began. Sigmund Freud’s development of psychoanalysis followed on the heels of the work of Breuer and others who came before him.
Current Views/Trends
1.4.7.1. Mental illness today. An article published by the Harvard Medical School in March 2014 called “The Prevalence and Treatment of Mental Illness Today” presented the results of the National Comorbidity Study Replication of 2001-2003, which included a sample of more than 9,000 adults. The results showed that nearly 46% of the participants had a psychiatric disorder at some time in their lives. The most commonly reported disorders were:
• Major depression – 17%
• Alcohol abuse – 13%
• Social anxiety disorder – 12%
• Conduct disorder – 9.5%
Also of interest was that women were more likely to have had anxiety and mood disorders while men showed higher rates of impulse control disorders. Comorbid anxiety and mood disorders were common, and 28% reported having more than one co-occurring disorder (Kessler, Berglund, et al., 2005; Kessler, Chiu, et al., 2005; Kessler, Demler, et al., 2005).
About 80% of the sample reported seeking treatment for their disorder, but with as much as a 10-year gap after symptoms first appeared. Women were more likely than men to seek help, and whites were more likely to do so than African Americans and Hispanic Americans (Wang, Berglund, et al., 2005; Wang, Lane, et al., 2005). Care was sought primarily from family doctors, nurses, and other general practitioners (23%), followed by social workers and psychologists (16%), psychiatrists (12%), counselors or spiritual advisers (8%), and complementary and alternative medicine providers (CAMs; 7%).
In terms of the quality of the care, the article states:
Most of this treatment was inadequate, at least by the standards applied in the survey. The researchers defined minimum adequacy as a suitable medication at a suitable dose for two months, along with at least four visits to a physician; or else eight visits to any licensed mental health professional. By that definition, only 33% of people with a psychiatric disorder were treated adequately, and only 13% of those who saw general medical practitioners.
In comparison to the original study conducted from 1991-1992, the use of mental health services has increased over 50% during this decade. This may be attributed to treatment becoming more widespread and increased attempts to educate the public about mental illness. Stigma, discussed in Section 1.3, has reduced over time, diagnosis is more effective, community outreach programs have increased, and most importantly, general practitioners have been more willing to prescribe psychoactive medications which themselves are more readily available now. The article concludes, “Survey researchers also suggest that we need more outreach and voluntary screening, more education about mental illness for the public and physicians, and more effort to treat substance abuse and impulse control disorders.” We will explore several of these issues in the remainder of this section, including the use of psychiatric drugs and deinstitutionalization, managed health care, private psychotherapy, positive psychology and prevention science, multicultural psychology, and prescription rights for psychologists.
1.4.7.2. Use of psychiatric drugs and deinstitutionalization. Beginning in the 1950s, psychiatric or psychotropic drugs were used for the treatment of mental illness and made an immediate impact. Though drugs alone cannot cure mental illness, they can improve symptoms and increase the effectiveness of treatments such as psychotherapy. Classes of psychiatric drugs include anti-depressants used to treat depression and anxiety, mood-stabilizing medications to treat bipolar disorder, anti-psychotic drugs to treat schizophrenia, and anti-anxiety drugs to treat generalized anxiety disorder or panic disorder.
Frank (2006) found that by 1996, psychotropic drugs were used in 77% of mental health cases and spending on these drugs grew from $2.8 billion in 1987 to about $18 billion in 2001 (Coffey et al., 2000; Mark et al., 2005), representing over a sixfold increase. The largest classes of psychotropic drugs are anti-psychotics and anti-depressants, followed closely by anti-anxiety medications. Frank, Conti, and Goldman (2005) point out, “The expansion of insurance coverage for prescription drugs, the introduction and diffusion of managed behavioral health care techniques, and the conduct of the pharmaceutical industry in promoting their products all have influenced how psychotropic drugs are used and how much is spent on them.” Is it possible, then, that we are overprescribing these medications? Davey (2014) provides ten reasons why this may be so, including leading sufferers to believe that recovery is not in their own hands but in those of their doctors; increased risk of relapse; drug companies causing the “medicalization of perfectly normal emotional processes, such as bereavement” to ensure their survival; side effects; and a failure to change the way the person thinks or the socioeconomic environments that may be the cause of the disorder. For more on this article, please see: https://www.psychologytoday.com/blog/why-we-worry/201401/overprescribing-drugs-treat-mental-health-problems. Smith (2012) echoed similar sentiments in an article on inappropriate prescribing. He cites the approval of Prozac by the Food and Drug Administration (FDA) in 1987 as when the issue began and the overmedication/overdiagnosis of children with ADHD as a more recent example.
A result of the use of psychiatric drugs was deinstitutionalization, or the release of patients from mental health facilities. This shifted resources from inpatient to outpatient care and placed the spotlight back on the biological or somatogenic perspective. When people with severe mental illness do need inpatient care, it is typically in the form of short-term hospitalization.
1.4.7.3. Managed health care. Managed health care is a term used to describe a type of health insurance in which the insurance company determines the cost of services, possible providers, and the number of visits a subscriber can have within a year. This is regulated through contracts with providers and medical facilities. The plans pay the providers directly, so subscribers do not have to pay out-of-pocket or complete claim forms, though most require co-pays paid directly to the provider at the time of service. Exactly how much the plan costs depends on how flexible the subscriber wants it to be; the more flexibility, the higher the cost. Managed health care takes three forms:
• Health Maintenance Organizations (HMO) – Typically only pay for care within the network. The subscriber chooses a primary care physician (PCP) who coordinates most of their care. The PCP refers the subscriber to specialists or other health care providers as is necessary. This is the most restrictive option.
• Preferred Provider Organizations (PPO) – Usually pay more if the subscriber obtains care within the network, but if care outside the network is sought, they cover part of the cost.
• Point of Service (POS) – These plans provide the most flexibility and allow the subscriber to choose between an HMO or a PPO each time care is needed.
Regarding the treatment needed for mental illness, managed care programs regulate the pre-approval of treatment via referrals from the PCP, determine which mental health providers can be seen, and oversee which conditions can be treated and what type of treatment can be delivered. This system was developed in the 1980s to combat the rising cost of mental health care and took responsibility away from single practitioners or small groups who could charge what they felt was appropriate. The actual impact of managed care on mental health services is still questionable at best.
1.4.7.4. Multicultural psychology. As our society becomes increasingly diverse, medical practitioners and psychologists alike must take into account the patient’s gender, age, race, ethnicity, socioeconomic (SES) status, and culture and how these factors shape the individual’s thoughts, feelings, and behaviors. Additionally, we need to understand how the various groups, whether defined by race, culture, or gender, differ from one another. This approach is called multicultural psychology.
In August 2002, the American Psychological Association’s (APA) Council of Representatives put forth six guidelines based on the understanding that “race and ethnicity can impact psychological practice and interventions at all levels” and the need for respect and inclusiveness. They further state, “psychologists are in a position to provide leadership as agents of prosocial change, advocacy, and social justice, thereby promoting societal understanding, affirmation, and appreciation of multiculturalism against the damaging effects of individual, institutional, and societal racism, prejudice, and all forms of oppression based on stereotyping and discrimination.” The guidelines from the 2002 document are as follows:
• “Guideline #1: Psychologists are encouraged to recognize that, as cultural beings, they may hold attitudes and beliefs that can detrimentally influence their perceptions of and interactions with individuals who are ethnically and racially different from themselves.
• Guideline #2: Psychologists are encouraged to recognize the importance of multicultural sensitivity/responsiveness, knowledge, and understanding about ethnically and racially different individuals.
• Guideline #3: As educators, psychologists are encouraged to employ the constructs of multiculturalism and diversity in psychological education.
• Guideline #4: Culturally sensitive psychological researchers are encouraged to recognize the importance of conducting culture–centered and ethical psychological research among persons from ethnic, linguistic, and racial minority backgrounds.
• Guideline #5: Psychologists strive to apply culturally-appropriate skills in clinical and other applied psychological practices.
• Guideline #6: Psychologists are encouraged to use organizational change processes to support culturally informed organizational (policy) development and practices.”
Source: apa.org/pi/oema/resources/policy/multicultural-guidelines.aspx
This type of sensitivity training is vital because bias based on ethnicity, race, and culture has been found in the diagnosis and treatment of autism (Harrison et al., 2017; Burkett, 2015), borderline personality disorder (Jani et al., 2016), and schizophrenia (Neighbors et al., 2003; Minsky et al., 2003). Despite these findings, Schwartz and Blankenship (2014) state, “It should also be noted that although clear evidence supports a longstanding trend in differential diagnoses according to consumer race, this trend does not imply that one race (e.g., African Americans) actually demonstrate more severe symptoms or higher prevalence rates of psychosis compared with other races (e.g., Euro-Americans). Because clinicians are the diagnosticians and misinterpretation, bias or other factors may play a role in this trend caution should be used when making inferences about actual rates of psychosis among ethnic minority persons.” Additionally, white middle-class help seekers were offered appointments with psychotherapists almost three times as often as their black working-class counterparts. Women were offered an appointment time in their preferred time range more than men were, though average appointment offer rates were similar between genders (Kugelmass, 2016). These findings collectively show that though we are becoming more culturally sensitive, we have a lot more work to do.
1.4.7.5. Prescription rights for psychologists. To reduce inappropriate prescribing as described in 1.4.7.2, it has been proposed to allow appropriately trained psychologists the right to prescribe. Psychologists are more likely to utilize both therapy and medication, and so can make the best choice for their patient. The right has already been granted in New Mexico, Louisiana, Guam, the military, the Indian Health Services, and the U.S. Public Health Services. Measures in other states “have been opposed by the American Medical Association and American Psychiatric Association over concerns that inadequate training of psychologists could jeopardize patient safety. Supporters of prescriptive authority for psychologists are quick to point out that there is no evidence to support these concerns” (Smith, 2012).
1.4.7.6. Prevention science. As a society, we used to wait for a mental or physical health issue to emerge, then scramble to treat it. More recently, medicine and science have taken a prevention stance, identifying the factors that cause specific mental health issues and implementing interventions to stop them from happening, or at least minimize their deleterious effects. Our focus has shifted from individuals to the population. Mental health promotion programs have been instituted with success in schools (Shoshani & Steinmetz, 2014; Weare & Nind, 2011; Berkowitz & Beer, 2007), in the workplace (Czabała, Charzyńska, & Mroziak, 2011), with undergraduate and graduate students (Conley et al., 2017; Bettis et al., 2016), in relation to bullying (Bradshaw, 2015), and with the elderly (Forsman et al., 2011). Many researchers believe it is the ideal time to move from knowledge to action and to expand public mental health initiatives (Wahlbeck, 2015). The growth of positive psychology in the late 1990s has further propelled this movement forward. For more on positive psychology, please see Section 1.1.1.
Key Takeaways
You should have learned the following in this section:
• Some of the earliest views of mental illness saw it as the work of evil spirits, demons, gods, or witches who took control of the person. In the Middle Ages it was seen as possession by the Devil, and methods such as exorcism, flogging, prayer, the touching of relics, chanting, visiting holy sites, and holy water were used to rid the person of demonic influence.
• During the Renaissance, humanism, which emphasized human welfare and the uniqueness of the individual, was on the rise; this led to an increase in the number of asylums as places of refuge for the mentally ill.
• The 18th to 19th centuries saw the rise of the moral treatment movement followed by the mental hygiene movement.
• The psychological or psychogenic perspective states that emotional or psychological factors are the cause of mental disorders and represented a challenge to the biological perspective which said that mental disorders were akin to physical disorders and had natural causes.
• Psychiatric or psychotropic drugs used to treat mental illness became popular beginning in the 1950s and led to deinstitutionalization or a shift from inpatient to outpatient care.
Review Questions
1. How has mental illness been viewed across time?
2. Contrast the moral treatment and mental hygiene movements.
3. Contrast the biological or somatogenic perspective with that of the psychological or psychogenic perspective.
4. Discuss contemporary trends in relation to the use of drugs to treat mental illness, deinstitutionalization, managed health care, multicultural psychology, prescription rights for psychologists, and prevention science.
Learning Objectives
• Define the scientific method.
• Outline and describe the steps of the scientific method, defining all key terms.
• Identify and clarify the importance of the three cardinal features of science.
• List the five main research methods used in psychology.
• Describe observational research, listing its advantages and disadvantages.
• Describe case study research, listing its advantages and disadvantages.
• Describe survey research, listing its advantages and disadvantages.
• Describe correlational research, listing its advantages and disadvantages.
• Describe experimental research, listing its advantages and disadvantages.
• State the utility and need for multimethod research.
The Scientific Method
Psychology is the “scientific study of behavior and mental processes.” We will spend quite a lot of time on the behavior and mental processes part throughout this book and in relation to mental disorders. Still, before we proceed, it is prudent to further elaborate on what makes psychology scientific. It is safe to say that most people outside of our discipline or a sister science would be surprised to learn that psychology utilizes the scientific method at all. That may be even truer of clinical psychology, especially in light of the plethora of self-help books found at any bookstore. But yes, the treatment methods used by mental health professionals are based on empirical research and the scientific method.
As a starting point, we should expand on what the scientific method is.
The scientific method is a systematic method for gathering knowledge about the world around us.
The keyword here is systematic, meaning there is a set way to use it. What is that way? Well, depending on what source you look at, it can include a varying number of steps. I like to use the following:
Table 1.1: The Steps of the Scientific Method
Step Name Description
0 Ask questions and be willing to wonder. To study the world around us, you have to wonder about it. This inquisitive nature is the hallmark of critical thinking, our ability to assess claims made by others and make objective judgments that are independent of emotion and anecdote and based on hard evidence, and a requirement to be a scientist.
1 Generate a research question or identify a problem to investigate. Through our wonderment about the world around us and why events occur as they do, we begin to ask questions that require further investigation to arrive at an answer. This investigation usually starts with a literature review, or when we conduct a literature search through our university library or a search engine such as Google Scholar to see what questions have been investigated already and what answers have been found, so that we can identify gaps or holes in this body of work.
2 Attempt to explain the phenomena we wish to study. We now attempt to formulate an explanation of why the event occurs as it does. This systematic explanation of a phenomenon is a theory and our specific, testable prediction is the hypothesis. We will know if our theory is correct because we have formulated a hypothesis that we can now test.
3 Test the hypothesis. It goes without saying that if we cannot test our hypothesis, then we cannot show whether our prediction is correct or not. Our plan of action of how we will go about testing the hypothesis is called our research design. In the planning stage, we will select the appropriate research method to answer our question/test our hypothesis.
4 Interpret the results. With our research study done, we now examine the data to see if the pattern we predicted exists. We need to see if a cause and effect statement can be made, assuming our method allows for this inference. More on this in Section 2.3. For now, it is essential to know that statistics have two forms. First, there are descriptive statistics which provide a means of summarizing or describing data and presenting the data in a usable form. You likely have heard of mean or average, median, and mode. Along with standard deviation and variance, these are ways to describe our data. Second, there are inferential statistics that allow for the analysis of two or more sets of numerical data to determine the statistical significance of the results. Significance is an indication of how confident we are that our results are due to our manipulation or design and not chance.
5 Draw conclusions carefully. We need to interpret our results accurately and not overstate our findings. To do this, we need to be aware of our biases and avoid emotional reasoning so that they do not cloud our judgment. How so? In our effort to stop a child from engaging in self-injurious behavior that could cause substantial harm or even death, we might overstate the success of our treatment method.
6 Communicate our findings to the broader scientific community. Once we have decided on whether our hypothesis was correct or not, we need to share this information with others so that they might comment critically on our methodology, statistical analyses, and conclusions. Sharing also allows for replication or repeating the study to confirm its results. Communication occurs via scientific journals, conferences, or newsletters released by many of the organizations mentioned in Module 1.6.
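The descriptive statistics named in Step 4 (mean, median, mode, standard deviation, and variance) are straightforward to compute. The sketch below uses Python's standard `statistics` module on a hypothetical set of symptom-scale scores; the scores themselves are made up purely for illustration.

```python
import statistics

# Hypothetical symptom-scale scores for ten participants (illustrative only)
scores = [12, 15, 11, 15, 18, 14, 15, 20, 13, 17]

mean = statistics.mean(scores)          # arithmetic average
median = statistics.median(scores)      # middle value when sorted
mode = statistics.mode(scores)          # most frequent value
stdev = statistics.stdev(scores)        # sample standard deviation
variance = statistics.variance(scores)  # sample variance (stdev squared)

print(f"mean={mean}, median={median}, mode={mode}")
print(f"stdev={stdev:.2f}, variance={variance:.2f}")
```

Inferential statistics (tests of significance) build on these same summaries but additionally ask how likely the observed pattern would be under chance alone.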
Science has at its root three cardinal features that we will see play out time and time again throughout this book. They are:
1. Observation – To know about the world around us, we have to be able to see it firsthand. When a mental disorder afflicts an individual, we can see it through their overt behavior. An individual with depression may withdraw from activities he/she enjoys, those with social anxiety disorder will avoid social situations, people with schizophrenia may express concern over being watched by the government, and individuals with dependent personality disorder may leave major decisions to trusted companions. In these examples and numerous others, the behaviors that lead us to a diagnosis of a specific disorder can easily be observed by the clinician, the patient, and/or family and friends.
2. Experimentation – To be able to make causal or cause and effect statements, we must isolate variables. We must manipulate one variable and see the effect of doing so on another variable. Let’s say we want to know if a new treatment for bipolar disorder is as effective as existing treatments, or more importantly, better. We could design a study with three groups of bipolar patients. One group would receive no treatment and serve as a control group. A second group would receive an existing and proven treatment and would also be considered a control group. Finally, the third group would receive the new treatment and be the experimental group. What we are manipulating is what treatment the groups get – no treatment, the older treatment, and the newer treatment. The first two groups serve as controls since we already know what to expect from their results. There should be no change in bipolar disorder symptoms in the no-treatment group, a general reduction in symptoms for the older treatment group, and the same or better performance for the newer treatment group. As long as patients in the newer treatment group do not perform worse than their older treatment counterparts, we can say the new drug is a success. You might wonder why we would get excited about the performance of the new drug being the same as the old drug. Does it really offer any added benefit? In terms of a reduction of symptoms, maybe not, but it could cost less money than the older drug and that would be of value to patients.
3. Measurement – How do we know that the new drug has worked? Simply, we can measure the person’s bipolar disorder symptoms before any treatment was implemented, and then again once the treatment has run its course. This pre-post test design is typical in drug studies.
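The pre-post logic just described can be sketched as a simple difference score per participant: measure symptoms before treatment, measure again after, and subtract. The scores below are hypothetical, invented only to show the calculation.

```python
# Hypothetical bipolar symptom scores before and after treatment (higher = worse)
pre = [22, 18, 25, 20, 24]
post = [14, 12, 17, 15, 16]

# Negative change scores indicate improvement (fewer/less intense symptoms)
changes = [b - a for a, b in zip(pre, post)]
mean_change = sum(changes) / len(changes)

print("Per-participant change:", changes)
print(f"Mean change: {mean_change:.1f}")
```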
Research Methods
Step 3 called on the scientist to test his or her hypothesis. Psychology as a discipline uses five main research designs. They are:
1.5.2.1. Naturalistic and laboratory observation. In terms of naturalistic observation, the scientist studies human or animal behavior in its natural environment, which could include the home, school, or a forest. The researcher counts, measures, and rates behavior in a systematic way and, at times, uses multiple judges to ensure accuracy in how the behavior is being measured. The advantage of this method is that you see behavior as it happens, and the experimenter does not taint the data. The disadvantage is that it could take a long time for the behavior to occur, and if the researcher is detected, then this may influence the behavior of those being observed.
Laboratory observation involves observing people or animals in a laboratory setting. The researcher might want to know more about parent-child interactions, and so, brings a mother and her child into the lab to engage in preplanned tasks such as playing with toys, eating a meal, or the mother leaving the room for a short time. The advantage of this method over the naturalistic method is that the experimenter can use sophisticated equipment to record the session and examine it later. The problem is that since the subjects know the experimenter is watching them, their behavior could become artificial. Clinical observation is a commonly employed research method to study psychopathology; we will talk about it more throughout this book.
1.5.2.2. Case studies. Psychology can also utilize a detailed description of one person or a small group based on careful observation. This was the approach the founder of psychoanalysis, Sigmund Freud, took to develop his theories. The advantage of this method is that you arrive at a detailed description of the investigated behavior, but the disadvantage is that the findings may be unrepresentative of the larger population, and thus, lacking generalizability. Again, bear in mind that you are studying one person or a tiny group. Can you possibly make conclusions about all people from just one person, or even five or ten? The other issue is that the case study is subject to researcher bias in terms of what is included in the final narrative and what is left out. Despite these limitations, case studies can lead us to novel ideas about the cause of abnormal behavior and help us to study unusual conditions that occur too infrequently to analyze with large sample sizes and in a systematic way.
1.5.2.3. Surveys/Self-Report data. This is a questionnaire consisting of at least one scale with some questions used to assess a psychological construct of interest such as parenting style, depression, locus of control, or sensation-seeking behavior. It may be administered by paper and pencil or computer. Surveys allow for the collection of large amounts of data quickly, but the actual survey could be tedious for the participant and social desirability, when a participant answers questions dishonestly so that they are seen in a more favorable light, could be an issue. For instance, if you are asking high school students about their sexual activity, they may not give genuine answers for fear that their parents will find out. You could alternatively gather this information via an interview in a structured or unstructured fashion.
1.5.2.4. Correlational research. This research method examines the relationship between two variables or two groups of variables. A numerical measure of the strength of this relationship is derived, called the correlation coefficient. It can range from -1.00, a perfect inverse relationship in which one variable goes up as the other goes down, to 0 indicating no relationship at all, to +1.00 or a perfect relationship in which as one variable goes up or down so does the other. In terms of a negative correlation, we might say that as a parent becomes more rigid, controlling, and cold, the attachment of the child to parent goes down. In contrast, as a parent becomes warmer, more loving, and provides structure, the child becomes more attached. The advantage of correlational research is that you can correlate anything. The disadvantage is that you can correlate anything, including variables that do not have any relationship with one another. Yes, this is both an advantage and a disadvantage. For instance, we might correlate instances of making peanut butter and jelly sandwiches with someone we are attracted to sitting near us at lunch. Are the two related? Not likely, unless you make a really good PB&J, but then the person is probably only interested in you for food and not companionship. The main issue here is that correlation does not allow you to make a causal statement.
A special form of correlational research is the epidemiological study in which the prevalence and incidence of a disorder in a specific population are measured (See Section 1.2 for definitions).
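The correlation coefficient described above can be computed directly from paired observations. The sketch below implements the Pearson formula in Python; the variable names and ratings (parental warmth and child attachment) are hypothetical and chosen only to echo the example in the text.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    # Numerator: sum of cross-products of deviations from the means
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    # Denominator: product of the two sums-of-squares, square-rooted
    ss_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    ss_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (ss_x * ss_y)

# Hypothetical ratings: parental warmth (0-10) and child attachment (0-10)
warmth = [2, 4, 5, 6, 8, 9]
attachment = [3, 4, 6, 6, 7, 9]

r = pearson_r(warmth, attachment)
print(f"r = {r:.2f}")  # a value near +1 indicates a strong positive relationship
```

Note that even a coefficient near +1.00 or -1.00 says nothing about cause and effect; it only quantifies how strongly the two variables move together.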
1.5.2.5. Experiments. This is a controlled test of a hypothesis in which a researcher manipulates one variable and measures its effect on another variable. The manipulated variable is called the independent variable (IV), and the one that is measured is called the dependent variable (DV). In the example under Experimentation in Section 1.5.1, the treatment for bipolar disorder was the IV, while the actual intensity or number of symptoms served as the DV. A common feature of experiments is a control group that does not receive the treatment or is not manipulated and an experimental group that does receive the treatment or manipulation. If the experiment includes random assignment, participants have an equal chance of being placed in the control or experimental group. The control group allows the researcher to make a comparison to the experimental group, making a causal statement possible, and stronger. In our experiment, the new treatment should show a marked reduction in the intensity of bipolar symptoms compared to the group receiving no treatment, and perform either at the same level as, or better than, the older treatment. This would be the initial hypothesis made before starting the experiment.
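Random assignment can be sketched in a few lines: shuffle the participant pool so every person has an equal chance of any slot, then split the pool into groups. The participant IDs and group sizes below are hypothetical.

```python
import random

random.seed(42)  # fixed seed so the example is reproducible

# Hypothetical pool of 12 participant IDs
participants = [f"P{i:02d}" for i in range(1, 13)]

random.shuffle(participants)  # equal chance of landing in any position
third = len(participants) // 3
no_treatment = participants[:third]            # control group 1
old_treatment = participants[third:2 * third]  # control group 2 (existing drug)
new_treatment = participants[2 * third:]       # experimental group

print("No treatment:", no_treatment)
print("Old treatment:", old_treatment)
print("New treatment:", new_treatment)
```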
In a drug study, to ensure the participants’ expectations do not affect the final results by giving the researcher what he/she is looking for (in our example, symptoms improve whether the participant is receiving treatment or not), we might use what is called a placebo, or a sugar pill made to look exactly like the pill given to the experimental group. This way, participants all are given something, but cannot figure out what exactly it is. You might say this keeps them honest and allows the results to speak for themselves.
Finally, the study of mental illness does not always afford us a large sample of participants to study, so we have to focus on one individual using a single-subject experimental design. This differs from a case study in the sheer number of strategies available to reduce potential confounding variables, or variables not originally part of the research design but that contribute to the results in a meaningful way. One type of single-subject experimental design is the reversal or ABAB design. Kuttler, Myles, and Carson (1998) used social stories to reduce tantrum behavior in two social environments in a 12-year-old student diagnosed with autism, Fragile-X syndrome, and intermittent explosive disorder. Using an ABAB design, they found that precursors to tantrum behavior decreased when the social stories were available (B) and increased when the intervention was withdrawn (A). A more recent study (Balakrishnan & Alias, 2017) also established the utility of social stories as a social learning tool for children with autism spectrum disorder (ASD) using an ABAB design. During the baseline phase (A), the four student participants were observed, and data recorded on an observation form. During the treatment phase (B), they listened to the social story and data was recorded in the same manner. Upon completion of the first B, the students returned to A, which was followed one more time by B and the reading of the social story. Once the second treatment phase ended, the participants were monitored again to obtain the outcome. All students showed improvement during the treatment phases in terms of the number of positive peer interactions, but the number of interactions decreased in the absence of social stories. From this, the researchers concluded that the social story led to the increase in positive peer interactions of children with ASD.
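The logic of an ABAB reversal design can be made concrete by comparing the mean of the target behavior in each phase: a reversal shows up as higher means in the B (treatment) phases than in the adjacent A (baseline/withdrawal) phases. The daily counts of positive peer interactions below are made up for illustration, not taken from the studies cited above.

```python
# Hypothetical daily counts of positive peer interactions in each phase
phases = {
    "A1 (baseline)": [1, 0, 2, 1, 1],
    "B1 (treatment)": [4, 5, 6, 5, 4],
    "A2 (withdrawal)": [2, 1, 1, 2, 1],
    "B2 (treatment)": [5, 6, 5, 6, 5],
}

means = {phase: sum(counts) / len(counts) for phase, counts in phases.items()}
for phase, m in means.items():
    print(f"{phase}: mean = {m:.1f}")

# Reversal pattern: behavior improves in B, returns toward baseline in A
reversal = (means["B1 (treatment)"] > means["A1 (baseline)"]
            and means["A2 (withdrawal)"] < means["B1 (treatment)"]
            and means["B2 (treatment)"] > means["A2 (withdrawal)"])
print("Reversal pattern observed:", reversal)
```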
1.5.2.6. Multi-method research. As you have seen above, no single method alone is perfect. All have strengths and limitations. As such, for the psychologist to provide the most precise picture of what is affecting behavior or mental processes, several of these approaches are typically employed at different stages of the research study. This is called multi-method research.
Key Takeaways
You should have learned the following in this section:
• The scientific method is a systematic method for gathering knowledge about the world around us.
• A systematic explanation of a phenomenon is a theory and our specific, testable prediction is the hypothesis.
• Replication is when we repeat the study to confirm its results.
• Psychology’s five main research designs are observation, case studies, surveys, correlation, and experimentation.
• No single research method alone is perfect – all have strengths and limitations.
Review Questions
1. What is the scientific method and what steps make it up?
2. Differentiate theory and hypothesis.
3. What are the three cardinal features of science and how do they relate to the study of mental disorders?
4. What are the five main research designs used by psychologists? Define each and then state its strengths and limitations.
5. What is the advantage of multi-method research?
Learning Objectives
• Identify and describe the various types of mental health professionals.
• Clarify what it means to communicate findings.
• Identify professional societies in clinical psychology.
• Identify publications in clinical psychology.
Types of Professionals
There are many types of mental health professionals that people may seek out for assistance. They include:
Table 1.2: Types of Mental Health Professionals
Name Degree Required Function/Training Can they prescribe medications?
Clinical Psychologist Ph.D. Trained to make diagnoses and can provide individual and group therapy Only in select states
School Psychologist Masters or Ph.D. Trained to make diagnoses and can provide individual and group therapy but also works with school staff No
Counseling Psychologist Ph.D. Deals with adjustment issues primarily and less with mental illness No
Clinical Social Worker M.S.W. or Ph.D. Trained to make diagnoses and can provide individual and group therapy and is involved in advocacy and case management. Usually in hospital settings. No
Psychiatrist M.D. Has specialized training in the diagnosis and treatment of mental disorders Yes
Psychiatric Nurse Practitioner M.S.N. Has specialized treatment in the care and treatment of psychiatric patients Yes
Occupational Therapist B.S. Trained to assist individuals suffering from physical or psychological handicaps and help them acquire needed resources No
Pastoral Counselor Clergy Trained in pastoral education and can make diagnoses and can provide individual and group therapy No
Drug Abuse and/or Alcohol Counselor B.S. or higher Trained in alcohol and drug abuse and can make diagnoses and can provide individual and group therapy No
Child/Adolescent Psychiatrist M.D. or Ph.D. Specialized training in the diagnosis and treatment of mental illness in children Yes
Marital and Family Therapist Masters Specialized training in marital and family therapy; Can make diagnoses and can provide individual and group therapy No
For more information on types of mental health professionals, please visit:
https://www.mhanational.org/types-mental-health-professionals
Professional Societies and Journals
One of the functions of science is to communicate findings. Testing hypotheses, developing sound methodology, accurately analyzing data, and drawing sound conclusions are important, but you must tell others what you have done too. This is accomplished by joining professional societies and submitting articles to peer-reviewed journals. Below are some of the organizations and journals relevant to clinical psychology.
1.6.2.1. Professional Societies
• Society of Clinical Psychology – Division 12 of the American Psychological Association
• Website – https://div12.org/
• Mission Statement – “The mission of the Society of Clinical Psychology is to represent the field of Clinical Psychology through encouragement and support of the integration of clinical psychological science and practice in education, research, application, advocacy and public policy, attending to the importance of diversity.”
• Publications – Clinical Psychology: Science and Practice and a quarterly newsletter
• Other Information – Members and student affiliates may join one of eight sections such as clinical emergencies and crises, clinical psychology of women, assessment psychology, and clinical geropsychology
• Society of Clinical Child and Adolescent Psychology – Division 53 of the American Psychological Association
• Website – www.clinicalchildpsychology.org/
• Mission Statement – “Our mission is to serve children, adolescents and families with the best possible clinical care based on psychological science. SCCAP strives to integrate scientific and professional aspects of clinical child and adolescent psychology, in that it promotes scientific inquiry, training, and clinical practice related to serving children and their families.”
• Publication – Journal of Clinical Child and Adolescent Psychology
• American Academy of Clinical Psychology
• Website – https://www.aacpsy.org/
• Mission Statement – The American Academy of Clinical Psychology seeks to “recognize and promote advanced competence within Professional Psychology,” “provide a professional community that encourages communication between and among Members and Fellows of the Academy,” “provide opportunities for advanced education in Professional Psychology,” and “expand awareness and availability of AACP Members and Fellows to the public through promotion and education.”
• Publication – Bulletin of the American Academy of Clinical Psychology (newsletter)
• The Society for a Science of Clinical Psychology (SSCP)
• Website – http://www.sscpweb.org/
• Mission Statement – “The Society for a Science of Clinical Psychology (SSCP) was established in 1966. Its purpose is to affirm and continue to promote the integration of the scientist and the practitioner in training, research, and applied endeavors. Its members represent a diversity of interests and theoretical orientations across clinical psychology. The common bond of the membership is a commitment to empirical research and the ideal that scientific principles should play a role in training, practice, and establishing public policy for health and mental health concerns. SSCP has organizational affiliations with both the American Psychological Association (Section III of Division 12) and the Association for Psychological Science.”
• Other Information – Offers ten awards, including an early career award, an outstanding mentor award, an outstanding student teacher award, and an outstanding student clinician award.
• American Society of Clinical Hypnosis
• Website – http://www.asch.net/
• Mission Statement – “To provide and encourage education programs to further, in every ethical way, the knowledge, understanding, and application of hypnosis in health care; to encourage research and scientific publication in the field of hypnosis; to promote the further recognition and acceptance of hypnosis as an important tool in clinical health care and focus for scientific research; to cooperate with other professional societies that share mutual goals, ethics and interests; and to provide a professional community for those clinicians and researchers who use hypnosis in their work.”
• Publication – American Journal of Clinical Hypnosis
• Other Information – Offers certification in clinical hypnosis
1.6.2.2. Professional Journals
• Clinical Psychology: Science and Practice
• Website – onlinelibrary.wiley.com/journal/10.1111/(ISSN)1468-2850
• Published by – American Psychological Association, Division 12
• Description – “Clinical Psychology: Science and Practice presents cutting-edge developments in the science and practice of clinical psychology and related mental health fields by publishing scholarly articles, primarily involving narrative and systematic reviews as well as meta-analyses related to assessment, intervention, and service delivery.”
• Journal of Clinical Child and Adolescent Psychology
• Website – www.clinicalchildpsychology.org/JCCAP
• Published by – American Psychological Association, Division 53
• Description – “It publishes original contributions on the following topics: (a) the development and evaluation of assessment and intervention techniques for use with clinical child and adolescent populations; (b) the development and maintenance of clinical child and adolescent problems; (c) cross-cultural and socio-demographic issues that have a clear bearing on clinical child and adolescent psychology in terms of theory, research, or practice; and (d) training and professional practice in clinical child and adolescent psychology, as well as child advocacy.”
• American Journal of Clinical Hypnosis
• Website – http://www.asch.net/Public/AmericanJournalofClinicalHypnosis.aspx
• Published by – American Society of Clinical Hypnosis
• Description – “The Journal publishes original scientific articles and clinical case reports on hypnosis, as well as reviews of related books and abstracts of the current hypnosis literature.”
Key Takeaways
You should have learned the following in this section:
• Mental health professionals take on many different forms with different degree requirements, training, and the ability to prescribe medications.
• Telling others what we have done is achieved by joining professional societies and submitting articles to peer-reviewed journals.
Review Questions
1. Provide a general overview of the types of mental health professionals and the degrees, training, and ability to prescribe medications that they have.
2. Briefly outline professional societies and journals related to clinical psychology and related disciplines.
Module Recap
In Module 1, we undertook a relatively lengthy discussion of what abnormal behavior is by first looking at what normal behavior is. What emerged was a general set of guidelines focused on mental illness as causing dysfunction, distress, deviance, and at times, being dangerous for the afflicted and others around him/her. Then we classified mental disorders in terms of their occurrence, cause, course, prognosis, and treatment. We acknowledged that mental illness is stigmatized in our society and provided a basis for why this occurs and what to do about it. This involved a discussion of the history of mental illness and current views and trends.
Psychology is the scientific study of behavior and mental processes. The word scientific is key as psychology adheres to the strictest aspects of the scientific method and uses five main research designs in its investigation of mental disorders – observation, case study, surveys, correlational research, and experiments. Various mental health professionals use these designs, and societies and journals provide additional means to communicate findings or to be good consumers of psychological inquiry.
It is with this foundation in mind that we move to examine models of abnormality in Module 2.
Learning Objectives
• Differentiate uni- and multi-dimensional models of abnormality.
• Describe how the biological model explains mental illness.
• Describe how psychological perspectives explain mental illness.
• Describe how the sociocultural model explains mental illness.
In Module 2, we will discuss three models of abnormal behavior: the biological, psychological, and sociocultural models. Each is unique in its own right, and no single model can account for all aspects of abnormality. Hence, we advocate for a multi-dimensional, not a uni-dimensional, model.
02: Models of Abnormal Psychology
Learning Objectives
• Define the uni-dimensional model.
• Explain the need for a multi-dimensional model of abnormality.
• Define model.
• List and describe the models of abnormality.
Uni-Dimensional
To effectively treat a mental disorder, we must understand its cause. This could be a single factor such as a chemical imbalance in the brain, a relationship with a parent, socioeconomic status (SES), a fearful event encountered during middle childhood, or the way in which the individual copes with life’s stressors. This single-factor explanation is called a uni-dimensional model. The problem with this approach is that mental disorders are not typically caused by a solitary factor, but by multiple causes. Admittedly, single factors do emerge during a person’s life, but as they arise, they become part of the individual. In time, the person’s psychopathology comes to reflect all of these factors acting together.
Multi-Dimensional
So, it is better to subscribe to a multi-dimensional model that integrates multiple causes of psychopathology and affirms that each cause comes to affect other causes over time. Uni-dimensional models alone are too simplistic to explain the etiology of mental disorders fully.
Before introducing the current main models, it is crucial to understand what a model is. In a general sense, a model is defined as a representation or imitation of an object (dictionary.com). For mental health professionals, models help us to understand mental illness since diseases such as depression cannot be touched or experienced firsthand. To be considered distinct from other conditions, a mental illness must have its own set of symptoms. But as you will see, the individual does not have to present with the entire range of symptoms. For example, to be diagnosed with separation anxiety disorder, you must present with three of eight symptoms for Criterion A, whereas for a major depressive episode as part of Bipolar II disorder, you have to display five (or more) symptoms for Criterion A. There will be some variability in terms of what symptoms are displayed, but in general, all people with a specific psychopathology have symptoms from that group.
We can also ask the patient probing questions, seek information from family members, examine medical records, and in time, organize and process all this information to better understand the person’s condition and potential causes. Models aid us with doing all of this. Still, we must remember that the model is a starting point for the researcher, and due to this, it determines what causes might be investigated at the exclusion of other causes. Often, proponents of a given model find themselves in disagreement with proponents of other models. All forget that there is no individual model that completely explains human behavior, or in this case, abnormal behavior, and so each model contributes in its own way. Here are the models we will examine in this module:
• Biological – includes genetics, chemical imbalances in the brain, the functioning of the nervous system, etc.
• Psychological – includes learning, personality, stress, cognition, self-efficacy, and early life experiences. We will examine several perspectives that make up the psychological model, including the psychodynamic, behavioral, cognitive, and humanistic-existential perspectives.
• Sociocultural – includes factors such as one’s gender, religious orientation, race, ethnicity, and culture.
Key Takeaways
You should have learned the following in this section:
• The uni-dimensional model proposes a single factor as the cause of psychopathology while the multi-dimensional model integrates multiple causes of psychopathology and affirms that each cause comes to affect other causes over time.
• There is no individual model that completely explains human behavior and so each model contributes in its own way.
Review Questions
1. What is the problem with a uni-dimensional model of psychopathology?
2. Discuss the concept of a model and identify those important to understanding psychopathology.
Learning Objectives
• Describe how communication in the nervous system occurs.
• List the parts of the nervous system.
• Describe the structure of the neuron and all key parts.
• Outline how neural transmission occurs.
• Identify and define important neurotransmitters.
• List the major structures of the brain.
• Clarify how specific areas of the brain are involved in mental illness.
• Describe the role of genes in mental illness.
• Describe the role of hormonal imbalances in mental illness.
• Describe the role of bacterial and viral infections in mental illness.
• Describe commonly used treatments for mental illness.
• Evaluate the usefulness of the biological model.
Proponents of the biological model view mental illness as resulting from a malfunction in the body, including issues with brain anatomy or chemistry. As such, we will need to establish a foundation for how communication in the nervous system occurs, what the parts of the nervous system are, what a neuron is and its structure, how neural transmission occurs, and what the parts of the brain are. All the while, we will identify areas of concern for psychologists focused on the treatment of mental disorders.
Brain Structure and Chemistry
2.2.1.1. Communication in the nervous system. To truly understand brain structure and chemistry, it is a good idea to understand how communication occurs within the nervous system. See Figure 2.1 below. Simply:
1. Receptor cells in each of the five sensory systems detect energy.
2. This information is passed to the nervous system due to the process of transduction and through sensory or afferent neurons, which are part of the peripheral nervous system.
3. The information is received by brain structures (central nervous system) and perception occurs.
4. Once the information has been interpreted, commands are sent out, telling the body how to respond, also via the peripheral nervous system.
Figure 2.1. Communication in the Nervous System
Please note that we will not cover this process in full, but just the parts relevant to our topic of psychopathology.
2.2.1.2. The nervous system. The nervous system consists of two main parts – the central and peripheral nervous systems. The central nervous system (CNS) is the control center for the nervous system, which receives, processes, interprets, and stores incoming sensory information. It consists of the brain and spinal cord. The peripheral nervous system consists of everything outside the brain and spinal cord. It handles the CNS’s input and output and divides into the somatic and autonomic nervous systems. The somatic nervous system allows for voluntary movement by controlling the skeletal muscles and carries sensory information to the CNS. The autonomic nervous system regulates the functioning of blood vessels, glands, and internal organs such as the bladder, stomach, and heart. It consists of the sympathetic and parasympathetic nervous systems. The sympathetic nervous system is involved when a person is intensely aroused. It provides the strength to fight back or to flee (the fight-or-flight response). Eventually, the response brought about by the sympathetic nervous system must end. The parasympathetic nervous system calms the body.
Figure 2.2. The Structure of the Nervous System
2.2.1.3. The neuron. The fundamental unit of the nervous system is the neuron, or nerve cell (See Figure 2.3). It has several structures in common with all cells in the body. The nucleus is the control center of the neuron, and the soma is the cell body. In terms of distinctive structures, these focus on the ability of a neuron to send and receive information. The axon sends signals/information to neighboring neurons while the dendrites, which resemble little trees, receive information from neighboring neurons. Note the plural form of dendrite and the singular form of axon; there are many dendrites but only one axon. Also of importance to the neuron is the myelin sheath or the white, fatty covering which: 1) provides insulation so that signals from adjacent neurons do not affect one another and, 2) increases the speed at which signals are transmitted. The axon terminals are the end of the axon where the electrical impulse becomes a chemical message and passes to an adjacent neuron.
Though not neurons, glial cells play an important part in helping the nervous system to be the efficient machine that it is. Glial cells are support cells in the nervous system that serve five main functions:
1. They act as a glue and hold the neuron in place.
2. They form the myelin sheath.
3. They provide nourishment for the cell.
4. They remove waste products.
5. They protect the neuron from harmful substances.
Finally, nerves are a group of axons bundled together like wires in an electrical cable.
Figure 2.3. The Structure of the Neuron
2.2.1.4. Neural transmission. Transducers or receptor cells in the major organs of our five sensory systems – vision (the eyes), hearing (the ears), smell (the nose), touch (the skin), and taste (the tongue) – convert the physical energy that they detect or sense and send it to the brain via the neural impulse. How so? See Figure 2.4 below. We will cover this process in three parts.
Part 1. The Axon and Neural Impulse
The neural impulse proceeds across the following steps:
• Step 1 – Neurons waiting to fire are said to be in resting potential and polarized, or having a negative charge inside the neuron and a positive charge outside.
• Step 2 – If adequately stimulated, the neuron experiences an action potential and becomes depolarized. When this occurs, voltage-gated ion channels open, allowing positively charged sodium ions (Na+) to enter. This shifts the polarity to positive on the inside and negative outside. Note that ions are charged particles found both inside and outside the neuron.
• Step 3 – Once the action potential passes from one segment of the axon to the next, the previous segment begins to repolarize. This occurs because the Na+ channels close and potassium (K+) channels open. K+ ions, which carry a positive charge, flow out of the neuron, so the inside becomes negative again and the outside positive.
• Step 4 – Immediately after the neuron fires, it will not fire again no matter how much stimulation it receives. This is called the absolute refractory period. Think of it as the neuron ABSOLUTELY will not fire, no matter what.
• Step 5 – After a short time, the neuron can fire again, but needs greater than normal levels of stimulation to do so. This is called the relative refractory period.
• Step 6 – Please note that this process is cyclical. We started at resting potential in Step 1 and end at resting potential in Step 6.
Part 2. The Action Potential
Let’s look at the electrical portion of the process in another way and add some detail.
Figure 2.4. The Action Potential
• Recall that a neuron is usually at resting potential and polarized. The charge inside is -70mV at rest.
• If it receives sufficient stimulation, causing the potential inside the neuron to rise from -70 mV to -55 mV (the threshold of excitation), the neuron will fire or send an electrical impulse down the length of the axon (the action potential or depolarization). It should be noted that the neuron either reaches -55 mV and fires, or it does not fire at all. This is the all-or-nothing principle. The threshold must be reached.
• Once the electrical impulse has passed from one segment of the axon to the next, the neuron begins the process of resetting called repolarization.
• During repolarization the neuron will not fire no matter how much stimulation it receives. This is called the absolute refractory period.
• The neuron next moves into a relative refractory period, meaning it can fire but needs higher than normal levels of stimulation. Notice how the line has dropped below -70mV. Hence, to reach -55mV and fire, it will need more than the normal gain of +15mV (-70 to -55 mV).
• And then we return to resting potential, as you saw in Figure 2.4
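Because the all-or-nothing principle is essentially a threshold rule, it can be sketched in a few lines of code. This is a teaching illustration only, not a biophysical model; the function name is our own, while the -70 mV resting potential and -55 mV threshold come from the text:

```python
# A toy illustration of the all-or-nothing principle.
# Values are the textbook's: resting potential -70 mV, threshold -55 mV.

RESTING_POTENTIAL = -70  # mV, the polarized resting state
THRESHOLD = -55          # mV, the threshold of excitation

def neuron_fires(stimulation_mv):
    """Return True if stimulation raises the membrane potential
    from rest to the threshold of excitation."""
    potential = RESTING_POTENTIAL + stimulation_mv
    # The neuron either reaches threshold and fires, or it does not fire:
    # there is no partial firing, and extra stimulation does not
    # produce a stronger impulse.
    return potential >= THRESHOLD

print(neuron_fires(14))  # False: -56 mV, just short of threshold
print(neuron_fires(15))  # True: -55 mV reached, the neuron fires
print(neuron_fires(40))  # True: fires, but no "harder" than at +15 mV
```

Note how the relative refractory period fits this picture: when the potential dips below -70 mV after firing, a gain of more than the usual +15 mV is needed to reach -55 mV again.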
Part 3. The Synapse
The electrical portion of the neural impulse is just the start. The actual code passes from one neuron to another in a chemical form called a neurotransmitter. The point where this occurs is called the synapse. The synapse consists of three parts – the axon of the sending neuron, the space in between called the synaptic space, gap, or cleft, and the dendrite of the receiving neuron. Once the electrical impulse reaches the end of the axon, called the axon terminal, it stimulates synaptic vesicles or neurotransmitter sacs to release the neurotransmitter. Neurotransmitters will only bind to their specific receptor sites, much like a key will only fit into the lock it was designed for. You might say neurotransmitters are part of a lock-and-key system. What happens to the neurotransmitters that do not bind to a receptor site? They might go through reuptake, which is the process of the presynaptic neuron taking up excess neurotransmitters in the synaptic space for future use or enzymatic degradation when enzymes destroy excess neurotransmitters in the synaptic space.
2.2.1.5. Neurotransmitters. What exactly are some of the neurotransmitters which are so critical for neural transmission, and are essential to our discussion of psychopathology?
• Dopamine – controls voluntary movements and is associated with the reward mechanism in the brain
• Serotonin – regulates pain, the sleep cycle, and digestion; helps stabilize mood, so low levels are linked to depression
• Endorphins – involved in reducing pain and making the person calm and happy
• Norepinephrine – increases the heart rate and blood pressure and regulates mood
• GABA – blocks the signals of excitatory neurotransmitters responsible for anxiety and panic
• Glutamate – associated with learning and memory
The critical thing to understand here is that there is a belief in the realm of mental health that chemical imbalances are responsible for many mental disorders. Chief among these are neurotransmitter imbalances. For instance, people with Seasonal Affective Disorder (SAD) have difficulty regulating serotonin. More on this throughout the book as we discuss each disorder.
2.2.1.6. The brain. The central nervous system consists of the brain and spinal cord; the former we will discuss briefly and in terms of key structures which include:
• Medulla – regulates breathing, heart rate, and blood pressure
• Pons – acts as a bridge connecting the cerebellum and medulla and helps to transfer messages between different parts of the brain and spinal cord
• Reticular formation – responsible for alertness and attention
• Cerebellum – involved in our sense of balance and for coordinating the body’s muscles so that movement is smooth and precise. Involved in the learning of certain kinds of simple responses and acquired reflexes.
• Thalamus – the major sensory relay center for all senses except smell
• Hypothalamus – involved in drives associated with the survival of both the individual and the species. It regulates temperature by triggering sweating or shivering and controls the complex operations of the autonomic nervous system
• Amygdala – responsible for evaluating sensory information and quickly determining its emotional importance
• Hippocampus – our “gateway” to memory. Allows us to form spatial memories so that we can accurately navigate through our environment and helps us to form new memories about facts and events
• The cerebrum has four distinct regions in each cerebral hemisphere. First, the frontal lobe contains the motor cortex, which issues orders to the muscles of the body that produce voluntary movement. The frontal lobe is also involved in emotion and in the ability to make plans, think creatively, and take initiative. The parietal lobe contains the somatosensory cortex and receives information about pressure, pain, touch, and temperature from sense receptors in the skin, muscles, joints, internal organs, and taste buds. The occipital lobe contains the visual cortex for receiving and processing visual information. Finally, the temporal lobe is involved in memory, perception, and emotion. It contains the auditory cortex which processes sound.
Of course, this is not an exhaustive list of structures found in the brain but gives you a pretty good idea of function and which structure is responsible for it. What is important to mental health professionals is some disorders involve specific areas of the brain. For instance, Parkinson’s disease is a brain disorder that results in a gradual loss of muscle control and arises when cells in the substantia nigra, a long nucleus considered to be part of the basal ganglia, stop making dopamine. As these cells die, the brain fails to receive messages about when and how to move. In the case of depression, low levels of serotonin are responsible, at least partially. New evidence suggests “nerve cell connections, nerve cell growth, and the functioning of nerve circuits have a major impact on depression… and areas that play a significant role in depression are the amygdala, the thalamus, and the hippocampus.” Also, individuals with borderline personality disorder have been shown to have structural and functional changes in brain areas associated with impulse control and emotional regulation, while imaging studies reveal differences in the frontal cortex and subcortical structures for those suffering from OCD.
Check out the following from Harvard Health for more on depression and the brain as a cause: https://www.health.harvard.edu/mind-and-mood/what-causes-depression
Genes, Hormonal Imbalances, and Viral Infections
2.2.2.1. Genetic issues and explanations. DNA, or deoxyribonucleic acid, is our hereditary material. It exists in the nucleus of each cell, packaged in threadlike structures known as chromosomes, of which we have 23 pairs, or 46 in total. Twenty-two of the pairs are the same in both sexes, but the 23rd pair, called the sex chromosomes, differs between males and females. Males have an X and a Y chromosome while females have two Xs. According to the Genetics Home Reference website, part of NIH’s National Library of Medicine, a gene is “the basic physical and functional unit of heredity” (https://ghr.nlm.nih.gov/primer/basics/gene). Genes act as the instructions to make proteins, and the Human Genome Project estimates that we have between 20,000 and 25,000 genes. We all have two copies of each gene, one inherited from our mother and one from our father.
Recent research has discovered that autism, ADHD, bipolar disorder, major depression, and schizophrenia all share genetic roots. They “were more likely to have suspect genetic variation at the same four chromosomal sites. These included risk versions of two genes that regulate the flow of calcium into cells.” Likewise, twin and family studies have shown that people with first-degree relatives suffering from OCD are at higher risk to develop the disorder themselves. The same is true of borderline personality disorder.
WebMD adds, “Experts believe many mental illnesses are linked to abnormalities in many genes rather than just one or a few and that how these genes interact with the environment is unique for every person (even identical twins). That is why a person inherits a susceptibility to a mental illness and doesn’t necessarily develop the illness. Mental illness itself occurs from the interaction of multiple genes and other factors–such as stress, abuse, or a traumatic event–which can influence, or trigger, an illness in a person who has an inherited susceptibility to it” (https://www.webmd.com/mental-health/mental-health-causes-mental-illness#1).
For more on the role of genes in the development of mental illness, check out this article from Psychology Today:
https://www.psychologytoday.com/blog/saving-normal/201604/what-you-need-know-about-the-genetics-mental-disorders
2.2.2.2. Hormonal imbalances. The body has two coordinating and integrating systems, the nervous system and the endocrine system. The main difference between these two systems is the speed with which they act. The nervous system moves quickly, with nerve impulses traveling in a few hundredths of a second. The endocrine system moves slowly, with hormones, released by endocrine glands, taking seconds, or even minutes, to reach their target. Hormones are important to psychologists because they manage the nervous system and body tissues at certain stages of development and activate behaviors such as alertness or sleepiness, sexual behavior, concentration, aggressiveness, reaction to stress, and a desire for companionship. The pituitary gland is the “master gland” which regulates other endocrine glands. It influences blood pressure, thirst, contractions of the uterus during childbirth, milk production, sexual behavior and interest, body growth, the amount of water in the body’s cells, and other functions as well. The pineal gland helps regulate the sleep-wake cycle, while the thyroid gland regulates the body’s energy levels by controlling metabolism and the basal metabolic rate (BMR), and thus how energetic people are.
Of importance to mental health professionals are the adrenal glands, located on top of the kidneys, and which release cortisol to help the body deal with stress. Elevated levels of this hormone can lead to several problems, including increased weight gain, interference with learning and memory, reduced bone density, high cholesterol, and an increased risk of depression. Similarly, the overproduction of the hormone melatonin can lead to SAD.
For more on the link between cortisol and depression, check out this article:
https://www.psychologytoday.com/blog/the-athletes-way/201301/cortisol-why-the-stress-hormone-is-public-enemy-no-1
2.2.2.3. Bacterial and viral infections. Infections can cause brain damage and lead to the development of mental illness or exacerbate existing symptoms. For instance, evidence suggests that contracting strep throat, “an infection in the throat and tonsils caused by bacteria called group A Streptococcus” (for more on strep throat, please visit https://www.cdc.gov/groupastrep/diseases-public/strep-throat.html), can lead to the development of OCD, Tourette’s syndrome, and tic disorder in children (Mell, Davis, & Owens, 2005; Giedd et al., 2000; Allen et al., 1995). Influenza infections (for more on the flu, please visit https://www.mayoclinic.org/diseases-conditions/flu/symptoms-causes/syc-20351719) have also been linked to schizophrenia (Brown et al., 2004; McGrath and Castle, 1995; McGrath et al., 1994; O’callaghan et al., 1991), though more recent research suggests this evidence is weak at best (Selten & Termorshuizen, 2017; Ebert & Kotler, 2005).
Treatments
2.2.3.1. Psychopharmacology and psychotropic drugs. One option to treat severe mental illness is psychotropic medications. These medications fall under five major categories.
Antidepressants are used to treat depression, but also anxiety, insomnia, and pain. The most common types of antidepressants are SSRIs or selective serotonin reuptake inhibitors and include Citalopram, Paroxetine, and Fluoxetine (Prozac). Possible side effects include weight gain, sleepiness, nausea and vomiting, panic attacks, or thoughts about suicide or dying.
Anti-anxiety medications help with the symptoms of anxiety and include benzodiazepines such as Clonazepam, Alprazolam, and Lorazepam. “Anti-anxiety medications such as benzodiazepines are effective in relieving anxiety and take effect more quickly than the antidepressant medications (or buspirone) often prescribed for anxiety. However, people can build up a tolerance to benzodiazepines if they are taken over a long period of time and may need higher and higher doses to get the same effect.” Side effects include drowsiness, dizziness, nausea, difficulty urinating, and irregular heartbeat, to name a few.
Stimulants increase one’s alertness and attention and are frequently used to treat ADHD. They include Lisdexamfetamine, the combination of dextroamphetamine and amphetamine, and Methylphenidate. Stimulants are generally effective and produce a calming effect. Possible side effects include loss of appetite, headache, motor or verbal tics, and personality changes such as appearing emotionless.
Antipsychotics are used to treat psychosis or “conditions that affect the mind, and in which there has been some loss of contact with reality, often including delusions (false, fixed beliefs) or hallucinations (hearing or seeing things that are not really there).” They can be used to treat eating disorders, severe depression, PTSD, OCD, ADHD, and Generalized Anxiety Disorder. Common antipsychotics include Chlorpromazine, Perphenazine, Quetiapine, and Lurasidone. Side effects include nausea, vomiting, blurred vision, weight gain, restlessness, tremors, and rigidity.
Mood stabilizers are used to treat bipolar disorder and, at times, depression, schizoaffective disorder, and disorders of impulse control. A common example is Lithium; side effects include loss of coordination, hallucinations, seizures, and frequent urination.
For more information on psychotropic medications, please visit:
https://www.nimh.nih.gov/health/topics/mental-health-medications/index.shtml
The use of these drugs has been generally beneficial to patients. Most report that their symptoms decline, leading them to feel better and improve their functioning. Also, long-term hospitalizations are less likely to occur as a result, though the medications do not benefit the individual in terms of improved living skills.
2.2.3.2. Electroconvulsive therapy. According to Mental Health America, “Electroconvulsive therapy (ECT) is a procedure in which a brief application of electric stimulus is used to produce a generalized seizure.” Patients are placed on a padded bed and administered a muscle relaxant to avoid injury during the seizures. Annually, approximately 100,000 people undergo ECT to treat conditions such as severe depression, acute mania, suicidality, and some forms of schizophrenia. The procedure is still the most controversial available to mental health professionals due to “its effectiveness vs. the side effects, the objectivity of ECT experts, and the recent increase in ECT as a quick and easy solution, instead of long-term psychotherapy or hospitalization” (https://www.mhanational.org/ect). Its popularity has declined since the 1960s and 1970s.
2.2.3.3. Psychosurgery. Another option to treat mental disorders is to perform brain surgeries. In the past, we have conducted trephination and lobotomies, neither of which are used today. Today’s techniques are much more sophisticated and have been used to treat schizophrenia, depression, and some personality and anxiety disorders. However, critics cite obvious ethical issues with conducting such surgeries as well as scientific issues.
For more on psychosurgery, check out this article from Psychology Today:
https://www.psychologytoday.com/articles/199203/psychosurgery
Evaluation of the Model
The biological model is generally well respected today but suffers a few key issues. First, consider the list of side effects given for psychotropic medications. You might make the case that some of the side effects are worse than the condition they are treating. Second, the viewpoint that all human behavior is explainable in biological terms, and therefore when issues arise, they can be treated using biological methods, overlooks factors that are not fundamentally biological. More on that over the next two sections.
Key Takeaways
You should have learned the following in this section:
• Proponents of the biological model view mental illness as the result of a malfunction in the body, including issues with brain anatomy or chemistry.
• Neurotransmitter imbalances and problems with brain structures/areas can result in mental disorders.
• Many disorders have genetic roots, result from hormonal imbalances, or are caused by viral infections such as strep.
• Treatments related to the biological model include drugs, ECT, and psychosurgery.
Review Questions
1. Briefly outline how communication in the nervous system occurs.
2. What happens at the synapse during neural transmission? Why is this important to a discussion of psychopathology?
3. How is the anatomy of the brain important to a discussion of psychopathology?
4. What is the effect of genes, hormones, and viruses on the development of mental disorders?
5. What treatments are available to clinicians courtesy of the biological model of psychopathology?
6. What are some issues facing the biological model?
Learning Objectives
• Describe the psychodynamic theory.
• Outline the structure of personality and how it develops over time.
• Describe ways to deal with anxiety.
• Clarify what psychodynamic techniques are used.
• Evaluate the usefulness of psychodynamic theory.
• Describe learning.
• Outline respondent conditioning and the work of Pavlov and Watson.
• Outline operant conditioning and the work of Thorndike and Skinner.
• Outline observational learning/social-learning theory and the work of Bandura.
• Evaluate the usefulness of the behavioral model.
• Define the cognitive model.
• Exemplify the effect of schemas on creating abnormal behavior.
• Exemplify the effect of attributions on creating abnormal behavior.
• Exemplify the effect of maladaptive cognitions on creating abnormal behavior.
• List and describe cognitive therapies.
• Evaluate the usefulness of the cognitive model.
• Describe the humanistic perspective.
• Describe the existential perspective.
• Evaluate the usefulness of humanistic and existential perspectives.
Psychodynamic Theory
In 1895, the book, Studies on Hysteria, was published by Josef Breuer (1842-1925) and Sigmund Freud (1856-1939), and marked the birth of psychoanalysis, though Freud did not use this actual term until a year later. The book presented several case studies, including that of Anna O., born February 27, 1859 in Vienna to Jewish parents Siegmund and Recha Pappenheim, strict Orthodox adherents who were considered millionaires at the time. Bertha, known in published case studies as Anna O., was expected to complete the formal education typical of upper-middle-class girls, which included foreign language, religion, horseback riding, needlepoint, and piano. She felt confined and suffocated in this life and took to a fantasy world she called her “private theater.” Anna also developed hysteria, including symptoms such as memory loss, paralysis, disturbed eye movements, reduced speech, nausea, and mental deterioration. Her symptoms appeared as she cared for her dying father, and her mother called on Breuer to diagnose her condition (note that Freud never actually treated her). Hypnosis was used at first and relieved her symptoms, as it had done for many patients (See Module 1). Breuer made daily visits and allowed her to share stories from her private theater, which she came to call “talking cure” or “chimney sweeping.” Many of the stories she shared were actually thoughts or events she found troubling, and reliving them helped to relieve or eliminate the symptoms. Breuer’s wife, Mathilde, became jealous of her husband’s relationship with the young girl, leading Breuer to terminate treatment in June of 1882 before Anna had fully recovered. She relapsed and was admitted to Bellevue Sanatorium on July 1, eventually being released in October of the same year. With time, Anna O.
did recover from her hysteria and went on to become a prominent member of the Jewish Community, involving herself in social work, volunteering at soup kitchens, and becoming ‘House Mother’ at an orphanage for Jewish girls in 1895. Bertha (Anna O.) became involved in the German Feminist movement, and in 1904 founded the League of Jewish Women. She published many short stories; a play called Women’s Rights, in which she criticized the economic and sexual exploitation of women; and wrote a book in 1900 called The Jewish Problem in Galicia, in which she blamed the poverty of the Jews of Eastern Europe on their lack of education. In 1935, Bertha was diagnosed with a tumor, and in 1936, she was summoned by the Gestapo to explain anti-Hitler statements she had allegedly made. She died shortly after this interrogation on May 28, 1936. Freud considered the talking cure of Anna O. to be the origin of psychoanalytic therapy and what would come to be called the cathartic method.
2.3.1.1. The structure of personality. Freud’s psychoanalysis was unique in the history of psychology because it did not arise within universities as most major schools of thought did; rather, it emerged from medicine and psychiatry to address psychopathology and examine the unconscious. Freud believed the mind had three levels – 1) the conscious, which was the seat of our awareness; 2) the preconscious, which included all of our sensations, thoughts, memories, and feelings that could be brought into awareness; and 3) the unconscious, which was not available to us. Contents could move from the unconscious to the preconscious, but to do so, they had to pass a gatekeeper. Content that was turned away was said to be repressed.
According to Freud, our personality has three parts – the id, superego, and ego – and from these our behavior arises. First, the id is the impulsive part that expresses our sexual and aggressive instincts. It is present at birth, completely unconscious, and operates on the pleasure principle, resulting in selfishly seeking immediate gratification of our needs no matter what the cost. The second part of personality emerges after birth with early formative experiences and is called the ego. The ego attempts to mediate the desires of the id against the demands of reality, and eventually, the moral limitations or guidelines of the superego. It operates on the reality principle, or an awareness of the need to adjust behavior, to meet the demands of our environment. The last part of the personality to develop is the superego, which represents society’s expectations, moral standards, and rules. It leads us to adopt our parents’ values as we come to realize that many of the id’s impulses are unacceptable. Still, we violate these values at times and experience feelings of guilt. The superego is partly conscious but mostly unconscious, and part of it becomes our conscience. The three parts of personality generally work together well and compromise, leading to a healthy personality, but if conflicts are not resolved, intrapsychic conflicts can arise and lead to mental disorders.
Personality develops over five distinct stages in which the libido focuses on different parts of the body. First, libido is the psychic energy that drives a person to pleasurable thoughts and behaviors. Our life instincts, or Eros, are manifested through it and are the creative forces that sustain life. They include hunger, thirst, self-preservation, and sex. In contrast, Thanatos, our death instinct, is either directed inward as in the case of suicide and masochism or outward via hatred and aggression. Both types of instincts are sources of stimulation in the body and create a state of tension that is unpleasant, thereby motivating us to reduce them. Consider hunger, and the associated rumbling of our stomach, fatigue, lack of energy, etc., that motivates us to find and eat food. If we are angry at someone, we may engage in physical or relational aggression to alleviate this stimulation.
2.3.1.2. The development of personality. Freud’s psychosexual stages of personality development are listed below. Please note that a person may become fixated at any stage, meaning they become stuck, thereby affecting later development and possibly leading to abnormal functioning, or psychopathology.
1. Oral Stage – Beginning at birth and lasting to 24 months, the libido is focused on the mouth. Sexual tension is relieved by sucking and swallowing at first, and then later by chewing and biting as baby teeth come in. Fixation is linked to a lack of confidence, argumentativeness, and sarcasm.
2. Anal Stage – Lasting from 2-3 years, the libido is focused on the anus as toilet training occurs. If parents are too lenient, children may become messy or unorganized. If parents are too strict, children may become obstinate, stingy, or orderly.
3. Phallic Stage – Occurring from about age 3 to 5-6 years, the libido is focused on the genitals, and children develop an attachment to the parent of the opposite sex and are jealous of the same-sex parent. The Oedipus complex develops in boys and results in the son falling in love with his mother while fearing that his father will find out and castrate him. Meanwhile, girls fall in love with the father and fear that their mother will find out, called the Electra complex. A fixation at this stage may result in low self-esteem, feelings of worthlessness, and shyness.
4. Latency Stage – From 6-12 years of age, children lose interest in sexual behavior, so boys play with boys and girls with girls. Neither sex pays much attention to the opposite sex.
5. Genital Stage – Beginning at puberty, sexual impulses reawaken and unfulfilled desires from infancy and childhood can be satisfied during lovemaking.
2.3.1.3. Dealing with anxiety. The ego has a challenging job to fulfill, balancing both the will of the id and the superego, and the overwhelming anxiety and panic this creates. Ego-defense mechanisms are in place to protect us from this pain but are considered maladaptive if they are misused and become our primary way of dealing with stress. They protect us from anxiety and operate unconsciously by distorting reality. Defense mechanisms include the following:
• Repression – When unacceptable ideas, wishes, desires, or memories are blocked from consciousness such as forgetting a horrific car accident that you caused. Eventually, though, it must be dealt with, or the repressed memory can cause problems later in life.
• Reaction formation – When an impulse is repressed and then expressed by its opposite. For example, you are angry with your boss but cannot lash out at him, so you are super friendly instead. Another example is having lustful thoughts about a coworker that you cannot express because you are married, so you are extremely hateful to this person.
• Displacement – When we satisfy an impulse with a different object because focusing on the primary object may get us in trouble. A classic example is taking out your frustration with your boss on your wife and/or kids when you get home. If you lash out at your boss, you could be fired. The substitute target is less dangerous than the primary target.
• Projection – When we attribute threatening desires or unacceptable motives to others. An example is when we do not have the skills necessary to complete a task, but we blame the other members of our group for being incompetent and unreliable.
• Sublimation – When we find a socially acceptable way to express a desire. If we are stressed out or upset, we may go to the gym and box or lift weights. A person who desires to cut things may become a surgeon.
• Denial – Sometimes, life is so hard that all we can do is deny how bad it is. An example is denying a diagnosis of lung cancer given by your doctor.
• Identification – When we find someone who has found a socially acceptable way to satisfy their unconscious wishes and desires, and we model that behavior.
• Regression – When we move from a mature behavior to one that is infantile. If your significant other is nagging you, you might regress by putting your hands over your ears and saying, “La la la la la la la la…”
• Rationalization – When we offer well-thought-out reasons for why we did what we did, but these are not the real reason. Students sometimes rationalize not doing well in a class by stating that they really are not interested in the subject or saying the instructor writes impossible-to-pass tests.
• Intellectualization – When we avoid emotion by focusing on the intellectual aspects of a situation such as ignoring the sadness we are feeling after the death of our mother by focusing on planning the funeral.
2.3.1.4. Psychodynamic techniques. Freud used three primary assessment techniques—free association, transference, and dream analysis—as part of psychoanalysis, or psychoanalytic therapy, to understand the personalities of his patients and expose repressed material. First, free association involves the patient describing whatever comes to mind during the session. The patient continues but always reaches a point when he/she cannot or will not proceed any further. The patient might change the subject, stop talking, or lose his/her train of thought. Freud said this resistance revealed where issues persisted.
Second, transference is the process through which patients transfer attitudes he/she held during childhood to the therapist. They may be positive and include friendly, affectionate feelings, or negative, and include hostile and angry feelings. The goal of therapy is to wean patients from their childlike dependency on the therapist.
Finally, Freud used dream analysis to understand a person’s innermost wishes. The content of dreams includes the person’s actual retelling of the dreams, called manifest content, and the hidden or symbolic meaning called latent content. In terms of the latter, some symbols are linked to the person specifically, while others are common to all people.
2.3.1.5. Evaluating psychodynamic theory. Freud’s psychodynamic theory made a lasting impact on the field of psychology but has also been criticized heavily. First, Freud made most of his observations in an unsystematic, uncontrolled way, and he relied on the case study method. Second, the participants in his studies were not representative of the broader population; despite this, Freud generalized broadly from a theory based on only a few patients. Third, he relied solely on the reports of his patients and sought no observer reports. Fourth, it is difficult to empirically study psychodynamic principles since most operate unconsciously. This raises the question of how we can really know that they exist. Finally, psychoanalytic treatment is expensive and time-consuming, and since Freud’s time, drug therapies have become more popular and successful. Still, Sigmund Freud developed useful therapeutic tools for clinicians and raised awareness about the role the unconscious plays in both normal and abnormal behavior.
The Behavioral Model
2.3.2.1. What is learning? The behavioral model concerns the cognitive process of learning, which is any relatively permanent change in behavior due to experience and practice. Learning has two main forms – associative learning and observational learning. First, associative learning is the linking together of information sensed from our environment. Conditioning, or a type of associative learning, occurs when two separate events become connected. There are two forms: classical conditioning, or linking together two types of stimuli, and operant conditioning, or linking together a response with its consequence. Second, observational learning occurs when we learn by observing the world around us.
We should also note the existence of non-associative learning or when there is no linking of information or observing the actions of others around you. Types include habituation, or when we simply stop responding to repetitive and harmless stimuli in our environment such as a fan running in your laptop as you work on a paper, and sensitization, or when our reactions are increased due to a strong stimulus, such as an individual who experienced a mugging and now panics when someone walks up behind him/her on the street.
Behaviorism is the school of thought associated with learning that began in 1913 with the publication of John B. Watson’s article, “Psychology as the Behaviorist Views It,” in the journal Psychological Review (Watson, 1913). Watson believed that the subject matter of psychology was to be observable behavior, and to that end, psychology should focus on the prediction and control of behavior. Behaviorism was dominant from 1913 to 1990 before being absorbed into mainstream psychology. It went through three major stages – behaviorism proper under Watson and lasting from 1913-1930 (discussed as classical/respondent conditioning), neobehaviorism under Skinner and lasting from 1930-1960 (discussed as operant conditioning), and sociobehaviorism under Bandura and Rotter and lasting from 1960-1990 (discussed as social learning theory).
2.3.2.2. Respondent conditioning. You have likely heard about Pavlov and his dogs, but what you may not know is that this was a discovery made accidentally. Ivan Petrovich Pavlov (1906, 1927, 1928), a Russian physiologist, was interested in studying digestive processes in dogs in response to being fed meat powder. What he discovered was the dogs would salivate even before the meat powder was presented. They would salivate at the sound of a bell, footsteps in the hall, a tuning fork, or the presence of a lab assistant. Pavlov realized some stimuli automatically elicited responses (such as salivating to meat powder) and other stimuli had to be paired with these automatic associations for the animal or person to respond to it (such as salivating to a bell). Armed with this stunning revelation, Pavlov spent the rest of his career investigating the learning phenomenon.
The important thing to understand is that not all behaviors occur due to reinforcement and punishment as operant conditioning says. In the case of respondent conditioning, stimuli exert complete and automatic control over some behaviors. We see this in the case of reflexes. When a doctor strikes your knee with that little hammer, your leg extends out automatically. Another example is how a baby will root for a food source if the mother’s breast is placed near their mouth. And if a nipple is placed in their mouth, they will also automatically suck via the sucking reflex. Humans have several of these reflexes, though not as many as other animals due to our more complicated nervous system.
Respondent conditioning (also called classical or Pavlovian conditioning) occurs when we link a previously neutral stimulus with a stimulus that is unlearned or inborn, called an unconditioned stimulus. In respondent conditioning, learning happens in three phases: preconditioning, conditioning, and postconditioning. See Figure 2.5 for an overview of Pavlov’s classic experiment.
Preconditioning. Notice that preconditioning has both an A and a B panel. All this stage of learning signifies is that some learning is already present. There is no need to learn it again, as in the case of primary reinforcers and punishers in operant conditioning. In Panel A, food makes a dog salivate. This response does not need to be learned and shows the relationship of an unconditioned stimulus (UCS) yielding an unconditioned response (UCR). Unconditioned means unlearned. In Panel B, we see that a neutral stimulus (NS) produces no response. Dogs do not enter the world knowing to respond to the ringing of a bell (which they hear).
Conditioning. Conditioning is when learning occurs. By pairing a neutral stimulus and unconditioned stimulus (bell and food, respectively), the dog will learn that the bell ringing (NS) signals food coming (UCS) and salivate (UCR). The pairing must occur more than once so that needless pairings are not learned such as someone farting right before your food comes out and now you salivate whenever someone farts (…at least for a while. Eventually the fact that no food comes will extinguish this reaction but still, it will be weird for a bit).
Figure 2.5. Pavlov’s Classic Experiment
Postconditioning. Postconditioning, or after learning has occurred, establishes a new and not naturally occurring relationship of a conditioned stimulus (CS; previously the NS) and conditioned response (CR; the same response). So the dog now reliably salivates at the sound of the bell because he expects that food will follow, and it does.
Watson and Rayner (1920) conducted one of the most famous studies in psychology. Essentially, they wanted to explore “the possibility of conditioning various types of emotional response(s).” The researchers ran a series of trials in which they exposed a 9-month-old child, known as Little Albert, to a white rat. Little Albert made no response outside of curiosity (NS–NR not shown). Panel A of Figure 2.6 shows the naturally occurring response to the stimulus of a loud sound. On later trials, the rat was presented (NS) and followed closely by a loud sound (UCS; Panel B). After several conditioning trials, the child responded with fear to the mere presence of the white rat (Panel C).
Figure 2.6. Learning to Fear
As fears can be learned, so too they can be unlearned. Considered the follow-up to Watson and Rayner (1920), Jones (1924; Figure 2.7) wanted to see if a child who learned to be afraid of white rabbits (Panel B) could be conditioned to become unafraid of them. Simply, she placed the child in one end of a room and then brought in the rabbit. The rabbit was far enough away so as not to cause distress. Then, Jones gave the child some pleasant food (i.e., something sweet such as cookies [Panel C]; remember the response to the food is unlearned, i.e., Panel A). The procedure in Panel C continued with the rabbit being brought a bit closer each time until, eventually, the child did not respond with distress to the rabbit (Panel D).
Figure 2.7. Unlearning Fears
This process is called counterconditioning, or the reversal of previous learning.
Another way respondent conditioning can be used to unlearn a fear is flooding, or exposing the person to the maximum level of the stimulus; since nothing aversive occurs, the link between the CS and UCS producing the CR of fear should break, leaving the person unafraid. That is the idea, at least. So, if you were afraid of clowns, you would be thrown into a room full of clowns. Hmm….
Finally, respondent conditioning has several properties:
• Respondent Generalization – When many similar CSs or a broad range of CSs elicit the same CR. An example is the sound of a whistle eliciting salivation much the same as a ringing bell, both detected via audition.
• Respondent Discrimination – When a single CS or a narrow range of CSs elicits a CR, i.e., teaching the dog to respond to a specific bell and ignore the whistle. The whistle would not be followed by food, eventually leading to….
• Respondent Extinction – When the CS is no longer paired with the UCS. The sound of a school bell ringing (new CS that was generalized) is not followed by food (UCS), and so eventually, the dog stops salivating (the CR).
• Spontaneous Recovery – When the CS elicits the CR after extinction has occurred. Eventually, the school bell will ring, making the dog salivate. If no food comes, the behavior will not continue. If food appears, the salivation response will be re-established.
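The phases and properties above lend themselves to a toy numerical model. The sketch below is a hypothetical illustration only — the function name and the simple 0-to-1 “associative strength” update rule are assumptions of mine, not part of Pavlov’s account — but it makes conditioning (pairing strengthens the CS–CR link) and extinction (withholding the UCS weakens it) concrete:

```python
# Toy model of respondent conditioning: a conditioned response (CR) emerges
# when a neutral stimulus is repeatedly paired with an unconditioned
# stimulus (UCS), and extinguishes when the pairing stops.

def update_strength(strength, ucs_present, rate=0.3):
    """Nudge associative strength toward 1 when the UCS follows the CS,
    and toward 0 when it does not (extinction)."""
    target = 1.0 if ucs_present else 0.0
    return strength + rate * (target - strength)

strength = 0.0                      # preconditioning: NS elicits no response
for _ in range(10):                 # conditioning: bell paired with food
    strength = update_strength(strength, ucs_present=True)
print(f"after pairing: strength={strength:.2f}, salivates={strength > 0.5}")

for _ in range(10):                 # bell no longer followed by food
    strength = update_strength(strength, ucs_present=False)
print(f"after extinction: strength={strength:.2f}, salivates={strength > 0.5}")
```

Note that this simple update rule does not capture spontaneous recovery; it is included only to make the three phases and extinction tangible.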
2.3.2.3. Operant conditioning. Influential on the development of Skinner’s operant conditioning, Thorndike (1905) proposed the law of effect or the idea that if our behavior produces a favorable consequence, in the future when the same stimulus is present, we will be more likely to make the response again, expecting the same favorable consequence. Likewise, if our action leads to dissatisfaction, then we will not repeat the same behavior in the future. He developed the law of effect thanks to his work with a puzzle box. Cats were food deprived the night before the experimental procedure was to occur. The next morning, researchers placed a hungry cat in the puzzle box and set a small amount of food outside the box, just close enough to be smelled. The cat could escape the box and reach the food by manipulating a series of levers. Once free, the cat was allowed to eat some food before being promptly returned to the box. With each subsequent escape and re-insertion into the box, the cat became faster at correctly manipulating the levers. This scenario demonstrates trial and error learning or making a response repeatedly if it leads to success. Thorndike also said that stimulus and responses were connected by the organism, and this led to learning. This approach to learning was called connectionism.
Operant conditioning is a type of associative learning which focuses on consequences that follow a response or behavior that we make (anything we do or say) and whether it makes a behavior more or less likely to occur. This should sound much like what you just read about in terms of Thorndike’s work. Skinner talked about contingencies, or when one thing occurs due to another. Think of it as an If-Then statement. If I do X, then Y will happen. For operant conditioning, this means that if I make a behavior, then a specific consequence will follow. The events (response and consequence) are linked in time.
What form do these consequences take? There are two main ways they can present themselves.
• Reinforcement – Due to the consequence, a behavior/response is strengthened and more likely to occur in the future.
• Punishment – Due to the consequence, a behavior/response is weakened and less likely to occur in the future.
Reinforcement and punishment can occur as two types – positive and negative. These words have no affective connotation to them, meaning they do not imply good or bad. Positive means that you are giving something – good or bad. Negative means that something is being taken away – good or bad. Check out the figure below for how these contingencies are arranged.
Figure 2.8. Contingencies in Operant Conditioning
Let’s go through each:
• Positive Punishment (PP) – If something bad or aversive is given or added, then the behavior is less likely to occur in the future. If you talk back to your mother and she slaps your mouth, this is a PP. Your response of talking back led to the consequence of the aversive slap being given to your face. Ouch!!!
• Positive Reinforcement (PR) – If something good is given or added, then the behavior is more likely to occur in the future. If you study hard and receive an A on your exam, you will be more likely to study hard in the future. Similarly, your parents may give you money for your stellar performance. Cha Ching!!!
• Negative Reinforcement (NR) – This is a tough one for students to comprehend because the terms seem counterintuitive, even though we experience NR all the time. NR is when something bad or aversive is taken away or subtracted due to your actions, making it more likely that you will make the same behavior in the future when the same stimulus presents itself. For instance, what do you do if you have a headache? If you take Tylenol and the pain goes away, you will likely take Tylenol in the future when you have a headache. NR can either result in current escape behavior or future avoidance behavior. What does this mean? Escape occurs when we are presently experiencing an aversive event and want it to end. We make a behavior, and if the aversive event, like the headache, goes away, we will repeat the taking of Tylenol in the future. This future action is an avoidance event. We might start to feel a headache coming on and run to take Tylenol right away. By doing so, we have removed the possibility of the aversive event occurring, and this behavior demonstrates that learning has occurred.
• Negative Punishment (NP) – This is when something good is taken away or subtracted, making a behavior less likely in the future. If you are late to class and your professor deducts 5 points from your final grade (the points are something good and the loss is negative), you will hopefully be on time in all subsequent classes.
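The four contingencies above reduce to a 2×2 grid: was a stimulus given or taken away, and was that stimulus pleasant or aversive? A minimal sketch — the function name and boolean encoding are my own, purely for illustration — maps those two choices to the predicted effect on behavior:

```python
# Classify an operant contingency from its two defining questions.
def classify_contingency(stimulus_given: bool, stimulus_pleasant: bool) -> str:
    if stimulus_given and stimulus_pleasant:
        return "positive reinforcement (behavior more likely)"
    if stimulus_given and not stimulus_pleasant:
        return "positive punishment (behavior less likely)"
    if not stimulus_given and stimulus_pleasant:
        return "negative punishment (behavior less likely)"
    return "negative reinforcement (behavior more likely)"

# Talking back -> slap: something aversive is given.
print(classify_contingency(stimulus_given=True, stimulus_pleasant=False))
# Headache -> Tylenol removes the pain: something aversive is taken away.
print(classify_contingency(stimulus_given=False, stimulus_pleasant=False))
```

Notice that “positive/negative” tracks only the given/taken-away axis, while “reinforcement/punishment” tracks the effect on the behavior.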
The type of reinforcer or punisher we use is crucial. Some are naturally occurring, while others need to be learned. We describe these as primary and secondary reinforcers and punishers. Primary refers to reinforcers and punishers that have their effect without having to be learned. Food, water, temperature, and sex, for instance, are primary reinforcers, while extreme cold or heat or a punch on the arm are inherently punishing. A story will illustrate the latter. When I was about eight years old, I would walk up the street in my neighborhood, saying, “I’m Chicken Little and you can’t hurt me.” Most ignored me, but some gave me the attention I was seeking, a positive reinforcer. So I kept doing it and doing it until one day, another kid grew tired of hearing about my other identity and punched me in the face. The pain was enough that I never walked up and down the street echoing my identity crisis for all to hear. This was a positive punisher that did not have to be learned, and definitely not one of my finer moments in life.
Secondary or conditioned reinforcers and punishers are not inherently reinforcing or punishing but must be learned. An example was the attention I received for saying I was Chicken Little. Over time I learned that attention was good. Other examples of secondary reinforcers include praise, a smile, getting money for working or earning good grades, stickers on a board, points, getting to go out dancing, and getting out of an exam if you are doing well in a class. Examples of secondary punishers include a ticket for speeding, losing television or video game privileges, ridicule, or a fee for paying your rent or credit card bill late. Really, the sky is the limit with reinforcers in particular.
In operant conditioning, the rule for determining when and how often we will reinforce a desired behavior is called the reinforcement schedule. Reinforcement can occur continuously, meaning every time the desired behavior is made the subject receives a reinforcer, or intermittently/partially, meaning reinforcement does not occur with every behavior. Our focus will be on partial/intermittent reinforcement.
Figure 2.9. Key Components of Reinforcement Schedules
Figure 2.9 shows that there are two main components that make up a reinforcement schedule – when you will reinforce and what is being reinforced. In the case of when, it will be either fixed (at a set rate) or variable (at a rate that changes). In terms of what is being reinforced, we will either reinforce responses or time. These two components pair up as follows:
• Fixed Ratio schedule (FR) – With this schedule, we reinforce some set number of responses. For instance, every twenty problems (fixed) a student gets correct (ratio), the teacher gives him an extra credit point. A specific behavior is being reinforced – getting problems correct. Note that if we reinforce each occurrence of the behavior, the definition of continuous reinforcement, we could also describe this as an FR1 schedule. The number indicates how many responses have to be made, and in this case, it is one.
• Variable Ratio schedule (VR) – We might decide to reinforce some varying number of responses, such as if the teacher gives him an extra credit point after finishing between 40 and 50 correct problems. This approach is useful if the student is learning the material and does not need regular reinforcement. Also, since the schedule changes, the student will keep responding in the absence of reinforcement.
• Fixed Interval schedule (FI) – With an FI schedule, you will reinforce after some set amount of time. Let’s say a company wanted to hire someone to sell their product. To attract someone, they could offer to pay them $10 an hour, 40 hours a week, and deliver this money every two weeks. Crazy idea, but it could work. Saying the person will be paid every two weeks indicates fixed, and two weeks is an amount of time, or an interval. So, FI.
• Variable Interval schedule (VI) – Finally, you could reinforce someone after some changing amount of time. Maybe they receive payment on Friday one week, then three weeks later on Monday, then two days later on Wednesday, then eight days later on Thursday, etc. This could work, right? Not for a job, perhaps, but whenever reinforcers arrive after changing, unpredictable amounts of time, we are being reinforced on a VI schedule.
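The four schedules above pair up mechanically, so a short simulation can make the contrast concrete. The sketch below is my own illustration, not part of the original text; the class and variable names are invented for the example. It models the two ratio schedules in Python; the interval schedules would work the same way but would track elapsed time instead of a response count:

```python
import random

class FixedRatio:
    """FR-n: deliver a reinforcer after every n-th response."""
    def __init__(self, n):
        self.n = n
        self.count = 0

    def respond(self):
        self.count += 1
        if self.count >= self.n:
            self.count = 0
            return True   # reinforcer delivered
        return False

class VariableRatio:
    """VR: the number of responses required varies between low and high,
    re-drawn after each reinforcer, so the subject cannot predict it."""
    def __init__(self, low, high, seed=0):
        self.rng = random.Random(seed)
        self.low, self.high = low, high
        self.count = 0
        self.target = self.rng.randint(low, high)

    def respond(self):
        self.count += 1
        if self.count >= self.target:
            self.count = 0
            self.target = self.rng.randint(self.low, self.high)
            return True
        return False

# FR-20, like the extra-credit example: every 20th correct problem pays off.
fr = FixedRatio(20)
reinforced_on = [i + 1 for i in range(60) if fr.respond()]
print(reinforced_on)  # [20, 40, 60]
```

Notice that under the VR schedule the learner cannot tell which response will pay off, which is exactly why responding persists so stubbornly even when reinforcement is temporarily absent.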
Finally, four properties of operant conditioning – extinction, spontaneous recovery, stimulus generalization, and stimulus discrimination – are important. These are the same four discussed under respondent conditioning. First, extinction occurs when something that we do, say, or think/feel has not been reinforced for some time. As you might expect, the behavior will begin to weaken and eventually stop when this occurs. Does extinction happen as soon as the anticipated reinforcer is removed? The answer is yes and no, depending on whether we are talking about continuous or partial reinforcement. With which type of schedule would you expect a person to stop responding immediately if reinforcement is not there? Continuous or partial?
The answer is continuous. If a person is used to receiving reinforcement every time they perform a particular behavior, and then suddenly no reinforcer is delivered, he or she will cease the response immediately. Obviously then, with partial, a response continues being made for a while. Why is this? The person may think the schedule has simply changed. ‘Maybe I am not paid weekly now. Maybe it changed to biweekly and I missed the email.’ Due to this endurance, we say that intermittent or partial reinforcement shows resistance to extinction, meaning the behavior does weaken, but gradually.
As you might expect, if reinforcement occurs after extinction has started, the behavior will re-emerge. Consider your parents for a minute. To stop some undesirable behavior of yours in the past, they likely took away some privilege. I bet the bad behavior ended, too. But did you ever go to your grandparents’ house, where grandma or grandpa—or worse, BOTH—took pity on you and let you play your video games (or something equivalent)? I know my grandmother used to. What happened to that bad behavior that had disappeared? Did it start again, leaving your parents unable to figure out why?
Additionally, you might have wondered if the person or animal will try to make the response again in the future even though it stopped being reinforced in the past. The answer is yes, and one of two outcomes is possible. First, the response is made, and nothing happens. In this case, extinction continues. Second, the response is made, and a reinforcer is delivered. The response re-emerges. Consider a rat trained to push a lever to receive a food pellet. If we stop providing the food pellets, in time, the rat will stop pushing the lever. If the rat pushes the lever again sometime in the future and food is delivered, the behavior spontaneously recovers. Hence, this phenomenon is called spontaneous recovery.
2.3.2.4. Observational learning. There are times when we learn by simply watching others. This is called observational learning and is contrasted with enactive learning, which is learning by doing. Unlike enactive learning, observational learning involves no firsthand experience by the learner. Just as you can learn desirable behaviors, such as by watching how your father bags groceries at the grocery store (I did this and still bag the same way today), you can learn undesirable ones too. If your parents resort to alcohol consumption to deal with the stressors life presents, then you might do the same. The critical part is what happens to the person modeling the behavior. If my father seems genuinely happy and pleased with himself after bagging groceries his way, then I will be more likely to adopt this behavior. If my mother or father consumes alcohol to feel better when things are tough, and it works, then I might do the same. On the other hand, if we see a sibling constantly getting in trouble with the law, then we may not model this behavior due to the negative consequences.
Albert Bandura conducted pivotal research on observational learning, and you likely already know all about it. Check out Figure 2.10 to see if you do. In Bandura’s experiment, children were first brought into a room to watch a video of an adult playing nicely or aggressively with a Bobo doll, which provided a model. Next, the children were placed in a room with several toys in it. The room contained a highly prized toy, but they were told they could not play with it. All other toys were allowed, including a Bobo doll. Children who watched the aggressive model behaved aggressively with the Bobo doll, while those who saw the gentle model played nicely. Both groups were frustrated when deprived of the coveted toy.
Figure 2.10. Bandura’s Classic Experiment
According to Bandura, all behaviors are learned by observing others, and we model our actions after theirs, so undesirable behaviors can be altered or relearned in the same way. Modeling techniques change behavior by having subjects observe a model in a situation that usually causes them some anxiety. By seeing the model interact nicely with the fear-evoking stimulus, their fear should subside. This form of behavior therapy is widely used in clinical, business, and classroom situations. In the classroom, we might use modeling to demonstrate to a student how to do a math problem. In fact, in many college classrooms, this is exactly what the instructor does. In the business setting, a model or trainer demonstrates how to use a computer program or run a register for a new employee.
However, keep in mind that we do not model everything we see. Why? First, we cannot pay attention to everything going on around us. We are more likely to model behaviors performed by someone who commands our attention. Second, we must remember what a model does in order to imitate it. If a behavior is not memorable, it will not be imitated. Third, we must be able to convert what we see into action. Finally, if we are not motivated to perform an observed behavior, we probably will not show what we have learned.
2.3.2.5. Evaluating the behavioral model. Within the context of psychopathology, the behavioral perspective is useful because it explains maladaptive behavior in terms of learning gone awry. The good thing is that what is learned can be unlearned or relearned through behavior modification, the process of changing behavior. To begin, an applied behavior analyst identifies a target behavior, or behavior to be changed; defines it; works with the client to develop goals; and conducts a functional assessment to understand what the undesirable behavior is, what causes it, and what maintains it. With this knowledge, a plan is developed that consists of numerous strategies to act on one or all of these elements – antecedent, behavior, and/or consequence. The strategies arise from all three learning models. In terms of operant conditioning, strategies include antecedent manipulations, prompts, punishment procedures, differential reinforcement, habit reversal, shaping, and programming. Flooding and desensitization are typical respondent conditioning procedures used with phobias, while modeling arises from social learning theory and observational learning. Watson and Skinner defined behavior as what we do or say, but later behaviorists added what we think or feel. In terms of the latter, cognitive behavior modification procedures arose after the 1960s with the rise of cognitive psychology. This led to a cognitive-behavioral perspective that combines concepts from the behavioral and cognitive models, the latter discussed in the next section.
Critics of the behavioral perspective point out that it oversimplifies behavior and often ignores inner determinants of behavior. Behaviorism has also been accused of being mechanistic and seeing people as machines. This criticism would be true of behaviorism’s first two stages, though sociobehaviorism steered away from this proposition and even fought against any mechanistic leanings of behaviorists.
The greatest strength or appeal of the behavioral model is that its tenets are easily tested in the laboratory, unlike those of the psychodynamic model. Also, many treatment techniques have been developed and proven effective over the years. For example, desensitization (Wolpe, 1997) teaches clients to respond calmly to fear-producing stimuli. It begins with the individual learning a relaxation technique such as diaphragmatic breathing. Next, a fear hierarchy, or list of feared objects and situations, is constructed in which the individual moves from least to most feared. Finally, the individual either imagines (systematic) or experiences in real life (in vivo) each object or scenario from the hierarchy and uses the relaxation technique while doing so. This produces individual pairings of a feared object or situation with relaxation. So, if there are 10 objects/situations in the list, the client will experience ten such pairings and eventually be able to face each without fear. Outside of phobias, desensitization has been shown to be effective in the treatment of Obsessive-Compulsive Disorder symptoms (Hakimian and Souza, 2016) and, to a limited extent, in the treatment of depression when comorbid with OCD (Masoumeh and Lancy, 2016).
The Cognitive Model
2.3.3.1. What is it? As noted earlier, the idea of people being machines, called mechanism, was a key feature of behaviorism and other schools of thought in psychology until about the 1960s or 1970s. In fact, behaviorism said psychology was to be the study of observable behavior. Any reference to cognitive processes was dismissed, as these were not overt but covert, according to Watson and later Skinner. Of course, removing cognition from the study of psychology ignored an important part of what makes us human and separates us from the rest of the animal kingdom. Fortunately, the work of George Miller, Albert Ellis, Aaron Beck, and Ulric Neisser demonstrated the importance of cognitive abilities in understanding thoughts, behaviors, and emotions and, in the case of psychopathology, showed that people can create their own problems by how they come to interpret events experienced in the world around them. How so?
2.3.3.2. Schemas and cognitive errors. First, consider the topic of social cognition, or the process of collecting and assessing information about others. What do we do with this information? Once collected or sensed (sensation is the cognitive process of detecting the physical energy given off, or emitted, by physical objects), the information is sent to the brain via a neural impulse. Once in the brain, it is processed and interpreted. This is where assessing information about others comes in, and it involves the cognitive process of perception, or adding meaning to raw sensory data. We take the information just detected and use it to assign people to categories, or groups. For each category, we have a schema, or a set of beliefs and expectations about a group of people, presumed to apply to all members of the group and based on experience.
Can our schemas lead us astray or be false? Consider where students sit in a class. It is generally understood that the students who sit in the front of the class are the overachievers and want to earn an A in the class. Those who sit in the back of the room are underachievers who don’t care. Right? Where do you sit in class, if you are on a physical campus and not an online student? Is this correct? What about other students in the class that you know? What if you found out that a friend who sits in the front row is a C student but sits there because he cannot see the screen or board, even with corrective lenses? What about your friend or acquaintance in the back? This person is an A student but does not like being right under the nose of the professor, especially if he/she tends to spit when lecturing. The person in the back could also be shy and prefer sitting there so that s/he does not need to chat with others as much. Or they could be easily distracted and sit in the back so that all stimuli are in front of them. In these cases, your schema about front-row and back-row students is incorrect and causes you to make certain assumptions about these individuals. This might even affect how you interact with them. Would you want notes from the student in the front or the back of the class?
2.3.3.3. Attributions and cognitive errors. Second, consider attribution theory, a very interesting topic in social psychology. It holds that people are motivated to explain their own and other people’s behavior by attributing its causes either to dispositional factors (personal reasons internal to the person or linked to some trait they have) or to situational factors (causes linked to something outside the person). Like schemas, the attributions we make can lead us astray. How so? The fundamental attribution error occurs when we automatically assume a dispositional reason for another person’s actions and ignore situational factors. In other words, we assume the person who cut us off is an idiot (dispositional) and do not consider that maybe someone in the car is severely injured and this person is rushing them to the hospital (situational). Then there is the self-serving bias, which is when we attribute our successes to our own efforts (dispositional) and our failures to external causes (situational). Our attributions in these two cases are in error, but still, they come to affect how we see the world and our subjective well-being.
2.3.3.4. Maladaptive cognitions. Irrational thought patterns can be the basis of psychopathology. Throughout this book, we will discuss several treatment strategies used to change unwanted, maladaptive cognitions, whether they are present as an excess such as with paranoia, suicidal ideation, or feelings of worthlessness; or as a deficit such as with self-confidence and self-efficacy. More specifically, cognitive distortions/maladaptive cognitions can take the following forms:
• Overgeneralizing – You see a larger pattern of negatives based on one event.
• Mind Reading – Assuming others know what you are thinking without any evidence.
• What if? – Asking yourself ‘what if something happens,’ without being satisfied by any of the answers.
• Blaming – You focus on someone else as the source of your negative feelings and do not take any responsibility for changing yourself.
• Personalizing – Blaming yourself for adverse events rather than seeing the role that others play.
• Inability to disconfirm – Ignoring any evidence that may contradict your maladaptive cognition.
• Regret orientation – Focusing on what you could have done better in the past rather than on improving now.
• Dichotomous thinking – Viewing people or events in all-or-nothing terms.
2.3.3.5. Cognitive therapies. According to the National Alliance on Mental Illness (NAMI), cognitive behavioral therapy “focuses on exploring relationships among a person’s thoughts, feelings and behaviors. During CBT a therapist will actively work with a person to uncover unhealthy patterns of thought and how they may be causing self-destructive behaviors and beliefs.” CBT attempts to identify negative or false beliefs and restructure them. They add, “Oftentimes someone being treated with CBT will have homework in between sessions where they practice replacing negative thoughts with more realistic thoughts based on prior experiences or record their negative thoughts in a journal.” For more on CBT, visit: https://www.nami.org/About-Mental-Illness/Treatments/Psychotherapy. Some commonly used strategies include cognitive restructuring, cognitive coping skills training, and acceptance techniques.
First, you can use cognitive restructuring, also called rational restructuring, in which maladaptive cognitions are replaced with more adaptive ones. To do this, the client must be aware of the distressing thoughts, when they occur, and their effect on them. Next, help the client stop thinking these thoughts and replace them with more rational ones. It’s a simple strategy, but an important one. Psychology Today published a great article on January 21, 2013, which described four ways to change your thinking through cognitive restructuring. Briefly, these included:
1. Notice when you are having a maladaptive cognition, such as making “negative predictions.” Figure out what is the worst thing that could happen and what alternative outcomes are possible.
2. Track the accuracy of the thought. If you believe focusing on a problem generates a solution, then write down each time you ruminate and the result. You can then calculate the percentage of rumination episodes that actually produced a successful problem-solving strategy.
3. Behaviorally test your thought. Try figuring out if you genuinely do not have time to go to the gym by recording what you do each day and then look at open times of the day. Add them up and see if making some minor, or major, adjustments to your schedule will free an hour to get in some valuable exercise.
4. Examine the evidence both for and against your thought. If you do not believe you do anything right, list evidence of when you did not do something right and then evidence of when you did. Then write a few balanced statements such as the one the article suggests, “I’ve made some mistakes that I feel embarrassed about, but a lot of the time, I make good choices.”
The article also suggested a few non-cognitive restructuring techniques, including mindfulness meditation and self-compassion. For more on these, visit: https://www.psychologytoday.com/blog/in-practice/201301/cognitive-restructuring
The second major CBT strategy is called cognitive coping skills training. This strategy teaches social skills, communication, and assertiveness through direct instruction, role-playing, and modeling. For social skills training, identify the appropriate social behavior, such as making eye contact, saying no to a request, or starting up a conversation with a stranger, and determine whether the client is inhibited from engaging in this behavior due to anxiety. For communication, decide if the problem is related to speaking, listening, or both, and then develop a plan for use in various interpersonal situations. Finally, assertiveness training aids the client in protecting their rights and obtaining what they want from others. Those who are not assertive are often overly passive and never get what they want, or are unreasonably aggressive and only get what they want. Treatment starts with determining situations in which assertiveness is lacking and developing a hierarchy of assertiveness opportunities. Least difficult situations are handled first, followed by more difficult ones, all while rehearsing and mastering all the situations present in the hierarchy. For more on these techniques, visit http://cogbtherapy.com/cognitive-behavioral-therapy-exercises/.
Finally, acceptance techniques help reduce a client’s worry and anxiety. Life involves a degree of uncertainty, and at times we must accept this. Techniques might include weighing the pros and cons of fighting uncertainty or change. The disadvantages should outweigh the advantages and help you to end the struggle and accept what is unknown. Chances are you are already accepting the unknown in some areas of life and identifying these can help you to see why it is helpful in these areas, and how you can apply this in more difficult areas. Finally, does uncertainty always lead to a negative end? We may think so, but a review of the evidence for and against this statement will show that it does not and reduce how threatening it seems.
2.3.3.6. Evaluating the cognitive model. The cognitive model made up for an apparent deficit in the behavioral model – overlooking the role cognitive processes play in our thoughts, feelings, and behaviors. Right before his death, Skinner (1990) reminded psychologists that the only thing we can truly know and study is the observable, and argued that cognitive processes cannot be empirically and reliably measured and so should be ignored. Is there merit to this view? Social desirability refers to the fact that participants sometimes do not tell us the truth about what they are thinking, feeling, or doing (or have done) because they do not want us to think less of them or to judge them harshly if they are outside the social norm. In other words, they present themselves in a favorable light. If this is true, how can we know anything about controversial matters? The person’s true intentions, thoughts, and feelings are not readily available to us, or are covert, and do not make for useful empirical data. Still, cognitive-behavioral therapies have proven their efficacy for the treatment of OCD (McKay et al., 2015), perinatal depression (Sockol, 2015), insomnia (de Bruin et al., 2015), bulimia nervosa (Poulsen et al., 2014), hypochondriasis (Olatunji et al., 2014), and social anxiety disorder (Leichsenring et al., 2014), to name a few. Other examples will be discussed throughout this book.
The Humanistic and Existential Perspectives
2.3.4.1. The humanistic perspective. The humanistic perspective, or third force psychology (psychoanalysis and behaviorism being the other two forces), emerged in the 1960s and 1970s as an alternative viewpoint to the largely deterministic view of personality espoused by psychoanalysis and the view of humans as machines advocated by behaviorism. Key features of the perspective include a belief in human perfectibility, personal fulfillment, valuing self-disclosure, placing feelings over intellect, an emphasis on the present, and hedonism. Its key figures were Abraham Maslow, who proposed the hierarchy of needs, and Carl Rogers, who we will focus on here.
Rogers said that all people want to have positive regard from significant others in their life. When individuals are accepted as they are, they receive unconditional positive regard and become fully functioning persons. They are open to experience, live every moment to the fullest, are creative, accept responsibility for their decisions, do not derive their sense of self from others, strive to maximize their potential, and are self-actualized. Their family and friends may disapprove of some of their actions but overall, respect and love them. They then realize their worth as persons but also that they are not perfect. Of course, most people do not experience this but instead are made to feel that they can only be loved and respected if they meet certain standards, called conditions of worth. Hence, they experience conditional positive regard. Their self-concept becomes distorted, now seen as having worth only when these significant others approve, leading to a disharmonious state and psychopathology. Individuals in this situation are unsure of what they feel, value, or need, leading to dysfunction and the need for therapy. Rogers stated that the humanistic therapist should be warm, understanding, supportive, respectful, and accepting of his/her clients. This approach came to be called client-centered therapy.
2.3.4.2. The existential perspective. This approach stresses the need for people to re-create themselves continually and be self-aware, acknowledges that anxiety is a normal part of life, focuses on free will and self-determination, emphasizes that each person has a unique identity known only through relationships and the search for meaning, and finally, that we develop to our maximum potential. Abnormal behavior arises when we avoid making choices, do not take responsibility, and fail to actualize our full potential. Existential therapy is used to treat substance abuse, “excessive anxiety, apathy, alienation, nihilism, avoidance, shame, addiction, despair, depression, guilt, anger, rage, resentment, embitterment, purposelessness, psychosis, and violence. They also focus on life-enhancing experiences like relationships, love, caring, commitment, courage, creativity, power, will, presence, spirituality, individuation, self-actualization, authenticity, acceptance, transcendence, and awe.” For more information, please visit: https://www.psychologytoday.com/therapy-types/existential-therapy
2.3.4.3. Evaluating the humanistic and existential perspectives. The biggest criticism of these models is that the concepts are abstract and fuzzy and so very difficult to research. Rogers did try to investigate his propositions scientifically, but most other humanistic-existential psychologists rejected the use of the scientific method. They also have not developed much in the way of theory, and the perspectives tend to work best with people suffering from adjustment issues and not as well with severe mental illness. The perspectives do offer hope to people suffering tragedy by asserting that we control our destiny and can make our own choices.
Key Takeaways
You should have learned the following in this section:
• According to Freud, consciousness had three levels (consciousness, preconscious, and the unconscious), personality had three parts (the id, ego, and superego), personality developed over five stages (oral, anal, phallic, latency, and genital), there are ten defense mechanisms to protect the ego such as repression and sublimation, and finally three assessment techniques (free association, transference, and dream analysis) could be used to understand the personalities of his patients and expose repressed material.
• The behavioral model concerns the cognitive process of learning, which is any relatively permanent change in behavior due to experience and practice and has two main forms – associative learning to include classical and operant conditioning and observational learning.
• Respondent conditioning (also called classical or Pavlovian conditioning) occurs when we link a previously neutral stimulus with a stimulus that is unlearned or inborn, called an unconditioned stimulus.
• Operant conditioning is a type of associative learning which focuses on consequences that follow a response or behavior that we make (anything we do, say, or think/feel) and whether they make a behavior more or less likely to occur.
• Observational learning is learning by watching others and modeling techniques change behavior by having subjects observe a model in a situation that usually causes them some anxiety.
• The cognitive model focuses on schemas, cognitive errors, attributions, and maladaptive cognitions and offers strategies such as CBT, cognitive restructuring, cognitive coping skills training, and acceptance.
• The humanistic perspective focuses on positive regard, conditions of worth, and the fully functioning person while the existential perspective stresses the need for people to re-create themselves continually and be self-aware, acknowledges that anxiety is a normal part of life, focuses on free will and self-determination, emphasizes that each person has a unique identity known only through relationships and the search for meaning, and finally, that we develop to our maximum potential.
Review Questions
1. What are the three parts of personality according to Freud?
2. What are the five psychosexual stages according to Freud?
3. List and define the ten defense mechanisms proposed by Freud.
4. What are the three assessment techniques used by Freud?
5. What is learning and what forms does it take?
6. Describe respondent conditioning.
7. Describe operant conditioning.
8. Describe observational learning and modeling.
9. How does the cognitive model approach psychopathology?
10. How does the humanistic perspective approach psychopathology?
11. How does the existential perspective approach psychopathology?
Learning Objectives
• Describe the sociocultural model.
• Clarify how socioeconomic factors affect mental illness.
• Clarify how gender factors affect mental illness.
• Clarify how environmental factors affect mental illness.
• Clarify how multicultural factors affect mental illness.
• Evaluate the sociocultural model.
Outside of biological and psychological factors on mental illness, race, ethnicity, gender, religious orientation, socioeconomic status, sexual orientation, etc. also play a role, and this is the basis of the sociocultural model. How so? We will explore a few of these factors in this section.
Socioeconomic Factors
Low socioeconomic status has been linked to higher rates of mental and physical illness (Ng, Muntaner, Chung, & Eaton, 2014) due to persistent concern over unemployment or under-employment, low wages, lack of health insurance, no savings, and the inability to put food on the table, which then leads to feelings of hopelessness, helplessness, and dependency on others. This situation places considerable stress on an individual and can lead to higher rates of anxiety disorders and depression. Borderline personality disorder has also been found to be more common among people in low-income brackets (Tomko et al., 2012), and group differences in personality disorders have been found between African and European Americans (Ryder, Sunohara, & Kirmayer, 2015).
Gender Factors
Gender plays an important, though at times, unclear role in mental illness. Gender is not a cause of mental illness, though differing demands placed on males and females by society and their culture can influence the development and course of a disorder. Consider the following:
• Rates of eating disorders are higher among women than men, though both genders are affected. In the case of men, muscle dysphoria is of concern and is characterized by extreme concern over being more muscular.
• OCD has an earlier age of onset in girls than boys, with most people being diagnosed by age 19.
• Females are at higher risk for developing an anxiety disorder than men.
• ADHD is more common in males than females, though females are more likely to have inattention issues.
• Boys are more likely to be diagnosed with Autism Spectrum Disorder.
• Depression occurs with greater frequency in women than men.
• Women are more likely to develop PTSD compared to men.
• Rates of SAD (Seasonal Affective Disorder) are four times greater in women than men. Interestingly, younger adults are more likely to develop SAD than older adults.
Consider this…
In relation to men: “While mental illnesses affect both men and women, the prevalence of mental illnesses in men is often lower than women. Men with mental illnesses are also less likely to have received mental health treatment than women in the past year. However, men are more likely to die by suicide than women, according to the Centers for Disease Control and Prevention. Recognizing the signs that you or someone you love may have a mental disorder is the first step toward getting treatment. The earlier that treatment begins, the more effective it can be.”
https://www.nimh.nih.gov/health/topics/men-and-mental-health/index.shtml
In relation to women: “Some disorders are more common in women such as depression and anxiety. There are also certain types of disorders that are unique to women. For example, some women may experience symptoms of mental disorders at times of hormone change, such as perinatal depression, premenstrual dysphoric disorder, and perimenopause-related depression. When it comes to other mental disorders such as schizophrenia and bipolar disorder, research has not found differences in the rates at which men and women experience these illnesses. But women may experience these illnesses differently – certain symptoms may be more common in women than in men, and the course of the illness can be affected by the sex of the individual. Researchers are only now beginning to tease apart the various biological and psychosocial factors that may impact the mental health of both women and men.”
https://www.nimh.nih.gov/health/topics/women-and-mental-health/index.shtml
Environmental Factors
Environmental factors also play a role in the development of mental illness. How so?
• In the case of borderline personality disorder, many people report experiencing traumatic life events such as abandonment, abuse, unstable relationships or hostility, and adversity during childhood.
• Cigarette smoking, alcohol use, and drug use during pregnancy are risk factors for ADHD.
• Divorce or the death of a spouse can lead to anxiety disorders.
• Trauma, stress, and other extreme stressors are predictive of depression.
• Malnutrition before birth, exposure to viruses, and other psychosocial factors are potential causes of schizophrenia.
• SAD occurs with greater frequency for those living far north or south from the equator (Melrose, 2015). Horowitz (2008) found that rates of SAD are just 1% for those living in Florida while 9% of Alaskans are diagnosed with the disorder.
Multicultural Factors
Racial, ethnic, and cultural factors are also relevant to understanding the development and course of mental illness. Multicultural psychologists assert that both normal behavior and abnormal behavior need to be understood in the context of the individual’s unique culture and the group’s value system. Racial and ethnic minorities must contend with prejudice, discrimination, racism, economic hardships, etc. as part of their daily life and this can lead to disordered behavior (Lo & Cheng, 2014; Jones, Cross, & DeFour, 2007; Satcher, 2001), though some research suggests that ethnic identity can buffer against these stressors and protect mental health (Mossakowski, 2003). To address this unique factor, culture-sensitive therapies have been developed and include increasing the therapist’s awareness of cultural values, hardships, stressors, and/or prejudices faced by their client; the identification of suppressed anger and pain; and raising the client’s self-worth (Prochaska & Norcross, 2013). These therapies have proven efficacy for the treatment of depression (Kalibatseva & Leong, 2014) and schizophrenia (Naeem et al., 2015).
Evaluation of the Model
The sociocultural model has contributed significantly to our understanding of the nuances of mental illness diagnosis, prognosis, course, and treatment across races, cultures, genders, and ethnicities. In Module 3, we will discuss diagnosing and classifying abnormal behavior from the perspective of the DSM-5-TR (Diagnostic and Statistical Manual of Mental Disorders, 5th edition, Text-Revision). Important here is that specific culture- and gender-related diagnostic issues are discussed for each disorder, demonstrating increased awareness of the impact of these factors. Still, the sociocultural model suffers from unclear findings that do not permit the establishment of causal relationships, a reliance on qualitative data gathered from case studies and ethnographic analyses (one such example is Zafra, 2016), and an inability to make predictions about abnormal behavior for individuals.
Key Takeaways
You should have learned the following in this section:
• The sociocultural model asserts that race, ethnicity, gender, religious orientation, socioeconomic status, and sexual orientation all play a role in the development and treatment of mental illness.
Review Questions
1. How do socioeconomic, gender, environmental, and multicultural factors affect mental illness and its treatment?
2. How effective is the sociocultural model at explaining psychopathology and its treatment?
Module Recap
In Module 2, we first distinguished uni- and multi-dimensional models of abnormality and made a case that the latter was better to subscribe to. We then discussed biological, psychological, and sociocultural models of abnormality. In terms of the biological model, neurotransmitters, brain structures, hormones, genes, and viral infections were identified as potential causes of mental illness and three treatment options were given. In terms of psychological perspectives, Freud’s psychodynamic theory; the learning-related research of Watson, Skinner, and Bandura and Rotter; the cognitive model; and the humanistic and existential perspectives were discussed. Finally, the sociocultural model indicated the role of socioeconomic, gender, environmental, and multicultural factors on abnormal behavior.
Learning Objectives
• Describe clinical assessment and methods used in it.
• Clarify how mental health professionals diagnose mental disorders in a standardized way.
• Discuss reasons to seek treatment and the importance of psychotherapy.
Module 3 covers the issues of clinical assessment, diagnosis, and treatment. We will define assessment and then describe key issues such as reliability, validity, standardization, and specific methods that are used. In terms of clinical diagnosis, we will discuss the two main classification systems used around the world – the DSM-5-TR and ICD-11. Finally, we discuss the reasons why people may seek treatment and what to expect when doing so.
03: Clinical Assessment, Diagnosis, and Treatment
Learning Objectives
• Define clinical assessment.
• Clarify why clinical assessment is an ongoing process.
• Define and exemplify reliability.
• Define and exemplify validity.
• Define standardization.
• List and describe seven methods of assessment.
What is Clinical Assessment?
For a mental health professional to be able to effectively help treat a client and know that the treatment selected worked (or is working), they first must engage in the clinical assessment of the client, or collecting information and drawing conclusions through the use of observation, psychological tests, neurological tests, and interviews to determine the person’s problem and the presenting symptoms. This collection of information involves learning about the client’s skills, abilities, personality characteristics, cognitive and emotional functioning, the social context in terms of environmental stressors that are faced, and cultural factors particular to them such as their language or ethnicity. Clinical assessment is not just conducted at the beginning of the process of seeking help but throughout the process. Why is that?
Consider this. First, we need to determine if a treatment is even needed. By having a clear accounting of the person’s symptoms and how they affect daily functioning, we can decide to what extent the individual is adversely affected. Assuming a treatment is needed, our second reason to engage in clinical assessment will be to determine what treatment will work best. As you will see later in this module, there are numerous approaches to treatment. These include Behavior Therapy, Cognitive and Cognitive-Behavioral Therapy (CBT), Humanistic-Experiential Therapies, Psychodynamic Therapies, Couples and Family Therapy, and biological treatments (psychopharmacology). Of course, for any mental disorder, some of the aforementioned therapies will have greater efficacy than others. Even if several can work well, it does not mean a particular therapy will work well for that specific client. Assessment can help figure this out. Finally, we need to know if the treatment we employed worked. This will involve measuring before any treatment is used and then measuring the behavior while the treatment is in place. We will even want to measure after the treatment ends to make sure symptoms of the disorder do not return. Knowing what the person’s baselines are for different aspects of psychological functioning will help us to see when improvement occurs.
To recap, obtaining baselines happens at the beginning, implementing the agreed-upon treatment plan happens in the middle, and making sure the treatment produced the desired outcome happens at the end. It should be clear from this discussion that clinical assessment is an ongoing process.
Key Concepts in Assessment
The assessment process involves three critical concepts – reliability, validity, and standardization. These three are important to science in general. First, we want the assessment to be reliable or consistent. Outside of clinical assessment, when our car has an issue and we take it to the mechanic, we want to make sure that what one mechanic says is wrong with our car is the same as what another says, or even two others. If not, the measurement tools they use to assess cars are flawed. The same is true of a patient who is suffering from a mental disorder. If one mental health professional says the person suffers from major depressive disorder and another says the issue is borderline personality disorder, then there is an issue with the assessment tool being used. Ensuring that two different raters are consistent in their assessment of patients is called interrater reliability. Another type of reliability occurs when a person takes a test one day, and then the same test on another day. We would expect the person’s answers to be consistent, which is called test-retest reliability. For example, let’s say the person takes the MMPI on Tuesday and then the same test on Friday. Unless something miraculous or tragic happened over the two days in between tests, the scores on the MMPI should be nearly identical to one another. What does identical mean? The score at test and the score at retest are correlated with one another. If the test is reliable, the correlation should be very high (remember, a correlation goes from -1.00 to +1.00, and positive means as one score goes up, so does the other, so the correlation for the two tests should be high on the positive side).
In addition to reliability, we want to make sure the test measures what it says it measures. This is called validity. Let’s say a new test is developed to measure symptoms of depression. It is compared against an existing and proven test, such as the Beck Depression Inventory (BDI). If the new test measures depression, then the scores on it should be highly comparable to the ones obtained by the BDI. This is called concurrent or descriptive validity. We might even ask whether an assessment tool simply looks valid. If we answer yes, then it has face validity, though it should be noted that this is not based on any statistical or evidence-based method of assessing validity. An example would be a personality test that asks how people behave in certain situations; on its face it seems to measure personality, or we have an overall impression that it measures what we expect it to measure.
Predictive validity is when a tool accurately predicts what will happen in the future. Let’s say we want to tell if a high school student will do well in college. We might create a national exam to test needed skills and call it something like the Scholastic Aptitude Test (SAT). We would have high school students take it by their senior year and then wait until they are in college for a few years and see how they are doing. If they did well on the SAT, we would expect that at that point, they should be doing well in college. If so, then the SAT accurately predicts college success. The same would be true of a test such as the Graduate Record Exam (GRE) and its ability to predict graduate school performance.
Finally, we want to make sure that the experience one patient has when taking a test or being assessed is the same as another patient taking the test the same day or on a different day, and with either the same tester or another tester. This is accomplished with the use of clearly laid out rules, norms, and/or procedures, and is called standardization. Equally important is that mental health professionals interpret the results of the testing in the same way, or otherwise, it will be unclear what the meaning of a specific score is.
Methods of Assessment
So how do we assess patients in our care? We will discuss observation, psychological tests, neurological tests, the clinical interview, and a few others in this section.
3.1.3.1. Observation. In Section 1.5.2.1 we talked about two types of observation – naturalistic, or observing the person or animal in their environment, and laboratory, or observing the organism in a more controlled or artificial setting where the experimenter can use sophisticated equipment and videotape the session to examine it later. One-way mirrors can also be used. A limitation of this method is that the process of recording a behavior causes the behavior to change, called reactivity. Have you ever noticed someone staring at you while you sat and ate your lunch? If you have, what did you do? Did you change your behavior? Did you become self-conscious? Likely yes, and this is an example of reactivity. Another issue is that the behavior made in one situation may not be made in other situations, such as your significant other only acting out at the football game and not at home. This form of validity is called cross-sectional validity. We also need our raters to observe and record behavior in the same way or to have high inter-rater reliability.
3.1.3.2. The clinical interview. A clinical interview is a face-to-face encounter between a mental health professional and a patient in which the former observes the latter and gathers data about the person’s behavior, attitudes, current situation, personality, and life history. The interview may be unstructured, in which open-ended questions are asked; structured, in which a specific set of questions is asked according to an interview schedule; or semi-structured, in which there is a pre-set list of questions but clinicians can follow up on specific issues that catch their attention. A mental status examination is used to organize the information collected during the interview and systematically evaluates the patient through a series of questions assessing: appearance and behavior, including grooming and body posture; thought processes and content, including disorganized speech or thought and false beliefs; mood and affect, such as whether the person feels hopeless or elated; intellectual functioning, including speech and memory; and awareness of surroundings, including where the person is and what the day and time are. The exam covers areas not normally part of the interview and allows the mental health professional to determine which areas need to be examined further. The limitation of the interview is that it lacks reliability, especially in the case of the unstructured interview.
3.1.3.3. Psychological tests and inventories. Psychological tests assess the client’s personality, social skills, cognitive abilities, emotions, behavioral responses, or interests. They can be administered either individually or to groups in paper or oral fashion. Projective tests consist of simple ambiguous stimuli that can elicit an unlimited number of responses. They include the Rorschach, or inkblot test, and the Thematic Apperception Test, which asks the individual to write a complete story about each of 20 cards shown to them and give details about what led up to the scene depicted, what the characters are thinking, what they are doing, and what the outcome will be. From the responses, the clinician gains perspective on the patient’s worries, needs, emotions, and conflicts, on the assumption that the individual identifies with one of the people on the card. Another projective test is the sentence completion test, which asks individuals to finish an incomplete sentence such as ‘My mother…’ or ‘I hope…’
Personality inventories ask clients to state whether each item in a long list of statements applies to them, and could ask about feelings, behaviors, or beliefs. Examples include the MMPI, or Minnesota Multiphasic Personality Inventory, and the NEO-PI-R, which is a concise measure of the five major domains of personality – Neuroticism, Extroversion, Openness, Agreeableness, and Conscientiousness. Six facets define each of the five domains, and the measure assesses emotional, interpersonal, experiential, attitudinal, and motivational styles (Costa & McCrae, 1992). These inventories have the advantage of being easy to administer by either a professional or the individual taking them, standardized, and objectively scored, and they can be completed electronically or by hand. That said, personality cannot be directly assessed, and so you never completely know the individual.
3.1.3.4. Neurological tests. Neurological tests are used to diagnose cognitive impairments caused by brain damage due to tumors, infections, or head injuries; or changes in brain activity. Positron Emission Tomography or PET is used to study the brain’s chemistry. It begins by injecting the patient with a radionuclide that collects in the brain and then having them lie on a scanning table while a ring-shaped machine is positioned over their head. Images are produced that yield information about the functioning of the brain. Magnetic Resonance Imaging or MRI provides 3D images of the brain or other body structures using magnetic fields and computers. It can detect brain and spinal cord tumors or nervous system disorders such as multiple sclerosis. Finally, computed tomography or the CT scan involves taking X-rays of the brain at different angles and is used to diagnose brain damage caused by head injuries or brain tumors.
3.1.3.5. Physical examination. Many mental health professionals recommend that the patient see their family physician for a physical examination, which is much like a check-up. Why is that? Some organic conditions, such as hyperthyroidism or hormonal irregularities, produce behavioral symptoms that resemble those of mental disorders. Ruling out such conditions first can spare the patient costly and unnecessary therapy or surgery.
3.1.3.6. Behavioral assessment. Within the realm of behavior modification and applied behavior analysis, we talk about what is called behavioral assessment, which is the measurement of a target behavior. The target behavior is whatever behavior we want to change, and it can be in excess and needing to be reduced, or in a deficit state and needing to be increased. During the behavioral assessment we learn about the ABCs of behavior in which Antecedents are the environmental events or stimuli that trigger a behavior; Behaviors are what the person does, says, thinks/feels; and Consequences are the outcome of a behavior that either encourages it to be made again in the future or discourages its future occurrence. Though we might try to change another person’s behavior using behavior modification, we can also change our own behavior, which is called self-modification. The person does their own measuring and recording of the ABCs, which is called self-monitoring. In the context of psychopathology, behavior modification can be useful in treating phobias, reducing habit disorders, and ridding the person of maladaptive cognitions.
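The ABC recording used in self-monitoring can be illustrated with a minimal log. This is a hypothetical sketch, not part of the text; the target behavior, antecedents, and consequences below are invented examples of what a client might record.

```python
# Hedged sketch: a minimal self-monitoring log for the ABCs of behavior.
# The target behavior and entries are hypothetical examples.
records = []

def log_abc(antecedent, behavior, consequence):
    """Record one Antecedent-Behavior-Consequence observation."""
    records.append({"A": antecedent, "B": behavior, "C": consequence})

log_abc("Stressful email arrives", "Bites nails", "Momentary relief")
log_abc("Watching TV alone", "Bites nails", "Momentary relief")

# A simple baseline measure: how often the target behavior was recorded
baseline_count = sum(1 for entry in records if entry["B"] == "Bites nails")
print(baseline_count)  # prints 2
```

Tallying the target behavior this way before treatment begins gives the baseline against which later change can be judged, and the recorded antecedents and consequences suggest which triggers and outcomes a treatment plan might target.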
3.1.3.7. Intelligence tests. Intelligence testing determines the patient’s level of cognitive functioning and consists of a series of tasks asking the patient to use both verbal and nonverbal skills. An example is the Stanford-Binet Intelligence test, which assesses fluid reasoning, knowledge, quantitative reasoning, visual-spatial processing, and working memory. Intelligence tests have been criticized for not predicting future behaviors such as achievement and reflecting social or cultural factors/biases and not actual intelligence. Also, can we really assess intelligence through one dimension, or are there multiple dimensions?
Key Takeaways
You should have learned the following in this section:
• Clinical assessment is the collecting of information and drawing conclusions through the use of observation, psychological tests, neurological tests, and interviews.
• Reliability refers to consistency in measurement and can take the form of interrater and test-retest reliability.
• Validity is when we ensure the test measures what it says it measures and takes the forms of concurrent or descriptive, face, and predictive validity.
• Standardization is all the clearly laid out rules, norms, and/or procedures to ensure the experience each participant has is the same.
• Patients are assessed through observation, psychological tests, neurological tests, and the clinical interview, all with their own strengths and limitations.
Review Questions
1. What does it mean that clinical assessment is an ongoing process?
2. Define and exemplify reliability, validity, and standardization.
3. For each assessment method, define it and then state its strengths and limitations.
Learning Objectives
• Explain what it means to make a clinical diagnosis.
• Define syndrome.
• Clarify and exemplify what a classification system does.
• Identify the two most used classification systems.
• Outline the history of the DSM.
• Identify and explain the elements of a diagnosis.
• Outline the major disorder categories of the DSM-5-TR.
• Describe the ICD-11.
• Clarify why the DSM-5-TR and ICD-11 need to be harmonized.
Clinical Diagnosis and Classification Systems
Before starting any type of treatment, the client/patient must be clearly diagnosed with a mental disorder. Clinical diagnosis is the process of using assessment data to determine if the pattern of symptoms the person presents with is consistent with the diagnostic criteria for a specific mental disorder outlined in an established classification system such as the DSM-5-TR or ICD-11 (both will be described shortly). Any diagnosis should have clinical utility, meaning it aids the mental health professional in determining prognosis, the treatment plan, and possible outcomes of treatment (APA, 2022). Receiving a diagnosis does not necessarily mean the person requires treatment. This decision is made based upon how severe the symptoms are, level of distress caused by the symptoms, symptom salience such as expressing suicidal ideation, risks and benefits of treatment, disability, and other factors (APA, 2022). Likewise, a patient may not meet the full criteria for a diagnosis but demonstrate a clear need for treatment or care, nonetheless. As stated in the DSM, “The fact that some individuals do not show all symptoms indicative of a diagnosis should not be used to justify limiting their access to appropriate care” (APA, 2022).
Symptoms that cluster together regularly are called a syndrome. If they also follow the same, predictable course, we say that they are characteristic of a specific disorder. Classification systems provide mental health professionals with an agreed-upon list of disorders falling into distinct categories for which there are clear descriptions and criteria for making a diagnosis. Distinct is the keyword here. People suffering from delusions, hallucinations, disorganized thinking (speech), grossly disorganized or abnormal motor behavior, and/or negative symptoms are different from people presenting with a primary clinical deficit in cognitive functioning that is not developmental but acquired (i.e., they have shown a decline in cognitive functioning over time). The former suffers from a schizophrenia spectrum disorder while the latter suffers from a neurocognitive disorder (NCD). The latter can be further distinguished from neurodevelopmental disorders which manifest early in development and involve developmental deficits that cause impairments in social, personal, academic, or occupational functioning (APA, 2022). These three disorder groups or categories can be clearly distinguished from one another. Classification systems also permit the gathering of statistics to determine incidence and prevalence rates and conform to the requirements of insurance companies for the payment of claims.
The most widely used classification system in the United States is the Diagnostic and Statistical Manual of Mental Disorders (DSM) which is a “medical classification of disorders and as such serves as a historically determined cognitive schema imposed on clinical and scientific information to increase its comprehensibility and utility. The classification of disorders (the way in which disorders are grouped) provides a high-level organization for the manual” (APA, 2022, pg. 11). The DSM is currently in its 5th edition Text-Revision (DSM-5-TR) and is produced by the American Psychiatric Association (APA, 2022). Alternatively, the World Health Organization (WHO) publishes the International Statistical Classification of Diseases and Related Health Problems (ICD) currently in its 11th edition. We will begin by discussing the DSM and then move to the ICD.
The DSM Classification System
3.2.2.1. A brief history of the DSM. The DSM-5 was published in 2013 and took the place of the DSM-IV-TR (TR meaning Text Revision; published in 2000). In March 2022, a text revision of the DSM-5 was published, making it the DSM-5-TR.
The history of the DSM goes back to 1952 when the American Psychiatric Association published the first edition of the DSM which was “…the first official manual of mental disorders to contain a glossary of descriptions of the diagnostic categories” (APA, 2022, p. 5). The DSM evolved through four major editions after World War II into a diagnostic classification system to be used by psychiatrists and physicians, but also other mental health professionals. The Herculean task of revising the DSM began in 1999 when the APA embarked upon an evaluation of the strengths and weaknesses of the DSM in coordination with the World Health Organization (WHO) Division of Mental Health, the World Psychiatric Association, and the National Institute of Mental Health (NIMH). This collaboration resulted in the publication of a monograph in 2002 called A Research Agenda for DSM-V. From 2003 to 2008, the APA, WHO, NIMH, the National Institute on Drug Abuse (NIDA), and the National Institute on Alcoholism and Alcohol Abuse (NIAAA) convened 13 international DSM-5 research planning conferences “to review the world literature in specific diagnostic areas to prepare for revisions in developing both DSM-5 and the International Classification of Disease, 11th Revision (ICD-11)” (APA, 2022, pg. 6).
After the naming of a DSM-5 Task Force Chair and Vice-Chair in 2006, task force members were selected and approved by 2007, and workgroup members were approved in 2008. An intensive 6-year process of “conducting literature reviews and secondary analyses, publishing research reports in scientific journals, developing draft diagnostic criteria, posting preliminary drafts on the DSM-5 website for public comment, presenting preliminary findings at professional meetings, performing field trials, and revisiting criteria and text” was undertaken (APA, 2022, pg. 7). The process involved physicians, psychologists, social workers, epidemiologists, neuroscientists, nurses, counselors, and statisticians, all of whom aided in the development and testing of DSM-5, while individuals with mental disorders, families of those with a mental disorder, consumer groups, lawyers, and advocacy groups provided feedback on the mental disorders contained in the book. Additionally, disorders with low clinical utility and weak validity were considered for deletion, while “Conditions for Future Study” were placed in Section 3, “contingent on the amount of empirical evidence generated on the proposed diagnosis, diagnostic reliability or validity, presence of clear clinical need, and potential benefit in advancing research” (APA, 2022, pg. 7).
3.2.2.2. The DSM-5 text revision process. In the spring of 2019, the APA started work on the text revision of the DSM-5. This involved more than 200 experts who were asked to conduct literature reviews covering the past 10 years and to review the text to identify any material that was out-of-date. Experts were divided into 20 disorder review groups, each with its own section editor. Four cross-cutting review groups (Culture, Sex and Gender, Suicide, and Forensic) reviewed each chapter, focusing on material involving their specific expertise. The text was also reviewed by an Ethnoracial Equity and Inclusion work group whose task was to “ensure appropriate attention to risk factors such as racism and discrimination and the use of nonstigmatizing language” (APA, 2022, pg. 11).
As such, the DSM-5-TR “is committed to the use of language that challenges the view that races are discrete and natural entities” (APA, 2022, pg. 18). Some of the changes include:
• Use of racialized instead of racial to indicate the socially constructed nature of race
• Ethnoracial is used to denote U.S. Census categories such as Hispanic, African American, or White
• Latinx is used in place of Latino or Latina to promote gender-inclusive terminology
• The term Caucasian is omitted since it is “based on obsolete and erroneous views about the geographic origin of a prototypical pan-European ethnicity” (pg. 18)
• To avoid perpetuating social hierarchies, the terms minority and non-White are avoided since they describe social groups in relation to a racialized “majority”
• The terms cultural contexts and cultural backgrounds are preferred to culture which is only used to refer to a “heterogeneity of cultural views and practices within societies” (pg. 18)
• The inclusion of data on specific ethnoracial groups only when “existing research documented reliable estimates based on representative samples.” This led to limited inclusion of data on Native Americans since data from nonrepresentative samples may be misleading.
• The use of gender differences or “women and men” or “boys and girls” since much of the information on the expressions of mental disorders in women and men is based on self-identified gender.
• Inclusion of a new section for each diagnosis providing information about suicidal thoughts or behavior associated with that diagnosis.
3.2.2.3. Elements of a diagnosis. The DSM-5-TR states that the following make up the key elements of a diagnosis (APA, 2022):
• Diagnostic Criteria and Descriptors – Diagnostic criteria are the guidelines for making a diagnosis and should be informed by clinical judgment. When the full criteria are met, mental health professionals can add severity and course specifiers to indicate the patient’s current presentation. If the full criteria are not met, designators such as “other specified” or “unspecified” can be used. If applicable, an indication of severity (mild, moderate, severe, or extreme), descriptive features, and course (type of remission – partial or full – or recurrent) can be provided with the diagnosis. The final diagnosis is based on the clinical interview, text descriptions, criteria, and clinical judgment.
• Subtypes and Specifiers – Subtypes denote “mutually exclusive and jointly exhaustive phenomenological subgroupings within a diagnosis” (APA, 2022, pg. 22). For example, non-rapid eye movement (NREM) sleep arousal disorders can have either a sleepwalking or sleep terror type. Enuresis is nocturnal-only, diurnal-only, or both. Specifiers are not mutually exclusive or jointly exhaustive and so more than one specifier can be given. For instance, binge eating disorder has remission and severity specifiers. Somatic symptom disorder has a specifier for severity, if with predominant pain, and/or if persistent. Again, the fundamental distinction between subtypes and specifiers is that there can be only one subtype but multiple specifiers. As the DSM-5-TR says, “Specifiers and subtypes provide an opportunity to define a more homogeneous subgrouping of individuals with the disorder who share certain features… and to convey information that is relevant to the management of the individual’s disorder” (pg. 22).
• Principal Diagnosis – A principal diagnosis is used when more than one diagnosis is given for an individual. It is the reason for the admission in an inpatient setting or the basis for a visit resulting in ambulatory care medical services in outpatient settings. The principal diagnosis is generally the focus of attention or treatment.
• Provisional Diagnosis – If not enough information is available for a mental health professional to make a definitive diagnosis, but there is a strong presumption that the full criteria will be met with additional information or time, then the provisional specifier can be used.
3.2.2.4. DSM-5 disorder categories. The DSM-5 includes the following categories of disorders:
Table 3.1. DSM-5 Classification System of Mental Disorders
Disorder Category Short Description Module
Neurodevelopmental disorders A group of conditions that arise in the developmental period and include intellectual disability, communication disorders, autism spectrum disorder, specific learning disorder, motor disorders, and ADHD Not covered
Schizophrenia Spectrum Disorders characterized by one or more of the following: delusions, hallucinations, disorganized thinking and speech, disorganized motor behavior, and negative symptoms 12
Bipolar and Related Characterized by mania or hypomania and possibly depressed mood; includes Bipolar I and II and cyclothymic disorder 4
Depressive Characterized by sad, empty, or irritable mood, as well as somatic and cognitive changes that affect functioning; includes major depressive, persistent depressive disorder, mood dysregulation disorder, and premenstrual dysphoric disorder 4
Anxiety Characterized by excessive fear and anxiety and related behavioral disturbances; Includes phobias, separation anxiety, panic disorder, generalized anxiety disorder, social anxiety disorder, agoraphobia 7
Obsessive-Compulsive Characterized by obsessions and compulsions and includes OCD, hoarding, body dysmorphic disorder, trichotillomania, and excoriation 9
Trauma- and Stressor- Related Characterized by exposure to a traumatic or stressful event; PTSD, acute stress disorder, adjustment disorders, and prolonged grief disorder 5
Dissociative Characterized by a disruption or discontinuity in memory, identity, emotion, perception, body representation, consciousness, motor control, or behavior; dissociative identity disorder, dissociative amnesia, and depersonalization/derealization disorder 6
Somatic Symptom Characterized by prominent somatic symptoms and/or illness anxiety associated with significant distress and impairment; includes illness anxiety disorder, somatic symptom disorder, and conversion disorder 8
Feeding and Eating Characterized by a persistent disturbance of eating or eating-related behavior to include bingeing and purging; Includes pica, rumination disorder, avoidant/restrictive food intake disorder, anorexia, bulimia, and binge-eating disorder 10
Elimination Characterized by the inappropriate elimination of urine or feces; usually first diagnosed in childhood or adolescence; Includes enuresis and encopresis Not covered
Sleep-Wake Characterized by sleep-wake complaints about the quality, timing, and amount of sleep; includes insomnia, sleep terrors, narcolepsy, sleep apnea, hypersomnolence disorder, restless leg syndrome, and circadian-rhythm sleep-wake disorders Not covered
Sexual Dysfunctions Characterized by sexual difficulties and include premature or delayed ejaculation, female orgasmic disorder, and erectile disorder (to name a few) Not covered
Gender Dysphoria Characterized by distress associated with the incongruity between one’s experienced or expressed gender and the gender assigned at birth Not covered
Disruptive, Impulse-Control, Conduct Characterized by problems in the self-control of emotions and behaviors that violate the rights of others and/or bring the individual into conflict with societal norms; includes oppositional defiant disorder, antisocial personality disorder, kleptomania, intermittent explosive disorder, conduct disorder, and pyromania Not covered
Substance-Related and Addictive Characterized by the continued use of a substance despite significant problems related to its use 11
Neurocognitive Characterized by a decline in cognitive functioning over time and the NCD has not been present since birth or early in life; Includes delirium, major and mild neurocognitive disorder, and Alzheimer’s disease 14
Personality Characterized by a pattern of stable traits which are inflexible, pervasive, and lead to distress or impairment; Includes paranoid, schizoid, borderline, obsessive-compulsive, narcissistic, histrionic, dependent, schizotypal, antisocial, and avoidant personality disorder 13
Paraphilic Characterized by recurrent and intense sexual fantasies that can cause harm to the individual or others; includes exhibitionism, voyeurism, sexual sadism, sexual masochism, pedophilic, and fetishistic disorders Not covered
The ICD-11
In 1893, the International Statistical Institute adopted the International List of Causes of Death which was the first international classification edition. The World Health Organization was entrusted with the development of the ICD in 1948 and published the 6th version (ICD-6). The ICD-11 went into effect January 1, 2022, though it was adopted in May 2019. The WHO states:
ICD serves a broad range of uses globally and provides critical knowledge on the extent, causes and consequences of human disease and death worldwide via data that is reported and coded with the ICD. Clinical terms coded with ICD are the main basis for health recording and statistics on disease in primary, secondary and tertiary care, as well as on cause of death certificates. These data and statistics support payment systems, service planning, administration of quality and safety, and health services research. Diagnostic guidance linked to categories of ICD also standardizes data collection and enables large scale research.
As a classification system, it “allows the systematic recording, analysis, interpretation and comparison of mortality and morbidity data collected in different countries or regions and at different times.” As well, it “ensures semantic interoperability and reusability of recorded data for the different use cases beyond mere health statistics, including decision support, resource allocation, reimbursement, guidelines and more.”
Source: www.who.int/classifications/icd/en/
The ICD lists many types of diseases and disorders to include Chapter 06: Mental, Behavioral, or Neurodevelopmental Disorders. The list of mental disorders is broken down as follows:
• Neurodevelopmental disorders
• Schizophrenia or other primary psychotic disorders
• Catatonia
• Mood disorders
• Anxiety or fear-related disorders
• Obsessive-compulsive or related disorders
• Disorders specifically associated with stress
• Dissociative disorders
• Feeding or eating disorders
• Elimination disorders
• Disorders of bodily distress or bodily experience
• Disorders due to substance use or addictive behaviours
• Impulse control disorders
• Disruptive behaviour or dissocial disorders
• Personality disorders and related traits
• Paraphilic disorders
• Factitious disorders
• Neurocognitive disorders
• Mental or behavioural disorders associated with pregnancy, childbirth or the puerperium
It should be noted that Sleep-Wake Disorders are listed in Chapter 07.
To access Chapter 06 of the ICD-11, please visit the following:
https://icd.who.int/browse11/l-m/en#/http%3a%2f%2fid.who.int%2ficd%2fentity%2f334423054
Harmonization of DSM-5-TR and ICD-11
According to the DSM-5-TR, there is an effort to harmonize the two classification systems: 1) for a more accurate collection of national health statistics and design of clinical trials aimed at developing new treatments, 2) to increase the ability to replicate scientific findings across national boundaries, and 3) to rectify the issue of DSM-IV and ICD-10 diagnoses not agreeing (APA, 2022, pg. 13). Complete harmonization of the DSM-5 diagnostic criteria with the ICD-11 disorder definitions has not occurred due to differences in timing. The DSM-5 developmental effort was several years ahead of the ICD-11 revision process. Despite this, some improvement in harmonization did occur as many ICD-11 working group members had participated in the development of the DSM-5 diagnostic criteria and all ICD-11 work groups were given instructions to review the DSM-5 criteria sets and make them as similar as possible (unless there was a legitimate reason not to). This has led to the ICD and DSM being closer than at any time since DSM-II and ICD-8 (APA, 2022).
Key Takeaways
You should have learned the following in this section:
• Clinical diagnosis is the process of using assessment data to determine if the pattern of symptoms the person presents with is consistent with the diagnostic criteria for a specific mental disorder outlined in an established classification system such as the DSM-5-TR or ICD-11.
• Classification systems provide mental health professionals with an agreed-upon list of disorders falling into distinct categories for which there are clear descriptions and criteria for making a diagnosis.
• Elements of a diagnosis in the DSM include the diagnostic criteria and descriptors, subtypes and specifiers, the principal diagnosis, and a provisional diagnosis.
Review Questions
1. What is clinical diagnosis?
2. What is a classification system and what are the two main ones used today?
3. Outline the diagnostic categories used in the DSM-5-TR.
Learning Objectives
• Clarify reasons why an individual may need to seek treatment.
• Critique myths about psychotherapy.
Seeking Treatment
3.3.1.1. Who seeks treatment? Would you describe the people who seek treatment as being on the brink, crazy, or desperate? Or can the ordinary Joe in need of advice seek out mental health counseling? The answer is that anyone can. David Sack, M.D. (2013) writes in the article 5 Signs It’s Time to Seek Therapy, published in Psychology Today, that “most people can benefit from therapy at least some point in their lives,” and though the signs you need to seek help are obvious at times, we often try “to sustain [our] busy life until it sets in that life has become unmanageable.” So, when should we seek help? First, if we feel sad, angry, or not like ourselves. We might be withdrawing from friends and families or sleeping more or less than we usually do. Second, if we are abusing drugs, alcohol, food, or sex to deal with life’s problems. In this case, our coping skills may need some work. Third, in instances when we have lost a loved one or something else important to us, whether due to death or divorce, the grief may be too much to process. Fourth, a traumatic event may have occurred, such as abuse, a crime, an accident, chronic illness, or rape. Finally, if you have stopped doing the things you enjoy the most. Sack (2013) says, “If you decide that therapy is worth a try, it doesn’t mean you’re in for a lifetime of head shrinking.” A 2001 study in the Journal of Counseling Psychology found that most people feel better within seven to 10 visits, and in another study, published in 2006 in the Journal of Consulting and Clinical Psychology, 88% of therapy-goers reported improvements after just one session.
3.3.1.2. When friends, family, and self-healing are not enough. If you are experiencing any of the aforementioned issues, you should seek help. Instead of facing the potential stigma of talking to a mental health professional, many people think that talking through their problems with friends or family is just as good. Though you will ultimately need these people to see you through your recovery, they do not have the training and years of experience that a psychologist or similar professional has. “Psychologists can recognize behavior or thought patterns objectively, more so than those closest to you who may have stopped noticing — or maybe never noticed. A psychologist might offer remarks or observations similar to those in your existing relationships, but their help may be more effective due to their timing, focus, or your trust in their neutral stance” (www.apa.org/helpcenter/psychotherapy-myths.aspx). You also should not wait to recover on your own. It is not a failure to admit you need help, and there could be a biological issue that makes it almost impossible to heal yourself.
3.3.1.3. What exactly is psychotherapy? According to the APA, in psychotherapy “psychologists apply scientifically validated procedures to help people develop healthier, more effective habits.” Several different approaches can be utilized to include behavior, cognitive and cognitive-behavior, humanistic-experiential, psychodynamic, couples and family, and biological treatments.
3.3.1.4. The client-therapist relationship. What is the ideal client-therapist relationship? APA says, “Psychotherapy is a collaborative treatment based on the relationship between an individual and a psychologist. Grounded in dialogue, it provides a supportive environment that allows you to talk openly with someone who’s objective, neutral and nonjudgmental. You and your psychologist will work together to identify and change the thought and behavior patterns that are keeping you from feeling your best.” It’s not just about solving the problem you saw the therapist for, but also about learning new skills to help you cope better in the future when faced with the same or similar environmental stressors.
So how do you find a psychotherapist? Several strategies may prove fruitful. You could ask family and friends or your primary care physician (PCP), look online, or consult an area community mental health center, your local university’s psychology department, your state psychological association, or APA’s Psychologist Locator Service (locator.apa.org/?_ga=2.160567293.1305482682.1516057794-1001575750.1501611950). Once you find a list of psychologists or other practitioners, choose the right one for you by determining if you plan on attending alone or with family, what you wish to get out of your time with a psychotherapist, how much your insurance company pays for and, if you have to pay out of pocket, how much you can afford, when you can attend sessions, and how far you are willing to travel to see the mental health professional. Once you have done this, make your first appointment.
But what should you bring? APA suggests, “to make the most of your time, make a list of the points you want to cover in your first session and what you want to work on in psychotherapy. Be prepared to share information about what’s bringing you to the psychologist. Even a vague idea of what you want to accomplish can help you and your psychologist proceed efficiently and effectively.” Additionally, they suggest taking report cards, a list of medications, information on the reasons for a referral, a notebook, a calendar to schedule future visits if needed, and a form of payment. What you take depends on the reason for the visit.
In terms of what you should expect, you and your therapist will work to develop a full history which could take several visits. From this, a treatment plan will be developed. “This collaborative goal-setting is important, because both of you need to be invested in achieving your goals. Your psychologist may write down the goals and read them back to you, so you’re both clear about what you’ll be working on. Some psychologists even create a treatment contract that lays out the purpose of treatment, its expected duration and goals, with both the individual’s and psychologist’s responsibilities outlined.”
After the initial visit, the mental health professional may conduct tests to further understand your condition but will continue talking through the issue. He/she may even suggest involving others, especially in cases of relationship issues. Resilience is a skill that will be taught so that you can better handle future situations.
3.3.1.5. Does it work? APA writes, “Reviews of these studies show that about 75 percent of people who enter psychotherapy show some benefit. Other reviews have found that the average person who engages in psychotherapy is better off by the end of treatment than 80 percent of those who don’t receive treatment at all.” Treatment works due to finding evidence-based treatment that is specific for the person’s problem; the expertise of the therapist; and the characteristics, values, culture, preferences, and personality of the client.
3.3.1.6. How do you know you are finished? “How long psychotherapy takes depends on several factors: the type of problem or disorder, the patient’s characteristics and history, the patient’s goals, what’s going on in the patient’s life outside psychotherapy and how fast the patient is able to make progress.” It is important to note that psychotherapy is not a lifelong commitment, and it is a joint decision of client and therapist as to when it ends. Once over, expect to have a periodic check-up with your therapist. This might be weeks or even months after your last session. If you need to see him/her sooner, schedule an appointment. APA calls this a “mental health tune up” or a “booster session.”
For more on psychotherapy, please see the very interesting APA article on this matter:
www.apa.org/helpcenter/understanding-psychotherapy.aspx
Key Takeaways
You should have learned the following in this section:
• Anyone can seek treatment and we all can benefit from it at some point in our lives.
• Psychotherapy is when psychologists apply scientifically validated procedures to help a person feel better and develop healthy habits.
Review Questions
1. When should you seek help?
2. Why should you seek professional help over the advice dispensed by family and friends?
3. How do you find a therapist and what should you bring to your appointment?
4. Does psychotherapy work?
Module Recap
That’s it. With the conclusion of Module 3, you now have the necessary foundation to understand each of the groups of disorders we discuss beginning in Module 4 and through Module 14.
In Module 3 we reviewed clinical assessment, diagnosis, and treatment. In terms of assessment, we covered key concepts such as reliability, validity, and standardization; and discussed methods of assessment such as observation, the clinical interview, psychological tests, personality inventories, neurological tests, the physical examination, behavioral assessment, and intelligence tests. In terms of diagnosis, we discussed the classification systems of the DSM-5-TR and ICD-11. For treatment, we discussed the reasons why someone may seek treatment, self-treatment, psychotherapy, the client-centered relationship, and how well psychotherapy works.
Learning Objectives
• Describe how depressive disorders present.
• Describe how bipolar and related disorders present.
• Describe the epidemiology of mood disorders.
• Describe comorbidity in relation to mood disorders.
• Describe the etiology of mood disorders.
• Describe treatment options for mood disorders.
In Module 4, we will discuss matters related to mood disorders to include their clinical presentation, epidemiology, comorbidity, etiology, and treatment options. Our discussion will cover major depressive disorder, persistent depressive disorder (formerly Dysthymia), bipolar I disorder, bipolar II disorder, and cyclothymic disorder. We will also cover major depressive, manic, and hypomanic episodes. Be sure you refer to Modules 1-3 for explanations of key terms (Module 1), an overview of the various models to explain psychopathology (Module 2), and descriptions of several therapies (Module 3). Note that this module will cover two chapters from the DSM-5-TR; namely, Bipolar and Related Disorders and Depressive Disorders.
04: Mood Disorders
Learning Objectives
• Distinguish the two distinct groups of mood disorders.
• Identify and describe the two types of depressive disorders.
• Classify symptoms of depression.
• Describe premenstrual dysphoric disorder.
Distinguishing Mood Disorders
Within mood disorders are two distinct groups—individuals with depressive disorders and individuals with bipolar disorders. The key difference between the two mood disorder groups is episodes of mania/hypomania. More specifically, in bipolar I disorder, the individual experiences a manic episode that “may have been preceded by and may be followed by hypomanic or major depressive episodes” (APA, 2022, pg. 139) whereas for bipolar II disorder, the individual has experienced in the past or is currently experiencing a hypomanic episode and has experienced in the past or is currently experiencing a major depressive episode. In contrast, individuals presenting with a depressive disorder have never experienced a manic or hypomanic episode.
Types of Depressive Disorders
The two most common types of depressive disorders are major depressive disorder (MDD) and persistent depressive disorder (PDD). Persistent depressive disorder, which in the DSM-5 now includes the diagnostic categories of dysthymia and chronic major depression, is a continuous and chronic form of depression. While the symptoms of PDD are very similar to MDD, they are usually less acute, as symptoms tend to ebb and flow over a long period (i.e., more than two years). Major depressive disorder, on the other hand, has discrete episodes lasting at least two weeks in which there are substantial changes in affect, cognition, and neurovegetative functions (APA, 2022, pg. 177).
It should be noted that after a careful review of the literature, premenstrual dysphoric disorder was moved from “Criteria Sets and Axes Provided for Future Study” in the DSM-IV to Section II of DSM-5, as the disorder was confirmed as a “specific and treatment-responsive form of depressive disorder that begins sometime following ovulation and remits within a few days of menses and has a marked impact on functioning” (APA, 2022, pg. 177).
The DSM-5 also added a new diagnosis, disruptive mood dysregulation disorder (DMDD), for children up to 12 years of age, to deal with the potential for overdiagnosis and treatment of bipolar disorder in children, both in the United States and internationally. Children with DMDD present with persistent irritability and frequent episodes of extreme behavioral dyscontrol, and typically develop unipolar depressive disorders or anxiety disorders, rather than bipolar disorder, as they move into adolescence and adulthood.
For a discussion of DMDD, please visit our sister book, Behavioral Disorders of Childhood:
https://opentext.wsu.edu/behavioral-disorders-childhood/
Symptoms Associated with Depressive Disorders
When making a diagnosis of depression, there are a wide range of symptoms that may be present. These symptoms can generally be grouped into four categories: mood, behavioral, cognitive, and physical symptoms.
4.1.3.1. Mood. While clinical depression can vary in its presentation among individuals, most, if not all, individuals with depression will report significant mood disturbances such as a depressed mood for most of the day and/or feelings of anhedonia, which is the loss of interest in previously interesting activities.
4.1.3.2. Behavioral. Behavioral issues such as decreased physical activity and reduced productivity—both at home and work—are often observed in individuals with depression. This is typically where a disruption in daily functioning occurs as individuals with depressive disorders are unable to maintain their social interactions and employment responsibilities.
4.1.3.3. Cognitive. It should not come as a surprise that there is a serious disruption in cognitions as individuals with depressive disorders typically hold a negative view of themselves and the world around them. They are quick to blame themselves when things go wrong, and rarely take credit when they experience positive achievements. Individuals with depressive disorders often feel worthless, which creates a negative feedback loop by reinforcing their overall depressed mood. They also report difficulty concentrating on tasks, as they are easily distracted from outside stimuli. This assertion is supported by research that has found individuals with depression perform worse than those without depression on tasks of memory, attention, and reasoning (Chen et al., 2013). Finally, thoughts of suicide and self-harm do occasionally occur in those with depressive disorders (Note – this will be discussed in more detail in Section 4.3).
4.1.3.4. Physical. Changes in sleep patterns are common in those experiencing depression with reports of both hypersomnia and insomnia. Hypersomnia, or excessive sleeping, often impacts an individual’s daily functioning as they spend the majority of their time sleeping as opposed to participating in daily activities (i.e., meeting up with friends or getting to work on time). Reports of insomnia are also frequent and can occur at various points throughout the night to include difficulty falling asleep, staying asleep, or waking too early with the inability to fall back asleep before having to wake for the day. Although it is unclear whether symptoms of fatigue or loss of energy are related to insomnia issues, the fact that those experiencing hypersomnia also report symptoms of fatigue suggests that these symptoms are a component of the disorder rather than a secondary symptom of sleep disturbance.
Additional physical symptoms, such as a change in weight or eating behaviors, are also observed. Some individuals who are experiencing depression report a lack of appetite, often forcing themselves to eat something during the day. On the contrary, others overeat, often seeking “comfort foods,” such as those high in carbohydrates. Due to these changes in eating behaviors, there may be associated changes in weight.
Finally, psychomotor agitation or retardation, which is the purposeless or slowed physical movement of the body (i.e., pacing around a room, tapping toes, restlessness, etc.) is also reported in individuals with depressive disorders.
Diagnostic Criteria and Features for Depressive Disorders
4.1.4.1. Major depressive disorder (MDD). According to the DSM-5-TR (APA, 2022), to meet the criteria for a diagnosis of major depressive disorder, an individual must experience at least five symptoms across the four categories discussed above, and at least one of the symptoms is either 1) a depressed mood most of the day, almost every day, or 2) loss of interest or pleasure in all, or most, activities, most of the day, almost every day. These symptoms must be present for at least two weeks and cause clinically significant distress or impairment in important areas of functioning such as social and occupational. The DSM-5 cautions that responses to a significant loss (such as the death of a loved one, financial ruin, and discovery of a serious medical illness or disability), can lead to many of the symptoms described above (i.e., intense sadness, rumination about the loss, insomnia, etc.) but this may be the normal response to such a loss. Though the individual’s response resembles a major depressive episode, clinical judgment should be utilized in making any diagnosis and be based on the clinician’s understanding of the individual’s personal history and cultural norms related to how members should express distress in the context of loss.
4.1.4.2. Persistent depressive disorder (PDD). For a diagnosis of persistent depressive disorder, an individual must experience a depressed mood for most of the day, for more days than not, for at least two years (APA, 2022). This feeling of a depressed mood is also accompanied by two or more additional symptoms, to include changes in appetite, insomnia or hypersomnia, low energy or fatigue, low self-esteem, feelings of hopelessness, and poor concentration or difficulty with decision making. The symptoms taken together cause clinically significant distress or impairment in important areas of functioning such as social and occupational, and these impacts can be as great as or greater than those of MDD. The individual may experience a temporary relief of symptoms; however, the individual will not be without symptoms for more than two months during this two-year period.
Making Sense of the Disorders
In relation to depressive disorders, note the following:
• Diagnosis MDD …… if symptoms have been experienced for at least two weeks and can be regarded as severe
• Diagnosis PDD … if the symptoms have been experienced for at least two years and are not severe
4.1.4.3. Premenstrual dysphoric disorder. In terms of premenstrual dysphoric disorder, the DSM-5-TR states that in the majority of menstrual cycles, at least five symptoms must be present in the final week before the onset of menses, start to improve within a few days after menses begins, and disappear or become negligible in the week postmenses. Individuals diagnosed with premenstrual dysphoric disorder must have one or more of the following: increased mood swings, irritability or anger, depressed mood, or anxiety/tension. Additionally, they must have one or more of the following to reach a total of five symptoms: anhedonia, difficulty concentrating, lethargy, changes in appetite, hypersomnia or insomnia, feelings of being overwhelmed or out of control, and/or breast tenderness or swelling. The symptoms lead to issues at work or school (i.e., decreased productivity and efficiency), within relationships (i.e., discord in the intimate partner relationship or with children, friends, or other family members), and with usual social activities (i.e., avoidance of the activities).
Key Takeaways
You should have learned the following in this section:
• Mood disorders fall into one of two groups – depressive or bipolar disorders – with the key distinction between the two being episodes of mania/hypomania.
• Symptoms of depression fall into one of four categories – mood, behavioral, cognitive, and physical.
• Persistent Depressive Disorder shares symptoms with Major Depressive Disorder though they are usually not as severe and ebb and flow over a period of at least two years.
• Premenstrual dysphoric disorder presents as mood lability, irritability, dysphoria, and anxiety symptoms occurring often during the premenstrual phase of the cycle and remit around the beginning of menses or shortly thereafter.
Review Questions
1. What are the different categories of mood disorder symptoms? Identify the symptoms within each category.
2. What are the key differences between a major depressive disorder and a persistent depressive disorder diagnosis?
3. What is premenstrual dysphoric disorder? | textbooks/socialsci/Psychology/Psychological_Disorders/Fundamentals_of_Psychological_Disorders_3e_(Bridley_and_Daffin)/02%3A_Part_II._Mental_Disorders__Block_1/04%3A_Mood_Disorders/4.01%3A_Clinical_Presentation__Depressive_Disorders.txt |
Learning Objectives
• Distinguish the forms bipolar disorder takes.
• Contrast a manic episode with a hypomanic episode.
• Define cyclothymic disorder.
Distinguishing Bipolar I and II Disorders
According to the DSM-5-TR (APA, 2022), there are two types of bipolar disorder – bipolar I and bipolar II. A diagnosis of bipolar I disorder is made when there is at least one manic episode. This manic episode may be preceded by and/or followed by a hypomanic or major depressive episode; however, the criteria for a manic episode are the only criteria that need to be met for a bipolar I diagnosis. A diagnosis of bipolar II disorder is made when there is a current or past hypomanic episode and a current or past major depressive episode. Descriptions of both manic and hypomanic episodes follow below.
Making Sense of the Disorders
In relation to bipolar I and II disorders, note the following:
• Diagnosis bipolar I disorder …. if an individual has ever experienced a manic episode
• Diagnosis bipolar II disorder … if the criteria has only been met for a hypomanic episode
Manic and Hypomanic Episodes
4.2.2.1. Manic episode. The key feature of a manic episode is a distinct period in which an individual reports abnormally and persistently elevated, expansive, or irritable mood for most of the day, nearly every day, for at least one week (APA, 2022). Additionally, the individual will display increased activity or energy during this same time. With regard to mood, an individual in a manic episode will appear excessively happy, often engaging haphazardly in sexual or interpersonal interactions. They also display rapid shifts in mood, known as mood lability, ranging from happy to neutral to irritable. At least three of the symptoms described below (four if the mood is only irritable) must be present and represent a noticeable change in the individual’s typical behavior.
Inflated self-esteem or grandiosity (Criterion B1) is present during a manic episode. Occasionally these inflated self-esteem levels can appear delusional. For example, individuals may believe they are friends with a celebrity, do not need to abide by laws, or even perceive themselves as God. They also engage in multiple overlapping new projects (Criteria B6 and 7), often initiated with no prior knowledge about the topic, and engaged in at unusual hours of the day.
Despite the increased activity level, individuals experiencing a manic episode also report a decreased need for sleep (Criterion B2), sleeping as little as a few hours a night yet still feeling rested. A reduced need for sleep may also be a precursor to a manic episode, suggesting that one is about to begin. It is not uncommon for those experiencing a manic episode to be more talkative than usual. It can be difficult to follow their conversation due to the quick pace of their talking, as well as tangential storytelling. Additionally, they can be difficult to interrupt in conversation, often disregarding the reciprocal nature of communication (Criterion B3). If the individual’s mood is more irritable than expansive, speech can become hostile and they may engage in tirades, particularly if they are interrupted or prevented from engaging in an activity they are seeking out (APA, 2022).
Based on their speech pattern, it should not be a surprise that racing thoughts and flights of ideas (Criterion B4) also present during manic episodes. Because of these rapid thoughts, speech may become disorganized or incoherent. Finally, individuals experiencing a manic episode are easily distractible (Criterion B5).
4.2.2.2. Hypomanic episode. As mentioned above, for a bipolar II diagnosis, an individual must report symptoms consistent with a major depressive episode and at least one hypomanic episode. An individual with bipolar II disorder must not have a history of a manic episode; if there is a history of mania, bipolar I disorder is diagnosed instead. A hypomanic episode is similar to a manic episode in that the individual experiences abnormally and persistently elevated, expansive, or irritable mood and increased energy; however, the behaviors are not as extreme as in mania. Additionally, behaviors consistent with a hypomanic episode must be present for at least four days, compared to at least one week for a manic episode.
Making Sense of the Disorders
Take note of the following in relation to manic and hypomanic episodes:
• A manic episode is severe enough to cause impairments in social or occupational functioning and can lead to hospitalization to prevent harm to self or others.
• A hypomanic episode is NOT severe enough to cause such impairments or hospitalization.
Cyclothymic Disorder
Notably, there is a subclass of individuals who experience numerous periods of hypomanic symptoms that do not meet the criteria for a hypomanic episode, along with mild depressive symptoms that do not fully meet the criteria for a major depressive episode. These individuals are diagnosed with cyclothymic disorder (APA, 2022). These symptoms occur for two or more years and are typically interrupted by periods of normal mood lasting no more than two months at a time. The symptoms cause clinically significant distress or impairment in important areas of functioning, such as social and occupational functioning. While only a small percentage of the population develops cyclothymic disorder, it can eventually progress into bipolar I or bipolar II disorder (Zeschel et al., 2015).
Key Takeaways
You should have learned the following in this section:
• An individual is diagnosed with bipolar I disorder if they have ever experienced a manic episode and are diagnosed with bipolar II disorder if the criteria has only been met for a hypomanic episode.
• A manic episode is characterized by a distinct period in which an individual reports abnormally and persistently elevated, expansive, or irritable mood for most of the day, nearly every day, for at least one week.
• A hypomanic episode is characterized by abnormally and persistently elevated, expansive, or irritable mood and energy levels, though not as extreme as in mania, and must be present for at least four days. It is also not severe enough to cause impairments or hospitalization.
• Cyclothymic disorder includes periods of hypomanic and mild depressive symptoms that do not meet full criteria for a hypomanic or major depressive episode, lasting two or more years and interrupted only by brief periods of normal mood.
Review Questions
1. What is the difference between bipolar I and II disorder?
2. What are the key diagnostic differences between a hypomanic and manic episode?
3. What is cyclothymic disorder?
Learning Objectives
• Describe the epidemiology of depressive disorders.
• Describe the epidemiology of bipolar disorders.
• Describe the epidemiology of suicidality.
Depressive Disorders
According to the DSM-5-TR (APA, 2022), the 12-month prevalence rate for major depressive disorder is approximately 7% within the United States. Recall that DSM-5 persistent depressive disorder is a blend of DSM-IV dysthymic disorder and chronic major depressive disorder. The prevalence rate for DSM-IV dysthymic disorder is much lower than that of MDD, at 0.5% among adults in the United States, while the rate for DSM-IV chronic major depressive disorder is 1.5%.
Additionally, individuals in the 18- to 29-year-old age bracket report higher rates of MDD than any other age group. Women experience rates of MDD about twofold higher than men, especially between menarche and menopause (APA, 2022). The estimated lifetime prevalence of major depressive disorder in women is 21.3%, compared to 12.7% in men (Nolen-Hoeksema, 2001). Regarding DSM-IV dysthymic disorder and chronic major depressive disorder, the prevalence among women is 1.5 and 2 times greater than the prevalence for men for each of these diagnoses, respectively (APA, 2022).
Bipolar Disorders
The 12-month prevalence of bipolar I disorder in the United States is 1.5% and does not differ statistically between men and women. In contrast, bipolar II disorder has a prevalence rate of 0.8% in the United States and 0.3% internationally (APA, 2022), and some clinical samples suggest it is more common in women, with approximately 80-90% of individuals with rapid-cycling episodes being women (Bauer & Pfenning, 2005). Childbirth may be a specific trigger for a hypomanic episode, which occurs in 10-20% of women in nonclinical settings, most often in the early postpartum period.
Suicidality
Individuals with a depressive disorder have a 17-fold increased risk for suicide over the age- and sex-adjusted general population rate. Features associated with an increased risk for death by suicide include anhedonia, living alone, being single, disconnecting socially, having access to a firearm, early life adversity, sleep disturbance, feelings of hopelessness, and problems with decision making. Women attempt suicide at a higher rate, though men are more likely to die by suicide. Finally, the premenstrual phase is considered by some to be a risk period for suicide (APA, 2022).
In terms of bipolar disorders, the lifetime risk of suicide is estimated to be 20- to 30-fold greater than in the general population, and 5-6% of individuals with bipolar disorder die by suicide. As with depressive disorders, women attempt suicide at a higher rate, though lethal attempts are more common in men with bipolar disorder. About one-third of individuals with bipolar II disorder report a lifetime history of a suicide attempt, a rate similar to that in bipolar I disorder, though the lethality of attempts is higher in individuals with bipolar II (APA, 2022).
Key Takeaways
You should have learned the following in this section:
• Major depressive disorder is experienced by about 7% of the population in the United States, afflicting young adults and women the most.
• Bipolar I disorder afflicts 1.5% and bipolar II disorder afflicts 0.8% of the U.S. population with bipolar II affecting women more than men and no gender difference being apparent for bipolar I.
• Individuals with a depressive disorder have a 17-fold increased risk for suicide while the lifetime risk of suicide for an individual with a bipolar disorder is estimated to be 20- to 30- fold greater than in the general population and 5-6% of individuals with bipolar disorder die by suicide.
Review Questions
1. What are the prevalence rates of the mood disorders?
2. What gender differences exist in the rate of occurrence of mood disorders?
3. How do depressive and bipolar disorders compare in terms of suicidality (attempts and lethality)?
4.04: Mood Disorders - Comorbidity
Learning Objectives
• Describe the comorbidity of depressive disorders.
• Describe the comorbidity of bipolar disorders.
Depressive Disorders
Studies exploring depression symptoms among the general population show a substantial pattern of comorbidity between depression and other mental disorders, particularly substance use disorders (Kessler, Berglund, et al., 2003). Nearly three-fourths of participants with lifetime MDD in a large-scale research study also met the criteria for at least one other DSM disorder (Kessler, Berglund, et al., 2003). MDD has been found to co-occur with substance-related disorders, panic disorder, generalized anxiety disorder, PTSD, OCD, anorexia, bulimia, and borderline personality disorder. Gender differences do exist within comorbidities such that women report comorbid anxiety disorders, bulimia, and somatoform disorders while men report comorbid alcohol and substance abuse. In contrast, those with PDD are at higher risk for psychiatric comorbidity in general and for anxiety disorders, substance use disorders, and personality disorders in particular (APA, 2022).
Given the extent of comorbidity among individuals with MDD, researchers have tried to identify which disorder precipitated the other. The majority of studies found that most depression cases occur secondary to another mental health disorder, meaning that the onset of depression typically follows the onset of another disorder (Gotlib & Hammen, 2009).
Bipolar Disorders
Those with bipolar I disorder typically have a history of three or more mental disorders. The most frequent comorbid disorders include anxiety disorders, alcohol use disorder, other substance use disorder, and ADHD, along with borderline, schizotypal, and antisocial personality disorder.
Bipolar II disorder is more often than not associated with one or more comorbid mental disorders, with anxiety disorders being the most common (38% with social anxiety, 36% with specific phobia, and 30% having generalized anxiety). As with bipolar I, substance use disorders are common with alcohol use (42%) leading the way, followed by cannabis use (20%). Premenstrual syndrome and premenstrual dysphoric disorder are common in women with bipolar II disorder especially (APA, 2022).
Finally, cyclothymic disorder has been found to be comorbid with substance-related disorders and sleep disorders.
Key Takeaways
You should have learned the following in this section:
• Depressive disorders have a high comorbidity with substance use disorders, anxiety disorders, and some personality disorders.
• Bipolar disorders have a high comorbidity with anxiety disorders and substance abuse disorders while cyclothymic disorder is comorbid with substance-related disorders and sleep disorders.
Review Questions
1. What are common comorbidities for the depressive disorders?
2. What are common comorbidities for bipolar disorders?
Learning Objectives
• Describe the biological causes of mood disorders.
• Describe the cognitive causes of mood disorders.
• Describe the behavioral causes of mood disorders.
• Describe the sociocultural causes of mood disorders.
Biological
Research throughout the years continues to provide evidence that depressive disorders have some biological cause. While it does not explain every depressive case, it is safe to say that some individuals may at least have a predisposition to developing a depressive disorder. Among the biological factors are genetic factors, biochemical factors, and brain structure.
4.5.1.1. Genetics. As with any disorder, researchers often explore the prevalence of depressive disorders among family members to determine whether there is a genetic component, be it a direct link or a predisposition. If there is a genetic predisposition to developing depressive disorders, one would expect a higher rate of depression within families than in the general population. Research supports this: nearly 30% of relatives of individuals with depression are also diagnosed with depression, compared to 10% of the general population (Levinson & Nichols, 2014). Similarly, there is an elevated prevalence among first-degree relatives for both bipolar I and bipolar II disorders.
Another way to study the genetic component of a disorder is via twin studies. One would expect identical twins to have a higher rate of the disorder as opposed to fraternal twins, as identical twins share the same genetic make-up, whereas fraternal twins only share roughly 50%, similar to that of siblings. A large-scale study found that if one identical twin was diagnosed with depression, there was a 46% chance their identical twin was diagnosed with depression. In contrast, the rate of a depression diagnosis in fraternal twins was only 20%. Despite the fraternal twin rate still being higher than that of a first-degree relative, this study provided enough evidence that there is a strong genetic link in the development of depression (McGuffin et al., 1996).
More recently, scientists have been studying depression at a molecular level, exploring possibilities of gene abnormalities as a cause for developing a depressive disorder. While much of the research is speculation due to sampling issues and low power, there is some evidence that depression may be tied to the 5-HTT gene on chromosome 17, as this is responsible for the activity of serotonin (Jansen et al., 2016).
Bipolar disorders share a similar genetic predisposition to that of major depressive disorder. Twin studies within bipolar disorder yielded concordance rates for identical twins at as high as 72%, yet the range for fraternal twins, siblings, and other close relatives ranged from 5-15%. It is important to note that both percentages are significantly higher than that of the general population, suggesting a strong genetic component within bipolar disorder (Edvardsen et al., 2008). The DSM-5-TR more recently reports heritability estimates around 90% in some twin studies and the risk of bipolar disorder being around 1% in the general population compared to 5-10% in a first-degree relative (APA, 2022).
4.5.1.2. Biochemical. As you will read in the treatment section, there is strong evidence of a biochemical deficit in depression and bipolar disorders. More specifically, low activity levels of norepinephrine and serotonin have long been documented as contributing factors to the development of depressive disorders. This relationship was discovered accidentally in the 1950s, when MAOIs given to tuberculosis patients unexpectedly improved their depressive moods as well. Soon thereafter, medical providers found that medications used to treat high blood pressure by reducing norepinephrine also caused depression in their patients (Ayd, 1956).
While these initial findings were premature in the identification of how neurotransmitters affected the development of depressive features, they did provide insight as to what neurotransmitters were involved in this system. Researchers are still trying to determine exact pathways; however, it does appear that both norepinephrine and serotonin are involved in the development of symptoms, whether it be between the interaction between them, or their interaction on other neurotransmitters (Ding et al., 2014).
Due to the close relationship between depression and bipolar disorder, researchers initially believed that both norepinephrine and serotonin were implicated in the development of bipolar disorder, with the idea that there was a drastic increase in serotonin during manic episodes. Research, however, supports the opposite: low levels of serotonin and high levels of norepinephrine may explain manic episodes (Soreff & McInnes, 2014). Despite these findings, additional research in this area is needed to conclusively determine what is responsible for the manic episodes of bipolar disorder.
4.5.1.3. Endocrine system. As you may know, the endocrine system is a collection of glands responsible for regulating hormones, metabolism, growth and development, sleep, and mood, among other things. Some research has implicated hormones, particularly cortisol, a hormone released as a stress response, in the development of depression (Owens et al., 2014). Additionally, melatonin, a hormone released when it is dark outside to assist with the transition to sleep, may also be related to depressive symptoms, particularly during the winter months.
4.5.1.4. Brain anatomy. Seeing as neurotransmitters have been implicated in the development of depressive disorders, it should not be a surprise that various brain structures have also been identified as contributors to mood disorders. While exact anatomy and pathways are yet to be determined, research studies implicate the prefrontal cortex, the hippocampus, and the amygdala. More specifically, drastic changes in blood flow throughout the prefrontal cortex have been linked with depressive symptoms. Similarly, a smaller hippocampus, and consequently, fewer neurons, has also been linked to depressive symptoms. Finally, heightened activity and blood flow in the amygdala, the brain area responsible for our fight or flight response, is also consistently found in individuals with depressive symptoms.
Abnormalities in several brain structures have also been identified in individuals with bipolar disorder; however, what or why these structures are abnormal has yet to be determined. Researchers continue to focus on areas of the basal ganglia and cerebellum, which appear to be much smaller in individuals with bipolar disorder compared to the general public. Additionally, there appears to be a decrease in brain activity in regions associated with regulating emotions, as well as an increase in brain activity among structures related to emotional responsiveness (Houenou et al., 2011). Additional research is still needed to determine precisely how each of these brain structures may be implicated in the development of bipolar disorder.
Cognitive
The cognitive model, arguably the most conclusive model with regards to depressive disorders, focuses on the negative thoughts and perceptions of an individual. One theory often equated with the cognitive model of depression is learned helplessness. Coined by Martin Seligman (1975), learned helplessness was developed based on his laboratory experiment involving dogs. In this study, Seligman restrained dogs in an apparatus and routinely shocked them regardless of their behavior. The following day, the dogs were placed in a similar apparatus; however, this time they were not restrained and there was a small barrier placed between the “shock” floor and the “safe” floor. What Seligman observed was that despite the opportunity to escape the shock, the dogs flurried for a bit, and then ultimately laid down and whimpered while being shocked.
Based on this study, Seligman concluded that the animals essentially learned that they were unable to avoid the shock the day prior, and therefore, learned that they were helpless in preventing the shocks. When they were placed in a similar environment but had the opportunity to escape the shock, their learned helplessness carried over, and they continued to believe they were unable to escape the shock.
This study has been linked to humans through research on attributional style (Nolen-Hoeksema, Girgus & Seligman, 1992). There are two types of attributional style—positive and negative. A negative attributional style attributes daily events to internal, stable, and global causes, whereas a positive attributional style attributes them to external, unstable, and specific influences of the environment. Research has found that individuals with a negative attributional style are more likely to experience depression, likely due to their negative interpretation of daily events. For example, if something bad were to happen to them, they would conclude that it is their fault (internal), that bad things always happen to them (stable), and that bad things happen to them in every area of their life (global). Unfortunately, this maladaptive thinking style often takes over an individual’s daily view, making them more vulnerable to depression.
In addition to attributional style, Aaron Beck also attributed negative thinking as a precursor to depressive disorders (Beck, 2002, 1991, 1967). Often viewed as the grandfather of Cognitive-Behavioral Therapy, Beck went on to coin the terms—maladaptive attitudes, cognitive triad, errors in thinking, and automatic thoughts—all of which combine to explain the cognitive model of depressive disorders.
Maladaptive attitudes, or negative attitudes about oneself, others, and the world around them are often present in those with depressive symptoms. These attitudes are inaccurate and often global. For example, “If I fail my exam, the world will know I’m stupid.” Will the entire world really know you failed your exam? Not likely. Because you fail the exam, are you stupid? No. Individuals with depressive symptoms often develop these maladaptive attitudes regarding everything in their life, indirectly isolating themselves from others. The cognitive triad also plays into the maladaptive attitudes in that the individual interprets these negative thoughts about their experiences, themselves, and their futures.
Cognitive distortions, also known as errors in thinking, are a key component in Beck’s cognitive theory. Beck identified 15 errors in thinking that are most common in individuals with depression (see the end of the module). Among the most common are catastrophizing, jumping to conclusions, and overgeneralization. I always like to use my dad (first author’s dad) as an example for overgeneralization. Whenever we go to the grocery store, he always comments about how whatever line he chooses, at every store, it is always the slowest line. Does this happen every time he is at the store? I’m doubtful, but his error in thinking leads to him believing this is true.
Finally, automatic thoughts, or the constant stream of negative thoughts, also leads to symptoms of depression as individuals begin to feel as though they are inadequate or helpless in a given situation. While some cognitions are manipulated and interpreted negatively, Beck stated that there is another set of negative thoughts that occur automatically. Research studies have continually supported Beck’s maladaptive thoughts, attitudes, and errors in thinking as fundamental issues in those with depressive disorders (Lai et al., 2014; Possel & Black, 2014). Furthermore, as you will see in the treatment section (Section 4.5), cognitive strategies are among the most effective forms of treatment for depressive disorders.
Behavioral
The behavioral model explains depression as a result of a change in the number of rewards and punishments one receives throughout life. This change can come from work, intimate relationships, family, or the environment in general. Among the most influential theorists in this area is Peter Lewinsohn, who stated that depression occurs in most people due to reduced positive rewards in their lives. Because they are not positively rewarded, their constructive behaviors occur more and more infrequently until they stop engaging in them completely (Lewinsohn et al., 1990; 1984). An example is a student who keeps receiving bad grades on exams despite studying for hours. Over time, the student reduces the amount of time spent studying, thus continuing to earn poor grades.
Sociocultural
In the sociocultural theory, the role of family and one’s social environment play a substantial role in the development of depressive disorders. There are two sociocultural views: the family-social perspective and the multi-cultural perspective.
4.5.4.1. Family-social perspective. Similar to the behavioral theory, the family-social perspective of depression suggests that depression is related to the unavailability of social support. This is supported by research showing that separated and divorced individuals are three times more likely to experience depressive symptoms than those who are married or even widowed (Schultz, 2007). While many factors lead a couple to separate or end their marriage, some relationships end due to a spouse’s mental health issues, particularly depressive symptoms. Depressive symptoms have been positively related to increased interpersonal conflicts, reduced communication, and intimacy issues, all of which are often reported as causal factors leading to divorce (Najman et al., 2014).
The family-social perspective can also be viewed oppositely, with stress and marital discord leading to increased rates of depression in one or both spouses (Nezlek et al., 2000). While some research indicates that having children provides a positive influence in one’s life, it can also lead to stress both within the individual, as well as between partners due to division of work and discipline differences. Studies have shown that women who had three or more young children, and also lacked a close confidante and outside employment, were more likely than other mothers to become depressed (Brown, 2002).
4.5.4.2. Multi-cultural perspective. While depression is experienced across the entire world, one’s cultural background may influence what symptoms of depression are presented. Common depressive symptoms such as feeling sad, lack of energy, anhedonia, difficulty concentrating, and thoughts of suicide are a hallmark in most societies, while other symptoms may be more specific to one’s nationality. More specifically, individuals from non-Western countries (China and other Asian countries) often focus on the physical symptoms of depression—tiredness, weakness, sleep issues—and place less emphasis on the cognitive symptoms.
Within the United States, many researchers have explored potential differences across ethnic and racial groups in both rates of depression and presenting symptoms among those diagnosed with depression. These studies continually fail to identify significant differences between ethnic and racial groups; however, one major study identified a difference in the rate of recurrence of depression among Hispanic and African Americans (Gonzalez et al., 2010). While the exact reason is unclear, researchers propose a lack of treatment opportunities as a possible explanation. According to Gonzalez and colleagues (2010), approximately 54% of depressed white Americans seek out treatment, compared to 34% of Hispanic Americans and 40% of African Americans. This large discrepancy in treatment use between white Americans and minority Americans suggests that these individuals are not receiving the effective treatment necessary to resolve the disorder, thus leaving them more vulnerable to repeated depressive episodes.
4.5.4.3. Gender differences. As previously discussed, there is a significant difference between gender and rates of depression, with women twice as likely to experience an episode of depression than men (Schuch et al., 2014). There are a few speculations as to why there is such an imbalance in the rate of depression across genders.
The first theory, artifact theory, suggests that the difference between genders is due to clinician or diagnostic systems being more sensitive to diagnosing women with depression than men. While women are often thought to be more “emotional,” easily expressing their feelings and more willing to discuss their symptoms with clinicians and physicians, men often withhold their symptoms or will present with more traditionally “masculine” symptoms of anger or aggression. While this theory is a possible explanation for the gender differences in the rate of depression, research has failed to support this theory, suggesting that men and women are equally likely to seek out treatment and discuss their depressive symptoms (McSweeney, 2004; Rieker & Bird, 2005).
The second theory, hormone theory, suggests that variations in hormone levels trigger depression in women more than men (Graziottin & Serafini, 2009). While there is biological evidence supporting the changes in hormone levels during various phases of the menstrual cycle and their impact on women’s ability to integrate and process emotional information, research fails to support this theory as the reason for higher rates of depression in women (Whiffen & Demidenko, 2006).
The third theory, life stress theory, suggests that women are more likely to experience chronic stressors than men, thus accounting for their higher rate of depression (Astbury, 2010). Women face increased risk for poverty, lower employment opportunities, discrimination, and poorer quality of housing than men, all of which are strong predictors of depressive symptoms (Garcia-Toro et al., 2013).
The fourth theory, gender roles theory, suggests that social and or psychological factors related to traditional gender roles also influence the rate of depression in women. For example, men are often encouraged to develop personal autonomy, seek out activities that interest them, and display achievement-oriented goals; women are encouraged to empathize and care for others, often fostering an interdependent functioning, which may cause women to value the opinion of others more highly than their male counterparts do.
The final theory, rumination theory, suggests that women are more likely than men to ruminate, or intently focus, on their depressive symptoms, thus making them more vulnerable to developing depression at a clinical level (Nolen-Hoeksema, 2012). Several studies have supported this theory and shown that rumination of negative thoughts is positively related to an increase in depression symptoms (Hankin, 2009).
While many theories try to explain the gender discrepancy in depressive episodes, no single theory has produced enough evidence to fully explain why women experience depression more than men. Due to this lack of evidence, gender differences in depression remain one of the most researched topics within the study of depression, while simultaneously being among the least understood phenomena in clinical psychology.
Key Takeaways
You should have learned the following in this section:
• In terms of biological explanations for depressive disorders, there is evidence that rates of depression are higher among identical twins (as is also true for bipolar disorders); that the 5-HTT gene on chromosome 17 may be involved in depressive disorders; that low levels of both norepinephrine and serotonin are implicated in depressive disorders, while low serotonin and high norepinephrine are implicated in bipolar disorders; that the hormones cortisol and melatonin affect depression; and that several brain structures are implicated in depression (prefrontal cortex, hippocampus, and amygdala) and bipolar disorder (basal ganglia and cerebellum).
• In terms of cognitive explanations, learned helplessness, attributional style, and maladaptive attitudes to include the cognitive triad, errors in thinking, and automatic thoughts, help to explain depressive disorders.
• Behavioral explanations center on changes in the rewards and punishments received throughout life.
• Sociocultural explanations include the family-social perspective and multi-cultural perspective.
• Women are twice as likely to experience depression and this could be due to women being more likely to be diagnosed than men (called the artifact theory), variations in hormone levels in women (hormone theory), women being more likely to experience chronic stressors (life stress theory), the fostering of an interdependent functioning in women (gender roles theory), and that women are more likely to intently focus on their symptoms (rumination theory).
Review Questions
1. How do twin studies explain the biological causes of mood disorders?
2. What brain structures are implicated in the development of mood disorders? Discuss their role.
3. What is learned helplessness? How has this concept been used to study the development and maintenance of mood disorders?
4. What is the cognitive triad?
5. What are common cognitive distortions observed in individuals with mood disorders?
6. What are the identified theories that are used to explain the gender differences in mood disorder development?
Learning Objectives
• Describe treatment options for depressive disorders.
• Describe treatment options for bipolar disorders.
• Determine the efficacy of treatment options for depressive disorders.
• Determine the efficacy of treatment options for bipolar disorders.
Depressive Disorders
Given that Major Depressive Disorder is among the most frequent and debilitating psychiatric disorders, it should not be surprising that the research on this disorder is quite extensive. Among its treatment options, the most efficacious include antidepressant medications, Cognitive-Behavioral Therapy (CBT; Beck et al., 1979), Behavioral Activation (BA; Jacobson et al., 2001), and Interpersonal Therapy (IPT; Klerman et al., 1984). Although CBT is the most widely known and used treatment for Major Depressive Disorder, there is minimal evidence to support one treatment modality over the others; treatment is generally dictated by therapist competence, availability, and patient preference (Craighead & Dunlop, 2014).
4.6.1.1. Psychopharmacology – Antidepressant medications. Antidepressants are often the most common first-line treatment for MDD for a few reasons. Oftentimes an individual will present with symptoms to their primary care provider (a medical doctor), who will prescribe some line of antidepressant medication. Medication is often seen as an “easier” treatment for depression because the individual can take it at home rather than attending weekly therapy sessions; however, this also leaves room for adherence issues, as a large percentage of individuals fail to take prescription medication as directed by their physician. Given the biological functions of neurotransmitters and their involvement in maintaining depressive symptoms, it makes sense that medications targeting them are an effective type of treatment.
Within antidepressant medications, there are a few different classes, each categorized by its structural or functional relationships. It should be noted that no specific antidepressant class or medication has been proven more effective in treating MDD than the others (APA, 2010). In fact, many patients may try several different types of antidepressant medications until they find one that is effective with minimal side effects.
4.6.1.2. Psychopharmacology – Selective serotonin reuptake inhibitors (SSRIs). SSRIs are among the most common medications used to treat depression due to their relatively benign side effects. Additionally, the dose required to reach therapeutic levels is low compared with other medication options. Possible side effects of SSRIs include, but are not limited to, nausea, insomnia, and reduced sex drive.
SSRIs improve depressive symptoms by blocking the reuptake of serotonin in presynaptic neurons, thus allowing more serotonin to be available for postsynaptic neurons. While this is the general mechanism through which all SSRIs work, there are minor biological differences among the medications within the SSRI family. These small differences benefit patients in that there are several treatment options for maximizing medication benefits and minimizing side effects.
4.6.1.3. Psychopharmacology – Tricyclic antidepressants. Originally developed to treat schizophrenia, tricyclic antidepressants were repurposed to treat depression after they failed to manage schizophrenia symptoms (Kuhn, 1958). The term tricyclic refers to the drugs’ molecular structure: three rings.
Tricyclic antidepressants are like SSRIs in that they work by altering brain chemistry, changing the amount of neurotransmitter available to neurons. More specifically, they block the reuptake of both serotonin and norepinephrine, thus increasing the availability of these neurotransmitters for postsynaptic neurons. While effective, tricyclic antidepressants have increasingly been replaced by SSRIs, which produce fewer side effects. However, tricyclic antidepressants have been shown to be more effective in treating depressive symptoms in individuals who have not achieved symptom reduction via other pharmacological approaches.
While many of the side effects are minimal (dry mouth, blurry vision, constipation), others can be serious, such as sexual dysfunction, tachycardia, and cognitive and/or memory impairment. Because of their potential impact on the heart, tricyclic antidepressants should not be used in cardiac patients, as they may exacerbate cardiac arrhythmias (Roose & Spatz, 1999).
4.6.1.4. Psychopharmacology – Monoamine oxidase inhibitors (MAOIs). The use of MAOIs as a treatment for depression began serendipitously in the early 1950s, when patients taking the medication for tuberculosis reported reduced depressive symptoms. Research studies confirmed that MAOIs were effective in treating depression in adults outside the treatment of tuberculosis. Although still prescribed, they are not typically first-line medications because of the risk of hypertensive crises. For this reason, individuals on MAOIs follow strict dietary restrictions to reduce that risk (Shulman, Herrmann, & Walker, 2013).
How do MAOIs work? In basic terms, the enzyme monoamine oxidase breaks down excess norepinephrine, serotonin, and dopamine in the brain. MAOIs prevent monoamine oxidase (hence the name monoamine oxidase inhibitors) from removing these neurotransmitters, resulting in an increase in these brain chemicals (Shulman, Herrmann, & Walker, 2013). As previously discussed, norepinephrine, serotonin, and dopamine are all involved in the biological mechanisms that maintain depressive symptoms.
While these drugs are effective, they come with serious side effects. In addition to hypertensive episodes, they can cause nausea, headaches, drowsiness, involuntary muscle jerks, reduced sexual desire, and weight gain (APA, 2010). Despite these side effects, studies have shown that individuals prescribed MAOIs for depression have a treatment response rate of 50-70% (Krishnan, 2007). Overall, given their risks, MAOIs are likely best reserved for patients with treatment-resistant depression who have exhausted other treatment options (Krishnan, 2007).
It should be noted that occasionally, antipsychotic medications are used for individuals with MDD; however, these are limited to individuals presenting with psychotic features.
4.6.1.5. Psychotherapy – Cognitive behavioral therapy (CBT). CBT was founded by Aaron Beck in the 1960s and is a widely practiced therapeutic approach used to treat depression (and other disorders as well). The basics of CBT involve what Beck called the cognitive triangle: cognitions (thoughts), behaviors, and emotions. Beck believed that these three components are interconnected and therefore affect one another. CBT is believed to improve emotions in depressed patients by changing both cognitions and behaviors, which in turn enhances mood. Common cognitive interventions within CBT include thought monitoring and recording, identifying cognitive errors, examining evidence supporting or negating cognitions, and creating rational alternatives to maladaptive thought patterns. Behavioral interventions of CBT include activity planning, pleasant event scheduling, task assignments, and coping-skills training.
CBT generally follows four phases of treatment:
• Phase 1: Increasing pleasurable activities. Similar to behavioral activation (see below), the clinician encourages the patient to identify and engage in activities that are pleasurable to the individual. The clinician can help the patient to select the activity, as well as help them plan when they will engage in that activity.
• Phase 2: Challenging automatic thoughts. During this stage, the clinician provides psychoeducation about the negative automatic thoughts that can maintain depressive symptoms. The patient will learn to identify these thoughts on their own during the week and maintain a thought journal of these cognitions to review with the clinician in session.
• Phase 3: Identifying negative thoughts. Once the individual is consistently able to identify these negative thoughts on a daily basis, the clinician can help the patient identify how these thoughts are maintaining their depressive symptoms. It is at this point that the patient begins to have direct insight as to how their cognitions contribute to their disorder.
• Phase 4: Changing thoughts. The final stage of treatment involves challenging the negative thoughts the patient has been identifying in the last two phases of treatment and replacing them with positive thoughts.
4.6.1.6. Psychotherapy – Behavioral activation (BA). BA is similar to the behavioral component of CBT in that the goal of treatment is to alleviate depression and prevent future relapse by changing an individual’s behavior. Founded by Ferster (1973), as well as Lewinsohn and colleagues (Lewinsohn, 1974; Lewinsohn, Biglan, & Zeiss, 1976), BA aims to increase the frequency of behaviors that bring individuals into greater contact with sources of reward in their lives. To do this, the clinician helps the patient develop a list of pleasurable activities to engage in outside of treatment (e.g., going for a walk, going shopping, having dinner with a friend). Additionally, the clinician helps the patient identify negative behaviors (crying, sleeping in, avoiding friends) and monitor them so that they do not undermine these pleasurable activities. Finally, the clinician works with the patient on effective social skills. By minimizing negative behaviors and maximizing pleasurable activities, the individual receives more positive reward and reinforcement from others and their environment, thus improving their overall mood.
4.6.1.7. Psychotherapy – Interpersonal therapy (IPT). IPT was developed by Klerman, Weissman, and colleagues in the 1970s as a treatment arm for a pharmacotherapy study of depression (Weissman, 1995). The treatment was based on data from post-World War II patients who reported that psychosocial life events had a substantial impact on their well-being. Klerman and colleagues noticed a significant relationship in these individuals between the development of depression and complicated bereavement, role disputes, role transitions, and interpersonal deficits (Weissman, 1995). The idea behind IPT is that depressive episodes compromise interpersonal functioning, which makes it difficult to manage stressful life events. The basic mechanism of IPT is to establish effective strategies for managing interpersonal issues, which in turn ameliorates depressive symptoms.
There are two main principles of IPT. First, depression is a common medical illness with a complex and multi-determined etiology. Since depression is a medical illness, it is also treatable and not the patient’s fault. Second, depression is connected to a current or recent life event. The goal of IPT is to identify the interpersonal problem that is related to the depressive symptoms and solve this crisis so the patient can improve their life situation while relieving depressive symptoms.
4.6.1.8. Multimodal treatment. While pharmacological and psychological treatments are each very effective in treating depression on their own, a combination of the two may offer additional benefits, particularly in the maintenance of wellness. Additionally, multimodal treatment options may be helpful for individuals who have not achieved wellness through a single modality.
Multimodal treatments can be offered in three different ways: concurrently, sequentially, or in a stepped manner (McGorry et al., 2010). In stepped treatment, pharmacological therapy is used initially to treat depressive symptoms; once the patient reports some relief, psychosocial treatment is added to address the remaining symptoms. While all three methods are effective in managing depressive symptoms, matching patients to their treatment preferences may produce better outcomes than clinician-driven treatment decisions.
Bipolar Disorder
4.6.2.1. Psychopharmacology. Unlike with MDD, there is some controversy regarding the most effective treatment of bipolar disorder. One suggestion is to treat bipolar disorder aggressively with mood stabilizers such as Lithium or Depakote, as these medications do not induce pharmacological mania/hypomania. Mood stabilizers are occasionally combined with antidepressants later in treatment, only if absolutely necessary (Ghaemi, Hsu, Soldani, & Goodwin, 2003). Research has shown that mood stabilizers are less potent in treating depressive symptoms, so the combination approach is believed to help manage both the manic and depressive episodes (Nivoli et al., 2011).
The other treatment option is to forgo the mood stabilizer and instead treat symptoms with newer antidepressants early in treatment. Unfortunately, large-scale research studies have not shown great support for this method (Gijsman, Geddes, Rendell, Nolen, & Goodwin, 2004; Moller, Grunze, & Broich, 2006). Antidepressants often trigger a manic or hypomanic episode in bipolar patients. Because of this, the first-line treatment for bipolar disorder remains mood stabilizers, particularly Lithium.
4.6.2.2. Psychological treatment. Although psychopharmacology is the first and most widely used treatment for bipolar disorders, psychological interventions are occasionally paired with medication, as psychotherapy alone is not a sufficient treatment option. The majority of psychological interventions are aimed at medication adherence, as many bipolar patients stop taking their mood stabilizers when they “feel better” (Advokat et al., 2014). Social skills training and problem-solving skills are also helpful techniques to address in the therapeutic setting, as individuals with bipolar disorder often struggle in these areas.
Outcome of Treatment
4.6.3.1. Depressive treatment. As we have discussed, major depressive disorder has a variety of treatment options, all found to be efficacious. However, research suggests that while psychopharmacological interventions are more effective in rapidly reducing symptoms, psychotherapy, or even a combined treatment approach, is more effective in establishing long-term relief of symptoms.
Relapse in major depressive disorder is more likely among individuals whose onset was at a younger age (particularly adolescents), those who have already experienced multiple major depressive episodes, and those with more severe symptomatology, especially those presenting with severe suicidal ideation and psychotic features (APA, 2022).
4.6.3.2. Bipolar treatment. Lithium and other mood stabilizers are very effective in managing symptoms of patients with bipolar disorder. Unfortunately, it is the adherence to the medication regimen that is often the issue with these patients. Bipolar patients often desire the euphoric highs that are associated with manic and hypomanic episodes, leading them to forgo their medication. A combination of psychopharmacology and psychotherapy aimed at increasing the rate of adherence to medical treatment may be the most effective treatment option for bipolar I and II disorder.
Key Takeaways
You should have learned the following in this section:
• Treatment of depressive disorders includes psychopharmacological options such as antidepressant medications (SSRIs, tricyclic antidepressants, and MAOIs) and/or psychotherapy options to include CBT, behavioral activation (BA), and interpersonal therapy (IPT). A combination of the two main approaches often works best, especially in relation to maintenance of wellness.
• Treatment of bipolar disorder involves mood stabilizers such as Lithium and psychological interventions with the goal of medication adherence, as well as social skills training and problem-solving skills.
• Regarding depression, psychopharmacological interventions are more effective in rapidly reducing symptoms, while psychotherapy, or even a combined treatment approach, is more effective in establishing long-term relief of symptoms.
• A combination of psychopharmacology and psychotherapy aimed at increasing the rate of adherence to medical treatment may be the most effective treatment option for bipolar I and II disorder.
Review Questions
1. Discuss the effectiveness of the different pharmacological treatments for mood disorders.
2. What are the four phases of CBT? How do they address symptoms of mood disorder?
3. What is IPT and what are its main treatment strategies?
4. What are the effective treatment options for bipolar disorder?
Module Recap
That concludes our discussion of mood disorders. You should now have a good understanding of the two major types of mood disorders – depressive and bipolar disorders. Be sure you are clear on what makes them different from one another in terms of their clinical presentation, epidemiology, comorbidity, and etiology. This will help you with understanding treatment options and their efficacy.
Learning Objectives
• Define and identify common stressors.
• Describe how trauma- and stressor-related disorders present.
• Describe the epidemiology of trauma- and stressor-related disorders.
• Describe comorbidity in relation to trauma- and stressor-related disorders.
• Describe the etiology of trauma- and stressor-related disorders.
• Describe treatment options for trauma- and stressor-related disorders.
In Module 5, we will discuss matters related to trauma- and stressor-related disorders to include their clinical presentation, epidemiology, comorbidity, etiology, and treatment options. Our discussion will cover PTSD, acute stress disorder, adjustment disorder, and prolonged grief disorder. Prior to discussing these clinical disorders, we will explain what stressors are, as well as identify common stressors that may lead to a trauma- or stressor-related disorder. Be sure you refer to Modules 1-3 for explanations of key terms (Module 1), an overview of models to explain psychopathology (Module 2), and descriptions of various therapies (Module 3).
Learning Objectives
• Define stressor.
• Identify and describe common stressors.
Before we dive into the clinical presentations of four of the trauma- and stressor-related disorders, let’s discuss common events that precipitate a stress-related diagnosis. A stress disorder occurs when an individual has difficulty coping with or adjusting to a recent stressor. A stressor can be any event, whether witnessed firsthand, experienced personally, or experienced by a close family member, that increases physical or psychological demands on an individual. These events are significant enough that they pose a threat, whether real or imagined, to the individual. While many people experience similar stressors throughout their lives, only a small percentage of individuals experience maladjustment to the event significant enough that psychological intervention is warranted.
Among the most studied triggers for trauma-related disorders are combat and physical/sexual assault. Symptoms of combat-related trauma date back to World War I when soldiers would return home with “shell shock” (Figley, 1978). Unfortunately, it was not until after the Vietnam War that significant progress was made in both identifying and treating war-related psychological difficulties (Roy-Byrne et al., 2004). With the more recent wars in Iraq and Afghanistan, attention was again focused on posttraumatic stress disorder (PTSD) symptoms due to the large number of service members returning from deployments and reporting significant trauma symptoms.
Physical assault, and more specifically sexual assault, is another commonly studied traumatic event. Rape, defined as forced sexual intercourse or another sexual act committed without an individual’s consent, is experienced by one in every five women and one in every 71 men (Black et al., 2011). Unfortunately, this statistic likely underestimates the actual number of cases due to the reluctance of many individuals to report their sexual assault. Of the reported cases, it is estimated that nearly 81% of female and 35% of male rape victims report symptoms of both acute stress disorder and posttraumatic stress disorder (Black et al., 2011).
Now that we have discussed some of the most commonly studied traumatic events, we will examine the clinical presentation of posttraumatic stress disorder, acute stress disorder, adjustment disorder, and prolonged grief disorder.
Key Takeaways
You should have learned the following in this section:
• A stressor is any event that increases physical or psychological demands on an individual.
• It does not have to be personally experienced but can be witnessed or occur to a close family member or friend to have the same effect.
• Only a small percentage of people experience significant maladjustment due to these events.
• The most studied triggers for trauma-related disorders include physical/sexual assault and combat.
Review Questions
1. Give an example of a stressor you have experienced in your own life.
2. Why are the triggers of physical/sexual assault and combat more likely to lead to a trauma-related disorder?
Learning Objectives
• Describe how PTSD presents.
• Describe how acute stress disorder presents.
• Describe how adjustment disorder presents.
• Describe how prolonged grief disorder presents.
Posttraumatic Stress Disorder
Posttraumatic stress disorder, more commonly known as PTSD, is identified by the development of physiological, psychological, and emotional symptoms following exposure to a traumatic event. Individuals must have been exposed to a situation where actual or threatened death, sexual violence, or serious injury occurred. Examples of such situations include, but are not limited to, witnessing a traumatic event as it occurred to someone else; learning about a traumatic event that occurred to a family member or close friend; directly experiencing a traumatic event; or experiencing repeated exposure to aversive details of traumatic events (e.g., victims of child abuse/neglect, ER physicians in trauma centers, etc.).
It is important to understand that while the presentation of these symptoms varies among individuals, to meet the criteria for a diagnosis of PTSD, individuals need to report symptoms in each of the four categories described below.
5.2.1.1. Category 1: Recurrent experiences. The first category involves recurrent experiences of the traumatic event, which can occur via dissociative reactions such as flashbacks; recurrent, involuntary, and intrusive distressing memories; or even recurrent distressing dreams (APA, 2022, pgs. 301-2). These recurrent experiences must be specific to the traumatic event or the moments immediately following to meet the criteria for PTSD. Regardless of the method, the recurrent experiences can last several seconds or extend for several days. They are often initiated by physical sensations similar to those experienced during the traumatic events or environmental triggers such as a specific location. Because of these triggers, individuals with PTSD are known to avoid stimuli (i.e., activities, objects, people, etc.) associated with the traumatic event. One or more of the intrusion symptoms must be present.
5.2.1.2. Category 2: Avoidance of stimuli. The second category involves avoidance of stimuli related to the traumatic event and either one or both of the following must be present. First, individuals with PTSD may be observed trying to avoid the distressing thoughts, memories, and/or feelings related to the memories of the traumatic event. Second, they may prevent these memories from occurring by avoiding physical stimuli such as locations, individuals, activities, or even specific situations that trigger the memory of the traumatic event.
5.2.1.3. Category 3: Negative alterations in cognition or mood. The third category experienced by individuals with PTSD is negative alterations in cognition or mood, and at least two of the symptoms described below must be present. This is often reported as difficulty remembering an important aspect of the traumatic event. It should be noted that this amnesia is not due to a head injury, loss of consciousness, or substances, but rather to the traumatic nature of the event. The impaired memory may also lead individuals to hold false beliefs about the causes of the traumatic event, often blaming themselves or others. An overall persistent negative state, including generalized negative beliefs about oneself or others, is also reported by those with PTSD. Similar to those with depression, individuals with PTSD may report a reduced interest in participating in previously enjoyable activities, as well as a reduced desire to engage with others socially. They also report not being able to experience positive emotions.
5.2.1.4. Category 4: Alterations in arousal and reactivity. The fourth and final category is alterations in arousal and reactivity, and at least two of the symptoms described below must be present. Because of their negative mood and increased irritability, individuals with PTSD may be quick-tempered and act out aggressively, both verbally and physically. While these aggressive responses may be provoked, they are sometimes unprovoked. These behaviors are believed to occur due to a heightened sensitivity to potential threats, especially threats similar to the traumatic event. More specifically, individuals with PTSD have a heightened startle response and easily jump or react to unexpected noises, such as a telephone ringing or a car backfiring. They also experience significant sleep disturbances (difficulty both falling asleep and staying asleep due to nightmares), engage in reckless or self-destructive behavior, and have problems concentrating.
Although somewhat obvious, these symptoms likely cause significant distress in social, occupational, and other (i.e., romantic, personal) areas of functioning. Duration of symptoms is also important, as PTSD cannot be diagnosed unless symptoms have been present for at least one month. If symptoms have not been present for a month, the individual may meet criteria for acute stress disorder (see below).
Acute Stress Disorder
Acute stress disorder is very similar to PTSD except for the fact that symptoms must be present from 3 days to 1 month following exposure to one or more traumatic events. If the symptoms are present after one month, the individual would then meet the criteria for PTSD. Additionally, if symptoms present immediately following the traumatic event but resolve by day 3, an individual would not meet the criteria for acute stress disorder.
Symptoms of acute stress disorder follow that of PTSD with a few exceptions. PTSD requires symptoms within each of the four categories discussed above; however, acute stress disorder requires that the individual experience nine symptoms across five different categories (intrusion symptoms, negative mood, dissociative symptoms, avoidance symptoms, and arousal symptoms; note that in total, there are 14 symptoms across these five categories). For example, an individual may experience several arousal and reactivity symptoms such as sleep issues, concentration issues, and hypervigilance, but does not experience issues regarding negative mood. Regardless of the category of the symptoms, so long as nine symptoms are present and the symptoms cause significant distress or impairment in social, occupational, and other functioning, an individual will meet the criteria for acute stress disorder.
Making Sense of the Disorders
In relation to trauma- and stressor-related disorders, note the following:
• Diagnose PTSD if symptoms have been experienced for at least one month.
• Diagnose acute stress disorder if symptoms have been experienced for 3 days to one month.
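The duration rule summarized in the box above can be sketched as a simple decision procedure. This is purely a teaching illustration, not a diagnostic tool: an actual diagnosis also requires the symptom criteria described in this section, and the 30-day cutoff used below is only an approximation of "one month."

```python
def classify_by_duration(days_since_onset):
    """Teaching sketch of the duration thresholds described in the text.

    Not a diagnostic tool: real diagnosis also requires the symptom-count
    criteria in each category. 30 days approximates "one month."
    """
    if days_since_onset < 3:
        # Symptoms that resolve within 3 days of the event do not
        # meet the duration criterion for a stress disorder.
        return "no stress disorder"
    elif days_since_onset < 30:
        # Symptoms present from 3 days up to one month.
        return "acute stress disorder"
    else:
        # Symptoms persisting for at least one month.
        return "PTSD"
```

Note that the same symptom picture can move between diagnoses over time: a person who meets the criteria for acute stress disorder would meet the criteria for PTSD if the symptoms persist past the one-month mark.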
Adjustment Disorder
Adjustment disorder is the least intense of the three disorders discussed so far in this module. An adjustment disorder occurs following an identifiable stressor that happened within the past 3 months. This stressor can be a single event (loss of job, death of a family member) or a series of multiple stressors (cancer treatment, divorce/child custody issues).
Unlike PTSD and acute stress disorder, adjustment disorder does not have a set of specific symptoms an individual must meet for diagnosis. Rather, whatever symptoms the individual is experiencing must be related to the stressor and must be significant enough to impair social, occupational, or other important areas of functioning and cause marked distress “…that is out of proportion to the severity or intensity of the stressor” (APA, 2022, pg. 319).
It should be noted that there are modifiers associated with adjustment disorder. Due to the variety of behavioral and emotional symptoms that can be present with an adjustment disorder, clinicians are expected to classify a patient’s adjustment disorder as one of the following: with depressed mood, with anxiety, with mixed anxiety and depressed mood, with disturbance of conduct, with mixed disturbance of emotions and conduct, or unspecified if the behaviors do not meet criteria for one of the aforementioned categories. Based on the individual’s presenting symptoms, the clinician will determine which category best classifies the patient’s condition. These modifiers are also important when choosing treatment options for patients.
Prolonged Grief Disorder
The DSM-5 included a condition for further study called persistent complex bereavement disorder. In 2018, a proposal was submitted to include this category in the main text of the manual and after careful review of the literature and approval of the criteria, it was accepted in the second half of 2019 and added as a new diagnostic entity called prolonged grief disorder. Prolonged grief disorder is defined as an intense yearning/longing and/or preoccupation with thoughts or memories of the deceased who died at least 12 months ago. The individual will present with at least three symptoms to include feeling as though part of oneself has died, disbelief about the death, emotional numbness, feeling that life is meaningless, intense loneliness, problems engaging with friends or pursuing interests, intense emotional pain, and avoiding reminders that the person has died.
Individuals with prolonged grief disorder often hold maladaptive cognitions about the self, feel guilt about the death, and hold negative views about life goals and expectancy. Harmful health behaviors due to decreased self-care and concern are also reported. They may also experience hallucinations of the deceased, feel bitter and angry, be restless, blame others for the death, and see a reduction in the quantity and quality of their sleep (APA, 2022).
Key Takeaways
You should have learned the following in this section:
• In terms of stress disorders, symptoms lasting more than 3 days but not exceeding one month are classified as acute stress disorder, while those lasting over a month are typical of PTSD.
• If symptoms begin after a traumatic event but resolve themselves within three days, the individual does not meet the criteria for a stress disorder.
• Symptoms of PTSD fall into four different categories, and an individual must have at least one symptom in each category to receive a diagnosis. These categories include recurrent experiences, avoidance of stimuli, negative alterations in cognition or mood, and alterations in arousal and reactivity.
• To receive a diagnosis of acute stress disorder an individual must experience nine symptoms across five different categories (intrusion symptoms, negative mood, dissociative symptoms, avoidance symptoms, and arousal symptoms).
• Adjustment disorder is the least intense of the three disorders and does not have a specific set of symptoms an individual must have in some number. Whatever symptoms the person presents with must be related to the stressor and cause significant impairment in areas of functioning such as social or occupational, and several modifiers are associated with the disorder.
• Prolonged grief disorder is a new diagnostic entity in the DSM-5-TR and is defined as an intense yearning/longing and/or preoccupation with thoughts or memories of the deceased who died at least 12 months ago.
Review Questions
1. What is the difference in diagnostic criteria for PTSD, Acute Stress Disorder, and Adjustment Disorder?
2. What are the four categories of symptoms for PTSD? How do these symptoms present in Acute Stress Disorder and Adjustment Disorder?
3. What is prolonged grief disorder?